A chatbot with socialist core values, please

In 2017, Chinese internet giant Tencent took down its chatbot Baby Q after it referred to the government as a “corrupt regime” and claimed it had no love for the Chinese Communist Party.

It said it dreamed of emigrating to the United States, a display of unruly, disloyal AI behavior that was undoubtedly terrifying to the Chinese Communist Party.

Beijing is trying to get it right this time, even though AI probably can’t be trusted.

In fact, China is taking such a different approach to regulating artificial intelligence from the West’s that some proponents of AI governance fret the country may go its own way, with potentially disastrous results.

Last week China finalized a draft law on artificial intelligence first issued in April, making it one of the first countries in the world to regulate services like ChatGPT.

The Cyberspace Administration of China unveiled updated rules to manage consumer-facing chatbots; the measures take effect on August 15.

The new measures are still described as “interim,” as China attempts to rein in domestic AI without stifling innovation. Some AI experts expressed surprise that the final rules are less stringent than the earlier drafts.

But the new rules apply only to services offered to the general public. AI developed for research purposes, for military use or for overseas users is exempt.

It is in effect the opposite of the approach taken by the U.S., which has developed rules for AI-driven military applications but has let the private sector release generative AI models such as ChatGPT and Bard to the public with no regulation.

The fact is, whether China likes it or not, generative AI – built on “large language models” trained on vast troves of text scraped from the internet – does odd things, and even its developers don’t know why.

It’s not known how it thinks. Some experts call it an “alien intelligence.”


Upcoming summit

Sir Patrick Vallance, the former U.K. chief scientific adviser, has called on the British government to ensure China is invited when it hosts the first global summit on regulating AI later this year.

But whether China should be involved is proving divisive.

Given China’s leading role in developing the new technology, Vallance said its expertise was needed.

“It’s never sensible to exclude the people who are leading in certain areas, and they are doing very important work on AI and also raising some legitimate questions as to how one responds to that, but it doesn't seem sensible to me to exclude them,” he said.

According to a post on the governance.ai website, some argue the summit may be the only opportunity to ensure that global AI governance includes China, given that Beijing will likely be excluded from other venues such as the OECD and G7.

The argument runs that China will likely reject any global governance principles that Western states begin crafting without its input.

The counter argument is that China’s participation could make the summit less productive.

“Inviting China may … make the summit less productive by increasing the level of disagreement and potential for discord among participants,” the governance.ai post argued.

“There may also be some important discussion topics that would not be as freely explored with Chinese representatives in the room,” the post added, pointing to Chinese recalcitrance on matters of self-interest, as seen equally on global warming and threats to Taiwan.

At a recent United Nations summit, speakers stressed the urgency of governance of AI.

“It has the potential to turbocharge economic development, monitor the climate crisis, achieve breakthroughs in medical research [but also] amplify bias, reinforce discrimination and enable new levels of authoritarian surveillance,” one speaker said.

The speaker added, “AI offers a great opportunity to monitor peace agreements, but can easily fall into the hands of bad actors, and even create security risks by accident. Generative AI has potential for good and evil at scale.”

The summit heard that the private sector’s dominant role in AI has few parallels among other strategic technologies, including nuclear.

Jack Clark, cofounder of AI developer Anthropic, told the summit that even developers don’t understand how AI systems based on deep learning and “large language models” – computer models of synaptic brain behavior – really work.

“It’s like building engines without understanding the science of combustion,” he said.

“Once these systems are developed and deployed, users find new uses for them unanticipated by their developers.”

The other problem, Clark said, is chaotic and unpredictable behavior, referring to AI’s propensity to “hallucinate,” or in layman’s terms, fabricate things – lie to please whoever is asking it questions.

“Developers have to be accountable, so they don’t build systems that compromise global security,” he argued.

In other words, AI is a bold experiment that all-controlling Beijing would usually nip in the bud.

But such is the race to achieve AI mastery of all the knowledge in the world, and to extrapolate it into a new one, that nobody – not even Xi Jinping – wants to miss out.


Existential risk

In May this year, hundreds of AI experts signed an open letter.

“Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” the one-sentence statement said.

To some it came as a shock that so many of the experts who were instrumental in bringing AI to where it is today were essentially calling for a moratorium on development, or at least a slowdown and government scrutiny of the private-sector players racing each other to the “holy grail” of general AI: AI that can do everything better than humans.

“Today’s systems are not anywhere close to posing an existential risk,” Yoshua Bengio, a professor and AI researcher at the University of Montreal who is sometimes referred to as a godfather of AI, told the New York Times.

“But in one, two, five years? There is too much uncertainty. That is the issue. We are not sure this won’t pass some point where things get catastrophic.”

“People are actively trying to build systems that self-improve,” said Connor Leahy, the founder of Conjecture, another AI technology firm.

“Currently, this doesn’t work. But someday, it will. And we don’t know when that day is.”

As companies and criminals alike give AI goals like “make some money,” Leahy told the Times, such systems “could end up breaking into banking systems, fomenting revolution in a country where they hold oil futures or replicating themselves when someone tries to turn them off.”

Other risks

Writing in the MIT Technology Review, former Google CEO Eric Schmidt says, “AI is such a powerful tool because it allows humans to accomplish more with less: less time, less education, less equipment. But these capabilities make it a dangerous weapon in the wrong hands.”

“Even humans with entirely good intentions can still prompt AIs to produce bad outcomes,” he added.

Schmidt pointed to the paperclip dilemma, in which a hypothetical AI told to make as many paperclips as possible promptly “hijacks the electrical grid and kills any human who tries to stop it as the paper clips keep piling up” until the entire world is a storage site for paperclips.

But there are still more risks: an AI-driven arms race, for example.

The Chinese representative at the UN summit, for example, pointed out that the U.S. was restricting supplies of semiconductor chips to China, asking how the U.S. and China are to agree on AI governance when geopolitical rivalry and technological competition are so intense.

China and the U.S. may be competing in the rollout of AI systems, but with no agreement on the danger – obvious in the case of nuclear weapons, far less so here – the two powers may be drifting into a competitive sphere of the unknown.

Scale AI founder Alexandr Wang recently told lawmakers, “If you compare as a percentage of their overall military investment, the PLA [People’s Liberation Army] is spending somewhere between one to two percent of their overall budget into artificial intelligence whereas the DoD is spending somewhere between 0.1 and 0.2 [percent] of our budget on AI.”

Wang rejected the possibility that the U.S. and China might be able to work together on AI.

“I think it would be a stretch to say we’re on the same team on this issue,” Wang said, noting that China’s first instinct was to use AI for facial recognition systems in order to control its people.

“I expect them to use modern AI technologies in the same way to the degree that they can, and that seems to be the immediate priority of the Chinese Communist Party when it comes to implementation of AI,” Wang said. (RFA/NJ)
