
Global Summit Calls for Working Together for AI Safety  

Source: Science and Technology Daily | 2023-11-07 14:52:03 | Author: TANG Zhexiao


Matthias Busch (L), Ingo Siegert (C) and Dominykas Strazdas from the Institute of Information and Communication Technology, Department of Mobile Dialog Systems at Germany's Otto von Guericke University, shake hands with the humanoid robot "Ari" on October 26, 2023. (PHOTO: VCG)

Edited by TANG Zhexiao

How to deal with the risks from rapidly developing artificial intelligence (AI) has become a priority worldwide since ChatGPT, an AI-powered language model that can create human-like text, was released to the public last year.

AI Safety Summit 2023, the global meeting convened by the UK in Bletchley from November 1-2, brought together politicians, AI company representatives and experts to discuss the global future of AI and work toward a shared understanding of its risks.

Sifted, a media site on European start-ups, said the summit was "broadly hailed as a diplomatic success because it managed to bring together senior Chinese and U.S. officials around the same table, and was bolstered by the participation of the European Commission president Ursula von der Leyen and even tech entrepreneur Elon Musk."

Global AI statement inked

At the summit, 28 participating countries and the European Union signed the Bletchley Declaration, acknowledging that AI “should be designed, developed, deployed and used in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible” and agreeing to work together to ensure that.

According to the UK government, the declaration aims to identify AI safety risks of shared concern and to build respective risk-based policies across the participating countries to ensure safety in light of those risks.

The declaration noted that particular safety risks arise at the "frontier" of AI: the highly capable general-purpose AI models, including foundation models, that could perform a wide variety of tasks, as well as relevant specific narrow AI that could exhibit harmful capabilities matching or exceeding those of today's most advanced models.

UK science minister Michelle Donelan said the declaration was "a landmark achievement" that laid the foundations for the summit's discussions.

However, some experts think the declaration does not go far enough. Paul Teather, CEO of AI-enabled research firm AMPLYFI, told Euronews Next that bringing major powers together to endorse ethical principles can be viewed as a success, but the undertaking to produce concrete policies and accountability mechanisms must follow swiftly.

Countries moving at their own pace

Participants in Bletchley reported their progress in AI governance and supervision.

UK officials have made it clear that they do not think regulation is needed, or even possible, at this stage, given how fast the industry is moving, The Guardian reported.

French economy and finance minister?Bruno Le Maire?emphasized that Europe must innovate before regulating.

The French government fought hard for open-source software, which allows users to develop, modify and distribute models. "We shouldn't discard open source upfront," said Jean-Noël Barrot, France's junior minister in charge of digital issues, Sifted reported. "What we've seen in previous generations of technologies is that open source has been very useful both for transparency and democratic governance of these technologies."

The U.S. will launch an AI safety institute to evaluate known and emerging risks of so-called frontier AI models, U.S. Secretary of Commerce Gina Raimondo said on November 1.

China ready to enhance AI safety with all sides

The Chinese delegation at the summit emphasized the need for international cooperation on AI safety and governance issues, urging increased representation of developing countries in global AI governance.

According to Wu Zhaohui, China's vice minister of science and technology, China was willing to "enhance dialogue and communication in AI safety with all sides."

American media outlet CNBC reported Donelan as saying that it was a "massive" gesture that Chinese government officials chose to attend the UK AI summit.

Elon Musk, CEO of Tesla and SpaceX, hailed UK Prime Minister Rishi Sunak's decision to invite China as "essential," Politico Europe reported. "Having them here is essential," Musk reportedly said. "If they're not participants, it's pointless."

"There's no safety without China," Sifted reported, noting that "China was a key participant at the summit, given its role in developing AI. Its involvement in the summit was described as constructive."

European Commission vice president for values and transparency Věra Jourová, who visited Beijing in September to hold talks on AI and international data flows, said: "I think it was important that they were here, also that they heard our determination to work together, and honestly, for the really big global catastrophic risks, we need to have China on board."

China launched the Global Artificial Intelligence Governance Initiative on October 18, presenting a constructive approach to addressing universal concerns over AI development and governance.


