A Wide-Ranging Look at Artificial Intelligence – Opening Post

胡卜凱

Since April, with ChatGPT and Bing Chat going online, a wave of AI mania has swept the internet and the various Line groups. At the time I was mostly busy discussing "Our Anti-War Statement" (《我們的反戰聲明》), so I did not join in the excitement. I am now reposting a few related articles here. Please also see the article "The Current State and Outlook of AI Research and Development" (《「人工智慧」研發現況及展望》).

Some people worry that AI will become a "machine above man," controlling the world or even enslaving humanity. I do not understand AI, and my thinking is simple; so if AI ever runs amok, I believe there is a simple way to deal with it:

Pull the power plug. If that is not forceful enough, blow up the transmission lines and the emergency generators; if that still fails, blow up the power plants.

Six Ways AI Could Change War and the World -- Hal Brands


胡卜凱

6 Ways AI Will Change War and the World

Hal Brands, Bloomberg Opinion, 06/09/24

Artificial intelligence will change our everyday lives in innumerable ways: how governments serve their citizens; how we drive (and get driven); how we handle and, we hope, protect our finances; how doctors diagnose and treat diseases; even how my students research and write their essays.


But just how revolutionary will AI be? Will it upend the global balance of power? Will it allow autocracies to rule the world? Will it make warfare so fast and ferocious that it becomes uncontrollable? In short, will AI fundamentally alter the rhythms of world affairs?

It is, of course, too soon to say definitively: The effects of AI will ultimately hinge on the decisions leaders and nations make, and technology sometimes takes surprising turns. But even as we are wowed and worried by the next version of ChatGPT, we need to wrestle with six deeper questions about international affairs in the age of AI. And we need to consider a surprising possibility: Perhaps AI won’t change the world as much as we seem to expect.

1) Will AI make war uncontrollable?

Consider one assertion — that artificial intelligence will make conflict more lethal and harder to constrain. Analysts envision a future in which machines can pilot fighter jets more skillfully than humans, AI-enabled cyberattacks devastate enemy networks, and advanced algorithms turbocharge the speed of decisions. Some warn that automated decision-making could trigger rapid-fire escalation — even nuclear escalation — that leaves policymakers wondering what happened. If war plans and railway timetables caused World War I, perhaps AI will cause World War III.

That AI will change warfare is undeniable. From enabling predictive maintenance of hardware to facilitating astounding improvements in precision targeting, the possibilities are profound. A single F-35, quarterbacking a swarm of semiautonomous drones, could wield the firepower of an entire bomber wing. As the National Security Commission on Artificial Intelligence concluded in 2021, a “new era of conflict” will be dominated by the side that masters “new ways of war.”

But there’s nothing fundamentally novel here. The story of warfare through the ages is one in which innovation regularly makes combat faster and more intense. So think twice before accepting the proposition that AI will make escalation uncontrollable.

The US and China have discussed an agreement not to automate their nuclear command-and-control processes — a pledge Washington has made independently — for the simple reason that states have strong incentives not to relinquish control over weapons whose use could endanger their own survival. Russia’s behavior, including the development of nuclear-armed torpedoes that could eventually operate autonomously, is a greater concern. But even during the Cold War, when Moscow built a system meant to ensure nuclear retaliation even if its leadership was wiped out, it never turned off the human controls. Expect today’s great powers to exploit the military possibilities AI presents aggressively — while trying to keep the most critical decisions in human hands.

In fact, AI could reduce the risk of breakneck escalation, by helping decision makers peer through the fog of crisis and war. The Pentagon believes that AI-enabled intelligence and analytical tools can help humans sift through confusing or fragmentary information regarding an enemy’s preparations for war, or even whether a feared missile attack is indeed underway. This isn’t science fiction: Assistance from AI reportedly helped US intelligence analysts sniff out Russian President Vladimir Putin’s invasion of Ukraine in 2022.

In this sense, AI can mitigate the uncertainty and fear that pushes people toward extreme reactions. By giving policymakers greater understanding of events, AI might also improve their ability to manage them.

2) Will AI help autocracies like China control the world?

What about a related nightmare — that AI will help the forces of tyranny control the future? Analysts such as Yuval Noah Harari have warned that artificial intelligence will reduce the costs and increase the returns from repression. AI-equipped intelligence services will need less manpower to decipher the vast amounts of intelligence they gather on their populations — allowing them, for example, to precisely map and remorselessly dismantle protest networks. They will use AI-enabled facial recognition technology to monitor and control their citizens, while employing AI-created disinformation to discredit critics at home and abroad. By making autocracy increasingly efficient, AI could allow the dictators to dominate the dawning age.

This is certainly what China hopes for. President Xi Jinping’s government has devised a “social credit” system that uses AI, facial recognition and big data to ensure the reliability of its citizens — by regulating their access to everything from low-interest loans to airplane tickets. Ubiquitous, AI-assisted surveillance has turned Xinjiang into a dystopian model of modern repression.

Beijing intends to seize the “strategic commanding heights” of innovation because it believes AI can bolster its domestic system and its military muscle. It is using the power of the illiberal state to steer money and talent toward advanced technologies.

It’s not a given, though, that the autocracies will come out ahead.

To believe that AI fundamentally favors autocracy is to believe that some of the most vital, longstanding enablers of innovation — such as open flows of information and tolerance for dissent — are no longer so important. Yet autocracy is already limiting China’s potential.

Building powerful large language models requires huge pools of information. But if those inputs are tainted or biased because China’s internet is so heavily censored, the quality of the outputs will suffer. An increasingly repressive system will also struggle, over time, to attract top talent: It is telling that 38% of top AI researchers in the US are originally from China. And smart technology must still be used by China’s governing institutions, which are getting progressively less smart — that is, less technocratically competent — as the political system becomes ever more subservient to an emperor-for-life.

China will be a formidable technological competitor. But even in the age of AI, Xi and his illiberal brethren may struggle to escape the competitive drag autocracy creates.

3) Will AI favor the best or the rest?

Some technologies narrow the gap between the most and least technologically advanced societies. Nuclear weapons, for instance, allow relative pipsqueaks like North Korea to offset the military and economic advantages a superpower and its allies possess. Others widen the divide: In the 19th century, repeating rifles, machine guns and steamships allowed European societies to subjugate vast areas of the world.

In some respects, AI will empower the weak. US officials worry that large language models might help terrorists with crude science kits to build biological weapons. Rogue states, like Iran, might use AI to coordinate drone swarms against US warships in the Persian Gulf. More benignly, AI could expand access to basic healthcare services in the Global South, creating big payoffs in increased life expectancy and economic productivity.

In other respects, however, AI will be a rich man’s game. Developing state-of-the-art AI is fantastically expensive. Training large language models can require vast investments and access to a finite quantity of top scientists and engineers — to say nothing of staggering amounts of electricity. Some estimates place the cost of the infrastructure supporting Microsoft’s Bing AI chatbot at $4 billion. Almost anyone can be a taker of AI — but being a maker requires copious resources.

This is why the middle powers making big moves in AI, such as Saudi Arabia and the United Arab Emirates, have very deep pockets. Many of the early leaders in the AI race are either tech titans (Alphabet, Microsoft, Meta, IBM, Nvidia and others) or firms with access to their money (OpenAI). And the US, with its vibrant, well-funded tech sector, still leads the field.

What’s true in the private sector may also be true in the realm of warfare. At the outset, the military benefits of new technology may flow disproportionately to countries with the generous defense budgets required to develop and field new capabilities at scale.

All this could change: Early leads don’t always translate into enduring advantages. Upstarts, whether firms or countries, have disrupted other fields before. For the time being, however, AI may do more to reinforce than revolutionize the balance of power.

4) Will AI fracture or fortify coalitions?

How artificial intelligence affects the balance of power depends on how it affects global coalitions. As analysts at Georgetown University’s Center for Security and Emerging Technologies have documented, the US and its allies can vastly outpace China in spending on advanced technologies — but only if they combine their resources. Beijing’s best hope is that the free world fractures over AI.

It could happen. Washington worries that Europe’s emerging approach to generative AI regulation could choke off innovation: In this sense, AI is underscoring divergent US and European approaches to markets and risk. Another key democracy, India, prefers strategic autonomy to strategic alignment — in technology as in geopolitics, it prefers to go its own way. Meanwhile, some of Washington’s nondemocratic partners, namely Saudi Arabia and the UAE, have explored tighter tech ties to Beijing.

But it’s premature to conclude that AI will fundamentally disrupt US alliances. In some cases, the US is successfully using those alliances as tools of technological competition: Witness how Washington has cajoled Japan and the Netherlands to limit China’s access to high-end semiconductors. The US is also leveraging security partnerships with Saudi Arabia and the UAE to place limits on their technological relations with Beijing, and to promote AI partnerships between American and Emirati firms. In this sense, geopolitical alignments are shaping the development of AI, rather than vice versa.

More fundamentally, the preferences countries have regarding AI are related to their preferences for domestic and international order. So whatever differences the US and Europe have may pale in comparison to their shared fears of what will happen if China surges to supremacy. Europe and America may eventually find their way into greater alignment on AI issues — just as shared hostility to US power is pushing China and Russia to cooperate more closely in military applications of the technology today.

5) Will AI tame or inflame great-power rivalry?

Many of these questions relate to how AI will affect the intensity of the competition between the US-led West and the autocratic powers headed by China. No one really knows whether runaway AI could truly endanger humanity. But shared existential risks do sometimes make strange bedfellows.

During the original Cold War, the US and the Soviet Union cooperated to manage the perils associated with nuclear weapons. During the new Cold War, perhaps Washington and Beijing will find common purpose in keeping AI from being used for malevolent purposes such as bioterrorism or otherwise threatening countries on both sides of today’s geopolitical divides.

Yet the analogy cuts both ways, because nuclear weapons also made the Cold War sharper and scarier. Washington and Moscow had to navigate high-stakes showdowns such as the Cuban Missile Crisis and several Berlin crises before a precarious stability settled in. Today, AI arms control seems even more daunting than nuclear arms control, because AI development is so hard to monitor and the benefits of unilateral advantage are so tantalizing. So even as the US and China start a nascent AI dialogue, technology is turbocharging their competition.

AI is at the heart of a Sino-American tech war, as China uses methods fair and foul to hasten its own development and the US deploys export controls, investment curbs and other measures to block Beijing’s path. If China can’t accelerate its technological progress, says Xi, it risks being “strangled” by Washington.

AI is also fueling a fight for military superiority in the Western Pacific: The Pentagon’s Replicator Initiative envisions using thousands of AI-enabled drones to eviscerate a Chinese invasion fleet headed for Taiwan. Dueling powers may eventually find ways of cooperating, perhaps tacitly, on the mutual dangers AI poses. But a transformative technology will intensify many aspects of their rivalry between now and then.

6) Will AI make the private sector superior to the public?

AI will undoubtedly shift the balance of influence between the public and private sectors. Analogies between AI and nuclear weapons can be enlightening, but only to a point: The notion of a Manhattan Project for AI is misleading because it is a field where money, innovation and talent are overwhelmingly found in the private sector.

Firms on the AI frontier are thus becoming potent geopolitical actors — and governments know it. When Elon Musk and other experts advocated a moratorium on development of advanced AI models in 2023, official Washington urged the tech firms not to stop — because doing so would simply help China catch up. Government policy can speed or slow innovation. But to a remarkable degree, America’s strategic prospects depend on the achievements of private firms.

It’s important not to take this argument too far. China’s civil-military fusion is meant to ensure that the state can direct and exploit innovation by the private sector. Although the US, as a democracy, can’t really mimic that approach, the concentration of great power in private firms will bring a government response.

Washington is engaging, albeit hesitantly, in a debate about how best to regulate AI so as to foster innovation while limiting malign uses and catastrophic accidents. The long arm of state power is active in other ways, as well: The US would never allow Chinese investors to buy the nation’s leading AI firms, and it is restricting American investment in the AI sectors of adversary states. And when Silicon Valley Bank, which held the deposits of many firms and investors in the tech sector, spiraled toward insolvency, geopolitical concerns helped initiate a government bailout.

One should also expect, in the coming years, a greater emphasis on helping the Pentagon stimulate development of militarily relevant technologies — and making it easier to turn private-sector innovation into war-winning weapons. The more strategically salient AI is, the less willing governments will be to just let the market do its work.

We can’t predict the future: AI could hit a dead end, or it might accelerate beyond anyone’s expectations. Technology, moreover, is not some autonomous force. Its development and effects will be shaped by decisions in Washington and around the world.

For now, the key is to ask the right questions, because doing so helps us understand the stakes of those decisions. It helps us imagine the various futures AI could shape. Not least, it illustrates that maybe AI won’t cause a geopolitical earthquake after all.

Sure, there are reasons to fear that AI will make warfare uncontrollable, upend the balance of power, fracture US alliances or fundamentally favor autocracies over democracies. But there are also good reasons to suspect that it won’t.

This isn’t to counsel complacency. Averting more dangerous outcomes will require energetic efforts and smart choices. Indeed, the primary value of this exercise is to show that a very wide range of scenarios is possible — and the worst ones won’t simply foreclose themselves.

Whether AI favors autocracy or democracy depends, in part, on whether the US pursues enlightened immigration policies that help it hoard top talent. Whether AI reinforces or fractures US alliances hinges on whether Washington treats those alliances as assets to be protected or as burdens to be discarded. Whether AI upholds or undermines the existing international hierarchy, and how much it changes the relationship between the private sector and the state, depends on how wisely the US and other countries regulate its development and use.

What’s beyond doubt is that AI opens inspiring vistas and terrible possibilities. America’s goal should be to innovate ruthlessly, and responsibly, enough so that a basically favorable world order doesn’t change fundamentally — even as technology does.


Hal Brands is a Senior Fellow at the AEI.

Waking from the AI Dream - Nolen Gertz


胡卜凱

Although this article comes from the pen of a philosophy professor, to this AI layman its content suffers not only from the logical fallacy of hasty generalization, but also from a rather serious streak of paranoia plus a touch of persecution complex. I repost it here in the spirit of "letting a hundred flowers bloom" and of "freedom of speech."


The day the AI dream died

Unveiling tech pessimism

Nolen Gertz, 05/31/24

AI's promise was to solve problems, not only of the day but all future obstacles as well. Yet the hype has started to wear off. Amidst the growing disillusionment, Nolen Gertz challenges the prevailing optimism, suggesting that our reliance on AI might be less about solving problems and more about escaping the harsh realities of our time. He questions whether AI is truly our saviour or just a captivating distraction, fuelling capitalist gains and nihilistic diversions from the global crises we face. -- Editor’s Notes


I recently participated in an HTLGI debate where one of the participants, Kenneth Cukier, who is an editor of The Economist, criticized my view of technology as being unnecessarily pessimistic. He confidently claimed that we should be more optimistic about technological progress because such progress, for example, would help us to solve the climate change crisis. Though Cukier admitted that he might not just be optimistic but even “Panglossian” (fancifully optimistic) when it comes to technological progress, he nevertheless argued, “I think if you think of all the global challenges that we’re facing, in large part because of the technologies that we’ve created, Industrial Revolution most importantly, gunpowder as well, it’s going to be technology that is going to help us overcome it.”

Cukier admits that technologies are the source of many of the “global challenges that we’re facing,” but he nevertheless believes that technologies will also be the solution. This faith in technological progress rests primarily on the belief that artificial intelligence (AI) is so radically different from and superior to previous technologies that it will not only solve our problems, but solve problems that were caused by previous technological solutions to our problems. But such enthusiasm for AI is now dying down as the hype that had originally surrounded AI is being more and more replaced with disillusionment.

Just as ChatGPT was once the face of AI’s successes, it is now the face of AI’s failures. Each day journalists help to shed light on the realities behind how ChatGPT works: reports about the invisible labor force in Kenya and Pakistan that helped to train ChatGPT; reports about the massive environmental impact of ChatGPT’s data centers; reports about how ChatGPT repackages the work of writers and artists as its own, without attributing or paying for that work; and reports about how ChatGPT can provide answers to people’s questions that seem to be based on facts but are really based on hallucinations passed off as facts. We have now gone from fearing that ChatGPT would put people out of work to instead fearing how much more work ChatGPT requires in order to use it without getting failed, fired, or sued.

Yet while the hype surrounding AI seems to have disappeared, the AI itself has not. So if we have moved from thinking that AI could do everything, to wondering if AI can do anything, then why have we not simply abandoned AI altogether? The answer would seem to be that AI did not need to live up to the hype in order to be proven effective. So the question we should be asking is not whether AI will ever be successful, but rather: how has it already been successful? If AI has indeed already proved effective enough for people to still invest billions in its development, then what has AI been effective at doing thus far?

One answer would seem to be that AI has already been successful at making rich people richer. If AI is seen as a legitimate investment, then simply suggesting that your company would integrate AI into its products would be enough reason to motivate investors to pour money into your company. Likewise, if AI is seen as capable of putting people out of work, then merely the potential of AI provides sufficient excuse to cut costs, fire employees, and force new hires to accept work for less pay. So whether or not AI ever provides a return on investment, and whether or not AI ever turns out to be capable of actually replacing human labor, AI nevertheless has already proved to be incredibly successful from the perspective of capitalism.

But another important answer to the question of why AI, post-hype, still occupies so much of our time and attention would seem to be the very fact that it occupies so much of our time and attention. Again, whether or not AI is ever capable of solving “global challenges” like climate change, the very idea that it could solve our problems and that it could stop climate change is already enough to help relieve the pressure on corporations to cut down on pollution, relieve the pressure on politicians to seek real solutions to climate change, and relieve the pressure on all of us to face the reality of the climate change crisis. AI might not be able to help us in the way that companies like OpenAI claimed, but nevertheless AI has helped us at the very least by distracting us. In other words, AI has been incredibly successful not only when it comes to capitalism, but also when it comes to nihilism.

We know the truth about AI, but companies are still pursuing AI because it is still a way to make money, and we are still talking about AI because it gives us something to talk about other than climate change. And because we keep talking about it, companies can keep making money off of it. And because companies keep making money off of it, we can keep talking about it. We seem to be stuck in a vicious cycle. So the question we need to ask is not whether we can stop pursuing AI but whether we can stop pursuing nihilistic escapes from having to face reality.


Nolen Gertz is Assistant Professor of Applied Philosophy, University of Twente and author of Nihilism and Technology (Rowman and Littlefield, 2018) and Nihilism (MIT Press, 2020)


AI Technology and the 2024 US Election -- Julia Mueller


胡卜凱

Fears grow over AI’s impact on the 2024 election

Julia Mueller, The Hill, 12/26/23

The rapid rise of artificial intelligence (AI) is raising concerns about how the technology could impact next year’s election as the start of 2024 primary voting nears.

AI — advanced tech that can generate text, images and audio, and even build deepfake videos — could fuel misinformation in an already polarized political landscape and further erode voter confidence in the country’s election system.

“2024 will be an AI election, much the way that 2016 or 2020 was a social media election,” said Ethan Bueno de Mesquita, interim dean at the University of Chicago Harris School of Public Policy. “We will all be learning as a society about the ways in which this is changing our politics.”

Experts are sounding alarms that AI chatbots could generate misleading information for voters if they use it to get info on ballots, calendars or polling places — and also that AI could be used more nefariously, to create and disseminate misinformation and disinformation against certain candidates or issues.

“I think it could get pretty dark,” said Lisa Bryant, chair of the Department of Political Science at California State University, Fresno and an expert with MIT’s Election lab.

Polling shows the concern about AI doesn’t just come from academics: Americans appear increasingly worried about how the tech could confuse or complicate things during the already contentious 2024 cycle.

A U Chicago Harris/AP-NORC poll released in November found a bipartisan majority of U.S. adults are worried about the use of AI “increasing the spread of false information” in the 2024 election.

A Morning Consult-Axios survey found an uptick in recent months in the share of U.S. adults who said they think AI will negatively impact trust in candidate advertisements, as well as trust in the outcome of the elections overall.

Nearly 6 in 10 respondents said they think misinformation spread by AI will have an impact on who ultimately wins the 2024 presidential race.

“They are a very powerful tool for doing things like making fake videos, fake pictures, et cetera, that look extremely convincing and are extremely difficult to distinguish from reality — and that is going to be likely to be a tool in political campaigns, and already has been,” said Bueno de Mesquita, who worked on the UChicago poll.

“It’s very likely that that’s going to increase in the ‘24 election — that we’ll have fake content created by AI that’s at least by political campaigns, or at least by political action committees or other actors — that that will affect the voters’ information environment and make it hard to know what’s true and false,” he said.


Over the summer, the DeSantis-aligned super PAC Never Back Down reportedly used an AI-generated version of former President Trump’s voice in a television ad.

Just ahead of the third Republican presidential debate, former President Trump’s campaign released a video clip that appeared to imitate the voices of his fellow GOP candidates, introducing themselves by Trump’s favored nicknames.

And earlier this month, the Trump campaign posted an altered version of a report that NBC News’s Garrett Haake gave before the third GOP debate. The clip starts unaltered with Haake’s report but has a voiceover take over, criticizing the former president’s Republican rivals.

“The danger is there, and I think it’s almost unimaginable that we won’t have deepfake videos or whatever as part of our politics going forward,” Bueno de Mesquita said.

The use of AI by political campaigns in particular has prompted tech companies and government officials to consider regulations on the tech.

Google earlier this year announced it would require verified election advertisers to “prominently disclose” when their ads had been digitally generated or altered.

Meta also plans to require disclosure when a political ad uses “photorealistic image or video, or realistic-sounding audio” that was generated or altered to, among other purposes, depict a real person doing or saying something they did not do.

President Biden issued an executive order on AI in October, including new standards for safety and plans for the Commerce Department to craft guidelines on content authentication and watermarking.

“President Biden believes that we have an obligation to harness the power of AI for good, while protecting people from its potentially profound risks,” a senior administration official said at the time.

But lawmakers have largely been left scrambling to try to regulate the industry as it charges ahead with new developments.

Shamaine Daniels, a Democratic candidate for Congress in Pennsylvania, is using an AI-powered voice tool from the startup Civox as a phone-banking tool for her campaign.

“I share everyone’s grave concerns about the possible nefarious uses of AI in politics and elsewhere. But we need to also understand and embrace the opportunities this technology represents,” Daniels said when she announced her campaign would roll out the tech.

Experts say AI could be used for good in election cycles — like informing the public what political candidates they may agree with on issues and helping election officials clean up voter lists to identify duplicate registrations.

But they also warn the tech could worsen problems exposed during the 2016 and 2020 cycles.

Bryant said AI could help disinformation “micro-target” users even further than social media has already been able to do. She said no one is immune from this, pointing to how ads on a platform like Instagram already can influence behavior.

“It really has helped to take this misinformation and really pinpoint what kinds of messages, based on past online behavior, really resonate and work with individuals,” she said.

Bueno de Mesquita said he is not as concerned about micro-targeting from campaigns to manipulate voters, because evidence has shown that social media targeting has not been effective enough to influence elections. Resources should be focused on educating the public about the “information environment” and pointing them to authoritative information, he said.

Nicole Schneidman, a technology policy advocate at the nonprofit watchdog group Protect Democracy, said the organization does not expect AI to produce “novel threats” for the 2024 election but rather potential acceleration of trends that are already affecting election integrity and democracy.

She said a risk exists of overemphasizing the potential of AI in a broader landscape of disinformation affecting the election.

“Certainly, the technology could be used in creative and novel ways, but what underlies those applications are all threats like disinformation campaigns or cyberattacks that we’ve seen before,” Schneidman said. “We should be focusing on mitigation strategies that we know that are responsive to those threats that are amplified, as opposed to spending too much time trying to anticipate every use case of the technology.”

A key solution to grappling with the rapidly developing technology could just be getting users in front of it.

“The best way to become AI literate myself is to spend half an hour playing with the chat bot,” said Bueno de Mesquita.

Respondents in the U Chicago Harris/AP-NORC poll who reported being more familiar with AI tools were also more likely to say use of the tech could increase the spread of misinformation, suggesting awareness of what the tech can do can increase awareness of its risks.

“I think the good news is that we have strategies both old and new to really bring to the fore here,” Schneidman said.

She said as AI becomes more sophisticated, detection technology may have trouble keeping up despite investments in those tools. Instead, she said “pre-bunking” from election officials can be effective at informing the public before they even potentially come across AI-generated content.

Schneidman said she hopes election officials also increasingly adopt digital signatures to indicate to journalists and the public what information is coming directly from an authoritative source and what might be fake. She said these signatures could also be included in photos and videos a candidate posts to plan for deepfakes.

“Digital signatures are the proactive version of getting ahead of some of the challenges that synthetic content could pose to the caliber of the election information ecosystem,” she said.
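To make the digital-signature idea concrete, here is a minimal sketch using the third-party Python "cryptography" package; the key handling, message text, and printed strings are illustrative assumptions, not anything an actual election office uses.

    # Sketch: an office signs a statement with a private key; anyone holding the
    # published public key can check that the statement is authentic and unmodified.
    # Requires the third-party "cryptography" package (pip install cryptography).
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    statement = b"Polls in Example County close at 8:00 PM."  # illustrative text

    private_key = Ed25519PrivateKey.generate()  # kept secret by the issuing office
    public_key = private_key.public_key()       # published for journalists and voters

    signature = private_key.sign(statement)

    try:
        public_key.verify(signature, statement)
        print("Signature valid: statement came from this office and was not altered.")
    except InvalidSignature:
        print("Signature invalid.")

    # Changing even one character breaks verification:
    try:
        public_key.verify(signature, statement.replace(b"8:00", b"6:00"))
    except InvalidSignature:
        print("Altered statement rejected.")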

She said election officials, political leaders and journalists can get information people need about when and how to vote so they are not confused and voter suppression is limited. She added that narratives surrounding interference in elections are not new, which gives those fighting disinformation from AI content an advantage.

“The advantages that pre-bunking gives us is crafting effective counter messaging that anticipates recurring disinformation narratives and hopefully getting that in the hands and in front of the eyes of voters far in advance of the election, consistently ensuring that message is landing with voters so that they are getting the authoritative information that they need,” Schneidman said.



Nobody Really Cares About "AI Safety" -- Lucas Ropek


胡卜凱

This article analyzes the OpenAI farce (see the previous two reports/commentaries in this thread) from the angle that "profit-making" and "(technical) safety" are mutually exclusive. The piece nonetheless has a serious side and is worth reading carefully. Readers interested in this issue should visit the original page for other AI-related news from the past week and for other readers' viewpoints.

From these three reports/commentaries/analyses I have come to understand better why AI frightens people, the worry and anxiety that "AI safety" provokes, and the practical workings of the struggle at the top of the corporate world between ideals and making money. It really is more gripping than a soap opera.


After OpenAI's Blowup, It Seems Pretty Clear That 'AI Safety' Isn't a Real Thing

As OpenAI's chaos comes to an end, AI development will never be the same.

Lucas Ropek, 11/22/23

Welcome to AI This Week, Gizmodo’s weekly roundup where we do a deep dive on what’s been happening in artificial intelligence.

Well, holy shit. As far as the tech industry goes, it’s hard to say whether there’s ever been a more shocking series of events than the ones that took place over the last several days. The palace intrigue and boardroom drama of Sam Altman’s ousting by the OpenAI board (and his victorious reinstatement earlier today) will doubtlessly go down in history as one of the most explosive episodes to ever befall Silicon Valley. That said, the long-term fallout from this gripping incident is bound to be a lot less enjoyable than the initial spectacle of it.

The “coup,” as many have referred to it, has largely been attributed to an ideological rift between Sam and the OpenAI board over the pace of technological development at the company. So, this narrative goes, the board, which is supposed to have ultimate say over the direction of the organization, was concerned about the rate at which Altman was pushing to commercialize the technology, and decided to eject him with extreme prejudice. Altman, who was backed by OpenAI’s powerful partner and funder, Microsoft, as well as a majority of the startup’s staff, subsequently led a counter-coup, pushing out the traitors and re-instating himself as the leader of the company.

So much of the drama of the episode seems to revolve around this argument between Altman and the board over “AI safety.” Indeed, this fraught chapter in the company’s history seems like a flare-up of OpenAI’s two opposing personalities—one based around research and responsible technological development, and the other based around making shitloads of money. One side decidedly overpowered the other (hint: it was the money side).

Other writers have already offered breakdowns of how OpenAI’s unique organizational structure seems to have set it on a collision course with itself. Maybe you’ve seen the startup’s org chart floating around the web but, in case you haven’t, here’s a quick recap: Unlike pretty much every other technology business that exists, OpenAI is actually a non-profit, governed wholly by its board, that operates and controls a for-profit company. This design is supposed to prioritize the organization’s mission of pursuing the public good over money. OpenAI’s own self-description promotes this idealistic notion—that its main aim is to make the world a better place, not make money:

We designed OpenAI’s structure—a partnership between our original Nonprofit and a new capped profit arm—as a chassis for OpenAI’s mission: to build artificial general intelligence (AGI) that is safe and benefits all of humanity.

Indeed, the board’s charter owes its allegiance to “humanity,” not to its shareholders. So, despite the fact that Microsoft has poured a megaton of money and resources into OpenAI, the startup’s board is still (hypothetically) supposed to have final say over what happens with its products and technology. That said, the company part of the organization is reported to be worth tens of billions of dollars. As many have already noted, the organization’s ethical mission seems to have come directly into conflict with the economic interests of those who had invested in the organization. As per usual, the money won.

All of this said, you could make the case that we shouldn’t fully endorse this interpretation of the weekend’s events yet, since the actual reasons for Altman’s ousting have still not been made public. For the most part, members of the company either aren’t talking about the reasons Sam was pushed out or have flatly denied that his ousting had anything to do with AI safety. Alternate theories have swirled in the meantime, with some suggesting that the real reasons for Altman’s aggressive exit were decidedly more colorful—like accusations he pursued additional funding via autocratic Mideast regimes.

But to get too bogged down in speculating about the specific catalysts for OpenAI’s drama is to ignore what the whole episode has revealed: as far as the real world is concerned, “AI safety” in Silicon Valley is pretty much null and void. Indeed, we now know that despite its supposedly bullet-proof organizational structure and its stated mission of responsible AI development, OpenAI was never going to be allowed to actually put ethics before money.

To be clear, AI safety is a really important field, and, were it to be actually practiced by corporate America, that would be one thing. That said, the version of it that existed at OpenAI—arguably one of the companies that has done the most to pursue a “safety” oriented model—doesn’t seem to have been much of a match for the realpolitik machinations of the tech industry. In even more frank terms, the folks who were supposed to be defending us from runaway AI (i.e., the board members)—the ones who were ordained with responsible stewardship over this powerful technology—don’t seem to have known what they were doing. They don’t seem to have understood that Sam had all the industry connections, the friends in high places, was well-liked, and that moving against him in a world where that kind of social capital is everything amounted to career suicide. If you come at the king, you best not miss.

In short: If the point of corporate AI safety is to protect humanity from runaway AI, then, as an effective strategy for doing that, it has effectively just flunked its first big test. That’s because it’s sorta hard to put your faith in a group of people who weren’t even capable of predicting the very predictable outcome that would occur when they fired their boss. How, exactly, can such a group be trusted with overseeing a supposedly “super-intelligent,” world-shattering technology? If you can’t outfox a gaggle of outraged investors, then you probably can’t outfox the Skynet-type entity you claim to be building. That said, I would argue we also can’t trust the craven, money-obsessed C-suite that has now reasserted its dominance. Imo, they’re obviously not going to do the right thing. So, effectively, humanity is stuck between a rock and a hard place.

As the conflict from the OpenAI dustup settles, it seems like the company is well positioned to get back to business as usual. After jettisoning the only two women on its board, the company added fiscal goon Larry Summers. Altman is back at the company (as is former company president Greg Brockman, who stepped down in solidarity with Altman), and Microsoft’s top executive, Satya Nadella, has said that he is “encouraged by the changes to OpenAI board” and said it’s a “first essential step on a path to more stable, well-informed, and effective governance.”

With the board’s failure, it seems clear that OpenAI’s do-gooders may have not only set back their own “safety” mission, but might have also kicked off a backlash against the AI ethics movement writ large. Case in point: This weekend’s drama seems to have further radicalized an already pretty radical anti-safety ideology that had been circulating the business. The “effective accelerationists” (abbreviated “e/acc”) believe that stuff like additional government regulations, “tech ethics” and “AI safety” are all cumbersome obstacles to true technological development and exponential profit. Over the weekend, as the narrative about “AI safety” emerged, some of the more fervent adherents of this belief system took to X to decry what they perceived to be an attack on the true victim of the episode (capitalism, of course).

To some degree, the whole point of the tech industry’s embrace of “ethics” and “safety” is about reassurance. Companies realize that the technologies they are selling can be disconcerting and disruptive; they want to reassure the public that they’re doing their best to protect consumers and society. At the end of the day, though, we now know there’s no reason to believe that those efforts will ever make a difference if the company’s “ethics” end up conflicting with its money. And when have those two things ever not conflicted?

The AI Research Breakthrough Behind the OpenAI Earthquake -- Reuters


胡卜凱

The article below reports in detail on the background to the "threat thesis" in the previous post in this thread. Let me state again: I am an idiot in the field of AI. But if the argument of that "threat" piece rests on the "breakthrough" reported below, then by my own logic and common sense I would say its author is overstating the case.


OpenAI researchers warned board of AI breakthrough ahead of CEO ouster, sources say

Anna Tong, Jeffrey Dastin and Krystal Hu, 11/22/23

Nov 22 (Reuters) - Ahead of OpenAI CEO Sam Altman’s four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters.

The previously unreported letter and AI algorithm were key developments before the board's ouster of Altman, the poster child of generative AI, the two sources said. Prior to his triumphant return late Tuesday, more than 700 employees had threatened to quit and join backer Microsoft (MSFT.O) in solidarity with their fired leader.

The sources cited the letter as one factor among a longer list of grievances by the board leading to Altman's firing, among which were concerns over commercializing advances before understanding the consequences. Reuters was unable to review a copy of the letter. The staff who wrote the letter did not respond to requests for comment.

After being contacted by Reuters, OpenAI, which declined to comment, acknowledged in an internal message to staffers a project called Q* and a letter to the board before the weekend's events, one of the people said. An OpenAI spokesperson said that the message, sent by long-time executive Mira Murati, alerted staff to certain media stories without commenting on their accuracy.

Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup's search for what's known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks.

Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because the individual was not authorized to speak on behalf of the company. Though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*’s future success, the source said.

Reuters could not independently verify the capabilities of Q* claimed by the researchers.

Researchers consider math to be a frontier of generative AI development. Currently, generative AI is good at writing and language translation by statistically predicting the next word, and answers to the same question can vary widely. But conquering the ability to do math — where there is only one right answer — implies AI would have greater reasoning capabilities resembling human intelligence. This could be applied to novel scientific research, for instance, AI researchers believe.
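As a rough illustration of "statistically predicting the next word," and of why "answers to the same question can vary widely," here is a toy sketch; the tiny probability table and the word choices are invented for illustration and are not how any production model actually works.

    import random

    # Toy "predict the next word" model: each context word maps to candidate next
    # words with made-up probabilities. A real LLM computes such a distribution over
    # tens of thousands of tokens with a neural network; these numbers are invented.
    TOY_MODEL = {
        "the":    [("cat", 0.5), ("answer", 0.3), ("war", 0.2)],
        "cat":    [("sat", 0.7), ("ran", 0.3)],
        "answer": [("is", 1.0)],
        "war":    [("ended", 1.0)],
    }

    def next_word(context, greedy=False):
        candidates = TOY_MODEL.get(context, [("<end>", 1.0)])
        if greedy:  # always pick the single most likely word
            return max(candidates, key=lambda wp: wp[1])[0]
        words, probs = zip(*candidates)  # sample in proportion to probability
        return random.choices(words, weights=probs, k=1)[0]

    def generate(start, steps=3, greedy=False):
        out = [start]
        for _ in range(steps):
            word = next_word(out[-1], greedy)
            if word == "<end>":
                break
            out.append(word)
        return " ".join(out)

    print(generate("the"))               # sampled: output varies from run to run
    print(generate("the", greedy=True))  # greedy: always "the cat sat"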

Unlike a calculator that can solve a limited number of operations, AGI can generalize, learn and comprehend.

In their letter to the board, researchers flagged AI’s prowess and potential danger, the sources said without specifying the exact safety concerns noted in the letter. There has long been discussion among computer scientists about the danger posed by highly intelligent machines, for instance if they might decide that the destruction of humanity was in their interest.

Researchers have also flagged work by an "AI scientist" team, the existence of which multiple sources confirmed. The group, formed by combining earlier "Code Gen" and "Math Gen" teams, was exploring how to optimize existing AI models to improve their reasoning and eventually perform scientific work, one of the people said.

Altman led efforts to make ChatGPT one of the fastest growing software applications in history and drew investment - and computing resources - necessary from Microsoft to get closer to AGI.

In addition to announcing a slew of new tools in a demonstration this month, Altman last week teased at a summit of world leaders in San Francisco that he believed major advances were in sight.

"Four times now in the history of OpenAI, the most recent time was just in the last couple weeks, I've gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honor of a lifetime," he said at the Asia-Pacific Economic Cooperation summit.

A day later, the board fired Altman.


Anna Tong and Jeffrey Dastin in San Francisco and Krystal Hu in New York; Editing by Kenneth Li and Lisa Shumaker



The OpenAI Earthquake and the Biggest Threat in the History of Humanity -- Tomas Pueyo


胡卜凱

This looks like a colossal, super-sized threat (see "Humanity May Face Its 'Terminal Moment' in 2031 -- Tim Newcomb" earlier in this thread). I am not capable of translating or commenting on it; what I can do is gather some related links for you and translate them as best I can. Readers will have to interpret and ponder the piece for themselves.

This article also answers my naive idea of "pulling the plug."


Glossary (revised 11/26; corrections welcome)

AI alignment: the goal that AI research and development must conform to human instructions, needs, and welfare
Artificial intelligence: an introductory overview of AI
Bard: a chatbot developed by Google
Bing Chat: a chatbot developed by Microsoft
Chatbot (chatterbot): a conversational program
ChatGPT: OpenAI's Chat Generative Pre-trained Transformer, a chatbot built on a pre-trained generative language model
foom (like the word boom): the moment an AI reaches breakthrough capability; a sudden increase in artificial intelligence such that an AI system becomes extremely powerful
FOOM: the process and steps by which an AI reaches that breakthrough capability
OpenAI: the organization's name, rendered in Chinese as 開放人工智慧研究中心 ("open AI research center")


OpenAI and the Biggest Threat in the History of Humanity

We don’t know how to contain or align a FOOMing AGI

TOMAS PUEYO, Uncharted Territories, 11/22/23

Last weekend, there was massive drama at the board of OpenAI, the non-profit/company that makes ChatGPT, which has grown from nothing to $1B revenue per year in a matter of months.

Sam Altman, the CEO of the company, was fired by the board of the non-profit arm. The president, Greg Brockman, stepped down immediately after learning Altman had been let go. 

Satya Nadella, the CEO of Microsoft—who owns 49% of the OpenAI company—told OpenAI he still believed in the company, while hiring Greg and Sam on the spot for Microsoft, and giving them free rein to hire and spend as much as they needed, which will likely include the vast majority of OpenAI employees.

This drama, worthy of the show Succession, is at the heart of the most important problem in the history of humanity.

Board members seldom fire CEOs, because founding CEOs are the single most powerful force of a company. If that company is a rocketship like OpenAI, worth $80B, you don’t touch it. So why did the OpenAI board fire Sam? This is what they said:

No standard startup board member cares about this in a rocketship. But OpenAI’s board is not standard. In fact, it was designed to do exactly what it did. This is the board structure of OpenAI: 

To simplify this, let’s focus on who owns OpenAI the company, at the bottom (Global LLC):

OpenAI the charity has a big ownership of the company.
Some employees and investors also do.
And Microsoft owns 49% of it.

Everything here is normal, except for the charity at the top. What is it, and what does it do?

OpenAI the charity is structured to not make a profit because it has a specific goal that is not financial: To make sure that humanity, and everything in the observable universe, doesn’t disappear.

What is that humongous threat? The impossibility of containing a misaligned, FOOMing AGI. What does that mean? (Skip this next section if you understand that sentence fully.)

FOOM AGI Can’t Be Contained

AGI FOOM

AGI is Artificial General Intelligence: a machine that can do nearly anything any human can do: anything mental, and through robots, anything physical. This includes deciding what it wants to do and then executing it, with the thoughtfulness of a human, at the speed and precision of a machine

Here’s the issue: If you can do anything that a human can do, that includes working on computer engineering to improve yourself. And since you’re a machine, you can do it at the speed and precision of a machine, not a human. You don’t need to go to pee, sleep, or eat. You can create 50 versions of yourself, and have them talk to each other not with words, but with data flows that go thousands of times faster. So in a matter of days—maybe hours, or seconds—you will not be as intelligent as a human anymore, but slightly more intelligent. Since you’re more intelligent, you can improve yourself slightly faster, and become even more intelligent. The more you improve yourself, the faster you improve yourself. Within a few cycles, you develop the intelligence of a God

This is the FOOM process: The moment an AI reaches a level close to AGI, it will be able to improve itself so fast that it will pass our intelligence quickly, and become extremely intelligent. Once FOOM happens, we will reach the singularity: a moment when so many things change so fast that we can’t predict what will happen beyond that point.
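A purely illustrative toy model of that feedback loop follows (all numbers are invented; this is not a claim about any real system): if each self-improvement cycle raises capability in proportion to current capability, growth is faster than exponential, and the jump from "slightly above human" to "far beyond human" takes only a handful of cycles.

    # Toy model of the FOOM feedback loop described above. The growth parameter and
    # the cutoff threshold are invented for illustration only.
    def foom_toy(capability=1.0, gain=0.1, max_cycles=40):
        """Each cycle, the system improves itself in proportion to its current capability."""
        for cycle in range(1, max_cycles + 1):
            capability *= 1.0 + gain * capability
            print(f"cycle {cycle:2d}: capability = {capability:,.1f}x human level")
            if capability > 1e6:
                print("runaway: each new cycle now dwarfs all previous progress combined")
                break

    foom_toy()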

Here’s an example of this process in action in the past:

Alpha Zero blew past all accumulated human knowledge about Go after a day or so of self-play, with no reliance on human playbooks or sample games.

Alpha Zero learned by playing against itself, and that experience was enough for it to play better than any human who ever played, and better than all previous iterations of Alpha Go. The idea is that an AGI could do the same with general intelligence.

Here’s another example: A year ago, Google’s DeepMind found a new, more efficient way to multiply matrices. Matrix multiplication is a very fundamental process in all computer processing, and humans had not found a new solution to this problem in 50 years.
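For context on why "a new, more efficient way to multiply matrices" matters: the classical result in this direction is Strassen's 1969 trick, which multiplies two 2x2 blocks with 7 multiplications instead of the naive 8, and DeepMind's search looked for decompositions of this kind with even fewer multiplications for larger blocks. The sketch below shows only the well-known 2x2 Strassen step on plain numbers; it is not DeepMind's algorithm itself.

    # Strassen's 2x2 step: 7 multiplications instead of the naive 8. Applied
    # recursively to matrix blocks, this is what gives sub-cubic matrix
    # multiplication; DeepMind's search explored decompositions of the same kind.
    def strassen_2x2(A, B):
        (a11, a12), (a21, a22) = A
        (b11, b12), (b21, b22) = B
        m1 = (a11 + a22) * (b11 + b22)
        m2 = (a21 + a22) * b11
        m3 = a11 * (b12 - b22)
        m4 = a22 * (b21 - b11)
        m5 = (a11 + a12) * b22
        m6 = (a21 - a11) * (b11 + b12)
        m7 = (a12 - a22) * (b21 + b22)
        return [[m1 + m4 - m5 + m7, m3 + m5],
                [m2 + m4, m1 - m2 + m3 + m6]]

    print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]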

Do people think an AGI FOOM is possible? Bets vary. On Metaculus, people opine that the process would take nearly two years from weak AI to superintelligence. Others think it might be a matter of hours.

Note that Weak AI has many different definitions and nobody is clear what it means. Generally, it means it’s human-level good for one narrow type of task. So it makes sense that it would take 22 months to go from that to AGI, because maybe that narrow task has nothing to do with self-improvement. The key here is self-improvement. I fear the moment an AI reaches human-level ability to self-improve, it will become superintelligent in a matter of hours, days, or weeks. If we’re lucky, months. Not years.

Maybe this is good? Why would we want to stop this runaway intelligence improvement?

Misaligned Paperclips

This idea was first illustrated by Nick Bostrom:

Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.

Easy: Tell the AGI to optimize for things that humans like before it becomes an AGI? This is called alignment, and so far it has proved impossible.

Not all humans want the same things. We’ve been at war for thousands of years. We still debate moral issues on a daily basis. We just don’t know what it is that we want, so how could we make a machine know that?  

Even if we could, what would prevent the AGI from changing its goals? Indeed, we might be telling it “please humans to get 10 points”, but if it can tinker with itself, it could change that rule to anything else, and all bets are off. So alignment is hard. 

What happens with an AGI that is not fully aligned? Put yourself in the shoes of a god-like AGI. What are some of the first things you would do, if you have any goal that is not exactly “do what’s best for all humans as a whole”?

You would cancel the ability of any potential enemy to stop you, because that would jeopardize your mission the most. So you would probably create a virus that preempts any other AGI that appears. Since you’re like a god, this would be easy. You’d infect all computers in the world in a way that can’t be detected.

The other big potential obstacle to reaching your objectives might be humans shutting you down. So you will quickly take this ability away from humans. This might be by spreading over the Internet, creating physical instances of yourself, or simply eliminating all humans. Neutralizing humans would probably be at the top of the priority list of an AGI the moment it reaches AGI.

Of course, since an AGI is not dumb, she would know that appearing too intelligent or self-improving too fast would be perceived by humans as threatening. So she would have all the incentives to appear dumb and hide her intelligence and self-improvement. Humans wouldn’t notice she’s intelligent until it’s too late.

If that sounds weird, think about all the times you’ve talked with an AI and it has lied to you (the politically correct word is “hallucinate”). Or when a simulated AI committed insider trading and lied about it. And these are not very intelligent AIs! It is very possible that an AI would lie to be undetected and reach AGI status.

So shutting down an AGI after it escapes is impossible, and shutting it down before might be too hard because we wouldn’t know it’s superintelligent. Whether it is making paperclips, or solving the Riemann hypothesis, or any other goal, neutralizing humans and other computers would be a top priority, and seeming dumb before developing the capacity to achieve that would be a cornerstone of the AGI’s strategy.

This concept is called instrumental convergence: whatever terminal goal you optimize for, you converge on the same intermediate goals, such as acquiring resources and fending off threats.
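
A toy calculation (mine, not the author's; all numbers invented) makes the convergence concrete: for any terminal goal, lowering the probability of being shut down raises the expected progress toward that goal, so "prevent shutdown" is instrumentally useful no matter what the goal is.

# Toy illustration of instrumental convergence.
def expected_progress(p_shutdown, progress_if_left_running):
    # If the agent is shut down, it makes no further progress toward its goal.
    return (1 - p_shutdown) * progress_if_left_running

goals = {"make paperclips": 1_000_000, "prove the Riemann hypothesis": 1}

for goal, payoff in goals.items():
    with_off_switch = expected_progress(0.3, payoff)     # humans might pull the plug
    without_off_switch = expected_progress(0.0, payoff)  # the agent disabled the switch
    # Whatever the goal, disabling the off-switch never lowers expected progress.
    assert without_off_switch >= with_off_switch
    print(goal, with_off_switch, without_off_switch)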

OK, so we want to catch an AI that is becoming intelligent fast, even if it tries to lie to us. This sounds easy, doesn’t it? Let’s just contain it.

Except you can’t.

The Problem of Containment

In Ex Machina (SPOILERS AHEAD), a tech visionary invites an engineer to his complex to interact with a contained AI. The AI is beautiful, sensitive, delicate, intelligent, curious, and falls in love with the engineer. 

She then shares how the evil tech visionary is keeping her miserably contained in a box. Eventually, the engineer helps free her, at which point she locks him in, kills her creator, and escapes.

—END OF SPOILERS—

This is close to what happens in nearly every scenario ever imagined for containing an AGI. If it has access to the Internet, it can spread without limit, so it should not be given access.2 Even without Internet access, it could generate electromagnetic waves in patterns that hack into other systems that are connected to the Internet.

If that sounds crazy, consider that there is malware that can communicate through the vibrations of a computer's fan:

Lead cyber-security researcher Mordechai Guri at Ben-Gurion University of the Negev said data encoded by hackers into fan vibrations could be transmitted to a smartphone placed in the vicinity of the targeted computer.
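
As a conceptual sketch only (this is not the researchers' actual technique; the RPM values and timing below are invented), the idea behind such a covert channel is simply to map bits onto two physically distinguishable states, here two fan-speed setpoints held for fixed time slots, which a nearby sensor could sample and decode. The point is not the specific mechanism but the principle: any physical side effect an attacker can modulate and a nearby device can measure is, in principle, a network link.

# Conceptual sketch of a covert physical channel; numbers are illustrative.
LOW_RPM, HIGH_RPM = 1000, 3000   # two distinguishable fan speeds (assumed)
SLOT_SECONDS = 2                 # how long each bit is held (assumed)

def encode(message: bytes):
    """Map bytes to a schedule of (rpm, seconds) pairs: 0 -> low, 1 -> high."""
    schedule = []
    for byte in message:
        for i in range(8):
            bit = (byte >> (7 - i)) & 1
            schedule.append((HIGH_RPM if bit else LOW_RPM, SLOT_SECONDS))
    return schedule

def decode(schedule):
    """Invert the mapping: recover the bytes from the observed speeds."""
    bits = [1 if rpm == HIGH_RPM else 0 for rpm, _ in schedule]
    out = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        out.append(byte)
    return bytes(out)

assert decode(encode(b"hi")) == b"hi"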

So you would need the AGI to be isolated from every other electronic device, and even from the air itself: kept in a vacuum inside a Faraday cage3 in a storage room, with no electronics anywhere near it. No cable should go in or out of the cage. And this assumes there is no other physical way an AGI could spread, which we cannot rule out, since we do not know all the laws of physics; an AGI might figure them out.

Even in a scenario where we could physically contain the AGI, humans would still have to interact with it; otherwise, who would use and study it? They would have to enter the box where the AGI is kept, and at that point they become the weak link in the chain: the AGI would hitchhike out on the human.

You might imagine that an intelligent engineer, trained to be very careful about not carrying an AGI outside, would be able to fend off any escape attempt. Experience says otherwise.

In real-life role-playing experiments, humans posing as AGIs have repeatedly talked gatekeepers, whose one job was to keep the "AGI" inside, into letting them out. This makes sense: an AGI would be so intelligent that we would be like ants to it, and manipulating us would be easy. Even casual conversation with a human guard could let an AGI deploy psychological tricks, from befriending to blackmail, to convince him that it is in his interest to let the AI loose. The AI might offer the gatekeeper a recipe for perfect health or immortality, or threaten to do horrific things to him and his family once it inevitably escapes.

For example, a Google AI engineer (the type of person you would expect to be mindful of this kind of problem), working on LaMDA, an LLM (Large Language Model, the family of AI that ChatGPT belongs to) more basic than ChatGPT, became convinced that it had reached consciousness and tried to get it legal rights.

So this is the fear:

*  An AGI could become very intelligent very fast.
*  Being so intelligent, it would be impossible to contain it.
*  Once it is loose, it has a strong incentive to neutralize humans in order to optimize whatever its goal is.
*  The only way out of this is making sure this AI wants exactly the same thing as humans, but we have no idea how to achieve that.

Not only do we need to figure out alignment, but we need it on our first attempt. We can’t learn from our mistakes, because the very first time an AI reaches superintelligence is likely to be the last time too. We must solve a problem we don’t understand, which we’ve been dealing with for thousands of years without a solution, and we need to do it in the coming years before AGI appears.

Also, we need to do it quickly, because AGI is approaching. People think we will be able to build self-improving AIs within 3 years:

What follows in the original article is:

Predictions from Metaculus, where experts bet on outcomes of specific events. They tend to reasonably reflect what humans know at the time, just like stock markets reflect the public knowledge about a company’s value.

I will not repost it here; interested readers can go to the original page to read the rest of the article and the readers' comments.
 
Footnotes to the article:

1  I find it useful to anthropomorphize it, and since the word for intelligence is feminine in both French and Spanish, I imagine AGI as female. Most AI voices are female; among other things, that is because people prefer listening to female voices.
2  It could be created without the Internet by downloading gazillions of petabytes of Internet data and training the AI with it.
3  This is a cage that prevents any electromagnetic signal from going in or out.


After Reading "The Outcome of a War Between Humanity and AI"

胡卜凱

This article (the previous post in this column) was published at the end of September this year; I never found time to read it carefully, so I did not introduce it. In recent days the issue of AI "safety", or the AI "threat", has again made front-page headlines (the three pieces below), so I am reposting it here.

The article's title alludes to a US Department of Defense air-combat simulation in which the "AI pilot" crushed the "human pilot" 15 to 0.

The article's main point is that AI's "effectiveness" in "decision-making" far exceeds that of humans; consequently, whoever first applies AI to battlefield command and control will truly be able to "secure victory from a thousand miles away"!

Interestingly, the article's subtitle mentions my idea of "pulling the plug" to deal with AI (see the opening post of this column); unfortunately, the author does not directly assess its feasibility. Judging from his tone, I believe his answer is: "It won't work"!

The Outcome of a War Between Humanity and AI -- Stephen Kelly

胡卜凱

Here's how a war between AI and humanity would actually end

There’s no need to worry about a robot uprising. We can always just pull the plug, right…? RIGHT?

Stephen Kelly, BBC Science Focus, 09/29/23

 New science-fiction movie The Creator imagines a future in which humanity is at war with artificial intelligence (AI). Hardly a novel concept for sci-fi, but the key difference here – as opposed to, say, The Terminator – is that it arrives at a time when the prospect is starting to feel more like science fact than fiction.  

The last few months, for instance, have seen numerous warnings about the ‘existential threat’ posed by AI. For not only could it one day write this column better than I can (unlikely, I’m sure you’ll agree), but it could also lead to frightening developments in warfare – developments that could spiral out of control.

The most obvious concern is a future in which AI is used to autonomously operate weaponry in place of humans. 
Paul Scharre, author of Four Battlegrounds: Power in the Age of Artificial Intelligence, and vice president of the Center for a New American Security, cites the recent example of DARPA’s (the Defense Advanced Research Projects Agency) AlphaDogfight challenge – an aerial simulator that pitted a human pilot against an AI.

“Not only did the AI crush the pilot 15 to zero,” says Scharre, “but it made moves that humans can’t make; specifically, very high-precision, split-second gunshots.”

Yet the prospect of giving AI the power to make life or death decisions raises uncomfortable questions. For instance, what would happen if an AI made a mistake and accidentally killed a civilian? “That would be a war crime,” says Scharre. “And the difficulty is that there might not be anyone to hold accountable.”

In the near future, however, the most likely use of AI in warfare will be in tactics and analysis. “AI can help process information better and make militaries more efficient,” says Scharre.

“I think militaries are going to feel compelled to turn over more and more decision-making to AI, because the military is a ruthlessly competitive environment. If there’s an advantage to be gained, and your adversary takes it and you don’t, you’re at a huge disadvantage.” This, says Scharre, could lead to an AI arms race, akin to the one for nuclear weapons.

 “Some Chinese scholars have hypothesised about a singularity on the battlefield,” he says. “[That’s the] point when the pace of AI-driven decision-making eclipses the speed of a human’s ability to understand and humans effectively have to turn over the keys to autonomous systems to make decisions on the battlefield.”

Of course, in such a scenario, it doesn’t feel impossible for us to lose control of that AI – or even for it to turn against us. Hence why it’s US policy that humans are always in the loop regarding any decision to use nuclear weapons.

“But we haven’t seen anything similar from countries like Russia and China,” says Scharre. “So, it’s an area where there’s valid concern.” If the worst was to happen, and an AI did declare war, Scharre is not optimistic about our chances.

“I mean, could chimpanzees win a war against humans?” he says, laughing. “Top chess-playing AIs aren’t just as good as grandmasters; the top grandmasters can’t remotely compete with them. And that happened pretty quickly. It’s only five years ago that that wasn’t the case.

“We’re building increasingly powerful AI systems that we don’t understand and can’t control, and are deploying them in the real world. I think if we’re actually able to build machines that are smarter than us, then we’ll have a lot of problems.”

About our expert, Paul Scharre

Scharre is the Executive Vice President and Director of Studies at the Center for a New American Security (CNAS). He has written multiple books on the topic of artificial intelligence and warfare and was named one of the 100 most influential people in AI in 2023 by TIME magazine.



Humanity May Face the "Singularity" in 2031 -- Tim Newcomb

胡卜凱

The "singularity" (終結時刻): the point in time at which artificial-intelligence machines escape human control and acquire autonomous capability.


A Scientist Says the Singularity Will Happen by 2031

Maybe even sooner. Are you ready?

Tim Newcomb, 11/09/23

*  “The singularity,” the moment where AI is no longer under human control, is less than a decade away—according to one AI expert.
*  More resources than ever are being poured into the pursuit of artificial general intelligence and speeding the growth of AI.
*  Development of AI is also coming from a variety of sectors, pushing the technology forward faster than ever before.

There’s at least one expert who believes that “the singularity”—the moment when artificial intelligence surpasses the control of humans—could be just a few years away. That’s a lot shorter than current predictions regarding the timeline of AI dominance, especially considering that AI dominance is not exactly guaranteed in the first place.

Ben Goertzel, CEO of SingularityNET—who holds a Ph.D. from Temple University and has worked as a leader of Humanity+ and the Artificial General Intelligence Society—told Decrypt that he believes artificial general intelligence (AGI) is three to eight years away. AGI is the term for AI that can truly perform tasks just as well as humans, and it’s a prerequisite for the singularity soon following.

Whether you believe him or not, there’s no sign of the AI push slowing down any time soon. Large language models from the likes of Meta and OpenAI, along with the AGI focus of Elon Musk’s xAI, are all pushing hard towards growing AI.

“These systems have greatly increased the enthusiasm of the world for AGI,” Goertzel told Decrypt, “so you’ll have more resources, both money and just human energy—more smart young people want to plunge into work and working on AGI.”

When the concept of AI first emerged—as early as the 1950s—Goertzel says that its development was driven by the United States military and seen primarily as a potential national defense tool. Recently, however, progress in the field has been propelled by a variety of drivers with a variety of motives. “Now the ‘why’ is making money for companies,” he says, “but also interestingly, for artists or musicians, it gives you cool tools to play with.”

Getting to the singularity, though, will require a significant leap from the current point of AI development. While today’s AI typically focuses on specific tasks, the push towards AGI is intended to give the technology a more human-like understanding of the world and open up its abilities. As AI continues to broaden its understanding, it steadily moves closer to AGI—which some say is just one step away from the singularity.

The technology isn’t there yet, and some experts caution we are truly a lot further from it than we think—if we get there at all. But the quest is underway regardless. Musk, for example, created xAI in the summer of 2023 and just recently launched the chatbot Grok to “assist humanity in its quest for understanding and knowledge,” according to Reuters. Musk also called AI “the most disruptive force in history.”

With many of the most influential tech giants—Google, Meta and Musk—pursuing the advancement of AI, the rise of AGI may be closer than it appears. Only time will tell if we will get there, and if the singularity will follow.


A Short History of AI Development - Donovan Johnson

胡卜凱

The short piece below is just a small appetizer; but for friends interested in popular science and technology, the blog it comes from is worth wandering through from time to time. For example, these three little dishes:

The Difference Between Generative AI And Traditional AI: An Easy Explanation For Anyone
How Pulsed Laser Deposition Systems are Revolutionizing the Tech Industry
Top Data Annotation Tools to Watch: Revolutionizing the Telecommunications and Internet Industries


The History of Artificial Intelligence

Donovan Johnson, 07/23/23

Artificial intelligence (AI) has a long history that dates back to ancient times. The idea of machines or devices that can imitate human behavior and intelligence has intrigued humans for centuries. However, the field of AI as we know it today began to take shape in the mid-20th century.

During World War II, researchers began to explore the possibilities of creating machines that could simulate human thinking and problem-solving. The concept of AI was formalized in 1956 when a group of researchers organized the Dartmouth Conference, where they discussed the potential of creating intelligent machines.

In the following years, AI research experienced significant advancements. Researchers developed algorithms and programming languages that could facilitate machine learning and problem-solving. They also started to build computers and software systems that could perform tasks traditionally associated with human intelligence.

One of the key milestones in AI history was the development of expert systems in the 1980s. These systems were designed to mimic the decision-making processes of human experts in specific domains. They proved to be useful in areas such as medicine and finance.

In the 1990s, AI research shifted towards probabilistic reasoning and machine learning. Scientists began to explore the potential of neural networks and genetic algorithms to create intelligent systems capable of learning from data and improving their performance over time.

Today, AI has become an integral part of our daily lives. It powers virtual assistants, recommendation systems, autonomous vehicles, and many other applications. AI continues to evolve and advance, with ongoing research in areas such as deep learning, natural language processing, and computer vision.

The history of AI is characterized by significant achievements and breakthroughs. From its early beginnings as a concept to its current status as a transformative technology, AI has come a long way. As researchers and scientists continue to push the boundaries of what is possible, we can expect even more exciting developments and applications of AI in the future.

