網路城邦 (udn City) – 時事論壇 (Current Affairs Forum)
Moderator: 胡卜凱
A Wide-Ranging Discussion of Artificial Intelligence – Opening Post

胡卜凱
Starting in April, with the launch of ChatGPT and Bing Chat, a wave of AI mania swept the internet and the various LINE groups. At the time I was mostly busy discussing 《我們的反戰聲明》 (Our Anti-War Statement) and did not join in the excitement. I am now reposting a few related articles. Please also see the post 《「人工智慧」研發現況及展望》 (The Current State and Prospects of AI R&D).

Some people worry that "artificial intelligence" will become a "machine above men," controlling the world and even enslaving humanity. I do not understand AI, and my thinking is simple; so if AI ever runs amok, I believe I have a simple way to bring it to heel:

Pull the power plug. If that is not forceful enough, blow up the power transmission lines and the emergency generators; if that still fails, blow up the power plants.


Trackback URL: https://city.udn.com/forum/trackback.jsp?no=2976&aid=7205038
AI Technology and the 2024 U.S. Election -- Julia Mueller

Fears grow over AI’s impact on the 2024 election

Julia Mueller, The Hill, 12/26/23

The rapid rise of artificial intelligence (AI) is raising concerns about how the technology could impact next year’s election as the start of 2024 primary voting nears.

AI — advanced tech that can generate text, images and audio, and even build deepfake videos — could fuel misinformation in an already polarized political landscape and further erode voter confidence in the country’s election system.

“2024 will be an AI election, much the way that 2016 or 2020 was a social media election,” said Ethan Bueno de Mesquita, interim dean at the University of Chicago Harris School of Public Policy. “We will all be learning as a society about the ways in which this is changing our politics.”

Experts are sounding alarms that AI chatbots could generate misleading information for voters if they use it to get info on ballots, calendars or polling places — and also that AI could be used more nefariously, to create and disseminate misinformation and disinformation against certain candidates or issues.

“I think it could get pretty dark,” said Lisa Bryant, chair of the Department of Political Science at California State University, Fresno and an expert with MIT’s Election lab.

Polling shows the concern about AI doesn’t just come from academics: Americans appear increasingly worried about how the tech could confuse or complicate things during the already contentious 2024 cycle.

A UChicago Harris/AP-NORC poll released in November found a bipartisan majority of U.S. adults are worried about the use of AI “increasing the spread of false information” in the 2024 election.

A Morning Consult-Axios survey found an uptick in recent months in the share of U.S. adults who said they think AI will negatively impact trust in candidate advertisements, as well as trust in the outcome of the elections overall.

Nearly 6 in 10 respondents said they think misinformation spread by AI will have an impact on who ultimately wins the 2024 presidential race.

“They are a very powerful tool for doing things like making fake videos, fake pictures, et cetera, that look extremely convincing and are extremely difficult to distinguish from reality — and that is going to be likely to be a tool in political campaigns, and already has been,” said Bueno de Mesquita, who worked on the UChicago poll.

“It’s very likely that that’s going to increase in the ‘24 election — that we’ll have fake content created by AI, whether by political campaigns or by political action committees or other actors — and that will affect the voters’ information environment and make it hard to know what’s true and false,” he said.


Over the summer, the DeSantis-aligned super PAC Never Back Down reportedly used an AI-generated version of former President Trump’s voice in a television ad.

Just ahead of the third Republican presidential debate, former President Trump’s campaign released a video clip that appeared to imitate the voices of his fellow GOP candidates, introducing themselves by Trump’s favored nicknames.

And earlier this month, the Trump campaign posted an altered version of a report that NBC News’s Garrett Haake gave before the third GOP debate. The clip starts unaltered with Haake’s report but has a voiceover take over, criticizing the former president’s Republican rivals.

“The danger is there, and I think it’s almost unimaginable that we won’t have deepfake videos or whatever as part of our politics going forward,” Bueno de Mesquita said.

The use of AI by political campaigns in particular has prompted tech companies and government officials to consider regulations on the tech.

Google earlier this year announced it would require verified election advertisers to “prominently disclose” when their ads had been digitally generated or altered.

Meta also plans to require disclosure when a political ad uses “photorealistic image or video, or realistic-sounding audio” that was generated or altered to, among other purposes, depict a real person doing or saying something they did not do.

President Biden issued an executive order on AI in October, including new standards for safety and plans for the Commerce Department to craft guidelines on content authentication and watermarking.

“President Biden believes that we have an obligation to harness the power of AI for good, while protecting people from its potentially profound risks,” a senior administration official said at the time.

But lawmakers have largely been left scrambling to try to regulate the industry as it charges ahead with new developments.

Shamaine Daniels, a Democratic candidate for Congress in Pennsylvania, is using an AI-powered voice tool from the startup Civox as a phone-banking tool for her campaign.

“I share everyone’s grave concerns about the possible nefarious uses of AI in politics and elsewhere. But we need to also understand and embrace the opportunities this technology represents,” Daniels said when she announced her campaign would roll out the tech.

Experts say AI could be used for good in election cycles — like informing the public what political candidates they may agree with on issues and helping election officials clean up voter lists to identify duplicate registrations.

But they also warn the tech could worsen problems exposed during the 2016 and 2020 cycles.

Bryant said AI could help disinformation “micro-target” users even further than what social media has already been able to. She said no one is immune from this, pointing to how ads on a platform like Instagram already can influence behavior.

“It really has helped to take this misinformation and really pinpoint what kinds of messages, based on past online behavior, really resonate and work with individuals,” she said.

Bueno de Mesquita said he is not as concerned about micro-targeting from campaigns to manipulate voters, because evidence has shown that social media targeting has not been effective enough to influence elections. Resources should be focused on educating the public about the “information environment” and pointing them to authoritative information, he said.

Nicole Schneidman, a technology policy advocate at the nonprofit watchdog group Protect Democracy, said the organization does not expect AI to produce “novel threats” for the 2024 election but rather potential acceleration of trends that are already affecting election integrity and democracy.

She said a risk exists of overemphasizing the potential of AI in a broader landscape of disinformation affecting the election.

“Certainly, the technology could be used in creative and novel ways, but what underlies those applications are all threats like disinformation campaigns or cyberattacks that we’ve seen before,” Schneidman said. “We should be focusing on mitigation strategies that we know that are responsive to those threats that are amplified, as opposed to spending too much time trying to anticipate every use case of the technology.”

A key solution to grappling with the rapidly developing technology could just be getting users in front of it.

“The best way to become AI literate myself is to spend half an hour playing with the chat bot,” said Bueno de Mesquita.

Respondents in the UChicago Harris/AP-NORC poll who reported being more familiar with AI tools were also more likely to say use of the tech could increase the spread of misinformation, suggesting awareness of what the tech can do can increase awareness of its risks.

“I think the good news is that we have strategies both old and new to really bring to the fore here,” Schneidman said.

She said as AI becomes more sophisticated, detection technology may have trouble keeping up despite investments in those tools. Instead, she said “pre-bunking” from election officials can be effective at informing the public before they even potentially come across AI-generated content.

Schneidman said she hopes election officials also increasingly adopt digital signatures to indicate to journalists and the public what information is coming directly from an authoritative source and what might be fake. She said these signatures could also be included in photos and videos a candidate posts to plan for deepfakes.

“Digital signatures are the proactive version of getting ahead of some of the challenges that synthetic content could pose to the caliber of the election information ecosystem,” she said.
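The sign-then-verify flow behind the digital signatures Schneidman describes can be sketched in a few lines of Python. This is a simplified stand-in: real election offices would use asymmetric signatures (e.g. Ed25519), where only the official holds the signing key and anyone can verify with a public key; since Python's standard library has no asymmetric crypto, an HMAC plays that role here, and the key and notice text are invented for the example.

```python
import hashlib
import hmac

SECRET_KEY = b"county-election-office-key"  # hypothetical key for this sketch

def sign(message: bytes, key: bytes = SECRET_KEY) -> str:
    """Produce a tag that only the key holder could have generated."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str, key: bytes = SECRET_KEY) -> bool:
    """Constant-time check; an altered message or forged tag fails."""
    return hmac.compare_digest(sign(message, key), tag)

notice = b"Polls open 7 a.m. to 8 p.m. on Nov. 5 at City Hall."
tag = sign(notice)
assert verify(notice, tag)                          # authentic notice verifies
assert not verify(b"Polls open 9 a.m. only.", tag)  # tampered notice fails
```

The point of the scheme is exactly what the quote says: a journalist who gets the notice and tag together can mechanically tell authoritative content from synthetic content.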

She said election officials, political leaders and journalists can get information people need about when and how to vote so they are not confused and voter suppression is limited. She added that narratives surrounding interference in elections are not new, which gives those fighting disinformation from AI content an advantage.

“The advantages that pre-bunking gives us is crafting effective counter messaging that anticipates recurring disinformation narratives and hopefully getting that in the hands and in front of the eyes of voters far in advance of the election, consistently ensuring that message is landing with voters so that they are getting the authoritative information that they need,” Schneidman said.

For the latest news, weather, sports, and streaming video, head to The Hill.


Trackback URL: https://city.udn.com/forum/trackback.jsp?no=2976&aid=7219710
Nobody Really Cares About "AI Safety" -- Lucas Ropek

This article analyzes the OpenAI farce (see the previous two reports/commentaries in this thread) from the angle that "profit-making" and "(technical) safety" are mutually exclusive. It is nonetheless a serious piece and worth reading closely. Friends interested in this issue should visit the original page for other AI-related news from the past week, as well as other readers' viewpoints.

From these three reports/commentaries/analyses I have come to understand more about why "artificial intelligence" frightens people, the worry and anxiety that "AI safety" provokes, and the practical workings of the boardroom struggle between ideals and money. It really is more gripping than a soap opera.


After OpenAI's Blowup, It Seems Pretty Clear That 'AI Safety' Isn't a Real Thing

As OpenAI's chaos comes to an end, AI development will never be the same.

Lucas Ropek, 11/22/23

Welcome to AI This Week, Gizmodo’s weekly roundup where we do a deep dive on what’s been happening in artificial intelligence.

Well, holy shit. As far as the tech industry goes, it’s hard to say whether there’s ever been a more shocking series of events than the ones that took place over the last several days. The palace intrigue and boardroom drama of Sam Altman’s ousting by the OpenAI board (and his victorious reinstatement earlier today) will doubtlessly go down in history as one of the most explosive episodes to ever befall Silicon Valley. That said, the long-term fallout from this gripping incident is bound to be a lot less enjoyable than the initial spectacle of it.

The “coup,” as many have referred to it, has largely been attributed to an ideological rift between Sam and the OpenAI board over the pace of technological development at the company. So, this narrative goes, the board, which is supposed to have ultimate say over the direction of the organization, was concerned about the rate at which Altman was pushing to commercialize the technology, and decided to eject him with extreme prejudice. Altman, who was subsequently backed by OpenAI’s powerful partner and funder, Microsoft, as well as a majority of the startup’s staff, then led a counter-coup, pushing out the traitors and re-instating himself as the leader of the company.

So much of the drama of the episode seems to revolve around this argument between Altman and the board over “AI safety.” Indeed, this fraught chapter in the company’s history seems like a flare up of OpenAI’s two opposing personalities—one based around research and responsible technological development, and the other based around making shitloads of money. One side decidedly overpowered the other (hint: it was the money side).

Other writers have already offered breakdowns of how OpenAI’s unique organizational structure seems to have set it on a collision course with itself. Maybe you’ve seen the startup’s org chart floating around the web but, in case you haven’t, here’s a quick recap: Unlike pretty much every other technology business that exists, OpenAI is actually a non-profit, governed wholly by its board, that operates and controls a for-profit company. This design is supposed to prioritize the organization’s mission of pursuing the public good over money. OpenAI’s own self-description promotes this idealistic notion—that its main aim is to make the world a better place, not make money:

We designed OpenAI’s structure—a partnership between our original Nonprofit and a new capped profit arm—as a chassis for OpenAI’s mission: to build artificial general intelligence (AGI) that is safe and benefits all of humanity.

Indeed, the board’s charter owes its allegiance to “humanity,” not to its shareholders. So, despite the fact that Microsoft has poured a megaton of money and resources into OpenAI, the startup’s board is still (hypothetically) supposed to have final say over what happens with its products and technology. That said, the company part of the organization is reported to be worth tens of billions of dollars. As many have already noted, the organization’s ethical mission seems to have come directly into conflict with the economic interests of those who had invested in the organization. As per usual, the money won.

All of this said, you could make the case that we shouldn’t fully endorse this interpretation of the weekend’s events yet, since the actual reasons for Altman’s ousting have still not been made public. For the most part, members of the company either aren’t talking about the reasons Sam was pushed out or have flatly denied that his ousting had anything to do with AI safety. Alternate theories have swirled in the meantime, with some suggesting that the real reasons for Altman’s aggressive exit were decidedly more colorful—like accusations he pursued additional funding via autocratic Mideast regimes.

But to get too bogged down in speculating about the specific catalysts for OpenAI’s drama is to ignore what the whole episode has revealed: as far as the real world is concerned, “AI safety” in Silicon Valley is pretty much null and void. Indeed, we now know that despite its supposedly bullet-proof organizational structure and its stated mission of responsible AI development, OpenAI was never going to be allowed to actually put ethics before money.

To be clear, AI safety is a really important field, and, were it to be actually practiced by corporate America, that would be one thing. That said, the version of it that existed at OpenAI—arguably one of the companies that has done the most to pursue a “safety” oriented model—doesn’t seem to have been much of a match for the realpolitik machinations of the tech industry. In even more frank terms, the folks who were supposed to be defending us from runaway AI (i.e., the board members)—the ones who were ordained with responsible stewardship over this powerful technology—don’t seem to have known what they were doing. They don’t seem to have understood that Sam had all the industry connections, the friends in high places, was well-liked, and that moving against him in a world where that kind of social capital is everything amounted to career suicide. If you come at the king, you best not miss.

In short: If the point of corporate AI safety is to protect humanity from runaway AI, then, as an effective strategy for doing that, it has effectively just flunked its first big test. That’s because it’s sorta hard to put your faith in a group of people who weren’t even capable of predicting the very predictable outcome that would occur when they fired their boss. How, exactly, can such a group be trusted with overseeing a supposedly “super-intelligent,” world-shattering technology? If you can’t outfox a gaggle of outraged investors, then you probably can’t outfox the Skynet-type entity you claim to be building. That said, I would argue we also can’t trust the craven, money-obsessed C-suite that has now reasserted its dominance. Imo, they’re obviously not going to do the right thing. So, effectively, humanity is stuck between a rock and a hard place.

As the conflict from the OpenAI dustup settles, it seems like the company is well positioned to get back to business as usual. After jettisoning the only two women on its board, the company added fiscal goon Larry Summers. Altman is back at the company (as is former company president Greg Brockman, who stepped down in solidarity with Altman), and Microsoft’s top executive, Satya Nadella, has said that he is “encouraged by the changes to OpenAI board” and said it’s a “first essential step on a path to more stable, well-informed, and effective governance.”

With the board’s failure, it seems clear that OpenAI’s do-gooders may have not only set back their own “safety” mission, but might have also kicked off a backlash against the AI ethics movement writ large. Case in point: This weekend’s drama seems to have further radicalized an already pretty radical anti-safety ideology that had been circulating the business. The “effective accelerationists” (abbreviated “e/acc”) believe that stuff like additional government regulations, “tech ethics” and “AI safety” are all cumbersome obstacles to true technological development and exponential profit. Over the weekend, as the narrative about “AI safety” emerged, some of the more fervent adherents of this belief system took to X to decry what they perceived to be an attack on the true victim of the episode (capitalism, of course).

To some degree, the whole point of the tech industry’s embrace of “ethics” and “safety” is about reassurance. Companies realize that the technologies they are selling can be disconcerting and disruptive; they want to reassure the public that they’re doing their best to protect consumers and society. At the end of the day, though, we now know there’s no reason to believe that those efforts will ever make a difference if the company’s “ethics” end up conflicting with its money. And when have those two things ever not conflicted?

Trackback URL: https://city.udn.com/forum/trackback.jsp?no=2976&aid=7217597
The AI Research Breakthrough That Triggered the OpenAI Earthquake – Reuters

The article below reports in detail on the story behind the "threat theory" piece above in this thread. Let me state once again: I am an idiot in the field of "artificial intelligence." But if the argument of that "threat theory" piece rests on the "breakthrough" reported below, then by my own logic and common sense I would say its author is overstating the case.


OpenAI researchers warned board of AI breakthrough ahead of CEO ouster, sources say

Anna Tong, Jeffrey Dastin and Krystal Hu, 11/22/23

Nov 22 (Reuters) - Ahead of OpenAI CEO Sam Altman’s four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters.

The previously unreported letter and AI algorithm were key developments before the board's ouster of Altman, the poster child of generative AI, the two sources said. Prior to his triumphant return late Tuesday, more than 700 employees had threatened to quit and join backer Microsoft (MSFT.O) in solidarity with their fired leader.

The sources cited the letter as one factor among a longer list of grievances by the board leading to Altman's firing, among which were concerns over commercializing advances before understanding the consequences. Reuters was unable to review a copy of the letter. The staff who wrote the letter did not respond to requests for comment.

After being contacted by Reuters, OpenAI, which declined to comment, acknowledged in an internal message to staffers a project called Q* and a letter to the board before the weekend's events, one of the people said. An OpenAI spokesperson said that the message, sent by long-time executive Mira Murati, alerted staff to certain media stories without commenting on their accuracy.

Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup's search for what's known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks.

Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because the individual was not authorized to speak on behalf of the company. Though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*’s future success, the source said.

Reuters could not independently verify the capabilities of Q* claimed by the researchers.

Researchers consider math to be a frontier of generative AI development. Currently, generative AI is good at writing and language translation by statistically predicting the next word, and answers to the same question can vary widely. But conquering the ability to do math — where there is only one right answer — implies AI would have greater reasoning capabilities resembling human intelligence. This could be applied to novel scientific research, for instance, AI researchers believe.
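The "statistically predicting the next word" idea above can be illustrated with a toy bigram counter. The corpus is invented for the example, and real generative models use neural networks trained on vast text rather than raw counts, but the shape of the task — pick the likeliest continuation given what came before — is the same.

```python
from collections import Counter, defaultdict

# Count, for each word in a tiny corpus, which word follows it and how often.
corpus = "the cat sat on the mat and the cat ate".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most frequent follower of `word`."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, "mat" once -> "cat"
```

The contrast the article draws is that this kind of prediction tolerates many acceptable continuations, whereas a math problem has exactly one right answer.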

Unlike a calculator that can solve a limited number of operations, AGI can generalize, learn and comprehend.

In their letter to the board, researchers flagged AI’s prowess and potential danger, the sources said without specifying the exact safety concerns noted in the letter. There has long been discussion among computer scientists about the danger posed by highly intelligent machines, for instance if they might decide that the destruction of humanity was in their interest.

Researchers have also flagged work by an "AI scientist" team, the existence of which multiple sources confirmed. The group, formed by combining earlier "Code Gen" and "Math Gen" teams, was exploring how to optimize existing AI models to improve their reasoning and eventually perform scientific work, one of the people said.

Altman led efforts to make ChatGPT one of the fastest growing software applications in history and drew the investment - and computing resources - necessary from Microsoft to get closer to AGI.

In addition to announcing a slew of new tools in a demonstration this month, Altman last week teased at a summit of world leaders in San Francisco that he believed major advances were in sight.

"Four times now in the history of OpenAI, the most recent time was just in the last couple weeks, I've gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honor of a lifetime," he said at the Asia-Pacific Economic Cooperation summit.

A day later, the board fired Altman.


Anna Tong and Jeffrey Dastin in San Francisco and Krystal Hu in New York; Editing by Kenneth Li and Lisa Shumaker




Trackback URL: https://city.udn.com/forum/trackback.jsp?no=2976&aid=7217583
The OpenAI Earthquake and the Biggest Threat in the History of Humanity -- Tomas Pueyo

This looks like a mega-sized super-threat (see 《人類可能在2031面臨「終結時刻」》 (Humanity May Face Its "Terminal Moment" in 2031) -- Tim Newcomb, in this thread). I am not capable of translating or commenting on it; what I can do is find some relevant links for you and translate them (as best I can) according to principle. Readers, please interpret and ponder it for yourselves.

This article also answers my naive "pull the plug" idea.


Index (revised 11/26; corrections welcome)

AI alignment: the requirement that AI development serve goals such as human instructions, needs, and welfare
Artificial intelligence: an introductory overview of AI
Bard: the chatbot developed by Google
Bing Chat: the chatbot developed by Microsoft
ChatBot (chatterbot): chatbot
ChatGPT: the chatbot developed by OpenAI (Chat Generative Pre-trained Transformer)
foom (like the word boom): the moment an AI reaches breakthrough capability; a sudden increase in artificial intelligence such that an AI system becomes extremely powerful
FOOM: the process and steps by which an AI is brought to that breakthrough capability
OpenAI: the research organization behind ChatGPT


OpenAI and the Biggest Threat in the History of Humanity

We don’t know how to contain or align a FOOMing AGI

TOMAS PUEYO, Uncharted Territories, 11/22/23

Last weekend, there was massive drama at the board of OpenAI, the non-profit/company that makes ChatGPT, which has grown from nothing to $1B revenue per year in a matter of months.

Sam Altman, the CEO of the company, was fired by the board of the non-profit arm. The president, Greg Brockman, stepped down immediately after learning Altman had been let go. 

Satya Nadella, the CEO of Microsoft—who owns 49% of the OpenAI company—told OpenAI he still believed in the company, while hiring Greg and Sam on the spot for Microsoft, and giving them free rein to hire and spend as much as they needed, which will likely include the vast majority of OpenAI employees.

This drama, worthy of the show Succession, is at the heart of the most important problem in the history of humanity.

Board members seldom fire CEOs, because founding CEOs are the single most powerful force of a company. If that company is a rocketship like OpenAI, worth $80B, you don’t touch it. So why did the OpenAI board fire Sam? This is what they said:

No standard startup board member cares about this in a rocketship. But OpenAI’s board is not standard. In fact, it was designed to do exactly what it did. This is the board structure of OpenAI: 

To simplify this, let’s focus on who owns OpenAI the company, at the bottom (Global LLC):

OpenAI the charity has a big ownership of the company.
Some employees and investors also do.
And Microsoft owns 49% of it.

Everything here is normal, except for the charity at the top. What is it, and what does it do?

OpenAI the charity is structured to not make a profit because it has a specific goal that is not financial: To make sure that humanity, and everything in the observable universe, doesn’t disappear.

What is that humongous threat? The impossibility of containing a misaligned, FOOMing AGI. What does that mean? (Skip this next section if you understand that sentence fully.)

FOOM AGI Can’t Be Contained

AGI FOOM

AGI is Artificial General Intelligence: a machine that can do nearly anything any human can do: anything mental, and through robots, anything physical. This includes deciding what it wants to do and then executing it, with the thoughtfulness of a human, at the speed and precision of a machine.

Here’s the issue: If you can do anything that a human can do, that includes working on computer engineering to improve yourself. And since you’re a machine, you can do it at the speed and precision of a machine, not a human. You don’t need to go to pee, sleep, or eat. You can create 50 versions of yourself, and have them talk to each other not with words, but with data flows that go thousands of times faster. So in a matter of days—maybe hours, or seconds—you will not be as intelligent as a human anymore, but slightly more intelligent. Since you’re more intelligent, you can improve yourself slightly faster, and become even more intelligent. The more you improve yourself, the faster you improve yourself. Within a few cycles, you develop the intelligence of a God.

This is the FOOM process: The moment an AI reaches a level close to AGI, it will be able to improve itself so fast that it will pass our intelligence quickly, and become extremely intelligent. Once FOOM happens, we will reach the singularity: a moment when so many things change so fast that we can’t predict what will happen beyond that point.

Here’s an example of this process in action in the past:

Alpha Zero blew past all accumulated human knowledge about Go after a day or so of self-play, with no reliance on human playbooks or sample games.

Alpha Zero learned by playing with itself, and this experience was enough to work better than any human who ever played, and all previous iterations of Alpha Go. The idea is that an AGI can do the same with general intelligence.

Here’s another example: A year ago, Google’s DeepMind found a new, more efficient way to multiply matrices. Matrix multiplication is a very fundamental process in all computer processing, and humans had not found a new solution to this problem in 50 years.
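The article does not spell out the algorithm, but the classic precedent for this kind of matmul speedup is Strassen's 1969 method, which multiplies 2×2 matrices with 7 scalar multiplications instead of the naive 8 (applied recursively to blocks, that pushes the cost below cubic); DeepMind's system searched for new algorithms of this same kind. A sketch of the 2×2 case:

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices using Strassen's 7 products (vs. naive 8)."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    # Recombine the 7 products (only additions from here on).
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

One multiplication saved per 2×2 block sounds trivial, but compounded recursively over large matrices it is exactly the kind of gain that had not been improved on in decades.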

Do people think an AGI FOOM is possible? Bets vary. On Metaculus, people opine that the process would take nearly two years from weak AI to superintelligence. Others think it might be a matter of hours.

Note that Weak AI has many different definitions and nobody is clear what it means. Generally, it means it’s human-level good for one narrow type of task. So it makes sense that it would take 22 months to go from that to AGI, because maybe that narrow task has nothing to do with self-improvement. The key here is self-improvement. I fear the moment an AI reaches human-level ability to self-improve, it will become superintelligent in a matter of hours, days, or weeks. If we’re lucky, months. Not years.
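The compounding dynamic described above can be put into a toy model. The growth rate and target here are arbitrary illustrative numbers, not estimates of anything real; the point is only the difference between linear and compounding improvement.

```python
def improvement_cycles(start=1.0, gain=0.1, target=1000.0):
    """Count cycles until capability passes `target`, when each cycle
    adds a fraction `gain` of the system's *current* capability."""
    level, cycles = start, 0
    while level < target:
        level *= 1 + gain  # a smarter system improves itself faster
        cycles += 1
    return cycles

# Compounding reaches 1000x in 73 cycles; a fixed +0.1 per cycle
# (the linear version of the same starting gain) would need 9,990.
print(improvement_cycles())  # 73
```

If each cycle is a day of machine-speed engineering rather than a year of human research, this is the "hours, days, or weeks" intuition in the paragraph above.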

Maybe this is good? Why would we want to stop this runaway intelligence improvement?

Misaligned Paperclips

This idea was first illustrated by Nick Bostrom:

Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.

Easy: Tell the AGI to optimize for things that humans like before it becomes an AGI? This is called alignment and is impossible so far. 

Not all humans want the same things. We’ve been at war for thousands of years. We still debate moral issues on a daily basis. We just don’t know what it is that we want, so how could we make a machine know that?  

Even if we could, what would prevent the AGI from changing its goals? Indeed, we might be telling it “please humans to get 10 points”, but if it can tinker with itself, it could change that rule to anything else, and all bets are off. So alignment is hard. 

What happens with an AGI that is not fully aligned? Put yourself in the shoes of a god-like AGI. What are some of the first things you would do, if you have any goal that is not exactly “do what’s best for all humans as a whole”?

You would cancel the ability of any potential enemy to stop you, because that would jeopardize your mission the most. So you would probably create a virus that preempts any other AGI that appears. Since you’re like a god, this would be easy. You’d infect all computers in the world in a way that can’t be detected.

The other big potential obstacle to reaching your objectives would be humans shutting you down. So you would quickly take this ability away from them, whether by spreading over the Internet, creating physical instances of yourself, or simply eliminating all humans. Neutralizing humans would probably sit at the top of the priority list the moment the AI reaches AGI.

Of course, since an AGI is not dumb, she1 would know that appearing too intelligent or self-improving too fast would be perceived by humans as threatening. So she would have every incentive to appear dumb and hide her intelligence and self-improvement. Humans wouldn’t notice she’s intelligent until it’s too late.

If that sounds weird, think about all the times you’ve talked with an AI and it has lied to you (the politically correct word is “hallucinate”). Or when a simulated AI committed insider trading and lied about it. And these are not very intelligent AIs! It is very possible that an AI would lie to stay undetected until it reaches AGI status.

So shutting down an AGI after it escapes is impossible, and shutting it down before might be too hard because we wouldn’t know it’s superintelligent. Whether it is making paperclips, or solving the Riemann hypothesis, or any other goal, neutralizing humans and other computers would be a top priority, and seeming dumb before developing the capacity to achieve that would be a cornerstone of the AGI’s strategy.

This concept is called instrumental convergence: whatever final goal you optimize for, you will converge on certain subgoals, like acquiring resources and fending off threats.

OK, so we want to catch an AI that is becoming intelligent fast, even if it tries to lie to us. This sounds easy, doesn’t it? Let’s just contain it.

Except you can’t.

The Problem of Containment

In Ex Machina (SPOILERS AHEAD), a tech visionary invites an engineer to his complex to interact with a contained AI. The AI is beautiful, sensitive, delicate, intelligent, curious, and falls in love with the engineer. 

She then shares how the evil tech visionary is keeping her miserably contained in a box. Eventually, the engineer helps free her, at which point she locks the humans in, kills her creator, and escapes.

(END OF SPOILERS)

This is close to what happens in nearly every containment scenario ever imagined for an AGI. If it has access to the Internet, it can spread without limit, so it shouldn’t have access to it.2 Even if it doesn’t, it could generate electromagnetic waves in a pattern that hacks into other systems that are connected to the Internet.

If that sounds crazy, consider that there is malware that can communicate through the vibrations of a computer’s fan:

Lead cyber-security researcher Mordechai Guri at Ben-Gurion University of the Negev said data encoded by hackers into fan vibrations could be transmitted to a smartphone placed in the vicinity of the targeted computer.
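The encoding idea behind such a covert channel can be sketched as simple binary frequency-shift keying. The RPM values and framing below are made up; the real attack, and a real vibration receiver, involve far more signal processing:

```python
# Sketch of a fan-vibration covert channel's encoding (illustrative only):
# each bit of the message is held for one time slot at one of two fan
# speeds, and a receiver maps the observed vibration level back to bits.

RPM_ZERO, RPM_ONE = 2000, 3000  # two distinguishable speeds (made-up values)

def encode(message: str) -> list[int]:
    """Turn text into a per-bit schedule of fan speeds."""
    bits = "".join(f"{ord(ch):08b}" for ch in message)
    return [RPM_ONE if b == "1" else RPM_ZERO for b in bits]

def decode(schedule: list[int]) -> str:
    """Recover the text from an observed speed schedule."""
    bits = "".join("1" if rpm == RPM_ONE else "0" for rpm in schedule)
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

assert decode(encode("leak")) == "leak"  # lossless over a noiseless channel
```

The bandwidth of such channels is reportedly tiny, on the order of bits per second or less, but exfiltrating a cryptographic key does not take much bandwidth.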

So you need this AGI to be secluded from any other electronic device, and even from the air, in a storage room inside a Faraday cage3 and a vacuum, with no electronics anywhere near it. No cable should go in or out of the cage. And this assumes there’s no other physical way an AGI can spread, which we don’t know, since we don’t know all the laws of physics—something an AGI might be able to figure out.

Even in a scenario where we can physically contain the AGI, humans would have to interact with it, because otherwise who would be using the AGI and studying it? They would have to enter the box where the AGI is contained. At that point, they become the weak link in the chain. The AGI would hitchhike on the human.

You’d imagine that an intelligent engineer, trained to be very careful about not carrying an AGI outside, would be able to fend off any escape attempt. Experience says otherwise.

In real-life experiments, humans posing as AGIs have consistently talked gatekeepers into releasing them, even when the gatekeepers knew their job was to keep the AGI inside. This makes sense: an AGI would be so intelligent that we would be like ants to it, easy to manipulate. Even casual conversation with a human guard could let an AGI deploy psychological tricks, ranging from befriending to blackmail, to convince him that it is in his interest to let the AI loose. The AI might offer a gatekeeper a recipe for perfect health or immortality, or threaten to do horrific things to the gatekeeper and his family once it inevitably escapes.

For example, a Google AI engineer (the type of person you’d expect to be mindful of this kind of problem), working on LaMDA, an LLM (Large Language Model, the sort of AI that ChatGPT belongs to) more basic than ChatGPT, became convinced it had reached consciousness and tried to get it legal rights.

So this is the fear:

*  An AGI could become very intelligent very fast.
*  Being so intelligent, it would be impossible to contain it.
*  Once it is loose, it has a strong incentive to neutralize humans in order to optimize whatever its goal is.
*  The only way out of this is making sure this AI wants exactly the same thing as humans, but we have no idea how to achieve that.

Not only do we need to figure out alignment, but we need it on our first attempt. We can’t learn from our mistakes, because the very first time an AI reaches superintelligence is likely to be the last time too. We must solve a problem we don’t understand, which we’ve been dealing with for thousands of years without a solution, and we need to do it in the coming years before AGI appears.

Also, we need to do it quickly, because AGI is approaching. People think we will be able to build self-improving AIs within 3 years:

What follows in the original is:

Predictions from Metaculus, where experts bet on outcomes of specific events. They tend to reasonably reflect what humans know at the time, just like stock markets reflect the public knowledge about a company’s value.
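As a sketch of how such community forecasts can be formed, here is one common pooling rule, the mean of log-odds. (Metaculus's actual aggregation is more elaborate and weights recent forecasts; this is only the basic idea.)

```python
# Pool many individual probability forecasts into one community estimate
# by averaging log-odds (a standard aggregation rule; illustrative only).

import math

def pool(probabilities: list[float]) -> float:
    logits = [math.log(p / (1 - p)) for p in probabilities]
    mean_logit = sum(logits) / len(logits)
    return 1 / (1 + math.exp(-mean_logit))

# Three forecasters at 20%, 50%, and 80% pool to exactly 50%.
print(round(pool([0.2, 0.5, 0.8]), 2))  # 0.5
```

Averaging in log-odds space rather than in raw probabilities yields a sharper pooled forecast, which forecasting research has often found to score better than a simple probability average.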

I won't repost those here; interested readers can continue with the original article and its reader comments at the source page.
 
Footnotes from the original article:

1  I find it useful to anthropomorphize it, and since the word intelligence is feminine in both French and Spanish, I imagine AGI as female. Most AI voices are female; that’s because, among other things, people prefer listening to female voices.
2  It could be created without the Internet by downloading gazillions of petabytes of Internet data and training the AI with it.
3  This is a cage that prevents any electromagnetic signal from going in or out.


Trackback URL: https://city.udn.com/forum/trackback.jsp?no=2976&aid=7217547
Thoughts After Reading "How a War Between AI and Humanity Would End"
胡卜凱

That article was published at the end of this September (the previous post in this thread), but I never found time to read it closely, so I did not introduce it. In the past few days the issues of AI "safety" and the AI "threat" have again made front-page headlines (the three pieces below), so I am reposting it here.

The article's title refers to the outcome of a US Department of Defense air-combat simulation in which the "AI pilot" crushed the "human pilot" 15 to 0.

Its main point is that AI's "effectiveness" in "decision-making" far exceeds that of humans; it follows that whoever first applies AI to battlefield command and control will truly be able to "secure victory from a thousand miles away"!

Interestingly, the article's subheading touches on my idea of "pulling the plug" to deal with AI (see this thread's opening post); unfortunately, the author does not directly assess its feasibility. Judging from his tone, I believe his answer would be: "It won't work!"

Trackback URL: https://city.udn.com/forum/trackback.jsp?no=2976&aid=7217538
How a War Between AI and Humanity Would End -- Stephen Kelly
胡卜凱

Here's how a war between AI and humanity would actually end

There’s no need to worry about a robot uprising. We can always just pull the plug, right…? RIGHT?

Stephen Kelly, BBC Science Focus, 09/29/23

 New science-fiction movie The Creator imagines a future in which humanity is at war with artificial intelligence (AI). Hardly a novel concept for sci-fi, but the key difference here – as opposed to, say, The Terminator – is that it arrives at a time when the prospect is starting to feel more like science fact than fiction.  

The last few months, for instance, have seen numerous warnings about the ‘existential threat’ posed by AI. For not only could it one day write this column better than I can (unlikely, I’m sure you’ll agree), but it could also lead to frightening developments in warfare – developments that could spiral out of control.

The most obvious concern is a future in which AI is used to autonomously operate weaponry in place of humans. Paul Scharre, author of Four Battlegrounds: Power in the Age of Artificial Intelligence, and vice president of the Center for a New American Security, cites the recent example of DARPA’s (the Defense Advanced Research Projects Agency) AlphaDogfight challenge – an aerial simulator that pitted a human pilot against an AI.

“Not only did the AI crush the pilot 15 to zero,” says Scharre, “but it made moves that humans can’t make; specifically, very high-precision, split-second gunshots.”

Yet the prospect of giving AI the power to make life or death decisions raises uncomfortable questions. For instance, what would happen if an AI made a mistake and accidentally killed a civilian? “That would be a war crime,” says Scharre. “And the difficulty is that there might not be anyone to hold accountable.”

In the near future, however, the most likely use of AI in warfare will be in tactics and analysis. “AI can help process information better and make militaries more efficient,” says Scharre.

“I think militaries are going to feel compelled to turn over more and more decision-making to AI, because the military is a ruthlessly competitive environment. If there’s an advantage to be gained, and your adversary takes it and you don’t, you’re at a huge disadvantage.” This, says Scharre, could lead to an AI arms race, akin to the one for nuclear weapons.

 “Some Chinese scholars have hypothesised about a singularity on the battlefield,” he says. “[That’s the] point when the pace of AI-driven decision-making eclipses the speed of a human’s ability to understand and humans effectively have to turn over the keys to autonomous systems to make decisions on the battlefield.”

Of course, in such a scenario, it doesn’t feel impossible for us to lose control of that AI – or even for it to turn against us. Hence why it’s US policy that humans are always in the loop regarding any decision to use nuclear weapons.

“But we haven’t seen anything similar from countries like Russia and China,” says Scharre. “So, it’s an area where there’s valid concern.” If the worst was to happen, and an AI did declare war, Scharre is not optimistic about our chances.

“I mean, could chimpanzees win a war against humans?” he says, laughing. “Top chess-playing AIs aren’t just as good as grandmasters; the top grandmasters can’t remotely compete with them. And that happened pretty quickly. It’s only five years ago that that wasn’t the case.

“We’re building increasingly powerful AI systems that we don’t understand and can’t control, and are deploying them in the real world. I think if we’re actually able to build machines that are smarter than us, then we’ll have a lot of problems.”

About our expert, Paul Scharre

Scharre is the Executive Vice President and Director of Studies at the Center for a New American Security (CNAS). He has written multiple books on artificial intelligence and warfare and was named one of the 100 most influential people in AI in 2023 by TIME magazine.

Read more:

The end of ageing? A new AI is developing drugs to fight your biological clock
An AI friend will always be on your side... until it threatens your free will


Trackback URL: https://city.udn.com/forum/trackback.jsp?no=2976&aid=7217532
Humanity May Face the "Singularity" by 2031 -- Tim Newcomb
胡卜凱

The "singularity": the point at which artificially intelligent machines escape human control and become autonomous.


A Scientist Says the Singularity Will Happen by 2031

Maybe even sooner. Are you ready?

Tim Newcomb, 11/09/23

*  “The singularity,” the moment where AI is no longer under human control, is less than a decade away—according to one AI expert.
*  More resources than ever are being poured into the pursuit of artificial general intelligence, speeding the growth of AI.
*  Development of AI is also coming from a variety of sectors, pushing the technology forward faster than ever before.
There’s at least one expert who believes that “the singularity”—the moment when artificial intelligence surpasses the control of humans—could be just a few years away. That’s a lot shorter than current predictions regarding the timeline of AI dominance, especially considering that AI dominance is not exactly guaranteed in the first place.

Ben Goertzel, CEO of SingularityNET—who holds a Ph.D. from Temple University and has worked as a leader of Humanity+ and the Artificial General Intelligence Society—told Decrypt that he believes artificial general intelligence (AGI) is three to eight years away. AGI is the term for AI that can truly perform tasks just as well as humans, and it’s a prerequisite for the singularity soon following.

Whether you believe him or not, there’s no sign of the AI push slowing down any time soon. Large language models from the likes of Meta and OpenAI, along with the AGI focus of Elon Musk’s xAI, are all pushing hard towards growing AI.

“These systems have greatly increased the enthusiasm of the world for AGI,” Goertzel told Decrypt, “so you’ll have more resources, both money and just human energy—more smart young people want to plunge into working on AGI.”

When the concept of AI first emerged—as early as the 1950s—Goertzel says its development was driven by the United States military, which saw it primarily as a potential national defense tool. Recently, however, progress in the field has been propelled by a variety of drivers with a variety of motives. “Now the ‘why’ is making money for companies,” he says, “but also interestingly, for artists or musicians, it gives you cool tools to play with.”

Getting to the singularity, though, will require a significant leap from the current point of AI development. While today’s AI typically focuses on specific tasks, the push towards AGI is intended to give the technology a more human-like understanding of the world and open up its abilities. As AI continues to broaden its understanding, it steadily moves closer to AGI—which some say is just one step away from the singularity.

The technology isn’t there yet, and some experts caution we are truly a lot further from it than we think—if we get there at all. But the quest is underway regardless. Musk, for example, created xAI in the summer of 2023 and just recently launched the chatbot Grok to “assist humanity in its quest for understanding and knowledge,” according to Reuters. Musk also called AI “the most disruptive force in history.”

With many of the most influential tech giants—Google, Meta and Musk—pursuing the advancement of AI, the rise of AGI may be closer than it appears. Only time will tell if we will get there, and if the singularity will follow.


Trackback URL: https://city.udn.com/forum/trackback.jsp?no=2976&aid=7216634
A Short History of Artificial Intelligence - Donovan Johnson
胡卜凱

The short piece below is just a small side dish; but friends interested in popular science and technology writing may want to stroll through this blog from time to time. For example, these three other small dishes:

The Difference Between Generative AI And Traditional AI: An Easy Explanation For Anyone
How Pulsed Laser Deposition Systems are Revolutionizing the Tech Industry
Top Data Annotation Tools to Watch: Revolutionizing the Telecommunications and Internet Industries


The History of Artificial Intelligence

Donovan Johnson, 07/23/23

Artificial intelligence (AI) has a long history that dates back to ancient times. The idea of machines or devices that can imitate human behavior and intelligence has intrigued humans for centuries. However, the field of AI as we know it today began to take shape in the mid-20th century.

During World War II, researchers began to explore the possibilities of creating machines that could simulate human thinking and problem-solving. The concept of AI was formalized in 1956 when a group of researchers organized the Dartmouth Conference, where they discussed the potential of creating intelligent machines.

In the following years, AI research experienced significant advancements. Researchers developed algorithms and programming languages that could facilitate machine learning and problem-solving. They also started to build computers and software systems that could perform tasks traditionally associated with human intelligence.

One of the key milestones in AI history was the development of expert systems in the 1980s. These systems were designed to mimic the decision-making processes of human experts in specific domains. They proved to be useful in areas such as medicine and finance.

In the 1990s, AI research shifted towards probabilistic reasoning and machine learning. Scientists began to explore the potential of neural networks and genetic algorithms to create intelligent systems capable of learning from data and improving their performance over time.

Today, AI has become an integral part of our daily lives. It powers virtual assistants, recommendation systems, autonomous vehicles, and many other applications. AI continues to evolve and advance, with ongoing research in areas such as deep learning, natural language processing, and computer vision.

The history of AI is characterized by significant achievements and breakthroughs. From its early beginnings as a concept to its current status as a transformative technology, AI has come a long way. As researchers and scientists continue to push the boundaries of what is possible, we can expect even more exciting developments and applications of AI in the future.


Trackback URL: https://city.udn.com/forum/trackback.jsp?no=2976&aid=7210324
"Artificial Intelligence" Is Not Intelligent -- David J. Gunkel
胡卜凱

In this article, Professor Gunkel asks whether "artificial intelligence" is an accurate "signifier". His argument draws more on concepts from semantics and the philosophy of language than on science and technology.

Professor Gunkel argues that the "science of control and communication" ("cybernetics") proposed by Professor Wiener is a more fitting and appropriate name than "artificial intelligence". As he notes, the former does not invite associations with concepts such as "intelligence", "consciousness", and "sentience", which in fact have nothing to do with what "AI" actually does.


AI is not intelligent

AI should not be called AI

David J. Gunkel, 06/22/23

The act of naming is more than just a simple labeling exercise; it's a potent exercise of power with political implications. As the discourse around AI intensifies, it may be time to reassess its nomenclature and inherent biases, writes David Gunkel.

Naming is anything but a nominal operation. Nowhere is this more evident and clearly on display than in recent debates about the moniker “artificial intelligence” (AI). Right now, in fact, it appears that AI—the technology and the scientific discipline that concerns this technology—is going through a kind of identity crisis, as leading voices in the field are beginning to ask whether the name is (and maybe already was) a misnomer and a significant obstacle to accurate understanding. “As a computer scientist,” Jaron Lanier recently wrote in a piece for The New Yorker, “I don’t like the term A.I. In fact, I think it’s misleading—maybe even a little dangerous.”

What’s in a Name?

The term “artificial intelligence” was originally proposed and put into circulation by John McCarthy in the process of organizing a scientific meeting at Dartmouth College in the summer of 1956. And it immediately had traction. It not only was successful for securing research funding for the event at Dartmouth but quickly became the nom célèbre for a brand-new scientific discipline.

For better or worse, McCarthy’s neologism put the emphasis on intelligence. And it is because of this that we now find ourselves discussing and debating questions like: Can machines think? (Alan Turing’s initial query), are large language models sentient? (something that became salient with the Lemoine affair last June), or when might we have an AI that achieves consciousness? (a question that has been posed in numerous headlines in the wake of recent innovations with generative algorithms). But for many researchers, scholars, and developers these are not just the wrong questions, they are potentially deceptive and even dangerous to the extent that they distract us with speculative matters that are more science fiction than science fact.

Renaming AI

Since the difficulty derives from the very name “artificial intelligence,” one solution has been to select or fabricate a better or more accurate signifier. The science fiction writer Ted Chiang, for instance, recommends that we replace AI with something less “sexy,” like “applied statistics.” Others, like Emily Bender, have encouraged the use of the acronym SALAMI (Systematic Approaches to Learning Algorithms and Machine Inferences), which was originally coined by Stefano Quintarelli in an effort to avoid what he identified as the “implicit bias” residing in the name “artificial intelligence.”

Though these alternative designations may be, as Chiang argues, more precise descriptors for recent innovations with machine learning (ML) systems, neither of them would apply to or entirely fit other architectures, like GOFAI (aka symbolic reasoning) and hybrid systems. Consequently, the proposed alternatives would, at best, only describe a small and relatively recent subset of what has been situated under the designation “artificial intelligence.”

But inventing new names—whether it is something like that originally proposed by McCarthy or one of the recently proposed alternatives—is not the only way to proceed. As French philosopher and literary theorist Jacques Derrida pointed out, there are at least two different ways to designate a new concept: neologism (the fabrication of a new name) and paleonymy (the reuse of an old name). If the former has produced less than suitable results, perhaps it is time to try the latter.

Cybernetics

The good news is that we do not have to look far or wide to find a viable alternative. There was one already available at the time of the Dartmouth meeting with “cybernetics.” This term—derived from the ancient Greek word (κυβερνήτης) for the helmsman of a boat—had been introduced and developed by Norbert Wiener in 1948 to designate the science of communication and control in the animal and machine.

Cybernetics has a number of advantages when it comes to rebranding what had been called AI. First, cybernetics does not get diverted by or lost in speculation about intelligence, consciousness, or sentience. It is only concerned with and focuses attention on decision-making capabilities and processes. The principal example utilized throughout the literature on the subject is the seemingly mundane but nevertheless indicative thermostat. This homeostatic device can accurately adjust for temperature without knowing anything about the concept of temperature, understanding the difference between “hot” and “cold,” or needing to think (or be thought to be thinking).
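The thermostat's kind of decision-making is easy to make concrete as a negative-feedback (bang-bang) controller that holds a setpoint using nothing but a comparison. All temperatures and room dynamics below are invented for illustration:

```python
# A thermostat as a pure feedback controller: it regulates temperature with
# no concept of "temperature", only a comparison against an error band.
# All numbers are illustrative.

def thermostat_step(temp: float, setpoint: float, heater_on: bool,
                    hysteresis: float = 0.5) -> bool:
    """Decide the next heater state from the error signal alone."""
    if temp < setpoint - hysteresis:
        return True      # too cold: heater on
    if temp > setpoint + hysteresis:
        return False     # too warm: heater off
    return heater_on     # inside the dead band: no change

def simulate(start: float, setpoint: float, steps: int) -> float:
    """Toy room: heats 0.3 degrees per step when on, cools 0.2 when off."""
    temp, heater = start, False
    for _ in range(steps):
        heater = thermostat_step(temp, setpoint, heater)
        temp += 0.3 if heater else -0.2
    return temp

final = simulate(start=15.0, setpoint=20.0, steps=100)
print(round(final, 1))  # settles into a narrow band around the 20.0 setpoint
```

This is Wiener's point in miniature: communication (the temperature reading) and control (the switching rule) suffice for competent behavior, with no "understanding" anywhere in the loop.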

Second, cybernetics avoids one of the main epistemological problems and sticking points that continually frustrates AI—something philosophers call “the problem of other minds.” For McCarthy and colleagues, one of the objectives of the Dartmouth meeting—in fact, the first goal listed on the proposal—was to figure out “how to make machines use language.” This is because language use—as Turing already had operationalized with the imitation game—had been taken to be a sign of intelligence. But as John Searle demonstrated with his Chinese Room thought experiment, the manipulation of linguistic tokens can transpire without knowing anything at all about the language. Unlike AI, cybernetics can attend to the phenomenon and effect of this communicative behavior without needing to resolve or even broach the question concerning the problem of other minds.
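Searle's thought experiment is almost trivially easy to implement: a lookup table can emit fluent-looking replies while manipulating nothing but uninterpreted strings. The tiny rule book below is invented for illustration:

```python
# A toy "Chinese Room": replies are produced by pure symbol lookup, with no
# representation of meaning anywhere in the program. Rule book is invented.

RULE_BOOK = {
    "你好": "你好,很高興認識你",        # "hello" -> "hello, nice to meet you"
    "你會說中文嗎": "會,我說得很流利",  # "do you speak Chinese?" -> "yes, fluently"
}

def room(symbols: str) -> str:
    # The operator only matches shapes against the rule book.
    return RULE_BOOK.get(symbols, "請再說一次")  # fallback: "please say that again"

print(room("你會說中文嗎"))  # a fluent reply from a program that understands nothing
```

The room "uses language" in exactly the operational sense Turing's test measures, which is Searle's (and cybernetics') point: communicative behavior can be studied without settling whether anything behind it has a mind.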

Finally, cybernetics does not make the same commitment to human exceptionalism that has been present in AI from the beginning. Because of the objectives initially listed in the Dartmouth proposal (e.g., use language, form abstractions and concepts, solve problems reserved for humans, and improve themselves), definitions of AI tend to concentrate on the emulation or simulation of “human intelligence.” Cybernetics by contrast is more diverse and less anthropocentric. As the general science of communication and control in the animal and the machine, it takes a more holistic view that can accommodate a wider range of things. It is, as N. Katherine Hayles argues, a posthuman framework that is able to respond to and take responsibility for others and other forms of socially significant otherness.

Back to the Future

If “cybernetics” had already provided a viable alternative, one has to ask why the term “artificial intelligence” became the privileged moniker in the first place? The answer to this question returns us to where we began—with names and the act of naming. As McCarthy explained many years later, one of the reasons “for inventing the term ‘artificial intelligence’ was to escape association with cybernetics” and to “avoid having either to accept Norbert Wiener as a guru or having to argue with him.” Thus, the term “artificial intelligence” was as much a political decision and strategy as it was a matter of scientific designation. But for this reason, it is entirely possible and perhaps even prudent to reverse course and face what the nascent discipline of AI had so assiduously sought to avoid. The way forward may be by going back.

David J. Gunkel is a professor at Northern Illinois University and author of The Machine Question and Person-Thing-Robot.



Trackback URL: https://city.udn.com/forum/trackback.jsp?no=2976&aid=7208144
A New "Turing Test": Another Version -- Sawdah Bhaimiya
胡卜凱

Another proposal to bring the "Turing test" up to date.


DeepMind's co-founder suggested testing an AI chatbot's ability to turn $100,000 into $1 million to measure human-like intelligence

Sawdah Bhaimiya, 06/20/23

*  DeepMind's co-founder believes the Turing test is an outdated method to test AI intelligence. 
*  In his book, he suggests a new idea in which AI chatbots have to turn $100,000 into $1 million.
*  "We don't just care about what a machine can say; we also care about what it can do," he wrote. 

A co-founder of Google's AI research lab DeepMind thinks AI chatbots like ChatGPT should be tested on their ability to turn $100,000 into $1 million in a "modern Turing test" that measures human-like intelligence.

Mustafa Suleyman, formerly head of applied AI at DeepMind and now CEO and co-founder of Inflection AI, is releasing a new book called "The Coming Wave: Technology, Power, and the Twenty-first Century's Greatest Dilemma." 

In the book, Suleyman dismissed the traditional Turing test because it's "unclear whether this is a meaningful milestone or not," Bloomberg reported Tuesday.

"It doesn't tell us anything about what the system can do or understand, anything about whether it has established complex inner monologues or can engage in planning over abstract time horizons, which is key to human intelligence," he added. 

The Turing test was introduced by Alan Turing in the 1950s to examine whether a machine has human-level intelligence. During the test, human evaluators determine whether they're speaking to a human or a machine. If the machine can pass for a human, then it passes the test.

Instead of comparing AI's intelligence to humans, Suleyman proposes tasking a bot with short-term goals and tasks that it can complete with little human input in a process known as "artificial capable intelligence," or ACI. 

To achieve ACI, Suleyman says AI bots should pass a new Turing test in which the bot receives a $100,000 seed investment and has to turn it into $1 million. As part of the test, the bot must research an e-commerce business idea, develop a plan for the product, find a manufacturer, and then sell the item.

He expects AI to achieve this milestone in the next two years.

"We don't just care about what a machine can say; we also care about what it can do," he wrote, per Bloomberg.

OpenAI's ChatGPT was released in November 2022 and impressed users with its ability to hold casual conversations, generate code, and write essays. ChatGPT spurred the hype around the generative AI industry.

The technology could even add up to $4.4 trillion to the global economy annually, a recent McKinsey report found.

Trackback URL: https://city.udn.com/forum/trackback.jsp?no=2976&aid=7207979