A Wide-Ranging Discussion of Artificial Intelligence – Opening Post

胡卜凱

Since April, with the launch of ChatGPT and Bing Chat, a wave of AI mania has swept the internet and the various LINE groups. At the time I was mostly busy with the discussion in《我們的反戰聲明》(our anti-war statement) and did not join in the excitement. I am now reposting a few related articles here. Please also refer to the earlier piece《「人工智慧」研發現況及展望》(the current state and outlook of AI research and development).

Some people worry that "artificial intelligence" will become a "machine above man," controlling the world and even enslaving humanity. I do not understand AI, and my thinking is simple; so if AI ever runs amok, I believe there is a simple way to deal with it:

Pull the power plug. If that is not forceful enough, blow up the transmission lines and the emergency generators; if that still fails, blow up the power plants.

The Much-Hyped "Artificial Intelligence" Is at Best "Automation," a Long Way from Intelligence
In reply to: 胡卜凱 (jamesbkh)


麥芽糖

 
The much-hyped "artificial intelligence"
is at best "automation,"
and a long way from intelligence.


"Artificial Intelligence": The Worst-Case Scenario -- The Week


胡卜凱

This article from The Week analyzes and reports the most dangerous scenarios that the future development of "artificial intelligence" might produce.

As I said in the opening post: I do not understand AI, but I think the experts and scholars who worry about AI and AGI are somewhat alarmist, scaring themselves.


AI: The worst-case scenario

The Week, 06/17/23

Artificial intelligence's architects warn it could cause human "extinction." How might that happen? Here's everything you need to know:

What are AI experts afraid of?

They fear that AI will become so superintelligent and powerful that it becomes autonomous and causes mass social disruption or even the eradication of the human race. More than 350 AI researchers and engineers recently issued a warning that AI poses risks comparable to those of "pandemics and nuclear war." In a 2022 survey of AI experts, the median odds they placed on AI causing extinction or the "severe disempowerment of the human species" were 1 in 10. "This is not science fiction," said Geoffrey Hinton, often called the "godfather of AI," who recently left Google so he could sound a warning about AI's risks. "A lot of smart people should be putting a lot of effort into figuring out how we deal with the possibility of AI taking over."

When might this happen?

Hinton used to think the danger was at least 30 years away, but says AI is evolving into a superintelligence so rapidly that it may be smarter than humans in as little as five years. AI-powered ChatGPT and Bing's Chatbot already can pass the bar and medical licensing exams, including essay sections, and on IQ tests score in the 99th percentile — genius level. Hinton and other doomsayers fear the moment when "artificial general intelligence," or AGI, can outperform humans on almost every task. Some AI experts liken that eventuality to the sudden arrival on our planet of a superior alien race. You have "no idea what they're going to do when they get here, except that they're going to take over the world," said computer scientist Stuart Russell, another pioneering AI researcher.

How might AI actually harm us?


One scenario is that malevolent actors will harness its powers to create novel bioweapons more deadly than natural pandemics. As AI becomes increasingly integrated into the systems that run the world, terrorists or rogue dictators could use AI to shut down financial markets, power grids, and other vital infrastructure, such as water supplies. The global economy could grind to a halt. Authoritarian leaders could use highly realistic AI-generated propaganda and Deep Fakes to stoke civil war or nuclear war between nations. In some scenarios, AI itself could go rogue and decide to free itself from the control of its creators. To rid itself of humans, AI could trick a nation's leaders into believing an enemy has launched nuclear missiles so that they launch their own. Some say AI could design and create machines or biological organisms like the Terminator from the film series to act out its instructions in the real world. It's also possible that AI could wipe out humans without malice, as it seeks other goals.

How would that work?

AI creators themselves don't fully understand how the programs arrive at their determinations, and an AI tasked with a goal might try to meet it in unpredictable and destructive ways. A theoretical scenario often cited to illustrate that concept is an AI instructed to make as many paper clips as possible. It could commandeer virtually all human resources to the making of paper clips, and when humans try to intervene to stop it, the AI could decide eliminating people is necessary to achieve its goal. A more plausible real-world scenario is that an AI tasked with solving climate change decides that the fastest way to halt carbon emissions is to extinguish humanity. "It does exactly what you wanted it to do, but not in the way you wanted it to," explained Tom Chivers, author of a book on the AI threat.

Are these scenarios far-fetched?

Some AI experts are highly skeptical AI could cause an apocalypse. They say that our ability to harness AI will evolve as AI does, and that the idea that algorithms and machines will develop a will of their own is an overblown fear influenced by science fiction, not a pragmatic assessment of the technology's risks. But those sounding the alarm argue that it's impossible to envision exactly what AI systems far more sophisticated than today's might do, and that it's shortsighted and imprudent to dismiss the worst-case scenarios.

So, what should we do?

That's a matter of fervent debate among AI experts and public officials. The most extreme Cassandras call for shutting down AI research entirely. There are calls for moratoriums on its development, a government agency that would regulate AI, and an international regulatory body. AI's mind-boggling ability to tie together all human knowledge, perceive patterns and correlations, and come up with creative solutions is very likely to do much good in the world, from curing diseases to fighting climate change. But creating an intelligence greater than our own also could lead to darker outcomes. "The stakes couldn't be higher," said Russell. "How do you maintain power over entities more powerful than you forever? If we don't control our own civilization, we have no say in whether we continue to exist."

A fear envisioned in fiction

Fear of AI vanquishing humans may be novel as a real-world concern, but it's a long-running theme in novels and movies. In 1818's "Frankenstein," Mary Shelley wrote of a scientist who brings to life an intelligent creature who can read and understand human emotions — and eventually destroys his creator. In Isaac Asimov's 1950 short-story collection "I, Robot," humans live among sentient robots guided by three Laws of Robotics, the first of which is to never injure a human. Stanley Kubrick's 1968 film "2001: A Space Odyssey" depicts HAL, a spaceship supercomputer that kills astronauts who decide to disconnect it. Then there's the "Terminator" franchise and its Skynet, an AI defense system that comes to see humanity as a threat and tries to destroy it in a nuclear attack. No doubt many more AI-inspired projects are on the way. AI pioneer Stuart Russell reports being contacted by a director who wanted his help depicting how a hero programmer could save humanity by outwitting AI. No human could possibly be that smart, Russell told him. "It's like, I can't help you with that, sorry," he said.


Artificial Intelligence: As Smart as Humans, or Smarter? -- Andrew Romano


胡卜凱

In the article below, Mr. Romano first gives a brief and concise explanation of "artificial intelligence," "artificial general intelligence," and "large language models"; he then explains why some people worry about the capabilities that AGI may acquire in the future; finally, he lists experts' predictions about AGI's prospects and about how long it might take to arrive.

I am a layman in this field, so I will not attempt a translation; please read the original for yourself.


Will AI soon be as smart as — or smarter than — humans?

“The 360” shows you diverse perspectives on the day’s top stories and debates.

Andrew Romano·West Coast Correspondent, Yahoo News 360, 06/13/23

What’s happening

At an Air Force Academy commencement address earlier this month, President Biden issued his most direct warning to date about the power of artificial intelligence, predicting that the technology could “overtake human thinking” in the not-so-distant future.

“It’s not going to be easy,” Biden said, citing a recent Oval Office meeting with “eight leading scientists in the area of AI.”

“We’ve got a lot to deal with,” he continued. “An incredible opportunity, but a lot [to] deal with.”

To any civilian who has toyed around with OpenAI’s ChatGPT-4 — or Microsoft’s Bing, or Google’s Bard — the president’s stark forecast probably sounded more like science fiction than actual science.

Sure, the latest round of generative AI chatbots are neat, a skeptic might say. They can help you plan a family vacation, rehearse challenging real-life conversations, summarize dense academic papers and “explain fractional reserve banking at a high school level.”

But “overtake human thinking”? That’s a leap.

In recent weeks, however, some of the world’s most prominent AI experts — people who know a lot more about the subject than, say, Biden — have started to sound the alarm about what comes next.  


Today, the technology powering ChatGPT is what’s known as a large language model (LLM). Trained to recognize patterns in mind-boggling amounts of text — the majority of everything on the internet — these systems process any sequence of words they’re given and predict which words come next. They’re a cutting-edge example of “artificial intelligence”: a model created to solve a specific problem or provide a particular service. In this case, LLMs are learning how to chat better — but they can’t learn other tasks.
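To make the "predict which words come next" idea concrete, here is a minimal sketch in Python (an illustration added for this thread, not Romano's example): a toy bigram model that counts which word follows which in a tiny corpus and returns the most frequent continuation. Real LLMs do the same kind of statistical next-token prediction, only with transformer networks trained on internet-scale text.

from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus,
# then predict the statistically most likely next word.
corpus = "the cat sat on the mat and the cat slept on the sofa".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen in training, or None."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))   # -> 'cat' (seen twice, vs. 'mat' and 'sofa' once each)
print(predict_next("sofa"))  # -> None: nothing ever followed 'sofa' in the training data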

Or can they?

For decades, researchers have theorized about a higher form of machine learning known as “artificial general intelligence,” or AGI: software that’s capable of learning any task or subject. Also called “strong AI,” AGI is shorthand for a machine that can do whatever the human brain can do.

In March, a group of Microsoft computer scientists published a 155-page research paper claiming that one of their new experimental AI systems was exhibiting “sparks of artificial general intelligence.” How else (as the New York Times recently paraphrased their conclusion) to explain the way it kept “coming up with humanlike answers and ideas that weren’t programmed into it”?

In April, computer scientist Geoffrey Hinton — a neural network pioneer known as one of the “Godfathers of AI” — quit his job at Google so he could speak freely about the dangers of AGI.

And in May, a group of industry leaders (including Hinton) released a one-sentence statement warning that AGI could represent an existential threat to humanity on par with “pandemics and nuclear war” if we don't ensure that its objectives align with ours.

“The idea that this stuff could actually get smarter than people — a few people believed that,” Hinton told the New York Times. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

Each of these doomsaying moments has been controversial, of course. (More on that in a minute.) But together they’ve amplified one of the tech world’s deepest debates: Are machines that can outthink the human brain impossible or inevitable? And could we actually be a lot closer to opening Pandora’s box than most people realize?

Why there’s debate

There are two reasons that concerns about AGI have become more plausible — and pressing — all of a sudden.

The first is the unexpected speed of recent AI advances. “Look at how it was five years ago and how it is now,” Hinton told the New York Times. “Take the difference and propagate it forwards. That’s scary.”

The second is uncertainty. When CNN asked Stuart Russell — a computer science professor at the University of California, Berkeley and co-author of Artificial Intelligence: A Modern Approach — to explain the inner workings of today’s LLMs, he couldn’t.

“That sounds weird,” Russell admitted, because “I can tell you how to make one.” But “how they work, we don’t know. We don’t know if they know things. We don’t know if they reason; we don’t know if they have their own internal goals that they’ve learned or what they might be.”

And that, in turn, means no one has any real idea where AI goes from here. Many researchers believe that AI will tip over into AGI at some point. Some think AGI won’t arrive for a long time, if ever, and that overhyping it distracts from more immediate issues, like AI-fueled misinformation or job loss. Others suspect that this evolution may already be taking place. And a smaller group fears that it could escalate exponentially. As the New Yorker recently explained, “a computer system [that] can write code — as ChatGPT already can — ... might eventually learn to improve itself over and over again until computing technology reaches what’s known as “the singularity”: a point at which it escapes our control.”

“My confidence that this wasn’t coming for quite a while has been shaken by the realization that biological intelligence and digital intelligence are very different, and digital intelligence is probably much better at certain things,” Hinton recently told the Guardian. He then predicted that true AGI is about five to 20 years away.

“I’ve got huge uncertainty at present,” Hinton added. “But I wouldn’t rule out a year or two. And I still wouldn’t rule out 100 years. ... I think people who are confident in this situation are crazy.”

Perspectives

Today’s AI just isn’t agile enough to approximate human intelligence

“AI is making progress — synthetic images look more and more realistic, and speech recognition can often work in noisy environments — but we are still likely decades away from general-purpose, human-level AI that can understand the true meanings of articles and videos or deal with unexpected obstacles and interruptions. The field is stuck on precisely the same challenges that academic scientists (including myself) have been pointing out for years: getting AI to be reliable and getting it to cope with unusual circumstances.” — Gary Marcus, Scientific American

New chatbots are impressive, but they haven’t changed the game

“Superintelligent AIs are in our future. ... Once developers can generalize a learning algorithm and run it at the speed of a computer — an accomplishment that could be a decade away or a century away — we’ll have an incredibly powerful AGI. It will be able to do everything that a human brain can, but without any practical limits on the size of its memory or the speed at which it operates. ... [Regardless,] none of the breakthroughs of the past few months have moved us substantially closer to strong AI. Artificial intelligence still doesn’t control the physical world and can’t establish its own goals.” — Bill Gates, GatesNotes

There’s nothing ‘biological’ brains can do that their digital counterparts won’t be able to replicate (eventually)

“I’m often told that AGI and superintelligence won’t happen because it’s impossible: human-level Intelligence is something mysterious that can only exist in brains. Such carbon chauvinism ignores a core insight from the AI revolution: that intelligence is all about information processing, and it doesn’t matter whether the information is processed by carbon atoms in brains or by silicon atoms in computers. AI has been relentlessly overtaking humans on task after task, and I invite carbon chauvinists to stop moving the goal posts and publicly predict which tasks AI will never be able to do.” — Max Tegmark, Time

The biggest — and most dangerous — turning point will come if and when AGI starts to rewrite its own code

“Once AI can improve itself, which may be not more than a few years away, and could in fact already be here now, we have no way of knowing what the AI will do or how we can control it. This is because superintelligent AI (which by definition can surpass humans in a broad range of activities) will — and this is what I worry about the most — be able to run circles around programmers and any other human by manipulating humans to do its will; it will also have the capacity to act in the virtual world through its electronic connections, and to act in the physical world through robot bodies.” — Tamlyn Hunt, Scientific American

Actually, it will be much harder for AGI to trigger ‘the singularity’ than doomers think

“Computer hardware and software are the latest cognitive technologies, and they are powerful aids to innovation, but they can’t generate a technological explosion by themselves. You need people to do that, and the more the better. Giving better hardware and software to one smart individual is helpful, but the real benefits come when everyone has them. Our current technological explosion is a result of billions of people using those cognitive tools. Could A.I. programs take the place of those humans, so that an explosion occurs in the digital realm faster than it does in ours? Possibly, but ... the strategy most likely to succeed would be essentially to duplicate all of human civilization in software, with eight billion human-equivalent A.I.s going about their business. [And] we’re a long way off from being able to create a single human-equivalent A.I., let alone billions of them.” — Ted Chiang, the New Yorker

Maybe AGI is already here — if we think more broadly about what ‘general’ intelligence might mean

“These days my viewpoint is that this is AGI, in that it is a kind of intelligence and it is general — but we have to be a little bit less, you know, hysterical about what AGI means. ... We’re getting this tremendous amount of raw intelligence without it necessarily coming with an ego-viewpoint, goals, or a sense of coherent self. That, to me, is just fascinating.” — Noah Goodman, associate professor of psychology, computer science and linguistics at Stanford University, to Wired

Ultimately, we may never agree on what AGI is — or when we’ve achieved it

“It really is a philosophical question. So, in some ways, it’s a very hard time to be in this field, because we’re a scientific field. ... It’s very unlikely to be a single event where we check it off and say, AGI achieved.” — Sara Hooker, leader of a research lab that focuses on machine learning, to Wired

The Predicament of Artificial Intelligence -- Steven Novella


胡卜凱

Professor Novella, writing on his NeuroLogica blog, discusses the predicament "artificial intelligence" currently faces and the direction future research might take. A summary translation follows:


AI's most talked-about achievements at the moment are the large language models and the so-called "transformers." Both work on essentially the same principle: after learning the patterns of language use across the entire internet, the program uses statistics to predict and simulate an "appropriate" conversation. The predicament, or bottleneck, of this remarkable achievement is that the program cannot think the way a human brain does. When it meets a question that is completely outside the expected playbook, it gives a baffling, incomprehensible answer.

Similar problems appear in other AI applications, such as self-driving cars. Once a self-driving car meets an unusual road situation, or another driver's odd behavior, that was not written into its program in advance, it will make decisions a human driver would never make.

If the bottleneck of thinking like a human brain cannot be overcome, transformer-style AI has reached its limit. And breaking through that limit faces a further practical obstacle: research of this kind currently costs on the order of US$100 million and requires enormous hardware and energy, and how long such spending can be sustained is itself a question.

Moreover, continuing the research methods of the past will yield only incremental progress, and the prospect that incremental progress can solve the fundamental problem of thinking like a human brain is not good. We may need a shift in the research paradigm before AI can achieve a breakthrough.


Have Current AI Reached Their Limit?

Steven Novella, 06/05/23

We are still very much in the hype phase of the latest crop of artificial intelligence applications, specifically the large language models and so-called “transformers” like Chat GPT. Transformers are deep learning models that use self-attention to differentially weight the importance of their input, including any recursive use of their own output. This process has been leveraged with massive training data, basically the size of the internet. This has produced some amazing results, but as time goes by we are starting to see the real limits of this approach.
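As a rough illustration of the self-attention idea Novella mentions (a minimal numerical sketch, not his code and not a full transformer): each token's query is compared with every token's key, the resulting scores become weights, and the output is a weighted mix of the value vectors; that is how the model "differentially weights the importance of its input."

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over token vectors X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # how strongly each token attends to each other token
    weights = softmax(scores, axis=-1)        # rows sum to 1: the "differential weighting"
    return weights @ V                        # each output is a weighted mix of value vectors

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16))                  # 5 tokens, 16-dimensional embeddings (toy sizes)
Wq, Wk, Wv = (rng.normal(size=(16, 16)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)    # (5, 16): one re-weighted vector per token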

I do see a tendency for a false dichotomy when it comes to AI – with “AI is all hype” at one end of the spectrum and “AI is going to save/destroy the world” at the other. This partly follows the typical trend to overestimate short-term technological progress on the hype end and to underestimate long-term progress on the cynical end. I think reality is somewhere in between – this latest crop of AI applications is extremely powerful, but has its limits and will not simply improve indefinitely.

I have been using several AI programs, like Chat GPT and Midjourney, extensively, and the limitations become more clear over time. The biggest limitation of these AI apps is that they cannot think the way people do. They mimic the output of thinking, without any true understanding. They do this by being trained on a massive database, and using essentially statistics to predict what comes next (what word fragment or picture element). This produces some amazing results, and it’s still shocking that it works so well, but also creates interesting fails. In Midjourney (an AI art production application), for example, when creating the prompts that result in image options, you can’t really explain to the application, the way you would to a person, what it is you want. You are trying to find the right triggers, but those triggers are quirky, and are highly dependent on the training data. Using different words to describe the same thing can trigger wildly different results, based upon how those words were used in the training data.

The same is true of Chat GPT. The more unusual your request the more quirky the result. And the results are not based strictly on reality, just statistically how words are used. This is why there is a problem with so-called hallucinations. You are getting a statistically probable answer, not a real answer, and quirks in the data will produce quirks in the result. The program has no real understanding of what it’s saying. It’s literally just faking it, mimicking human language by statistically reconstructing what it has learned.

We see this limitation in other areas of AI as well. We were supposed to have self-driving cars by 2020, but we may still be a decade away. This is because the AI driving applications get progressively confused by novel or unusual situations. They are particularly bad at predicting what people will do, something that human drivers are much better at. So they are great in controlled or predictable situations, but can act erratically when thrown a curve-ball – not a good feature in any driver.

But here is the big question – are these current limitations of AI solvable through incremental advances of the existing technology, or fundamental limitations of that technology that will require new innovations to solve? It’s increasingly looking like the latter is closer to the truth.


And this is where we get into overestimating short term progress. People look at the progress in AI of the last decade, even the last few years, and falsely assume this level of progress will continue, and then extrapolate linearly into the future. But experts are saying this is not necessarily the case. There are two main reasons for this. The first is the lack of true understanding that I just described. The second is that we are getting to the practical limits of leveraging deep learning on large data sets. Training Chat GPT, for example, cost $100 million, and required a massive infrastructure of hardware and energy usage. There are some practical limits on how much bigger we can get. Further, there appears to be diminishing returns from going larger. Progress from this approach, therefore, may be close to plateauing. Incremental improvements will likely involve greater efficiency and speed, but may not produce significantly better results.


Getting past the “uncanny valley” of almost human may not be possible without a completely new approach. Or it may take orders of magnitude more technological advance than getting to where we are now. There are plenty of examples from past technology that may illuminate this issue. High temperature superconductors went through their hype phase in the 1980s. This produced genuinely useful technology, but everyone assumed we would get to room temperature superconductors quickly. Here we are almost 40 years later and we may be no closer, because the path was ultimately a dead end. Similarly, the current path of AI technology may not lead to general AI with true understanding. A totally different approach may be necessary.

What I think will happen now is that we will enter a period where the marketplace learns how best to leverage this AI technology, and will learn what it is good at and what it is not good at. There will be incremental improvements. New ways of using transformer technology will emerge. But AI, under this technological paradigm, will not forever improve and will reach its limits, mostly imposed by its lack of true understanding. There may be the perception that the AI hype bubble has burst. But then the underestimating of long term progress will kick in. Researchers will come up with new innovations that take AI to the next level, and the process will likely repeat itself.


What is hard to predict is how long this cycle will take. Billions of dollars are being poured into AI research, and there is an entire industry of very smart people working on this. There is no telling what they will come up with. But the lack of true understanding may prove to be a really hard nut to crack, and may put a ceiling on AI capabilities for decades. We will see.


No Need to Panic in the Face of Artificial Intelligence -- Steven Novella


胡卜凱

Professor Novella's article on his NeuroLogica blog discussing "artificial intelligence" should put worriers like Dr. Papazoglou somewhat at ease.

Professor Novella notes that our human patterns of behavior and speech are the product of a process of social construction; most of them are not original insights of our own but follow well-worn tracks. The principle behind chatbots (Chat BOT), or the chat function of generative pretrained transformers (Chat GPT), is likewise just a program that, after processing an enormous number of examples (more than a billion), produces "speech patterns" by extrapolation and rules of thumb. Since this is no different in principle from how our own everyday behavior and speech are produced, it naturally also yields some laughable inferences.

Professor Novella also comments on Dr. Papazoglou's concerns and proposes remedies for them.

Another key point of Professor Novella's piece is that artificial intelligence cannot develop "consciousness." I have not yet read that argument closely; interested readers can follow it up for themselves.

Professor Novella mentions above AI's ability to generate information and responses on its own; this shows that some of the comments I made about AI in the opening post are already behind the times. I note and correct that here.


AI – Is It Time to Panic?

Steven Novella, 04/27/23

I’m really excited about the recent developments in artificial intelligence (AI) and their potential as powerful tools. I am also concerned about unintended consequences. As with any really powerful tool, there is the potential for abuse and also disruption. But I also think that the recent calls to pause or shut down AI development, or concerns that AI may become conscious, are misguided and verging on panic.

I don’t think we should pause AI development. In fact, I think further research and development is exactly what we need. Recent AI developments, such as the generative pretrained transformers (GPT) have created a jump in AI capability. They are yet another demonstration of how powerful narrow AI can be, without the need for general AI or anything approaching consciousness. What I think is freaking many people out is how well GPT-based AIs, trained on billions of examples, can mimic human behavior. I think this has as much to do with how much we underestimate how derivative our own behavior is as how powerful these AI are.

Most of our behavior and speech is simply mimicking the culture in which we are embedded. Most of us can get through our day without an original thought, relying entirely on prepackaged phrases and interactions. Perhaps mimicking human speech is a much lower bar than we would like to imagine. But still, these large language models are impressive. They represent a jump in technology, able to produce natural-language interactions with humans that are coherent and grammatically correct. But they remain a little brittle. Spend any significant time chatting with one of these large language models and you will detect how generic and soulless the responses are. It’s like playing a video game – even with really good AI driving the behavior of the NPCs (non-player characters) in the game, they are ultimately predictable and not at all like interacting with an actual sentient being.

There is little doubt that with further development these AI systems will get better. But I think that’s a good thing. Right now they are impressive but flawed. AI driven search engines have a tendency to make stuff up, for example. That is because they are predicting information and generating responses, not just copying or referencing information. The way they make predictions may be ironically hard to predict. They use shortcuts that are really effective, most of the time, but also lead to erroneous results. They are the AI equivalent of heuristics, rules of thumb that sort of work but not really. Figuring out how to prevent errors like this is a good thing.

So what’s the real worry? As far as I can tell from the open letters and articles, it’s just a vague fear that we will lose control, or have already lost control, of these powerful tools. It’s an expression of the precautionary principle, which is fine as far as it goes, but is easily abused or overstated. That’s what I think is happening here.

One concern is that AI will be disruptive in the workplace. I think that ship has sailed, even if it is not out of view yet. I don’t see how a pause will help. The marketplace needs to sort out the ultimate effects.


Another concern is that AI can be abused to spread misinformation. Again, we are already there. However, it is legitimate to be concerned about how much more powerful misinformation will be fueled by more powerful algorithms or deep fakes.

There is concern that AIs will be essentially put in charge of important parts of our society and will fail in unpredictable and catastrophic ways.
 

And finally there are concerns that AI will become conscious, or at least develop emergent behaviors and abilities we don’t understand. I am not concerned about this. AI is not going to just become conscious. It doesn’t work that way, for reasons I recently explained.

Part of the panic I think is being driven by a common futurism fallacy – to overestimate short term advance while underestimating long term advance. AI systems just had a breakthrough, and that makes it seem like the advances will continue at this pace. But that is rarely the case. AIs are not about to break the world, or become conscious. They are still dumb and brittle in all the ways that narrow AI is dumb.

Here is what I think we do need to do. First, we need to figure out what this new crop of AI programs are good at and what they are not good at. Like any new technology, if you put it in the hands of millions of people they will quickly sort out how it can be used. The obvious applications are not always the best ones. Microwaves were designed for cooking, which they are terrible for, but they excel at heating. Smart phones are used for more things than just phones, which is almost a secondary function now. So what will GPT AIs really be good for? We will see. I know some people using them to write contracts. Perhaps they will function as personal assistants, or perhaps they will be terrible at that task. We need research and use to sort this out, not a pause.

There may need to be regulation, but I would proceed very carefully here. Some experts warn of the black box problem, and that seems like something that can easily be fixed. Include in the program internal reporting on method, so that it can be reviewed. We also need to sort out the property rights issues, especially with generative art programs. Perhaps artists should have the right to opt out (or not opt in) their art for training data. We may also need quality assurance before AI programs are given control over any system, like approving self-driving cars.
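One way to picture the "internal reporting on method" Novella suggests (a hypothetical sketch; the function names and the rule-based stand-in model are assumptions for illustration, not his code): wrap whatever decision routine is deployed so that every call appends its input, output, and rationale to a log that can be reviewed later.

import json, time

def audited(decision_fn, log_path="decisions.log"):
    """Wrap a decision function so each call is appended to a reviewable log."""
    def wrapper(case):
        result, rationale = decision_fn(case)
        record = {"time": time.time(), "input": case,
                  "output": result, "rationale": rationale}
        with open(log_path, "a") as f:
            f.write(json.dumps(record) + "\n")   # one JSON record per decision
        return result
    return wrapper

@audited
def loan_decision(case):
    # Hypothetical rule-based stand-in for an opaque model.
    approve = case["income"] > 3 * case["debt"]
    return approve, {"rule": "approve if income > 3 * debt"}

print(loan_decision({"income": 50000, "debt": 10000}))   # True, and a log line is written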

I don’t think any of this needs a pause. I think that will happen naturally – there is already talk of diminishing returns from GPT applications in terms of making them more powerful. Tweaking them, making them better, fixing flaws, and finding new applications will now take time. Don’t stop now, while we are in the messy phase. I also think experts need to be very clear – these systems are not conscious, they are nothing like consciousness, and they are not on a path to become conscious.



"Artificial Intelligence" (i.e., Computer Software) Has Nothing to Do with the Concept of "Reason"
In reply to: 胡卜凱 (jamesbkh)


麥芽糖

 
Completely agree.




Artificial Intelligence and the End of Reason -- Alexis Papazoglou


胡卜凱

Perhaps I have been away from school too long, or perhaps Dr. Papazoglou is something of a layman about the nature of computer programs; in any case, I find this article of his, published by the Institute of Art and Ideas, somewhat alarmist. At the very least, it overstates its case and leans toward the sensational.

A program's algorithmic/decision-making procedure is written by its designers or software engineers. They work from a pre-established set of algorithmic/decision rules precisely so that the results (the computer's output) will be consistent. A program or decision procedure that lacks consistency is nothing but nonsense on paper.

If an AI's (a computer's) algorithmic/decision results defy common sense, I can think of at least three possible causes:

a. The designers or software engineers did not make the design thorough enough, failing to consider every possible form the input data could take.
b. The designers or software engineers did not make the design complete enough, failing to consider every permutation and combination of situations that could arise.
c. The computer's hardware is limited, or it cannot process all the input within the allotted time, and the designers or software engineers did not build in an exit mechanism for that situation (a minimal sketch of such a guard follows below).
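Below is a minimal Python sketch of the kind of "exit mechanism" point (c) has in mind (an illustration only; the function names and time budget are assumptions): run the decision routine under a time limit, and fall back to a safe default, here deferring to a human, if it cannot finish.

from concurrent.futures import ThreadPoolExecutor, TimeoutError
import time

_pool = ThreadPoolExecutor(max_workers=1)

def decide_with_fallback(decide, data, timeout_s=0.5, fallback="defer to a human"):
    """Run a decision routine under a time budget; if it cannot finish in time,
    return a safe fallback instead of hanging or emitting a half-baked answer."""
    future = _pool.submit(decide, data)
    try:
        return future.result(timeout=timeout_s)
    except TimeoutError:
        return fallback   # the exit mechanism of point (c)

def slow_model(data):
    time.sleep(2)          # hypothetical model that blows its time budget
    return "approve"

print(decide_with_fallback(slow_model, {"amount": 100}))  # -> 'defer to a human'
# Note: the worker thread still finishes in the background before the process exits.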

My conclusions:

1. Before 2028, if an AI's algorithmic/decision results defy common sense, that is negligence and/or a failing of the design team. (Given my technical knowledge, I am only confident making predictions about the next five years.)
2. "Artificial intelligence" (computer software) has nothing to do with the concept of "reason."


AI and the end of reason

Seeing through black boxes

Alexis Papazoglou, 05/24/23

Life changing decisions are increasingly being outsourced to Artificial Intelligence.  The problem is, AI systems are often black boxes, unable to offer explanations for these decisions. Unless regulators insist that AI needs to be explainable and interpretable, we are about to enter an era of the absurd, writes Alexis Papazoglou.

One of the greatest risks that AI poses to humanity is already here.  It’s an aspect of the technology that is affecting human lives today and, if left unchecked, will forever change the shape of our social reality. This is not about AI triggering an event that might end human history, or about the next version of Chat GPT putting millions of people out of a job, or the welter of deep fakes and misinformation war that’s coming. It’s about a feature of current AI that its own designers openly admit to yet remains astonishing when put into words: no one really understands it.

Of course, AI designers understand at an abstract level what products like Chat GPT do: they are pattern recognizers; they predict the next word, image, or sound in a series; they are trained on large data sets; they adjust their own algorithms as they go along, etc. But take any one result, any output, and even the very people who designed it are unable to explain why an AI program produced the results it did. This is why many advanced AI models, particularly deep learning models, are often described as “black boxes”: we know what goes in, the data they are trained on and the prompts, and we know what comes out but have no idea what really goes on inside them.

Often referred to as the issue of interpretability, this is a significant challenge in the field of AI because it means that the system is unable to explain or make clear the reasons behind its decisions, predictions, or actions. This may seem like an innocuous detail when asking Chat GPT to write a love letter in the style of your favorite poet, but less so when AI systems are used to make decisions that have real-world impacts such as whether you get shortlisted for a job, whether you get a bank loan, or whether you go to prison rather than given bail, all of which are decisions already being outsourced to AI. When there’s no possibility of an explanation behind life-changing decisions, when the reasoning of machines (if any) is as opaque to the humans who make them as to the humans whose lives they alter, we are left with a bare “Computer says no” answer. When we don’t know the reason something’s happened, we can’t argue back, we can’t challenge, and so we enter the Kafkaesque realm of the absurd in which reason and rationality are entirely absent. This is the world we are sleepwalking towards.

If you’re tuned into philosophy debates of the 20th century involving post-modern thinkers like Foucault and Derrida, or even Critical Theory philosophers like Adorno and Horkheimer, perhaps you think the age of “Reason” is already over, or that it never really existed – it was simply another myth of the Enlightenment. The concept of a universal human faculty that dictates rational, logical, upright thought has been criticised and deconstructed many times over. The very idea that Immanuel Kant, a white 18th-century German philosopher, could come up with the rules of universal thought simply through introspection rings all kinds of alarm bells today. Accusations vary from lazy, arm-chair philosophy (though Kant was very aware of that problem), to racism.  But we are currently entering an era that will make even the harshest critics of reason nostalgic for the good old days.

It’s one thing to talk about Reason as a universal, monolithic, philosophical ideal. But small ‘r’ reason, and small ‘r’ rationality is intrinsic to almost all human interaction. Being able to offer justifications for our beliefs and our actions is key to who we are as a social species. It’s something we are taught as children. We have an ability to explain to others why we think what we think and why we do what we do. In other words, we are not black boxes, if asked we can show our reasoning process to others.

Of course, that’s also not completely true. Humans aren’t entirely transparent, even to themselves if we believe Nietzsche and Freud. The real reasons behind our thoughts and actions might be very different from the ones we tell ourselves and give to others. This discrepancy can have deep roots in our own personal history, something that psychoanalysis might attempt to uncover, or it might be due to social reasons, as implied by a concept such as unconscious bias. If that’s the case, one could argue that humans are in fact worse than black boxes – they can offer misleading answers as to why they behave the way they do.

But despite the fact that our reasoning can sometimes be biased, faulty, or misleading, its very existence allows others to engage with it, pick holes in it, challenge us, and ultimately demonstrate why we might be wrong. What is more, being rational means that we can and should adjust our position when given good reason to do so. That’s something black-box AI systems can’t do.

It’s often argued that this opaqueness is an intrinsic feature of AI and can’t be helped. But that’s not the case. Recognizing the issues that arise from outsourcing important decisions to machines without being able to explain how those decisions were arrived at has led to an effort to produce so-called Explainable AI: AI that is capable of explaining the rationale and results of what would otherwise be opaque algorithms.

The currently available versions of Explainable AI, however, are not without their problems. To begin with, the kind of explanations they offer are post-hoc. Explainable AI comes in the form of a second algorithm that attempts to make sense of the results of the black-box algorithm we were interested in to begin with. So even though we can be given an account of how the black-box algorithm arrived at the results it did, this is not in fact the way the actual results were reached. This amounts more to an audit of algorithms, making sure their results are not compatible with problematic bias. This is closer to what is often referred to as interpretable AI: we can make sense of their results and check that they fulfil certain criteria, even if that’s not how they actually arrived at them.
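A minimal sketch of the post-hoc "second algorithm" idea described above (an illustration of the general surrogate/audit approach, not code from the article): query a black-box scorer, then search for the simplest rule, here a single-feature threshold, that best mimics its decisions, so a human can inspect and challenge what the black box appears to be doing.

import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    """Stand-in for an opaque model we can only query, not inspect."""
    return (0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.1 * np.sin(5 * X[:, 2]) > 0).astype(int)

X = rng.normal(size=(2000, 3))
y_bb = black_box(X)                           # the black box's own decisions

# Post-hoc surrogate: the single-feature threshold rule that best mimics them.
best = None
for j in range(X.shape[1]):
    for t in np.quantile(X[:, j], np.linspace(0.05, 0.95, 19)):
        for sign in (1, -1):
            pred = (sign * (X[:, j] - t) > 0).astype(int)
            acc = (pred == y_bb).mean()
            if best is None or acc > best[0]:
                best = (acc, j, t, ">" if sign == 1 else "<")

acc, j, t, op = best
print(f"surrogate rule: feature_{j} {op} {t:.2f}  (agrees with the black box {acc:.0%} of the time)")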

Another problem with Explainable AI is that it is widely believed that the more explainable an algorithm, the less accurate it is. The argument is that more accurate algorithms tend to be more complex, and hence, by definition, harder to explain. This pitches the virtue of explainability against that of accuracy, and it’s not clear that explainability wins such a clash. It’s important to be able to explain why an algorithm predicts a criminal is likely to reoffend, but it’s arguably more important that such an algorithm doesn’t make mistakes in the first place.

However, computer scientist Cynthia Rudin argues that it’s a myth that accuracy and explainability are competing values when it comes to designing algorithms and has demonstrated that the results of black box algorithms can be replicated by much simpler models. Rudin instead suggests that the argument for the epistemological advantage of black boxes hides the fact that the advantage is in fact a monetary one. There is a financial incentive in developing black-box algorithms. The more opaque an algorithm, the easier it is to profit from it and prevent competition from developing something similar. Rudin argues that complexity and opaqueness exist merely as a means to profit from the algorithm that has such features, since their predictions can be replicated by a much simpler, interpretable algorithm that would have been harder to sell given its simplicity. Furthermore, the cost of developing opaque algorithms might in fact be much lower than developing interpretable ones, since the constraint of making an algorithm transparent to its user can make things harder for the algorithm’s designer.

But beyond the crude financial incentives for developing opaque algorithms lies something deeper: the mystique around black box algorithms is itself part of their allure. The idea that only algorithms we can’t understand have the power to reveal hidden patterns in the data – patterns that mere mortals can’t detect – has a powerful pull, almost theological in nature: the idea of a higher intelligence than ours, one we can’t even begin to understand. Even if this is true, the veil of mystique surrounding AI serves to inadvertently stifle the idea that regulation is even possible.

One of the best examples of our reverence of such artificial intelligence, but also of the absurdity that results from outsourcing important questions to machine processes we don’t fully understand, is found in Douglas Adams’ The Hitchhiker’s Guide to the Galaxy. When a supercomputer named Deep Thought is tasked with finding the answer to “the ultimate question of life, the universe and everything” it takes it 7.5 million years to complete the task. When it finally reveals the answer, it is simply “42”.

If we can’t see properly inside the AI black boxes, we should stop asking them important questions and outsourcing high stake decisions to them. Regulators need to emphasize to developers the need to produce Explainable AI systems, ones whose reasoning we can make sense of. The alternative is to live in a social world without any rhyme or reason, of absurd answers we can’t question.



