A Wide-Ranging Discussion of Artificial Intelligence – Opening Post
胡卜凱

Since this April, with the launch of ChatGPT and Bing Chat, a wave of AI mania has swept the internet and the various Line groups. At the time I was mostly occupied with discussing 《我們的反戰聲明》 (our anti-war statement), so I did not join in. I am now reposting a few related articles. Please also see the article 《「人工智慧」研發現況及展望》.

Some people worry that "artificial intelligence" will become a "machine above man," controlling the world or even enslaving humanity. I do not understand AI, and my thinking is simple; so if "artificial intelligence" ever runs amok, I believe there is a simple way to bring it to heel:

Pull the power plug. If that is not forceful enough, blow up the transmission lines and the emergency generators; if that still fails, blow up the power plants.

The Predicament of Artificial Intelligence ---- Steven Novella
胡卜凱

Professor Novella, writing on the NeuroLogica blog, discusses the predicament "artificial intelligence" currently faces and the directions future research might take. A translated summary follows.

The most celebrated achievements of "artificial intelligence" today are the huge language-learning models and the so-called "transformers." The principle behind both is essentially this: after learning the patterns of language use across the entire internet, the program applies statistical methods to predict and simulate an "appropriate" conversation. The predicament, or bottleneck, of this remarkable achievement is that the program cannot think the way a human brain does. When it meets a question that does not follow the script at all, it produces a baffling answer that leaves people scratching their heads.

Similar problems appear in other AI applications, such as "self-driving cars." When it encounters unusual road conditions, or bizarre behavior from other drivers, that was never written into its driving program, a "self-driving car" will make decisions no human driver would make.

If the bottleneck of thinking like a human brain cannot be overcome, artificial intelligence of the "transformer" kind has reached its limit. Breaking through that limit faces a further practical obstacle: training at this scale currently costs on the order of one hundred million US dollars and requires enormous hardware and energy; how long such spending can be sustained is itself a question.

Moreover, continuing the existing research methods will yield only incremental gains, and the prospect that incremental gains can solve the fundamental problem of thinking like a human brain is not promising. Artificial intelligence may need a shift of "research paradigm" before it can achieve a breakthrough.


Have Current AI Reached Their Limit?

Steven Novella, 06/05/23

We are still very much in the hype phase of the latest crop of artificial intelligence applications, specifically the large language models and so-called “transformers” like Chat GPT. Transformers are deep learning models that use self-attention to differentially weight the importance of their input, including any recursive use of their own output. This process has been leveraged with massive training data, basically the size of the internet. This has produced some amazing results, but as time goes by we are starting to see the real limits of this approach.
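To make the mechanism concrete, here is a minimal sketch of a single self-attention step in Python/NumPy. It uses toy sizes and random, untrained weights; every name and dimension is an illustrative assumption, not the actual GPT architecture:

```python
# Toy self-attention: re-weight each token's representation by its relevance
# to every other token, then score candidate next tokens. Untrained weights.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model, vocab = 4, 8, 50            # assumed toy sizes
x = rng.normal(size=(seq_len, d_model))       # stand-in token embeddings

# Projections for queries, keys, values (random here; learned in practice).
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
Q, K, V = x @ Wq, x @ Wk, x @ Wv

# Self-attention: softmax over scaled dot-product scores weights the input.
scores = Q @ K.T / np.sqrt(d_model)
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
attended = weights @ V                        # (seq_len, d_model)

# Map the last position to a distribution over a toy vocabulary; generation
# is just repeating this "predict the next fragment" step.
logits = attended[-1] @ rng.normal(size=(d_model, vocab))
probs = np.exp(logits) / np.exp(logits).sum()
print("most probable next token id:", int(probs.argmax()))
```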

I do see a tendency for a false dichotomy when it comes to AI – with “AI is all hype” at one end of the spectrum and “AI is going to save/destroy the world” at the other. This partly follows the typical pattern of overestimating short-term technological progress on the hype end and underestimating long-term progress on the cynical end. I think reality is somewhere in between – this latest crop of AI applications is extremely powerful, but it has its limits and will not simply improve indefinitely.

I have been using several AI programs, like Chat GPT and Midjourney, extensively, and the limitations become clearer over time. The biggest limitation of these AI apps is that they cannot think the way people do. They mimic the output of thinking, without any true understanding. They do this by being trained on a massive database, and by using essentially statistics to predict what comes next (what word fragment or picture element). This produces some amazing results, and it’s still shocking that it works so well, but it also creates interesting fails. In Midjourney (an AI art production application), for example, when creating the prompts that produce image options, you can’t really explain to the application what you want the way you would to a person. You are trying to find the right triggers, but those triggers are quirky, and are highly dependent on the training data. Using different words to describe the same thing can trigger wildly different results, based upon how those words were used in the training data.

The same is true of Chat GPT. The more unusual your request, the more quirky the result. And the results are not based strictly on reality, just statistically on how words are used. This is why there is a problem with so-called hallucinations. You are getting a statistically probable answer, not a real answer, and quirks in the data will produce quirks in the result. The program has no real understanding of what it’s saying. It’s literally just faking it, mimicking human language by statistically reconstructing what it has learned.
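As a toy illustration of “statistically probable, not real,” here is a bigram model in Python; the corpus and every name are assumptions for the sketch. The model only knows which word tends to follow which, so it emits fluent-looking chains with no model of truth behind them:

```python
# A bigram "language model": counts of which word follows which.
from collections import Counter, defaultdict

corpus = ("the moon orbits the earth . the earth orbits the sun . "
          "the sun orbits the galaxy center .").split()

follows = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    follows[w1][w2] += 1

def continue_text(word, steps=6):
    out = [word]
    for _ in range(steps):
        candidates = follows[out[-1]].most_common(1)
        if not candidates:
            break
        out.append(candidates[0][0])          # most statistically probable
    return " ".join(out)

# Fluent-looking but fact-free: the output reflects word statistics only.
print(continue_text("the"))
```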

We see this limitation in other areas of AI as well. We were supposed to have self-driving cars by 2020, but we may still be a decade away. This is because the AI driving applications get progressively confused by novel or unusual situations. They are particularly bad at predicting what people will do, something that human drivers are much better at. So they are great in controlled or predictable situations, but can act erratically when thrown a curve-ball – not a good feature in any driver.

But here is the big question – are these current limitations of AI solvable through incremental advances of the existing technology, or are they fundamental limitations of that technology that will require new innovations to solve? It is increasingly looking like the latter is closer to the truth.


And this is where we get into overestimating short-term progress. People look at the progress in AI of the last decade, even the last few years, falsely assume this level of progress will continue, and then extrapolate linearly into the future. But experts are saying this is not necessarily the case. There are two main reasons for this. The first is the lack of true understanding that I just described. The second is that we are getting to the practical limits of leveraging deep learning on large data sets. Training Chat GPT, for example, cost $100 million, and required a massive infrastructure of hardware and energy usage. There are some practical limits on how much bigger we can get. Further, there appear to be diminishing returns from going larger. Progress from this approach, therefore, may be close to plateauing. Incremental improvements will likely involve greater efficiency and speed, but may not produce significantly better results.


Getting past the “uncanny valley” of almost-human may not be possible without a completely new approach. Or it may take orders of magnitude more technological advance than it took to get where we are now. There are plenty of examples from past technology that illuminate this issue. High-temperature superconductors went through their hype phase in the 1980s. This produced genuinely useful technology, but everyone assumed we would get to room-temperature superconductors quickly. Here we are almost 40 years later and we may be no closer, because the path was ultimately a dead end. Similarly, the current path of AI technology may not lead to general AI with true understanding. A totally different approach may be necessary.

What I think will happen now is that we will enter a period where the marketplace learns how best to leverage this AI technology, and will learn what it is good at and what it is not good at. There will be incremental improvements. New ways of using transformer technology will emerge. But AI, under this technological paradigm, will not forever improve and will reach its limits, mostly imposed by its lack of true understanding. There may be the perception that the AI hype bubble has burst. But then the underestimating of long term progress will kick in. Researchers will come up with new innovations that take AI to the next level, and the process will likely repeat itself.


What is hard to predict is how long this cycle will take. Billions of dollars are being poured into AI research, and there is an entire industry of very smart people working on this. There is no telling what they will come up with. But the lack of true understanding may prove to be a really hard nut to crack, and may put a ceiling on AI capabilities for decades. We will see.


No Need to Panic over Artificial Intelligence -- Steven Novella
胡卜凱

Professor Novella's article on the NeuroLogica blog discussing "artificial intelligence" should set worriers of Dr. Papazoglou's sort somewhat at ease.

Professor Novella notes that our human patterns of behavior and speech are the product of a process of social construction; most of them are therefore not original insights of our own but follow well-worn tracks. The principle behind chatbots, or the chat function of "generative pretrained transformers" (Chat GPT), is likewise nothing more than a program processing a huge number of examples (more than a billion) and deriving "speech patterns" from analogy and accumulated experience. These work on the same principles by which our everyday human behavior and speech are produced, so naturally they also yield some laughable inferences.

Professor Novella comments on Dr. Papazoglou's concerns and proposes remedies.

Another key point of Professor Novella's piece is that artificial intelligence cannot develop "consciousness." I have not yet read that article closely; interested readers can look it up for themselves.

Professor Novella mentions above AI's capacity to generate information and responses on its own; this shows that some of the comments I made about AI in the opening post are already behind the times. Let this serve as my acknowledgment and correction.


AI – Is It Time to Panic?

Steven Novella, 04/27/23

I’m really excited about the recent developments in artificial intelligence (AI) and their potential as powerful tools. I am also concerned about unintended consequences. As with any really powerful tool, there is the potential for abuse and also disruption. But I also think that the recent calls to pause or shutdown AI development, or concerns that AI may become conscious, are misguided and verging on panic.

I don’t think we should pause AI development. In fact, I think further research and development is exactly what we need. Recent AI developments, such as the generative pretrained transformers (GPT), have created a jump in AI capability. They are yet another demonstration of how powerful narrow AI can be, without the need for general AI or anything approaching consciousness. What I think is freaking many people out is how well GPT-based AIs, trained on billions of examples, can mimic human behavior. I think this has as much to do with how much we underestimate how derivative our own behavior is as with how powerful these AIs are.

Most of our behavior and speech is simply mimicking the culture in which we are embedded. Most of us can get through our day without an original thought, relying entirely on prepackaged phrases and interactions. Perhaps mimicking human speech is a much lower bar than we would like to imagine. But still, these large language models are impressive. They represent a jump in technology, able to produce natural-language interactions with humans that are coherent and grammatically correct. But they remain a little brittle. Spend any significant time chatting with one of these large language models and you will detect how generic and soulless the responses are. It’s like playing a video game – even with really good AI driving the behavior of the NPCs (non-player characters) in the game, they are ultimately predictable and not at all like interacting with an actual sentient being.

There is little doubt that with further development these AI systems will get better. But I think that’s a good thing. Right now they are impressive but flawed. AI driven search engines have a tendency to make stuff up, for example. That is because they are predicting information and generating responses, not just copying or referencing information. The way they make predictions may be ironically hard to predict. They use shortcuts that are really effective, most of the time, but also lead to erroneous results. They are the AI equivalent of heuristics, rules of thumb that sort of work but not really. Figuring out how to prevent errors like this is a good thing.

So what’s the real worry? As far as I can tell from the open letters and articles, it’s just a vague fear that we will lose control, or have already lost control, of these powerful tools. It’s an expression of the precautionary principle, which is fine as far as it goes, but is easily abused or overstated. That’s what I think is happening here.

One concern is that AI will be disruptive in the workplace. I think that ship has sailed, even if it is not out of view yet. I don’t see how a pause will help. The marketplace needs to sort out the ultimate effects.


Another concern is that AI can be abused to spread misinformation. Again, we are already there. However, it is legitimate to be concerned about how much more powerful misinformation will be fueled by more powerful algorithms or deep fakes.

There is concern that AIs will be essentially put in charge of important parts of our society and will fail in unpredictable and catastrophic ways.

And finally there are concerns that AI will become conscious, or at least develop emergent behaviors and abilities we don’t understand. I am not concerned about this. AI is not going to just become conscious. It doesn’t work that way, for reasons I recently explained.

Part of the panic I think is being driven by a common futurism fallacy – to overestimate short term advance while underestimating long term advance. AI systems just had a breakthrough, and that makes it seem like the advances will continue at this pace. But that is rarely the case. AIs are not about to break the world, or become conscious. They are still dumb and brittle in all the ways that narrow AI is dumb.

Here is what I think we do need to do. First, we need to figure out what this new crop of AI programs is good at and what it is not good at. Like any new technology, if you put it in the hands of millions of people they will quickly sort out how it can be used. The obvious applications are not always the best ones. Microwaves were designed for cooking, which they are terrible for, but they excel at heating. Smart phones are used for many more things than phone calls, which are almost a secondary function now. So what will GPT AIs really be good for? We will see. I know some people using them to write contracts. Perhaps they will function as personal assistants, or perhaps they will be terrible at that task. We need research and use to sort this out, not a pause.

There may need to be regulation, but I would proceed very carefully here. Some experts warn of the black box problem, and that seems like something that can easily be fixed. Include in the program internal reporting on method, so that it can be reviewed. We also need to sort out the property rights issues, especially with generative art programs. Perhaps artists should have the right to opt out (or not opt in) their art for training data. We may also need quality assurance before AI programs are given control over any system, like approving self-driving cars.

I don’t think any of this needs a pause. I think that will happen naturally – there is already talk of diminishing returns from GPT applications in terms of making them more powerful. Tweaking them, making them better, fixing flaws, and finding new applications will now take time. Don’t stop now, while we are in the messy phase. I also think experts need to be very clear – these systems are not conscious, they are nothing like consciousness, and they are not on a path to become conscious.



"Artificial intelligence" (i.e., "computer software programs") has nothing to do with the concept of "reason"
    In reply to: 胡卜凱 (jamesbkh)


麥芽糖
Completely agree.




AI and the End of Reason - Alexis Papazoglou
胡卜凱

Perhaps I have been away from school too long; perhaps Dr. Papazoglou is something of a layman about the nature of computer programs. In any case, I find this article of his for IAI News somewhat alarmist. At the least, it overstates its case or verges on sensationalism.

A program's "algorithm/decision procedure" is written by its designers or software engineers. They must have a pre-established set of "algorithm/decision" rules in order to guarantee the consistency of the "algorithm/decision" results (the computer's "output"). A software program, or "algorithm/decision procedure," that lacks consistency is nothing but a sheet of nonsense or gibberish.

If an "artificial intelligence" (a computer) produces "algorithm/decision" results that defy common sense, I can think of at least three possible causes:

a. The designers' or software engineers' original design was not thorough enough, or they failed to consider every possibility in the input data;
b. The original design was not complete enough, or they failed to consider every permutation and combination of situations that might arise;
c. The computer's hardware is limited, or it cannot process all the input within the allotted time, and the program's designers or software engineers did not build in an "exit mechanism" for that situation (see the sketch below).
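As a hedged illustration of point (c), here is a minimal Python sketch of such an "exit mechanism": a decision routine given a time budget, with a safe fallback when it cannot finish. Every name here (the routine, the budget, the fallback value) is an assumption for illustration only:

```python
# Run a decision routine under a deadline; fall back to a safe default
# rather than returning garbage (or nothing) when time runs out.
import concurrent.futures

def decide(sensor_data):
    # Stand-in for an expensive inference/decision routine.
    return sum(sensor_data) / len(sensor_data)

def decide_with_exit(sensor_data, budget_s=0.05, fallback="SLOW_DOWN"):
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(decide, sensor_data)
        try:
            return future.result(timeout=budget_s)   # normal path
        except concurrent.futures.TimeoutError:
            return fallback                          # the exit mechanism

print(decide_with_exit([0.2, 0.4, 0.9]))
```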

My conclusions:

1. Before 2028, if an "artificial intelligence" produces "algorithm/decision" results that defy common sense, that is an oversight and/or fault of the design team. (Given my level of technical knowledge, I am only confident predicting developments within five years.)
2. "Artificial intelligence" (a "computer software program") has nothing to do with the concept of "reason."


AI and the end of reason

Seeing through black boxes

Alexis Papazoglou, 05/24/23

Life-changing decisions are increasingly being outsourced to Artificial Intelligence. The problem is, AI systems are often black boxes, unable to offer explanations for these decisions. Unless regulators insist that AI needs to be explainable and interpretable, we are about to enter an era of the absurd, writes Alexis Papazoglou.

One of the greatest risks that AI poses to humanity is already here. It’s an aspect of the technology that is affecting human lives today and, if left unchecked, will forever change the shape of our social reality. This is not about AI triggering an event that might end human history, or about the next version of Chat GPT putting millions of people out of a job, or the welter of deep fakes and misinformation war that’s coming. It’s about a feature of current AI that its own designers openly admit to, yet it remains astonishing when put into words: no one really understands it.

Of course, AI designers understand at an abstract level what products like Chat GPT do: they are pattern recognizers; they predict the next word, image, or sound in a series; they are trained on large data sets; they adjust their own algorithms as they go along, etc. But take any one result, any output, and even the very people who designed it are unable to explain why an AI program produced the results it did. This is why many advanced AI models, particularly deep learning models, are often described as “black boxes”: we know what goes in, the data they are trained on and the prompts, and we know what comes out but have no idea what really goes on inside them.
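The point is visible even at toy scale. In this hedged sketch (scikit-learn, made-up XOR data; all choices are illustrative assumptions), we can inspect every learned parameter of a tiny network, yet the numbers offer nothing that reads as a reason for any particular output:

```python
# Train a tiny network, then print its "explanation": opaque weight matrices.
from sklearn.neural_network import MLPClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]                              # XOR, a classic toy task

clf = MLPClassifier(hidden_layer_sizes=(4,), solver="lbfgs",
                    max_iter=2000, random_state=1).fit(X, y)

print("output:", clf.predict([[1, 0]]))       # we see what comes out...
for layer in clf.coefs_:                      # ...and what is "inside":
    print(layer)                              # arrays of floats, no reasons
```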

Often referred to as the issue of interpretability, this is a significant challenge in the field of AI because it means that the system is unable to explain or make clear the reasons behind its decisions, predictions, or actions. This may seem like an innocuous detail when asking Chat GPT to write a love letter in the style of your favorite poet, but less so when AI systems are used to make decisions that have real-world impacts such as whether you get shortlisted for a job, whether you get a bank loan, or whether you go to prison rather than being given bail, all of which are decisions already being outsourced to AI. When there’s no possibility of an explanation behind life-changing decisions, when the reasoning of machines (if any) is as opaque to the humans who make them as to the humans whose lives they alter, we are left with a bare “Computer says no” answer. When we don’t know the reason something’s happened, we can’t argue back, we can’t challenge, and so we enter the Kafkaesque realm of the absurd in which reason and rationality are entirely absent. This is the world we are sleepwalking towards.

If you’re tuned into philosophy debates of the 20th century involving post-modern thinkers like Foucault and Derrida, or even Critical Theory philosophers like Adorno and Horkheimer, perhaps you think the age of “Reason” is already over, or that it never really existed – it was simply another myth of the Enlightenment. The concept of a universal human faculty that dictates rational, logical, upright thought has been criticised and deconstructed many times over. The very idea that Immanuel Kant, a white 18th-century German philosopher, could come up with the rules of universal thought simply through introspection rings all kinds of alarm bells today. Accusations vary from lazy, arm-chair philosophy (though Kant was very aware of that problem), to racism.  But we are currently entering an era that will make even the harshest critics of reason nostalgic for the good old days.

It’s one thing to talk about Reason as a universal, monolithic, philosophical ideal. But small ‘r’ reason, and small ‘r’ rationality is intrinsic to almost all human interaction. Being able to offer justifications for our beliefs and our actions is key to who we are as a social species. It’s something we are taught as children. We have an ability to explain to others why we think what we think and why we do what we do. In other words, we are not black boxes, if asked we can show our reasoning process to others.

Of course, that’s also not completely true. Humans aren’t entirely transparent, even to themselves if we believe Nietzsche and Freud. The real reasons behind our thoughts and actions might be very different from the ones we tell ourselves and give to others. This discrepancy can have deep roots in our own personal history, something that psychoanalysis might attempt to uncover, or it might be due to social reasons, as implied by a concept such as unconscious bias. If that’s the case, one could argue that humans are in fact worse than black boxes – they can offer misleading answers as to why they behave the way they do.

But despite the fact that our reasoning can sometimes be biased, faulty, or misleading, its very existence allows others to engage with it, pick holes in it, challenge us, and ultimately demonstrate why we might be wrong. What is more, being rational means that we can and should adjust our position when given good reason to do so. That’s something black-box AI systems can’t do.

It’s often argued that this opaqueness is an intrinsic feature of AI and can’t be helped. But that’s not the case. Recognizing the issues that arise from outsourcing important decisions to machines without being able to explain how those decisions were arrived at has led to an effort to produce so-called Explainable AI: AI that is capable of explaining the rationale and results of what would otherwise be opaque algorithms.

The currently available versions of Explainable AI, however, are not without their problems. To begin with, the kind of explanations they offer are post hoc. Explainable AI comes in the form of a second algorithm that attempts to make sense of the results of the black-box algorithm that interested us in the first place. So even though we can be given an account of how the black-box algorithm arrived at its results, this is not in fact the way the actual results were reached. This amounts more to an audit of algorithms, making sure their results are not compatible with problematic bias. This is closer to what is often referred to as interpretable AI: we can make sense of their results and check that they fulfil certain criteria, even if that’s not how they actually arrived at them.
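A minimal sketch of that post-hoc scheme, assuming scikit-learn and synthetic data (all choices here are illustrative, not any specific Explainable AI product): a readable tree is fitted to the black box's outputs and then offered as an after-the-fact account of them:

```python
# Post-hoc "explanation": a second, interpretable model imitates the first.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The surrogate learns the black box's decisions, not the ground truth.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

print(export_text(surrogate))                 # human-readable rules
agreement = (surrogate.predict(X) == black_box.predict(X)).mean()
print("fidelity to the black box:", agreement)
```

Note that the tree accounts for the black box only up to that fidelity score, which is precisely the gap described above.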

Another problem with Explainable AI is that it is widely believed that the more explainable an algorithm, the less accurate it is. The argument is that more accurate algorithms tend to be more complex, and hence, by definition, harder to explain. This pitches the virtue of explainability against that of accuracy, and it’s not clear that explainability wins such a clash. It’s important to be able to explain why an algorithm predicts a criminal is likely to reoffend, but it’s arguably more important that such an algorithm doesn’t make mistakes in the first place.

However, computer scientist Cynthia Rudin argues that it’s a myth that accuracy and explainability are competing values when it comes to designing algorithms, and she has demonstrated that the results of black-box algorithms can be replicated by much simpler models. Rudin instead suggests that the argument for the epistemological advantage of black boxes hides the fact that the advantage is in fact a monetary one. There is a financial incentive to develop black-box algorithms: the more opaque an algorithm, the easier it is to profit from it and prevent competition from developing something similar. Rudin argues that complexity and opaqueness exist merely as a means to profit from the algorithm that has such features, since its predictions can be replicated by a much simpler, interpretable algorithm that would have been harder to sell given its simplicity. Furthermore, the cost of developing opaque algorithms might in fact be much lower than that of developing interpretable ones, since the constraint of making an algorithm transparent to its user can make things harder for the algorithm’s designer.
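Rudin's claim, as summarized here, can be sketched in a few lines (synthetic data and assumed model choices; results will vary by task): train an interpretable model directly on the data and compare its accuracy with the opaque one's:

```python
# Opaque vs. interpretable model on the same task.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, n_informative=5,
                           random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

opaque = RandomForestClassifier(n_estimators=300, random_state=1).fit(X_tr, y_tr)
simple = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

print("black-box accuracy:    ", opaque.score(X_te, y_te))
print("interpretable accuracy:", simple.score(X_te, y_te))
print("readable coefficients: ", simple.coef_.round(2))   # inspectable
```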

But beyond the crude financial incentives for developing opaque algorithms lies something deeper: the mystique around black box algorithms is itself part of their allure. The idea that only algorithms we can’t understand have the power to reveal hidden patterns in the data – patterns that mere mortals can’t detect – has a powerful pull, almost theological in nature: the idea of a higher intelligence than ours, one we can’t even begin to understand. Even if this is true, the veil of mystique surrounding AI serves to inadvertently stifle the idea that regulation is even possible.

One of the best examples of our reverence for such artificial intelligence, but also of the absurdity that results from outsourcing important questions to machine processes we don’t fully understand, is found in Douglas Adams’ The Hitchhiker’s Guide to the Galaxy. When a supercomputer named Deep Thought is tasked with finding the answer to “the ultimate question of life, the universe and everything,” it takes 7.5 million years to complete the task. When it finally reveals the answer, it is simply “42”.

If we can’t see properly inside the AI black boxes, we should stop asking them important questions and outsourcing high-stakes decisions to them. Regulators need to emphasize to developers the need to produce Explainable AI systems, ones whose reasoning we can make sense of. The alternative is to live in a social world without any rhyme or reason, of absurd answers we can’t question.



