A Wide-Ranging Discussion of Artificial Intelligence – Opening Post

胡卜凱

Since April, the launch of ChatGPT and Bing Chat has set off a wave of AI mania online and across LINE groups. At the time I was busy discussing 《我們的反戰聲明》 (Our Anti-War Statement) and did not join the frenzy. I am now reposting several related articles here. Please also see the article 《「人工智慧」研發現況及展望》 (The Current State and Prospects of AI Research and Development).

Some people worry that AI will become a "machine above humanity," controlling the world or even enslaving humankind. I do not understand AI, and my thinking is simple; so if AI ever runs amok, I believe there is a simple way to bring it to heel:

Pull the power plug. If that is not enough, blow up the transmission lines and the emergency generators; if that still fails, blow up the power plants.

Alibaba's AI Model – Ben Turner

I have had a cold for nearly three weeks; after seeing the doctor and picking up medicine, I have been groggy and listless every day, so I was unable to report on the DeepSeek news as it broke. Having just seen Alibaba's announcement, I am squeezing onto the bandwagon first and will circle back to DeepSeek later. Please also see this related analysis.

Alibaba claims its AI model trounces DeepSeek and OpenAI competitors

By Ben Turner, 01/30/25

Chinese cloud giant Alibaba says that its Qwen2.5-Max artificial intelligence model outperformed its rivals at OpenAI, Meta and DeepSeek.

A smartphone displays Alibaba's AI-powered assistant Tongyi. (Image credit: Getty Images; see the original page for the photo.)

Chinese tech company Alibaba has unveiled a new artificial intelligence (AI) model that it claims outperforms its rivals at OpenAI, Meta and DeepSeek.

The announcement of the Qwen2.5-Max model yesterday (Jan. 29) is the second major AI announcement from China this week, after DeepSeek's R1 open-weight model took the world by storm following claims that it performs better and is more cost-effective than its American competitors.

Now, Alibaba claims that Qwen 2.5-Max, which is also partly open-source, is even more impressive — surpassing a number of rival models in various tests run by the company.


"In benchmark tests such as Arena-Hard, LiveBench, LiveCodeBench, GPQA-Diamond and MMLU-Pro, Qwen2.5-Max is on par with [Anthropic's] Claude-3.5-Sonnet, and almost completely surpasses [OpenAI's] GPT-4o, DeepSeek-V3 and [Meta's] Llama-3.1-405B," Alibaba representatives wrote Jan. 28 in a translated statement on WeChat.

The news comes at an uncertain time for American tech companies. Following DeepSeek's announcement, the AI chatbot quickly overtook ChatGPT to become the most downloaded free app in Apple's U.S. App Store.

The company's claims that it achieved better results, while training and running its model at a fraction of the cost, sent shockwaves around the world — wiping $1 trillion from the valuations of leading tech companies such as Nvidia, whose loss of $589 billion was the biggest one-day market loss in U.S. history.

DeepSeek's success has also led to a domestic battle among China's top AI companies, triggering TikTok owner ByteDance to update its Doubao model and likely prompting Alibaba to announce its own new model.

China's growing competitiveness in AI has become a source of panic for its U.S. counterparts, with OpenAI claiming today (Jan. 29) that DeepSeek plagiarized parts of OpenAI's models to train its own.


Ben Turner is a U.K. based staff writer at Live Science. He covers physics and astronomy, among other topics like tech and climate change. He graduated from University College London with a degree in particle physics before training as a journalist. When he's not writing, Ben enjoys reading literature, playing the guitar and embarrassing himself with chess.



An Introduction to Embodied AI (現境性人工智能) – P. R. Allison

"現境性" is the term I use to translate the word "embodied" in "embodied AI"; "現境" is short for "現實情境" (real-world situation). The term was originally Buddhist; I came across it more than 40 years ago in a classic of the Yogācāra ("Consciousness-Only") school, though I can no longer recall which book. After reading the article below, you should understand why I translated it this way.

What is embodied AI?

Embodied AI enables robots and autonomous drones to interact with the real world, but how does it work?

Peter Ray Allison, 12/30/24

(Image credit: Yuichiro Chino/Getty Images; see the original page for the image.)

Artificial intelligence (AI) comes in many forms, from pattern recognition systems to generative AI. However, there's another type of AI that can respond almost instantly to real-world data: embodied AI.

But what exactly is this technology, and how does it work?

Embodied AI typically combines sensors with machine learning to respond to real-world data. Examples include autonomous drones, self-driving cars and factory automation. Robotic vacuum cleaners and lawn mowers use a simplified form of embodied AI.

These autonomous systems use AI to learn to navigate obstacles in the physical world. Most embodied AI uses an algorithmically encoded map that, in many ways, is akin to the mental map of London's labyrinthine network of roads and landmarks used by the city's taxi drivers. In fact, research on how London's taxi drivers determine a route has been used to inform the development of such embodied systems.
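The idea of an "algorithmically encoded map" can be made concrete with a small sketch. The Python snippet below is not from the article: the landmark names and connections are invented. It stores the environment as a graph of landmarks and plans a route over it with breadth-first search, roughly the way a taxi driver consults a mental map.

```python
# Minimal sketch of an algorithmically encoded map: landmarks as graph nodes,
# roads as edges, and route planning as a breadth-first search over the graph.
from collections import deque

ROAD_MAP = {
    "depot":      ["junction_a"],
    "junction_a": ["depot", "junction_b", "market"],
    "junction_b": ["junction_a", "station"],
    "market":     ["junction_a", "station"],
    "station":    ["junction_b", "market"],
}

def plan_route(start, goal, road_map=ROAD_MAP):
    """Return the shortest list of landmarks from start to goal, or None."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in road_map[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None  # no route exists

print(plan_route("depot", "station"))  # -> ['depot', 'junction_a', 'junction_b', 'station']
```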

Some of these systems also incorporate the type of embodied, group intelligence found in swarms of insects, flocks of birds, or herds of animals. These groups synchronize their movements subconsciously. Mimicking this behavior is a useful strategy for developing a network of drones or warehouse vehicles that are controlled by an embodied AI.
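A hedged sketch of the flocking idea described above: each drone nudges its velocity toward the average position (cohesion) and average heading (alignment) of nearby neighbours, so coordinated group motion emerges without a central controller. The parameters and setup are illustrative assumptions, not anything specified in the article.

```python
# Simple cohesion + alignment swarm rule for a set of 2D "drones".
import numpy as np

def flocking_step(positions, velocities, neighbour_radius=5.0,
                  cohesion_w=0.01, alignment_w=0.05, dt=0.1):
    """One synchronous update of positions and velocities."""
    new_velocities = velocities.copy()
    for i, (pos, vel) in enumerate(zip(positions, velocities)):
        dists = np.linalg.norm(positions - pos, axis=1)
        mask = (dists < neighbour_radius) & (dists > 0)
        if not mask.any():
            continue
        # Cohesion: steer toward the centre of nearby drones.
        cohesion = positions[mask].mean(axis=0) - pos
        # Alignment: match the average heading of nearby drones.
        alignment = velocities[mask].mean(axis=0) - vel
        new_velocities[i] = vel + cohesion_w * cohesion + alignment_w * alignment
    return positions + new_velocities * dt, new_velocities

# Example: 20 drones with random starting positions and headings.
rng = np.random.default_rng(0)
pos = rng.uniform(0, 10, size=(20, 2))
vel = rng.normal(0, 1, size=(20, 2))
for _ in range(100):
    pos, vel = flocking_step(pos, vel)
print("velocity spread after 100 steps:", vel.std(axis=0))  # headings converge
```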

History of embodied AI

The development of embodied AI began in the 1950s, with the cybernetic tortoise, which was created by William Grey Walter at the Burden Neurological Institute in the U.K. But it would take decades for embodied AI to come into its own. Whereas cognitive and generative AI learn from large language models, embodied AI learns from its experiences in the physical world, just as humans react to what they see and hear.

However, the sensory inputs of embodied AI are quite different from human senses. Embodied AI may detect X-rays, ultraviolet and infrared light, magnetic fields or GPS data. Computer vision algorithms can then use this sensory data to identify objects and respond to them.
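As a rough illustration of turning raw sensor data into a response (the article names no specific sensor or algorithm, so the depth-image format and threshold below are assumptions), here is a minimal Python sketch:

```python
# Threshold a depth image from a range sensor to decide whether an obstacle
# is close enough to require evasive action.
import numpy as np

def detect_obstacle(depth_image_m, stop_distance_m=0.5):
    """Return (obstacle_found, pixel_location) for the nearest too-close point."""
    too_close = depth_image_m < stop_distance_m
    if not too_close.any():
        return False, None
    nearest = np.unravel_index(np.argmin(depth_image_m), depth_image_m.shape)
    return True, nearest

# Simulated 4x4 depth frame in metres; one pixel is 0.3 m away.
frame = np.full((4, 4), 2.0)
frame[1, 2] = 0.3
found, where = detect_obstacle(frame)
action = "brake" if found else "continue"
print(found, where, action)  # True (1, 2) brake
```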

Building a world model

The core element of an embodied AI is its world model, which is designed for its operating environment. This world model is similar to our own understanding of the surrounding environment.

The world model is supported by different learning approaches. One example is reinforcement learning, which uses a policy-based approach to determine a route — for instance, with rules like "always do X when encountering Y."
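A minimal sketch of that "always do X when encountering Y" idea, with a toy set of observations and actions that do not come from the article: once learned, the policy reduces to a mapping from observed situations to actions.

```python
# A fixed policy table standing in for the output of a trained RL agent.
POLICY = {
    "obstacle_ahead": "turn_left",
    "path_clear": "move_forward",
    "low_battery": "return_to_dock",
}

def act(observation: str) -> str:
    """Return the action the policy prescribes for an observation."""
    return POLICY.get(observation, "stop")  # stop on anything unrecognised

print(act("obstacle_ahead"))  # -> turn_left
```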

Another is active inference, which is modeled on how the human brain operates. These models continuously take in data from the environment and update the world model based on this real-time stream - similar to how we react based on what we see and hear. In contrast, some other AI models do not evolve in real time.
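To make the real-time update loop concrete, here is a hedged sketch in which a simple Bayesian filter stands in for a full active-inference model; the states, sensor readings, and likelihoods are invented for illustration. The belief about the world is revised with every new reading from the stream.

```python
# Continuously revise a belief over world states from a stream of readings.
states = ["clear", "blocked"]
belief = {"clear": 0.5, "blocked": 0.5}

# P(sensor reading | true state) -- assumed sensor model.
likelihood = {
    ("ping_near", "blocked"): 0.9, ("ping_near", "clear"): 0.2,
    ("ping_far", "blocked"): 0.1,  ("ping_far", "clear"): 0.8,
}

def update_belief(belief, reading):
    """Bayes update of the world model from one real-time sensor reading."""
    unnormalised = {s: likelihood[(reading, s)] * belief[s] for s in states}
    total = sum(unnormalised.values())
    return {s: p / total for s, p in unnormalised.items()}

for reading in ["ping_far", "ping_near", "ping_near"]:  # simulated stream
    belief = update_belief(belief, reading)
    print(reading, belief)
```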

Active inference begins with a basic level of understanding of the environment, but it can evolve rapidly. As such, any autonomous vehicle that relies on active inference needs extensive training to be safely deployed on the roads.

Embodied AI could also help chatbots provide a better customer experience by reading a customer's emotional state and adapting their responses accordingly.

Although embodied AI systems are still in their early stages, research is evolving rapidly. Improvements in generative AI will naturally inform the development of embodied AI. Embodied AI will also benefit from improvements in the accuracy and availability of the sensors it uses to determine its surroundings.


Peter Ray Allison is a degree-qualified engineer and experienced freelance journalist, specializing in science, technology and culture. He writes for a variety of publications, including the BBC, Computer Weekly, IT Pro, the Guardian and the Independent. He has worked as a technology journalist for over ten years. Peter has a degree in computer-aided engineering from Sheffield Hallam University. He has worked in both the engineering and architecture sectors, with various companies, including Rolls-Royce and Arup.


Economists on ChatGPT – D. Acemoglu/S. Johnson

This is an analysis of ChatGPT, with recommendations, written in 2023 by two of the 2024 Nobel laureates in economics.


What’s Wrong with ChatGPT?

Daron Acemoglu/Simon Johnson, 02/06/23

Artificial intelligence is being designed and deployed by corporate America in ways that will disempower and displace workers and degrade the consumer experience, ultimately disappointing most investors. Yet economic history shows that it does not have to be this way.

CAMBRIDGE – Microsoft is reportedly delighted with OpenAI's ChatGPT, a natural-language artificial-intelligence program capable of generating text that reads as if a human wrote it. Taking advantage of easy access to finance over the past decade, companies and venture-capital funds invested billions in an AI arms race, resulting in a technology that can now be used to replace humans across a wider range of tasks. This could be a disaster not only for workers, but also for consumers and even investors.

The problem for workers is obvious: there will be fewer jobs requiring strong communication skills, and thus fewer positions that pay well. Cleaners, drivers, and some other manual workers will keep their jobs, but everyone else should be afraid. Consider customer service. Instead of hiring people to interact with customers, companies will increasingly rely on generative AIs like ChatGPT to placate angry callers with clever and soothing words. Fewer entry-level jobs will mean fewer opportunities to start a career – continuing a trend established by earlier digital technologies.

Consumers, too, will suffer. Chatbots may be fine for handling entirely routine questions, but it is not routine questions that generally lead people to call customer service. When there is a real issue – like an airline grinding to a halt or a pipe bursting in your basement – you want to talk to a well-qualified, empathetic professional with the ability to marshal resources and organize timely solutions. You do not want to be put on hold for eight hours, but nor do you want to speak immediately to an eloquent but ultimately useless chatbot.

Of course, in an ideal world, new companies offering better customer service would emerge and seize market share. But in the real world, many barriers to entry make it difficult for new firms to expand quickly. You may love your local bakery or a friendly airline representative or a particular doctor, but think of what it takes to create a new grocery store chain, a new airline, or a new hospital. Existing firms have big advantages, including important forms of market power that allow them to choose which available technologies to adopt and to use them however they want.

More fundamentally, new companies offering better products and services generally require new technologies, such as digital tools that can make workers more effective and help create better customized services for the company’s clientele. But, since AI investments are putting automation first, these kinds of tools are not even being created.

Investors in publicly traded companies will also lose out in the age of ChatGPT. These companies could be improving the services they offer to consumers by investing in new technologies to make their workforces more productive and capable of performing new tasks, and by providing plenty of training for upgrading employees' skills. But they are not doing so. Many executives remain obsessed with a strategy that ultimately will come to be remembered as self-defeating: paring back employment and keeping wages as low as possible. Executives pursue these cuts because it is what the smart kids (analysts, consultants, finance professors, other executives) say they should do, and because Wall Street judges their performance relative to other companies that are also squeezing workers as hard as they can.

AI is also poised to amplify the deleterious social effects of private equity. Already, vast fortunes can be made by buying up companies, loading them with debt while going private, and then hollowing out their workforces – all while paying high dividends to the new owners. Now, ChatGPT and other AI technologies will make it even easier to squeeze workers as much as possible through workplace surveillance, tougher working conditions, zero-hours contracts, and so forth.

These trends all have dire implications for Americans' spending power – the engine of the US economy. But as we explain in our forthcoming book, Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity, a sputtering economic engine need not lie in our future. After all, the introduction of new machinery and technological breakthroughs has had very different consequences in the past.

Over a century ago, Henry Ford revolutionized car production by investing heavily in new electrical machinery and developing a more efficient assembly line. Yes, these new technologies brought some amount of automation, as centralized electricity sources enabled machines to perform more tasks more efficiently. But the reorganization of the factory that accompanied electrification also created new tasks for workers and thousands of new jobs with higher wages, bolstering shared prosperity. Ford led the way in demonstrating that creating human-complementary technology is good business.

Today, AI offers an opportunity to do likewise. AI-powered digital tools can be used to help nurses, teachers, and customer-service representatives understand what they are dealing with and what would help improve outcomes for patients, students, and consumers. The predictive power of algorithms could be harnessed to help people, rather than to replace them. If AIs are used to offer recommendations for human consideration, the ability to use such recommendations wisely will be recognized as a valuable human skill. Other AI applications can facilitate better allocation of workers to tasks, or even create completely new markets (think of Airbnb or rideshare apps).

Unfortunately, these opportunities are being neglected, because most US tech leaders continue to spend heavily to develop software that can do what humans already do just fine. They know that they can cash out easily by selling their products to corporations that have developed tunnel vision. Everyone is focused on leveraging AI to cut labor costs, with little concern not only for the immediate customer experience but also for the future of American spending power.

Ford understood that it made no sense to mass-produce cars if the masses couldn’t afford to buy them. Today’s corporate titans, by contrast, are using the new technologies in ways that will ruin our collective future.


Daron Acemoglu, Institute Professor of Economics at MIT, is a co-author (with James A. Robinson) of Why Nations Fail: The Origins of Power, Prosperity and Poverty (Profile, 2019) and a co-author (with Simon Johnson) of Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity (PublicAffairs, 2023).

Simon Johnson, a former chief economist at the International Monetary Fund, is a professor at the MIT Sloan School of Management, a co-chair of the COVID-19 Policy Alliance, and a co-chair of the CFA Institute Systemic Risk Council. He is the co-author (with Daron Acemoglu) of Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity (PublicAffairs, 2023).

An Economist on AI and "Creative Destruction" – D. Acemoglu

This is a commentary on AI and "creative destruction" published in April 2024 by Professor Daron Acemoglu, one of the 2024 Nobel laureates in economics.


Are We Ready for AI Creative Destruction?

Daron Acemoglu, 04/09/24

Rather than blindly trusting elegant but simplistic theories about the nature of historical change, we urgently need to focus on how the next wave of disruptive innovation could affect our social, democratic, and civic institutions. Leaving it to tech entrepreneurs risks more destruction – and less creation – than we bargained for.

BOSTON – The ancient Chinese concept of yin and yang attests to humans' tendency to see patterns of interlocked opposites in the world around us, a predilection that has lent itself to various theories of natural cycles in social and economic phenomena. Just as the great medieval Arab philosopher Ibn Khaldun saw the path of an empire's eventual collapse imprinted in its ascent, the twentieth-century economist Nikolai Kondratiev postulated that the modern global economy moves in "long wave" super-cycles.

But no theory has been as popular as the one – going back to Karl Marx – that links the destruction of one set of productive relations to the creation of another. Writing in 1913, the German economist Werner Sombart observed that, "from destruction a new spirit of creation arises."

It was the Austrian economist Joseph Schumpeter who popularized and broadened the scope of the argument that new innovations perennially replace previously dominant technologies and topple older industrial behemoths. Many social scientists built on Schumpeter's idea of "creative destruction" to explain the innovation process and its broader implications. These analyses also identified tensions inherent in the concept. For example, does destruction bring creation, or is it an inevitable by-product of creation? More to the point, is all destruction inevitable?

In economics, Schumpeter's ideas formed the bedrock of the theory of economic growth, the product cycle, and international trade. But two related developments have catapulted the concept of creative destruction to an even higher pedestal over the past several decades. The first was the runaway success of Harvard Business School professor Clayton Christensen's 1997 book, The Innovator's Dilemma, which advanced the idea of "disruptive innovation." Disruptive innovations come from new firms pursuing business models that incumbents have deemed unattractive, often because they appeal only to the lower-end of the market. Since incumbents tend to remain committed to their own business models, they miss "the next great wave" of technology.

The second development was the rise of Silicon Valley, where tech entrepreneurs made "disruption" an explicit strategy from the start. Google set out to disrupt the business of internet search, and Amazon set out to disrupt the business of book selling, followed by most other areas of retail. Then came Facebook with its mantra of "move fast and break things." Social media transformed our social relations and how we communicate in one fell swoop, epitomizing both creative destruction and disruption at the same time.

The intellectual allure of these theories lies in transforming destruction and disruption from apparent costs into obvious benefits. But while Schumpeter recognized that the destruction process is painful and potentially dangerous, today's disruptive innovators see only win-wins. Hence, the venture capitalist and technologist Marc Andreessen writes: "Productivity growth, powered by technology, is the main driver of economic growth, wage growth, and the creation of new industries and new jobs, as people and capital are continuously freed to do more important, valuable things than in the past."

Now that hopes for artificial intelligence exceed even those of Facebook in its early days, we would do well to re-evaluate these ideas. Clearly, innovation is sometimes disruptive by nature, and the process of creation can be as destructive as Schumpeter envisaged it. History shows that unrelenting resistance to creative destruction leads to economic stagnation. But it doesn't follow that destruction ought to be celebrated. Instead, we should view it as a cost that can sometimes be reduced, not least by building better institutions to help those who lose out, and sometimes by managing the process of technological change.

Consider globalization. While it creates important economic benefits, it also destroys firms, jobs, and livelihoods. If our instinct is to celebrate those costs, it may not occur to us to try to mitigate them. And yet, there is much more that we could do to help adversely affected firms (which can invest to branch out into new areas), assist workers who lose their jobs (through retraining and a safety net), and support devastated communities.

Failure to recognize these nuances opened the door for the excessive creative destruction and disruption that Silicon Valley has pushed on us these past few decades. Looking ahead, three principles should guide our approach, especially when it comes to AI.

First, as with globalization, helping those who are adversely affected is of the utmost importance and must not be an afterthought.

Second, we should not assume that disruption is inevitable. As I have argued previously, AI need not lead to mass job destruction. If those designing and deploying it do so only with automation in mind (as many Silicon Valley titans wish), the technology will create only more misery for working people. But it could take more attractive alternative paths. After all, AI has immense potential to make workers more productive, such as by providing them with better information and equipping them to perform more complex tasks.

The worship of creative destruction must not blind us to these more promising scenarios, or to the distorted path we are currently on. If the market does not channel innovative energy in a socially beneficial direction, public policy and democratic processes can do much to redirect it. Just as many countries have already introduced subsidies to encourage more innovation in renewable energy, more can be done to mitigate the harms from AI and other digital technologies.

Third, we must remember that existing social and economic relations are exceedingly complex. When they are disrupted, all kinds of unforeseen consequences can follow. Facebook and other social-media platforms did not set out to poison our public discourse with extremism, misinformation, and addiction. But in their rush to disrupt how we communicate, they followed their own principle of moving fast and then seeking forgiveness.

We urgently need to pay greater attention to how the next wave of disruptive innovation could affect our social, democratic, and civic institutions. Getting the most out of creative destruction requires a proper balance between pro-innovation public policies and democratic input. If we leave it to tech entrepreneurs to safeguard our institutions, we risk more destruction than we bargained for.


Daron Acemoglu, Institute Professor of Economics at MIT, is a co-author (with James A. Robinson) of Why Nations Fail: The Origins of Power, Prosperity and Poverty (Profile, 2019) and a co-author (with Simon Johnson) of Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity (PublicAffairs, 2023). 

Economists on the Prospects of AI – D. Acemoglu/S. Johnson

These are recommendations on the prospects of AI from two of the 2024 Nobel laureates in economics.


History Already Tells Us the Future of AI

Daron Acemoglu/Simon Johnson, 04/23/24

David Ricardo, one of the founders of modern economics in the early 1800s, understood that machines are not necessarily good or bad. His insight that whether they destroy or create jobs all depends on how we deploy them, and on who makes those choices, could not be more relevant today.

BOSTON – Artificial intelligence and the threat that it poses to good jobs would seem to be an entirely new problem. But we can find useful ideas about how to respond in the work of David Ricardo, a founder of modern economics who observed the British Industrial Revolution firsthand. The evolution of his thinking, including some points that he missed, holds many helpful lessons for us today.

Private-sector tech leaders promise us a brighter future of less stress at work, fewer boring meetings, more leisure time, and perhaps even a universal basic income. But should we believe them? Many people may simply lose what they regarded as a good job – forcing them to find work at a lower wage. After all, algorithms are already taking over tasks that currently require people's time and attention.

In his seminal 1817 work, On the Principles of Political Economy and Taxation, Ricardo took a positive view of the machinery that had already transformed the spinning of cotton. Following the conventional wisdom of the time, he famously told the House of Commons that "machinery did not lessen the demand for labour."

Since the 1770s, the automation of spinning had reduced the price of spun cotton and increased demand for the complementary task of weaving spun cotton into finished cloth. And since almost all weaving was done by hand prior to the 1810s, this explosion in demand helped turn cotton handweaving into a high-paying artisanal job employing several hundred thousand British men (including many displaced, pre-industrial spinners). This early, positive experience with automation likely informed Ricardo's initially optimistic view.

But the development of large-scale machinery did not stop with spinning. Soon, steam-powered looms were being deployed in cotton-weaving factories. No longer would artisanal "hand weavers" be making good money working five days per week from their own cottages. Instead, they would struggle to feed their families while working much longer hours under strict discipline in factories.

As anxiety and protests spread across northern England, Ricardo changed his mind. In the third edition of his influential book, published in 1821, he added a new chapter, "On Machinery," where he hit the nail on the head: "If machinery could do all the work that labour now does, there would be no demand for labour." The same concern applies today. Algorithms' takeover of tasks previously performed by workers will not be good news for displaced workers unless they can find well-paid new tasks.

Most of the struggling handweaving artisans during the 1810s and 1820s did not go to work in the new weaving factories, because the machine looms did not need many workers. Whereas the automation of spinning had created opportunities for more people to work as weavers, the automation of weaving did not create compensatory labor demand in other sectors. The British economy overall did not create enough other well-paying new jobs, at least not until railways took off in the 1830s. With few other options, hundreds of thousands of hand weavers remained in the occupation, even as wages fell by more than half.

Another key problem, albeit not one that Ricardo himself dwelled upon, was that working in harsh factory conditions – becoming a small cog in the employer-controlled “satanic mills” of the early 1800s – was unappealing to handloom weavers. Many artisanal weavers had operated as independent businesspeople and entrepreneurs who bought spun cotton and then sold their woven products on the market. Obviously, they were not enthusiastic about submitting to longer hours, more discipline, less autonomy, and typically lower wages (at least compared to the heyday of handloom weaving). In testimony collected by various Royal Commissions, weavers spoke bitterly about their refusal to accept such working conditions, or about how horrible their lives became when they were forced (by the lack of other options) into such jobs.

Today's generative AI has huge potential and has already chalked up some impressive achievements, including in scientific research. It could well be used to help workers become more informed, more productive, more independent, and more versatile. Unfortunately, the tech industry seems to have other uses in mind. As we explain in Power and Progress, the big companies developing and deploying AI overwhelmingly favor automation (replacing people) over augmentation (making people more productive).

That means we face the risk of excessive automation: many workers will be displaced, and those who remain employed will be subjected to increasingly demeaning forms of surveillance and control. The principle of “automate first and ask questions later” requires – and thus further encourages – the collection of massive amounts of information in the workplace and across all parts of society, calling into question how much privacy will remain.

Such a future is not inevitable. Regulation of data collection would help protect privacy, and stronger workplace rules could prevent the worst aspects of AI-based surveillance. But the more fundamental task, Ricardo would remind us, is to change the overall narrative about AI. Arguably, the most important lesson from his life and work is that machines are not necessarily good or bad. Whether they destroy or create jobs depends on how we deploy them, and on who makes those choices. In Ricardo’s time, a small cadre of factory owners decided, and those decisions centered on automation and squeezing workers as hard as possible.

Today, an even smaller cadre of tech leaders seem to be taking the same path. But focusing on creating new opportunities, new tasks for humans, and respect for all individuals would ensure much better outcomes. It is still possible to have pro-worker AI, but only if we can change the direction of innovation in the tech industry and introduce new regulations and institutions.

As in Ricardo’s day, it would be naive to trust in the benevolence of business and tech leaders. It took major political reforms to create genuine democracy, to legalize trade unions, and to change the direction of technological progress in Britain during the Industrial Revolution. The same basic challenge confronts us today.


Daron Acemoglu, Institute Professor of Economics at MIT, is a co-author (with James A. Robinson) of Why Nations Fail: The Origins of Power, Prosperity and Poverty (Profile, 2019) and a co-author (with Simon Johnson) of Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity (PublicAffairs, 2023).

Simon Johnson, a former chief economist at the International Monetary Fund, is a professor at the MIT Sloan School of Management, a co-chair of the COVID-19 Policy Alliance, and a co-chair of the CFA Institute Systemic Risk Council. He is the co-author (with Daron Acemoglu) of Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity (PublicAffairs, 2023).

Six Ways AI Could Change War and the World – Hal Brands

6 Ways AI Will Change War and the World

Hal Brands, Bloomberg Opinion, 06/09/24

Artificial intelligence will change our everyday lives in innumerable ways: how governments serve their citizens; how we drive (and get driven); how we handle and, we hope, protect our finances; how doctors diagnose and treat diseases; even how my students research and write their essays.


But just how revolutionary will AI be? Will it upend the global balance of power? Will it allow autocracies to rule the world? Will it make warfare so fast and ferocious that it becomes uncontrollable? In short, will AI fundamentally alter the rhythms of world affairs?

It is, of course, too soon to say definitively: The effects of AI will ultimately hinge on the decisions leaders and nations make, and technology sometimes takes surprising turns. But even as we are wowed and worried by the next version of ChatGPT, we need to wrestle with six deeper questions about international affairs in the age of AI. And we need to consider a surprising possibility: Perhaps AI won’t change the world as much as we seem to expect.

1) Will AI make war uncontrollable?

Consider one assertion — that artificial intelligence will make conflict more lethal and harder to constrain. Analysts envision a future in which machines can pilot fighter jets more skillfully than humans, AI-enabled cyberattacks devastate enemy networks, and advanced algorithms turbocharge the speed of decisions. Some warn that automated decision-making could trigger rapid-fire escalation — even nuclear escalation — that leaves policymakers wondering what happened. If war plans and railway timetables caused World War I, perhaps AI will cause World War III.

That AI will change warfare is undeniable. From enabling predictive maintenance of hardware to facilitating astounding improvements in precision targeting, the possibilities are profound. A single F-35, quarterbacking a swarm of semiautonomous drones, could wield the firepower of an entire bomber wing. As the National Security Commission on Artificial Intelligence concluded in 2021, a "new era of conflict" will be dominated by the side that masters "new ways of war."

But there’s nothing fundamentally novel here. The story of warfare through the ages is one in which innovation regularly makes combat faster and more intense. So think twice before accepting the proposition that AI will make escalation uncontrollable.

The US and China have discussed an agreement not to automate their nuclear command-and-control processes — a pledge Washington has made independently — for the simple reason that states have strong incentives not to relinquish control over weapons whose use could endanger their own survival. Russia's behavior, including the development of nuclear-armed torpedoes that could eventually operate autonomously, is a greater concern. But even during the Cold War, when Moscow built a system meant to ensure nuclear retaliation even if its leadership was wiped out, it never turned off the human controls. Expect today's great powers to exploit the military possibilities AI presents aggressively — while trying to keep the most critical decisions in human hands.

In fact, AI could reduce the risk of breakneck escalation, by helping decision makers peer through the fog of crisis and war. The Pentagon believes that AI-enabled intelligence and analytical tools can help humans sift through confusing or fragmentary information regarding an enemy's preparations for war, or even whether a feared missile attack is indeed underway. This isn't science fiction: Assistance from AI reportedly helped US intelligence analysts sniff out Russian President Vladimir Putin's invasion of Ukraine in 2022.

In this sense, AI can mitigate the uncertainty and fear that pushes people toward extreme reactions. By giving policymakers greater understanding of events, AI might also improve their ability to manage them.

2) Will AI help autocracies like China control the world?

What about a related nightmare — that AI will help the forces of tyranny control the future? Analysts such as Yuval Noah Harari have warned that artificial intelligence will reduce the costs and increase the returns from repression. AI-equipped intelligence services will need less manpower to decipher the vast amounts of intelligence they gather on their populations — allowing them, for example, to precisely map and remorselessly dismantle protest networks. They will use AI-enabled facial recognition technology to monitor and control their citizens, while employing AI-created disinformation to discredit critics at home and abroad. By making autocracy increasingly efficient, AI could allow the dictators to dominate the dawning age.

This is certainly what China hopes for. President Xi Jinping's government has devised a "social credit" system that uses AI, facial recognition and big data to ensure the reliability of its citizens — by regulating their access to everything from low-interest loans to airplane tickets. Ubiquitous, AI-assisted surveillance has turned Xinjiang into a dystopian model of modern repression.

Beijing intends to seize the "strategic commanding heights" of innovation because it believes AI can bolster its domestic system and its military muscle. It is using the power of the illiberal state to steer money and talent toward advanced technologies.

It’s not a given, though, that the autocracies will come out ahead.

To believe that AI fundamentally favors autocracy is to believe that some of the most vital, longstanding enablers of innovation — such as open flows of information and tolerance for dissent — are no longer so important. Yet autocracy is already limiting China’s potential.

Building powerful large language models requires huge pools of information. But if those inputs are tainted or biased because China's internet is so heavily censored, the quality of the outputs will suffer. An increasingly repressive system will also struggle, over time, to attract top talent: It is telling that 38% of top AI researchers in the US are originally from China. And smart technology must still be used by China's governing institutions, which are getting progressively less smart — that is, less technocratically competent — as the political system becomes ever more subservient to an emperor-for-life.

China will be a formidable technological competitor. But even in the age of AI, Xi and his illiberal brethren may struggle to escape the competitive drag autocracy creates.

3) Will AI favor the best or the rest?

Some technologies narrow the gap between the most and least technologically advanced societies. Nuclear weapons, for instance, allow relative pipsqueaks like North Korea to offset the military and economic advantages a superpower and its allies possess. Others widen the divide: In the 19th century, repeating rifles, machine guns and steamships allowed European societies to subjugate vast areas of the world.

In some respects, AI will empower the weak. US officials worry that large language models might help terrorists with crude science kits to build biological weapons. Rogue states, like Iran, might use AI to coordinate drone swarms against US warships in the Persian Gulf. More benignly, AI could expand access to basic healthcare services in the Global South, creating big payoffs in increased life expectancy and economic productivity.

In other respects, however, AI will be a rich man's game. Developing state-of-the-art AI is fantastically expensive. Training large language models can require vast investments and access to a finite quantity of top scientists and engineers — to say nothing of staggering amounts of electricity. Some estimates place the cost of the infrastructure supporting Microsoft's Bing AI chatbot at $4 billion. Almost anyone can be a taker of AI — but being a maker requires copious resources.

This is why the middle powers making big moves in AI, such as Saudi Arabia and the United Arab Emirates, have very deep pockets. Many of the early leaders in the AI race are either tech titans (Alphabet, Microsoft, Meta, IBM, Nvidia and others) or firms with access to their money (OpenAI). And the US, with its vibrant, well-funded tech sector, still leads the field.

What’s true in the private sector may also be true in the realm of warfare. At the outset, the military benefits of new technology may flow disproportionately to countries with the generous defense budgets required to develop and field new capabilities at scale.

All this could change: Early leads don’t always translate into enduring advantages. Upstarts, whether firms or countries, have disrupted other fields before. For the time being, however, AI may do more to reinforce than revolutionize the balance of power.

4) Will AI fracture or fortify coalitions?

How artificial intelligence affects the balance of power depends on how it affects global coalitions. As analysts at Georgetown University's Center for Security and Emerging Technologies have documented, the US and its allies can vastly outpace China in spending on advanced technologies — but only if they combine their resources. Beijing's best hope is that the free world fractures over AI.

It could happen. Washington worries that Europe's emerging approach to generative AI regulation could choke off innovation: In this sense, AI is underscoring divergent US and European approaches to markets and risk. Another key democracy, India, prefers strategic autonomy to strategic alignment — in technology as in geopolitics, it prefers to go its own way. Meanwhile, some of Washington's nondemocratic partners, namely Saudi Arabia and the UAE, have explored tighter tech ties to Beijing.

But it's premature to conclude that AI will fundamentally disrupt US alliances. In some cases, the US is successfully using those alliances as tools of technological competition: Witness how Washington has cajoled Japan and the Netherlands to limit China's access to high-end semiconductors. The US is also leveraging security partnerships with Saudi Arabia and the UAE to place limits on their technological relations with Beijing, and to promote AI partnerships between American and Emirati firms. In this sense, geopolitical alignments are shaping the development of AI, rather than vice versa.

More fundamentally, the preferences countries have regarding AI are related to their preferences for domestic and international order. So whatever differences the US and Europe have may pale in comparison to their shared fears of what will happen if China surges to supremacy. Europe and America may eventually find their way into greater alignment on AI issues — just as shared hostility to US power is pushing China and Russia to cooperate more closely in military applications of the technology today.

5) Will AI tame or inflame great-power rivalry?

Many of these questions relate to how AI will affect the intensity of the competition between the US-led West and the autocratic powers headed by China. No one really knows whether runaway AI could truly endanger humanity. But shared existential risks do sometimes make strange bedfellows.

During the original Cold War, the US and the Soviet Union cooperated to manage the perils associated with nuclear weapons. During the new Cold War, perhaps Washington and Beijing will find common purpose in keeping AI from being used for malevolent purposes such as bioterrorism or otherwise threatening countries on both sides of today’s geopolitical divides.

Yet the analogy cuts both ways, because nuclear weapons also made the Cold War sharper and scarier. Washington and Moscow had to navigate high-stakes showdowns such as the Cuban Missile Crisis and several Berlin crises before a precarious stability settled in. Today, AI arms control seems even more daunting than nuclear arms control, because AI development is so hard to monitor and the benefits of unilateral advantage are so tantalizing. So even as the US and China start a nascent AI dialogue, technology is turbocharging their competition.

AI is at the heart of a Sino-American tech war, as China uses methods fair and foul to hasten its own development and the US deploys export controls, investment curbs and other measures to block Beijing's path. If China can't accelerate its technological progress, says Xi, it risks being "strangled" by Washington.

AI is also fueling a fight for military superiority in the Western Pacific: The Pentagon's Replicator Initiative envisions using thousands of AI-enabled drones to eviscerate a Chinese invasion fleet headed for Taiwan. Dueling powers may eventually find ways of cooperating, perhaps tacitly, on the mutual dangers AI poses. But a transformative technology will intensify many aspects of their rivalry between now and then.

6) Will AI make the private sector superior to the public?

AI will undoubtedly shift the balance of influence between the public and private sectors. Analogies between AI and nuclear weapons can be enlightening, but only to a point: The notion of a Manhattan Project for AI is misleading because it is a field where money, innovation and talent are overwhelmingly found in the private sector.

Firms on the AI frontier are thus becoming potent geopolitical actors — and governments know it. When Elon Musk and other experts advocated a moratorium on development of advanced AI models in 2023, official Washington urged the tech firms not to stop — because doing so would simply help China catch up. Government policy can speed or slow innovation. But to a remarkable degree, America's strategic prospects depend on the achievements of private firms.

It's important not to take this argument too far. China's civil-military fusion is meant to ensure that the state can direct and exploit innovation by the private sector. Although the US, as a democracy, can't really mimic that approach, the concentration of great power in private firms will bring a government response.

Washington is engaging, albeit hesitantly, in a debate about how best to regulate AI so as to foster innovation while limiting malign uses and catastrophic accidents. The long arm of state power is active in other ways, as well: The US would never allow Chinese investors to buy the nation's leading AI firms, and it is restricting American investment in the AI sectors of adversary states. And when Silicon Valley Bank, which held the deposits of many firms and investors in the tech sector, spiraled toward insolvency, geopolitical concerns helped initiate a government bailout.

One should also expect, in the coming years, a greater emphasis on helping the Pentagon stimulate development of militarily relevant technologies — and making it easier to turn private-sector innovation into war-winning weapons. The more strategically salient AI is, the less willing governments will be to just let the market do its work.

We can’t predict the future: AI could hit a dead end, or it might accelerate beyond anyone’s expectations. Technology, moreover, is not some autonomous force. Its development and effects will be shaped by decisions in Washington and around the world.

For now, the key is to ask the right questions, because doing so helps us understand the stakes of those decisions. It helps us imagine the various futures AI could shape. Not least, it illustrates that maybe AI won't cause a geopolitical earthquake after all.

Sure, there are reasons to fear that AI will make warfare uncontrollable, upend the balance of power, fracture US alliances or fundamentally favor autocracies over democracies. But there are also good reasons to suspect that it won’t.

This isn’t to counsel complacency. Averting more dangerous outcomes will require energetic efforts and smart choices. Indeed, the primary value of this exercise is to show that a very wide range of scenarios is possible — and the worst ones won’t simply foreclose themselves.

Whether AI favors autocracy or democracy depends, in part, on whether the US pursues enlightened immigration policies that help it hoard top talent. Whether AI reinforces or fractures US alliances hinges on whether Washington treats those alliances as assets to be protected or as burdens to be discarded. Whether AI upholds or undermines the existing international hierarchy, and how much it changes the relationship between the private sector and the state, depends on how wisely the US and other countries regulate its development and use.

What’s beyond doubt is that AI opens inspiring vistas and terrible possibilities. America’s goal should be to innovate ruthlessly, and responsibly, enough so that a basically favorable world order doesn’t change fundamentally — even as technology does.


Hal Brands is a Senior Fellow at the AEI.

Waking from the AI Dream – Nolen Gertz

Although this article comes from the pen of a philosophy professor, to an AI layman like me its content displays not only the logical fallacy of overgeneralization but also a rather serious streak of paranoia, with a touch of persecution complex. I repost it here in the spirit of "letting a hundred flowers bloom" and free speech.


The day the AI dream died

Unveiling tech pessimism

Nolen Gertz, 05/31/24

AI's promise was to solve problems, not only of the day but all future obstacles as well. Yet the hype has started to wear off. Amidst the growing disillusionment, Nolen Gertz challenges the prevailing optimism, suggesting that our reliance on AI might be less about solving problems and more about escaping the harsh realities of our time. He questions whether AI is truly our saviour or just a captivating distraction, fuelling capitalist gains and nihilistic diversions from the global crises we face. -- Editor’s Notes


I recently participated in a HTLGI debate where one of the participants, Kenneth Cukier, who is an editor of The Economist, criticized my view of technology as being unnecessarily pessimistic. He confidently claimed that we should be more optimistic about technological progress because such progress, for example, would help us to solve the climate change crisis. Though Cukier admitted that he might not just be optimistic but even "Panglossian" (i.e., fantastically optimistic) when it comes to technological progress, he nevertheless argued, "I think if you think of all the global challenges that we're facing, in large part because of the technologies that we've created, Industrial Revolution most importantly, gunpowder as well, it's going to be technology that is going to help us overcome it."

Cukier admits that technologies are the source of many of the "global challenges that we're facing," but he still nevertheless believes that technologies will also be the solution. The reason for this faith in technological progress is due primarily to the belief that artificial intelligence (AI) is so radically different from and superior to previous technologies that it will not only solve our problems, but solve problems that were caused by previous technological solutions to our problems. But such enthusiasm for AI is now dying down as the hype that had originally surrounded AI is being more and more replaced with disillusionment.

Just as ChatGPT was once the face of AI's successes, it is now the face of AI's failures. Each day journalists help to shed light on the realities behind how ChatGPT works; like the reports about the invisible labor force in Kenya and Pakistan that helped to train ChatGPT; reports about the massive environmental impact of ChatGPT's data centers; reports about how ChatGPT repackages the work of writers and artists as its own, without attributing or paying for that work; and reports about how ChatGPT can provide answers to people's questions that seem to be based on facts but are really based on hallucinations passed off as facts. We have now gone from fearing that ChatGPT would put people out of work to instead fearing how much more work ChatGPT requires in order to use it without getting failed, fired, or sued.

Yet while the hype surrounding AI seems to have disappeared, the AI itself has not. So if we have moved from thinking that AI could do everything, to wondering if AI can do anything, then why have we not simply abandoned AI altogether? The answer would seem to be that AI did not need to live up to the hype in order to be proven effective. So the question we should be asking is not whether AI will ever be successful, but rather: how has it already been successful? If AI has indeed already proved effective enough for people to still invest billions in its development, then what has AI been effective at doing thus far?

One answer would seem to be that AI has already been successful at making rich people richer. If AI is seen as a legitimate investment, then simply suggesting that your company would integrate AI into its products would be enough reason to motivate investors to pour money into your company. Likewise, if AI is seen as capable of putting people out of work, then merely the potential of AI provides sufficient excuse to cut costs, fire employees, and force new hires to accept work for less pay. So whether or not AI ever provides a return on investment, and whether or not AI ever turns out to be capable of actually replacing human labor, AI nevertheless has already proved to be incredibly successful from the perspective of capitalism.

But another important answer to the question of why AI, post-hype, still occupies so much of our time and attention would seem to be the very fact that it occupies so much of our time and attention. Again, whether or not AI is ever capable of solving "global challenges" like climate change, the very idea that it could solve our problems and that it could stop climate change is already enough to help relieve the pressure on corporations to cut down on pollution, relieve the pressure on politicians to seek real solutions to climate change, and relieve the pressure on all of us to face the reality of the climate change crisis. AI might not be able to help us in the way that companies like OpenAI claimed, but nevertheless AI has helped us at the very least by distracting us. In other words, AI has been incredibly successful not only when it comes to capitalism, but also when it comes to nihilism.

We know the truth about AI, but companies are still pursuing AI because it is still a way to make money, and we are still talking about AI because it gives us something to talk about other than climate change. And because we keep talking about it, companies can keep making money off of it. And because companies keep making money off of it, we can keep talking about it. We seem to be stuck in a vicious cycle. So the question we need to ask is not whether we can stop pursuing AI but whether we can stop pursuing nihilistic escapes from having to face reality.


Nolen Gertz is Assistant Professor of Applied Philosophy, University of Twente and author of Nihilism and Technology (Rowman and Littlefield, 2018) and Nihilism (MIT Press, 2020)


AI Technology and the 2024 US Election – Julia Mueller

Fears grow over AI’s impact on the 2024 election

Julia Mueller, 12/26/23

The rapid rise of artificial intelligence (AI) is raising concerns about how the technology could impact next year’s election as the start of 2024 primary voting nears.

AI — advanced tech that can generate text, images and audio, and even build deepfake videos — could fuel misinformation in an already polarized political landscape and further erode voter confidence in the country’s election system.

“2024 will be an AI election, much the way that 2016 or 2020 was a social media election,” said Ethan Bueno de Mesquita, interim dean at the University of Chicago Harris School of Public Policy. “We will all be learning as a society about the ways in which this is changing our politics.”

Experts are sounding alarms that AI chatbots could generate misleading information for voters if they use it to get info on ballots, calendars or polling places — and also that AI could be used more nefariously, to create and disseminate misinformation and disinformation against certain candidates or issues.

“I think it could get pretty dark,” said Lisa Bryant, chair of the Department of Political Science at California State University, Fresno and an expert with MIT’s Election lab.

Polling shows the concern about AI doesn’t just come from academics: Americans appear increasingly worried about how the tech could confuse or complicate things during the already contentious 2024 cycle.

A U Chicago Harris/AP-NORC poll released in November found a bipartisan majority of U.S. adults are worried about the use of AI "increasing the spread of false information" in the 2024 election.

A Morning Consult-Axios survey found an uptick in recent months in the share of U.S. adults who said they think AI will negatively impact trust in candidate advertisements, as well as trust in the outcome of the elections overall.

Nearly 6 in 10 respondents said they think misinformation spread by AI will have an impact on who ultimately wins the 2024 presidential race.

“They are a very powerful tool for doing things like making fake videos, fake pictures, et cetera, that look extremely convincing and are extremely difficult to distinguish from reality — and that is going to be likely to be a tool in political campaigns, and already has been,” said Bueno de Mesquita, who worked on the UChicago poll.

“It’s very likely that that’s going to increase in the ‘24 election — that we’ll have fake content created by AI that’s at least by political campaigns, or at least by political action committees or other actors — that that will affect the voters’ information environment make it hard to know what’s true and false,” he said.


Over the summer, the DeSantis-aligned super PAC Never Back Down reportedly used an AI-generated version of former President Trump’s voice in a television ad.

Just ahead of the third Republican presidential debate, former President Trump’s campaign released a video clip that appeared to imitate the voices of his fellow GOP candidates, introducing themselves by Trump’s favored nicknames.

And earlier this month, the Trump campaign posted an altered version of a report that NBC News’s Garrett Haake gave before the third GOP debate. The clip starts with Haake’s unaltered report, but a voiceover then takes over, criticizing the former president’s Republican rivals.

“The danger is there, and I think it’s almost unimaginable that we won’t have deepfake videos or whatever as part of our politics going forward,” Bueno de Mesquita said.

The use of AI by political campaigns in particular has prompted tech companies and government officials to consider regulations on the tech.

Google earlier this year announced it would require verified election advertisers to “prominently disclose” when their ads had been digitally generated or altered.

Meta also plans to require disclosure when a political ad uses “photorealistic image or video, or realistic-sounding audio” that was generated or altered to, among other purposes, depict a real person doing or saying something they did not do.

President Biden issued an executive order on AI in October, including new standards for safety and plans for the Commerce Department to craft guidelines on content authentication and watermarking.

“President Biden believes that we have an obligation to harness the power of AI for good, while protecting people from its potentially profound risks,” a senior administration official said at the time.

But lawmakers have largely been left scrambling to try to regulate the industry as it charges ahead with new developments.

Shamaine Daniels, a Democratic candidate for Congress in Pennsylvania, is using an AI-powered voice tool from the startup Civox as a phone-banking tool for her campaign.

“I share everyone’s grave concerns about the possible nefarious uses of AI in politics and elsewhere. But we need to also understand and embrace the opportunities this technology represents,” Daniels said when she announced her campaign would roll out the tech.

Experts say AI could be used for good in election cycles — like informing the public what political candidates they may agree with on issues and helping election officials clean up voter lists to identify duplicate registrations.

But they also warn the tech could worsen problems exposed during the 2016 and 2020 cycles.

Bryant said AI could help disinformation “micro-target” users even more precisely than social media has already been able to do. She said no one is immune from this, pointing to how ads on a platform like Instagram can already influence behavior.

“It really has helped to take this misinformation and really pinpoint what kinds of messages, based on past online behavior, really resonate and work with individuals,” she said.

Bueno de Mesquita said he is not as concerned about micro-targeting from campaigns to manipulate voters, because evidence has shown that social media targeting has not been effective enough to influence elections. Resources should be focused on educating the public about the “information environment” and pointing them to authoritative information, he said.

Nicole Schneidman, a technology policy advocate at the nonprofit watchdog group Protect Democracy, said the organization does not expect AI to produce “novel threats” for the 2024 election but rather potential acceleration of trends that are already affecting election integrity and democracy.

She said a risk exists of overemphasizing the potential of AI in a broader landscape of disinformation affecting the election.

“Certainly, the technology could be used in creative and novel ways, but what underlies those applications are all threats like disinformation campaigns or cyberattacks that we’ve seen before,” Schneidman said. “We should be focusing on mitigation strategies that we know that are responsive to those threats that are amplified, as opposed to spending too much time trying to anticipate every use case of the technology.”

A key solution to grappling with the rapidly developing technology could just be getting users in front of it.

“The best way to become AI literate myself is to spend half an hour playing with the chat bot,” said Bueno de Mesquita.

Respondents in the UChicago Harris/AP-NORC poll who reported being more familiar with AI tools were also more likely to say use of the tech could increase the spread of misinformation, suggesting that awareness of what the tech can do also increases awareness of its risks.

“I think the good news is that we have strategies both old and new to really bring to the fore here,” Schneidman said.

She said as AI becomes more sophisticated, detection technology may have trouble keeping up despite investments in those tools. Instead, she said “pre-bunking” from election officials can be effective at informing the public before they even potentially come across AI-generated content.

Schneidman said she hopes election officials also increasingly adopt digital signatures to indicate to journalists and the public what information is coming directly from an authoritative source and what might be fake. She said these signatures could also be included in photos and videos a candidate posts to plan for deepfakes.

“Digital signatures are the proactive version of getting ahead of some of the challenges that synthetic content could pose to the caliber of the election information ecosystem,” she said.
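As a rough illustration of the idea, the sketch below (my own, not from the article) shows how an election office could sign a public statement with a private key so that anyone holding the office's published public key can verify it. It assumes Python's third-party cryptography package; the key handling and the statement text are invented for the example.

# Minimal sketch of content authentication with a digital signature (assumed example).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The election office generates a key pair once and publishes the public key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# A statement the office publishes, with its signature attached.
statement = b"Polls close at 8 p.m.; official results will be posted on the county site."
signature = private_key.sign(statement)

# A journalist or voter holding the published public key checks authenticity.
try:
    public_key.verify(signature, statement)
    print("Verified: this statement was signed by the election office's key.")
except InvalidSignature:
    print("Warning: the statement or its signature has been altered.")

Any change to the statement or the signature makes verification fail, which is the property being pointed to here: material from the authoritative source can be checked against a known key, while altered or fabricated material cannot.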

She said election officials, political leaders and journalists can get information people need about when and how to vote so they are not confused and voter suppression is limited. She added that narratives surrounding interference in elections are not new, which gives those fighting disinformation from AI content an advantage.

“The advantages that pre-bunking gives us is crafting effective counter messaging that anticipates recurring disinformation narratives and hopefully getting that in the hands and in front of the eyes of voters far in advance of the election, consistently ensuring that message is landing with voters so that they are getting the authoritative information that they need,” Schneidman said.



Trackback URL: https://city.udn.com/forum/trackback.jsp?no=2976&aid=7219710
Nobody really cares about "AI safety" -- Lucas Ropek

This article analyzes the OpenAI farce (see the previous two reports/commentaries in this column) from the angle that profit-making and (technical) safety are mutually exclusive. It is nonetheless a serious piece and worth reading closely. Readers interested in this topic should visit the original page for other AI-related news from the past week, as well as other readers' views.

From these three reports/commentaries/analyses I have come to a better understanding of why AI frightens people, of the worry and anxiety that "AI safety" provokes, and of how the struggle between principles and profit actually plays out in the executive suite. It really is more gripping than a television drama.


After OpenAI's Blowup, It Seems Pretty Clear That 'AI Safety' Isn't a Real Thing

As OpenAI's chaos comes to an end, AI development will never be the same.

Lucas Ropek, 11/22/23

Welcome to AI This Week, Gizmodo’s weekly roundup where we do a deep dive on what’s been happening in artificial intelligence.

Well, holy shit. As far as the tech industry goes, it’s hard to say whether there’s ever been a more shocking series of events than the ones that took place over the last several days. The palace intrigue and boardroom drama of Sam Altman’s ousting by the OpenAI board (and his victorious reinstatement earlier today) will doubtlessly go down in history as one of the most explosive episodes to ever befall Silicon Valley. That said, the long-term fallout from this gripping incident is bound to be a lot less enjoyable than the initial spectacle of it.

The “coup,” as many have referred to it, has largely been attributed to an ideological rift between Sam and the OpenAI board over the pace of technological development at the company. So, this narrative goes, the board, which is supposed to have ultimate say over the direction of the organization, was concerned about the rate at which Altman was pushing to commercialize the technology, and decided to eject him with extreme prejudice. Altman, who was backed by OpenAI’s powerful partner and funder, Microsoft, as well as a majority of the startup’s staff, subsequently led a counter-coup, pushing out the traitors and reinstating himself as the leader of the company.

So much of the drama of the episode seems to revolve around this argument between Altman and the board over “AI safety.” Indeed, this fraught chapter in the company’s history seems like a flare up of OpenAI’s two opposing personalities—one based around research and responsible technological development, and the other based around making shitloads of money. One side decidedly overpowered the other (hint: it was the money side).

Other writers have already offered breakdowns of how OpenAI’s unique organizational structure seems to have set it on a collision course with itself. Maybe you’ve seen the startup’s org chart floating around the web but, in case you haven’t, here’s a quick recap: Unlike pretty much every other technology business that exists, OpenAI is actually a non-profit, governed wholly by its board, that operates and controls a for-profit company. This design is supposed to prioritize the organization’s mission of pursuing the public good over money. OpenAI’s own self-description promotes this idealistic notion—that its main aim is to make the world a better place, not make money:

We designed OpenAI’s structure—a partnership between our original Nonprofit and a new capped profit arm—as a chassis for OpenAI’s mission: to build artificial general intelligence (AGI) that is safe and benefits all of humanity.

Indeed, the board’s charter owes its allegiance to “humanity,” not to its shareholders. So, despite the fact that Microsoft has poured a megaton of money and resources into OpenAI, the startup’s board is still (hypothetically) supposed to have final say over what happens with its products and technology. That said, the company part of the organization is reported to be worth tens of billions of dollars. As many have already noted, the organization’s ethical mission seems to have come directly into conflict with the economic interests of those who had invested in the organization. As per usual, the money won.

All of this said, you could make the case that we shouldn’t fully endorse this interpretation of the weekend’s events yet, since the actual reasons for Altman’s ousting have still not been made public. For the most part, members of the company either aren’t talking about the reasons Sam was pushed out or have flatly denied that his ousting had anything to do with AI safety. Alternate theories have swirled in the meantime, with some suggesting that the real reasons for Altman’s aggressive exit were decidedly more colorful—like accusations he pursued additional funding via autocratic Mideast regimes.

But to get too bogged down in speculating about the specific catalysts for OpenAI’s drama is to ignore what the whole episode has revealed: as far as the real world is concerned, “AI safety” in Silicon Valley is pretty much null and void. Indeed, we now know that despite its supposedly bullet-proof organizational structure and its stated mission of responsible AI development, OpenAI was never going to be allowed to actually put ethics before money.

To be clear, AI safety is a really important field, and, were it to be actually practiced by corporate America, that would be one thing. That said, the version of it that existed at OpenAI—arguably one of the companies that has done the most to pursue a “safety” oriented model—doesn’t seem to have been much of a match for the realpolitik machinations of the tech industry. In even more frank terms, the folks who were supposed to be defending us from runaway AI (i.e., the board members)—the ones who were ordained with responsible stewardship over this powerful technology—don’t seem to have known what they were doing. They don’t seem to have understood that Sam had all the industry connections, the friends in high places, was well-liked, and that moving against him in a world where that kind of social capital is everything amounted to career suicide. If you come at the king, you best not miss.

In short: If the point of corporate AI safety is to protect humanity from runaway AI, then, as an effective strategy for doing that, it has effectively just flunked its first big test. That’s because it’s sorta hard to put your faith in a group of people who weren’t even capable of predicting the very predictable outcome that would occur when they fired their boss. How, exactly, can such a group be trusted with overseeing a supposedly “super-intelligent,” world-shattering technology? If you can’t outfox a gaggle of outraged investors, then you probably can’t outfox the Skynet-type entity you claim to be building. That said, I would argue we also can’t trust the craven, money-obsessed C-suite that has now reasserted its dominance. Imo, they’re obviously not going to do the right thing. So, effectively, humanity is stuck between a rock and a hard place.

As the conflict from the OpenAI dustup settles, it seems like the company is well positioned to get back to business as usual. After jettisoning the only two women on its board, the company added fiscal goon Larry Summers. Altman is back at the company (as is former company president Greg Brockman, who stepped down in solidarity with Altman), and Microsoft’s top executive, Satya Nadella, has said that he is “encouraged by the changes to OpenAI board” and said it’s a “first essential step on a path to more stable, well-informed, and effective governance.”

With the board’s failure, it seems clear that OpenAI’s do-gooders may have not only set back their own “safety” mission, but might have also kicked off a backlash against the AI ethics movement writ large. Case in point: This weekend’s drama seems to have further radicalized an already pretty radical anti-safety ideology that had been circulating the business. The “effective accelerationists” (abbreviated “e/acc”) believe that stuff like additional government regulations, “tech ethics” and “AI safety” are all cumbersome obstacles to true technological development and exponential profit. Over the weekend, as the narrative about “AI safety” emerged, some of the more fervent adherents of this belief system took to X to decry what they perceived to be an attack on the true victim of the episode (capitalism, of course).

To some degree, the whole point of the tech industry’s embrace of “ethics” and “safety” is about reassurance. Companies realize that the technologies they are selling can be disconcerting and disruptive; they want to reassure the public that they’re doing their best to protect consumers and society. At the end of the day, though, we now know there’s no reason to believe that those efforts will ever make a difference if the company’s “ethics” end up conflicting with its money. And when have those two things ever not conflicted?

Trackback URL: https://city.udn.com/forum/trackback.jsp?no=2976&aid=7217597
The AI research breakthrough behind the OpenAI upheaval – Reuters

The article below reports in detail on the background of the "threat" piece in the previous post of this column. Let me state once more: I am an idiot in the field of AI. But if the argument of that "threat" piece rests on the "breakthrough" reported below, then by my own logic and common sense I would say its author is exaggerating.


OpenAI researchers warned board of AI breakthrough ahead of CEO ouster, sources say

Anna Tong, Jeffrey Dastin and Krystal Hu, 11/22/23

Nov 22 (Reuters) - Ahead of OpenAI CEO Sam Altman’s four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters.

The previously unreported letter and AI algorithm were key developments before the board's ouster of Altman, the poster child of generative AI, the two sources said. Prior to his triumphant return late Tuesday, more than 700 employees had threatened to quit and join backer Microsoft (MSFT.O) in solidarity with their fired leader.

The sources cited the letter as one factor among a longer list of grievances by the board leading to Altman's firing, among which were concerns over commercializing advances before understanding the consequences. Reuters was unable to review a copy of the letter. The staff who wrote the letter did not respond to requests for comment.

After being contacted by Reuters, OpenAI, which declined to comment, acknowledged in an internal message to staffers a project called Q* and a letter to the board before the weekend's events, one of the people said. An OpenAI spokesperson said that the message, sent by long-time executive Mira Murati, alerted staff to certain media stories without commenting on their accuracy.

Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup's search for what's known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks.

Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because the individual was not authorized to speak on behalf of the company. Though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*’s future success, the source said.

Reuters could not independently verify the capabilities of Q* claimed by the researchers.

Researchers consider math to be a frontier of generative AI development. Currently, generative AI is good at writing and language translation by statistically predicting the next word, and answers to the same question can vary widely. But conquering the ability to do math — where there is only one right answer — implies AI would have greater reasoning capabilities resembling human intelligence. This could be applied to novel scientific research, for instance, AI researchers believe.
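As a toy illustration of "statistically predicting the next word" (my own sketch, not something from the Reuters report), the fragment below builds a tiny bigram table from a few words and then picks the most frequent continuation. Real models are vastly larger and predict over probability distributions, but the contrast with exact, single-answer arithmetic is the point the researchers are making.

# Toy bigram "next word" predictor (assumed example, not an actual model).
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word in the training text.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` seen in the corpus."""
    options = followers.get(word)
    return options.most_common(1)[0][0] if options else None

print(predict_next("the"))  # prints 'cat', the statistically most likely continuation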

Unlike a calculator that can solve a limited number of operations, AGI can generalize, learn and comprehend.

In their letter to the board, researchers flagged AI’s prowess and potential danger, the sources said without specifying the exact safety concerns noted in the letter. There has long been discussion among computer scientists about the danger posed by highly intelligent machines, for instance if they might decide that the destruction of humanity was in their interest.

Researchers have also flagged work by an "AI scientist" team, the existence of which multiple sources confirmed. The group, formed by combining earlier "Code Gen" and "Math Gen" teams, was exploring how to optimize existing AI models to improve their reasoning and eventually perform scientific work, one of the people said.

Altman led efforts to make ChatGPT one of the fastest growing software applications in history and drew investment - and computing resources - necessary from Microsoft to get closer to AGI.

In addition to announcing a slew of new tools in a demonstration this month, Altman last week teased at a summit of world leaders in San Francisco that he believed major advances were in sight.

"Four times now in the history of OpenAI, the most recent time was just in the last couple weeks, I've gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honor of a lifetime," he said at the Asia-Pacific Economic Cooperation summit.

A day later, the board fired Altman.


Anna Tong and Jeffrey Dastin in San Francisco and Krystal Hu in New York; Editing by Kenneth Li and Lisa Shumaker

Our Standards: The Thomson Reuters Trust Principles.

Reporters


Anna Tong is a correspondent for Reuters based in San Francisco, where she reports on the technology industry. She joined Reuters in 2023 after working at the San Francisco Standard as a data editor. Tong previously worked at technology startups as a product manager and at Google where she worked in user insights and helped run a call center. Tong graduated from Harvard University. Contact:4152373211

Jeffrey Dastin is a correspondent for Reuters based in San Francisco, where he reports on the technology industry and artificial intelligence. He joined Reuters in 2014, originally writing about airlines and travel from the New York bureau. Dastin graduated from Yale University with a degree in history. He was part of a team that examined lobbying by Amazon.com around the world, for which he won a SOPA Award in 2022.

Krystal Hu reports on venture capital and startups for Reuters. She covers Silicon Valley and beyond through the lens of money and characters, with a focus on growth-stage startups, tech investments and AI. She has previously covered M&A for Reuters, breaking stories on Trump's SPAC and Elon Musk's Twitter financing. Previously, she reported on Amazon for Yahoo Finance, and her investigation of the company's retail practice was cited by lawmakers in Congress. Krystal started a career in journalism by writing about tech and politics in China. She has a master's degree from New York University, and enjoys a scoop of Matcha ice cream as much as getting a scoop at work.


Trackback URL: https://city.udn.com/forum/trackback.jsp?no=2976&aid=7217583