A Wide-Ranging Look at Artificial Intelligence – Opening Post
Starting in April, with the launch of ChatGPT and Bing Chat, a wave of AI mania swept across the internet and various LINE groups. At the time I was largely occupied with the discussion of "Our Anti-War Statement" (《我們的反戰聲明》), so I did not join the excitement. I am now reposting several related articles here. Please also see the earlier post "The Current State and Outlook of Artificial Intelligence R&D" (《「人工智慧」研發現況及展望》). Some people worry that artificial intelligence will become a "machine above men," controlling the world or even enslaving humanity. I do not understand AI, and my thinking is simple; so if AI ever runs amok, I believe there is a simple way to bring it to heel: pull the plug. If that is not enough, blow up the transmission lines and the backup generators; and if that still fails, blow up the power plants.
Economists on ChatGPT – D. Acemoglu/S. Johnson
這是2024諾貝爾經濟學獎得主中的兩位,在2023對”ChatGPT”所做分析和建議。 What’s Wrong with ChatGPT? Daron Acemoglu/Simon Johnson, 02/06/23 Artificial intelligence is being designed and deployed by corporate America in ways that will disempower and displace workers and degrade the consumer experience, ultimately disappointing most investors. Yet economic history shows that it does not have to be this way. CAMBRIDGE – Microsoft is reportedly delighted with OpenAI’s ChatGPT, a natural-language artificial-intelligence program capable of generating text that reads as if a human wrote it. Taking advantage of easy access to finance over the past decade, companies and venture-capital funds invested billions in an AI arms race, resulting in a technology that can now be used to replace humans across a wider range of tasks. This could be a disaster not only for workers, but also for consumers and even investors. The problem for workers is obvious: there will be fewer jobs requiring strong communication skills, and thus fewer positions that pay well. Cleaners, drivers, and some other manual workers will keep their jobs, but everyone else should be afraid. Consider customer service. Instead of hiring people to interact with customers, companies will increasingly rely on generative AIs like ChatGPT to placate angry callers with clever and soothing words. Fewer entry-level jobs will mean fewer opportunities to start a career – continuing a trend established by earlier digital technologies. Consumers, too, will suffer. Chatbots may be fine for handling entirely routine questions, but it is not routine questions that generally lead people to call customer service. When there is a real issue – like an airline grinding to a halt or a pipe bursting in your basement – you want to talk to a well-qualified, empathetic professional with the ability to marshal resources and organize timely solutions. You do not want to be put on hold for eight hours, but nor do you want to speak immediately to an eloquent but ultimately useless chatbot. Of course, in an ideal world, new companies offering better customer service would emerge and seize market share. But in the real world, many barriers to entry make it difficult for new firms to expand quickly. You may love your local bakery or a friendly airline representative or a particular doctor, but think of what it takes to create a new grocery store chain, a new airline, or a new hospital. Existing firms have big advantages, including important forms of market power that allow them to choose which available technologies to adopt and to use them however they want. More fundamentally, new companies offering better products and services generally require new technologies, such as digital tools that can make workers more effective and help create better customized services for the company’s clientele. But, since AI investments are putting automation first, these kinds of tools are not even being created. Investors in publicly traded companies will also lose out in the age of ChatGPT. These companies could be improving the services they offer to consumers by investing in new technologies to make their workforces more productive and capable of performing new tasks, and by providing plenty of training for upgrading employees’ skills. But they are not doing so. Many executives remain obsessed with a strategy that ultimately will come to be remembered as self-defeating: paring back employment and keeping wages as low as possible. 
Executives pursue these cuts because it is what the smart kids (analysts, consultants, finance professors, other executives) say they should do, and because Wall Street judges their performance relative to other companies that are also squeezing workers as hard as they can. AI is also poised to amplify the deleterious social effects of private equity. Already, vast fortunes can be made by buying up companies, loading them with debt while going private, and then hollowing out their workforces – all while paying high dividends to the new owners. Now, ChatGPT and other AI technologies will make it even easier to squeeze workers as much as possible through workplace surveillance, tougher working conditions, zero-hours contracts, and so forth. These trends all have dire implications for Americans’ spending power – the engine of the US economy. But as we explain in our forthcoming book, Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity, a sputtering economic engine need not lie in our future. After all, the introduction of new machinery and technological breakthroughs has had very different consequences in the past. Over a century ago, Henry Ford revolutionized car production by investing heavily in new electrical machinery and developing a more efficient assembly line. Yes, these new technologies brought some amount of automation, as centralized electricity sources enabled machines to perform more tasks more efficiently. But the reorganization of the factory that accompanied electrification also created new tasks for workers and thousands of new jobs with higher wages, bolstering shared prosperity. Ford led the way in demonstrating that creating human-complementary technology is good business. Today, AI offers an opportunity to do likewise. AI-powered digital tools can be used to help nurses, teachers, and customer-service representatives understand what they are dealing with and what would help improve outcomes for patients, students, and consumers. The predictive power of algorithms could be harnessed to help people, rather than to replace them. If AIs are used to offer recommendations for human consideration, the ability to use such recommendations wisely will be recognized as a valuable human skill. Other AI applications can facilitate better allocation of workers to tasks, or even create completely new markets (think of Airbnb or rideshare apps). Unfortunately, these opportunities are being neglected, because most US tech leaders continue to spend heavily to develop software that can do what humans already do just fine. They know that they can cash out easily by selling their products to corporations that have developed tunnel vision. Everyone is focused on leveraging AI to cut labor costs, with little concern not only for the immediate customer experience but also for the future of American spending power. Ford understood that it made no sense to mass-produce cars if the masses couldn’t afford to buy them. Today’s corporate titans, by contrast, are using the new technologies in ways that will ruin our collective future. Daron Acemoglu, Institute Professor of Economics at MIT, is a co-author (with James A. Robinson) of Why Nations Fail: The Origins of Power, Prosperity and Poverty (Profile, 2019) and a co-author (with Simon Johnson) of Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity (PublicAffairs, 2023). 
Simon Johnson, a former chief economist at the International Monetary Fund, is a professor at the MIT Sloan School of Management, a co-chair of the COVID-19 Policy Alliance, and a co-chair of the CFA Institute Systemic Risk Council. He is the co-author (with Daron Acemoglu) of Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity (PublicAffairs, 2023).
An Economist on AI's Creative Destruction – D. Acemoglu
這是2024諾貝爾經濟學獎得主之一阿齊模格教授在2024年4月發表關於「人工智能」的「除舊布新」評論。 Are We Ready for AI Creative Destruction? Daron Acemoglu, 04/09/24 Rather than blindly trusting elegant but simplistic theories about the nature of historical change, we urgently need to focus on how the next wave of disruptive innovation could affect our social, democratic, and civic institutions. Leaving it to tech entrepreneurs risks more destruction – and less creation – than we bargained for. BOSTON – The ancient Chinese concept of yin and yang attests to humans’ tendency to see patterns of interlocked opposites in the world around us, a predilection that has lent itself to various theories of natural cycles in social and economic phenomena. Just as the great medieval Arab philosopher Ibn Khaldun saw the path of an empire’s eventual collapse imprinted in its ascent, the twentieth-century economist Nikolai Kondratiev postulated that the modern global economy moves in “long wave” super-cycles. But no theory has been as popular as the one – going back to Karl Marx – that links the destruction of one set of productive relations to the creation of another. Writing in 1913, the German economist Werner Sombart observed that, “from destruction a new spirit of creation arises.” It was the Austrian economist Joseph Schumpeter who popularized and broadened the scope of the argument that new innovations perennially replace previously dominant technologies and topple older industrial behemoths. Many social scientists built on Schumpeter’s idea of “creative destruction” to explain the innovation process and its broader implications. These analyses also identified tensions inherent in the concept. For example, does destruction bring creation, or is it an inevitable by-product of creation? More to the point, is all destruction inevitable? In economics, Schumpeter’s ideas formed the bedrock of the theory of economic growth, the product cycle, and international trade. But two related developments have catapulted the concept of creative destruction to an even higher pedestal over the past several decades. The first was the runaway success of Harvard Business School professor Clayton Christensen’s 1997 book, The Innovator’s Dilemma, which advanced the idea of “disruptive innovation.” Disruptive innovations come from new firms pursuing business models that incumbents have deemed unattractive, often because they appeal only to the lower-end of the market. Since incumbents tend to remain committed to their own business models, they miss “the next great wave” of technology. The second development was the rise of Silicon Valley, where tech entrepreneurs made “disruption” an explicit strategy from the start. Google set out to disrupt the business of internet search, and Amazon set out to disrupt the business of book selling, followed by most other areas of retail. Then came Facebook with its mantra of “move fast and break things.” Social media transformed our social relations and how we communicate in one fell swoop, epitomizing both creative destruction and disruption at the same time. The intellectual allure of these theories lies in transforming destruction and disruption from apparent costs into obvious benefits. But while Schumpeter recognized that the destruction process is painful and potentially dangerous, today’s disruptive innovators see only win-wins. 
Hence, the venture capitalist and technologist Marc Andreessen writes: “Productivity growth, powered by technology, is the main driver of economic growth, wage growth, and the creation of new industries and new jobs, as people and capital are continuously freed to do more important, valuable things than in the past.” Now that hopes for artificial intelligence exceed even those of Facebook in its early days, we would do well to re-evaluate these ideas. Clearly, innovation is sometimes disruptive by nature, and the process of creation can be as destructive as Schumpeter envisaged it. History shows that unrelenting resistance to creative destruction leads to economic stagnation. But it doesn’t follow that destruction ought to be celebrated. Instead, we should view it as a cost that can sometimes be reduced, not least by building better institutions to help those who lose out, and sometimes by managing the process of technological change. Consider globalization. While it creates important economic benefits, it also destroys firms, jobs, and livelihoods. If our instinct is to celebrate those costs, it may not occur to us to try to mitigate them. And yet, there is much more that we could do to help adversely affected firms (which can invest to branch out into new areas), assist workers who lose their jobs (through retraining and a safety net), and support devastated communities. Failure to recognize these nuances opened the door for the excessive creative destruction and disruption that Silicon Valley has pushed on us these past few decades. Looking ahead, three principles should guide our approach, especially when it comes to AI. First, as with globalization, helping those who are adversely affected is of the utmost importance and must not be an afterthought. Second, we should not assume that disruption is inevitable. As I have argued previously, AI need not lead to mass job destruction. If those designing and deploying it do so only with automation in mind (as many Silicon Valley titans wish), the technology will create only more misery for working people. But it could take more attractive alternative paths. After all, AI has immense potential to make workers more productive, such as by providing them with better information and equipping them to perform more complex tasks. The worship of creative destruction must not blind us to these more promising scenarios, or to the distorted path we are currently on. If the market does not channel innovative energy in a socially beneficial direction, public policy and democratic processes can do much to redirect it. Just as many countries have already introduced subsidies to encourage more innovation in renewable energy, more can be done to mitigate the harms from AI and other digital technologies. Third, we must remember that existing social and economic relations are exceedingly complex. When they are disrupted, all kinds of unforeseen consequences can follow. Facebook and other social-media platforms did not set out to poison our public discourse with extremism, misinformation, and addiction. But in their rush to disrupt how we communicate, they followed their own principle of moving fast and then seeking forgiveness. We urgently need to pay greater attention to how the next wave of disruptive innovation could affect our social, democratic, and civic institutions. Getting the most out of creative destruction requires a proper balance between pro-innovation public policies and democratic input. 
If we leave it to tech entrepreneurs to safeguard our institutions, we risk more destruction than we bargained for.

Daron Acemoglu, Institute Professor of Economics at MIT, is a co-author (with James A. Robinson) of Why Nations Fail: The Origins of Power, Prosperity and Poverty (Profile, 2019) and a co-author (with Simon Johnson) of Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity (PublicAffairs, 2023).
Economists on the Outlook for AI – D. Acemoglu/S. Johnson
這是2024諾貝爾經濟學獎得主中兩位就「人工智能」前景的建議。 History Already Tells Us the Future of AI Daron Acemoglu/Simon Johnson, 04/23/24 David Ricardo, one of the founders of modern economics in the early 1800s, understood that machines are not necessarily good or bad. His insight that whether they destroy or create jobs all depends on how we deploy them, and on who makes those choices, could not be more relevant today. BOSTON – Artificial intelligence and the threat that it poses to good jobs would seem to be an entirely new problem. But we can find useful ideas about how to respond in the work of David Ricardo, a founder of modern economics who observed the British Industrial Revolution firsthand. The evolution of his thinking, including some points that he missed, holds many helpful lessons for us today. Private-sector tech leaders promise us a brighter future of less stress at work, fewer boring meetings, more leisure time, and perhaps even a universal basic income. But should we believe them? Many people may simply lose what they regarded as a good job – forcing them to find work at a lower wage. After all, algorithms are already taking over tasks that currently require people’s time and attention. In his seminal 1817 work, On the Principles of Political Economy and Taxation, Ricardo took a positive view of the machinery that had already transformed the spinning of cotton. Following the conventional wisdom of the time, he famously told the House of Commons that “machinery did not lessen the demand for labour.” Since the 1770s, the automation of spinning had reduced the price of spun cotton and increased demand for the complementary task of weaving spun cotton into finished cloth. And since almost all weaving was done by hand prior to the 1810s, this explosion in demand helped turn cotton handweaving into a high-paying artisanal job employing several hundred thousand British men (including many displaced, pre-industrial spinners). This early, positive experience with automation likely informed Ricardo’s initially optimistic view. But the development of large-scale machinery did not stop with spinning. Soon, steam-powered looms were being deployed in cotton-weaving factories. No longer would artisanal “hand weavers” be making good money working five days per week from their own cottages. Instead, they would struggle to feed their families while working much longer hours under strict discipline in factories. As anxiety and protests spread across northern England, Ricardo changed his mind. In the third edition of his influential book, published in 1821, he added a new chapter, “On Machinery,” where he hit the nail on the head: “If machinery could do all the work that labour now does, there would be no demand for labour.” The same concern applies today. Algorithms’ takeover of tasks previously performed by workers will not be good news for displaced workers unless they can find well-paid new tasks. Most of the struggling handweaving artisans during the 1810s and 1820s did not go to work in the new weaving factories, because the machine looms did not need many workers. Whereas the automation of spinning had created opportunities for more people to work as weavers, the automation of weaving did not create compensatory labor demand in other sectors. The British economy overall did not create enough other well-paying new jobs, at least not until railways took off in the 1830s. With few other options, hundreds of thousands of hand weavers remained in the occupation, even as wages fell by more than half. 
Another key problem, albeit not one that Ricardo himself dwelled upon, was that working in harsh factory conditions – becoming a small cog in the employer-controlled “satanic mills” of the early 1800s – was unappealing to handloom weavers. Many artisanal weavers had operated as independent businesspeople and entrepreneurs who bought spun cotton and then sold their woven products on the market. Obviously, they were not enthusiastic about submitting to longer hours, more discipline, less autonomy, and typically lower wages (at least compared to the heyday of handloom weaving). In testimony collected by various Royal Commissions, weavers spoke bitterly about their refusal to accept such working conditions, or about how horrible their lives became when they were forced (by the lack of other options) into such jobs. Today’s generative AI has huge potential and has already chalked up some impressive achievements, including in scientific research. It could well be used to help workers become more informed, more productive, more independent, and more versatile. Unfortunately, the tech industry seems to have other uses in mind. As we explain in Power and Progress, the big companies developing and deploying AI overwhelmingly favor automation (replacing people) over augmentation (making people more productive). That means we face the risk of excessive automation: many workers will be displaced, and those who remain employed will be subjected to increasingly demeaning forms of surveillance and control. The principle of “automate first and ask questions later” requires – and thus further encourages – the collection of massive amounts of information in the workplace and across all parts of society, calling into question how much privacy will remain. Such a future is not inevitable. Regulation of data collection would help protect privacy, and stronger workplace rules could prevent the worst aspects of AI-based surveillance. But the more fundamental task, Ricardo would remind us, is to change the overall narrative about AI. Arguably, the most important lesson from his life and work is that machines are not necessarily good or bad. Whether they destroy or create jobs depends on how we deploy them, and on who makes those choices. In Ricardo’s time, a small cadre of factory owners decided, and those decisions centered on automation and squeezing workers as hard as possible. Today, an even smaller cadre of tech leaders seem to be taking the same path. But focusing on creating new opportunities, new tasks for humans, and respect for all individuals would ensure much better outcomes. It is still possible to have pro-worker AI, but only if we can change the direction of innovation in the tech industry and introduce new regulations and institutions. As in Ricardo’s day, it would be naive to trust in the benevolence of business and tech leaders. It took major political reforms to create genuine democracy, to legalize trade unions, and to change the direction of technological progress in Britain during the Industrial Revolution. The same basic challenge confronts us today. Daron Acemoglu, Institute Professor of Economics at MIT, is a co-author (with James A. Robinson) of Why Nations Fail: The Origins of Power, Prosperity and Poverty (Profile, 2019) and a co-author (with Simon Johnson) of Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity (PublicAffairs, 2023). 
Simon Johnson, a former chief economist at the International Monetary Fund, is a professor at the MIT Sloan School of Management, a co-chair of the COVID-19 Policy Alliance, and a co-chair of the CFA Institute Systemic Risk Council. He is the co-author (with Daron Acemoglu) of Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity (PublicAffairs, 2023).
Six Ways AI May Change War and the World -- Hal Brands
6 Ways AI Will Change War and the World Hal Brands, Bloomberg Opinion, 06/09/24 Artificial intelligence will change our everyday lives in innumerable ways: how governments serve their citizens; how we drive (and get driven); how we handle and, we hope, protect our finances; how doctors diagnose and treat diseases; even how my students research and write their essays. But just how revolutionary will AI be? Will it upend the global balance of power? Will it allow autocracies to rule the world? Will it make warfare so fast and ferocious that it becomes uncontrollable? In short, will AI fundamentally alter the rhythms of world affairs? It is, of course, too soon to say definitively: The effects of AI will ultimately hinge on the decisions leaders and nations make, and technology sometimes takes surprising turns. But even as we are wowed and worried by the next version of ChatGPT, we need to wrestle with six deeper questions about international affairs in the age of AI. And we need to consider a surprising possibility: Perhaps AI won’t change the world as much as we seem to expect. 1) Will AI make war uncontrollable? Consider one assertion — that artificial intelligence will make conflict more lethal and harder to constrain. Analysts envision a future in which machines can pilot fighter jets more skillfully than humans, AI-enabled cyberattacks devastate enemy networks, and advanced algorithms turbocharge the speed of decisions. Some warn that automated decision-making could trigger rapid-fire escalation — even nuclear escalation — that leaves policymakers wondering what happened. If war plans and railway timetables caused World War I, perhaps AI will cause World War III. That AI will change warfare is undeniable. From enabling predictive maintenance of hardware to facilitating astounding improvements in precision targeting, the possibilities are profound. A single F-35, quarterbacking a swarm of semiautonomous drones, could wield the firepower of an entire bomber wing. As the National Security Commission on Artificial Intelligence concluded in 2021, a “new era of conflict” will be dominated by the side that masters “new ways of war.” But there’s nothing fundamentally novel here. The story of warfare through the ages is one in which innovation regularly makes combat faster and more intense. So think twice before accepting the proposition that AI will make escalation uncontrollable. The US and China have discussed an agreement not to automate their nuclear command-and-control processes — a pledge Washington has made independently — for the simple reason that states have strong incentives not to relinquish control over weapons whose use could endanger their own survival. Russia’s behavior, including the development of nuclear-armed torpedoes that could eventually operate autonomously, is a greater concern. But even during the Cold War, when Moscow built a system meant to ensure nuclear retaliation even if its leadership was wiped out, it never turned off the human controls. Expect today’s great powers to exploit the military possibilities AI presents aggressively — while trying to keep the most critical decisions in human hands. In fact, AI could reduce the risk of breakneck escalation, by helping decision makers peer through the fog of crisis and war. The Pentagon believes that AI-enabled intelligence and analytical tools can help humans sift through confusing or fragmentary information regarding an enemy’s preparations for war, or even whether a feared missile attack is indeed underway. 
This isn’t science fiction: Assistance from AI reportedly helped US intelligence analysts sniff out Russian President Vladimir Putin’s invasion of Ukraine in 2022. In this sense, AI can mitigate the uncertainty and fear that pushes people toward extreme reactions. By giving policymakers greater understanding of events, AI might also improve their ability to manage them. 2) Will AI help autocracies like China control the world? What about a related nightmare — that AI will help the forces of tyranny control the future? Analysts such as Yuval Noah Harari have warned that artificial intelligence will reduce the costs and increase the returns from repression. AI-equipped intelligence services will need less manpower to decipher the vast amounts of intelligence they gather on their populations — allowing them, for example, to precisely map and remorselessly dismantle protest networks. They will use AI-enabled facial recognition technology to monitor and control their citizens, while employing AI-created disinformation to discredit critics at home and abroad. By making autocracy increasingly efficient, AI could allow the dictators to dominate the dawning age. This is certainly what China hopes for. President Xi Jinping’s government has devised a “social credit” system that uses AI, facial recognition and big data to ensure the reliability of its citizens — by regulating their access to everything from low-interest loans to airplane tickets. Ubiquitous, AI-assisted surveillance has turned Xinjiang into a dystopian model of modern repression. Beijing intends to seize the “strategic commanding heights” of innovation because it believes AI can bolster its domestic system and its military muscle. It is using the power of the illiberal state to steer money and talent toward advanced technologies. It’s not a given, though, that the autocracies will come out ahead. To believe that AI fundamentally favors autocracy is to believe that some of the most vital, longstanding enablers of innovation — such as open flows of information and tolerance for dissent — are no longer so important. Yet autocracy is already limiting China’s potential. Building powerful large language models requires huge pools of information. But if those inputs are tainted or biased because China’s internet is so heavily censored, the quality of the outputs will suffer. An increasingly repressive system will also struggle, over time, to attract top talent: It is telling that 38% of top AI researchers in the US are originally from China. And smart technology must still be used by China’s governing institutions, which are getting progressively less smart — that is, less technocratically competent — as the political system becomes ever more subservient to an emperor-for-life. China will be a formidable technological competitor. But even in the age of AI, Xi and his illiberal brethren may struggle to escape the competitive drag autocracy creates. 3) Will AI favor the best or the rest? Some technologies narrow the gap between the most and least technologically advanced societies. Nuclear weapons, for instance, allow relative pipsqueaks like North Korea to offset the military and economic advantages a superpower and its allies possess. Others widen the divide: In the 19th century, repeating rifles, machine guns and steamships allowed European societies to subjugate vast areas of the world. In some respects, AI will empower the weak. 
US officials worry that large language models might help terrorists with crude science kits to build biological weapons. Rogue states, like Iran, might use AI to coordinate drone swarms against US warships in the Persian Gulf. More benignly, AI could expand access to basic healthcare services in the Global South, creating big payoffs in increased life expectancy and economic productivity. In other respects, however, AI will be a rich man’s game. Developing state-of-the-art AI is fantastically expensive. Training large language models can require vast investments and access to a finite quantity of top scientists and engineers — to say nothing of staggering amounts of electricity. Some estimates place the cost of the infrastructure supporting Microsoft’s Bing AI chatbot at $4 billion. Almost anyone can be a taker of AI — but being a maker requires copious resources. This is why the middle powers making big moves in AI, such as Saudi Arabia and the United Arab Emirates, have very deep pockets. Many of the early leaders in the AI race are either tech titans (Alphabet, Microsoft, Meta, IBM, Nvidia and others) or firms with access to their money (OpenAI). And the US, with its vibrant, well-funded tech sector, still leads the field. What’s true in the private sector may also be true in the realm of warfare. At the outset, the military benefits of new technology may flow disproportionately to countries with the generous defense budgets required to develop and field new capabilities at scale. All this could change: Early leads don’t always translate into enduring advantages. Upstarts, whether firms or countries, have disrupted other fields before. For the time being, however, AI may do more to reinforce than revolutionize the balance of power. 4) Will AI fracture or fortify coalitions? How artificial intelligence affects the balance of power depends on how it affects global coalitions. As analysts at Georgetown University’s Center for Security and Emerging Technologies have documented, the US and its allies can vastly outpace China in spending on advanced technologies — but only if they combine their resources. Beijing’s best hope is that the free world fractures over AI. It could happen. Washington worries that Europe’s emerging approach to generative AI regulation could choke off innovation: In this sense, AI is underscoring divergent US and European approaches to markets and risk. Another key democracy, India, prefers strategic autonomy to strategic alignment — in technology as in geopolitics, it prefers to go its own way. Meanwhile, some of Washington’s nondemocratic partners, namely Saudi Arabia and the UAE, have explored tighter tech ties to Beijing. But it’s premature to conclude that AI will fundamentally disrupt US alliances. In some cases, the US is successfully using those alliances as tools of technological competition: Witness how Washington has cajoled Japan and the Netherlands to limit China’s access to high-end semiconductors. The US is also leveraging security partnerships with Saudi Arabia and the UAE to place limits on their technological relations with Beijing, and to promote AI partnerships between American and Emirati firms. In this sense, geopolitical alignments are shaping the development of AI, rather than vice versa. More fundamentally, the preferences countries have regarding AI are related to their preferences for domestic and international order. 
So whatever differences the US and Europe have may pale in comparison to their shared fears of what will happen if China surges to supremacy. Europe and America may eventually find their way into greater alignment on AI issues — just as shared hostility to US power is pushing China and Russia to cooperate more closely in military applications of the technology today. 5) Will AI tame or inflame great-power rivalry? Many of these questions relate to how AI will affect the intensity of the competition between the US-led West and the autocratic powers headed by China. No one really knows whether runaway AI could truly endanger humanity. But shared existential risks do sometimes make strange bedfellows. During the original Cold War, the US and the Soviet Union cooperated to manage the perils associated with nuclear weapons. During the new Cold War, perhaps Washington and Beijing will find common purpose in keeping AI from being used for malevolent purposes such as bioterrorism or otherwise threatening countries on both sides of today’s geopolitical divides. Yet the analogy cuts both ways, because nuclear weapons also made the Cold War sharper and scarier. Washington and Moscow had to navigate high-stakes showdowns such as the Cuban Missile Crisis and several Berlin crises before a precarious stability settled in. Today, AI arms control seems even more daunting than nuclear arms control, because AI development is so hard to monitor and the benefits of unilateral advantage are so tantalizing. So even as the US and China start a nascent AI dialogue, technology is turbocharging their competition. AI is at the heart of a Sino-American tech war, as China uses methods fair and foul to hasten its own development and the US deploys export controls, investment curbs and other measures to block Beijing’s path. If China can’t accelerate its technological progress, says Xi, it risks being “strangled” by Washington. AI is also fueling a fight for military superiority in the Western Pacific: The Pentagon’s Replicator Initiative envisions using thousands of AI-enabled drones to eviscerate a Chinese invasion fleet headed for Taiwan. Dueling powers may eventually find ways of cooperating, perhaps tacitly, on the mutual dangers AI poses. But a transformative technology will intensify many aspects of their rivalry between now and then. 6) Will AI make the private sector superior to the public? AI will undoubtedly shift the balance of influence between the public and private sectors. Analogies between AI and nuclear weapons can be enlightening, but only to a point: The notion of a Manhattan Project for AI is misleading because it is a field where money, innovation and talent are overwhelmingly found in the private sector. Firms on the AI frontier are thus becoming potent geopolitical actors — and governments know it. When Elon Musk and other experts advocated a moratorium on development of advanced AI models in 2023, official Washington urged the tech firms not to stop — because doing so would simply help China catch up. Government policy can speed or slow innovation. But to a remarkable degree, America’s strategic prospects depend on the achievements of private firms. It’s important not to take this argument too far. China’s civil-military fusion is meant to ensure that the state can direct and exploit innovation by the private sector. Although the US, as a democracy, can’t really mimic that approach, the concentration of great power in private firms will bring a government response. 
Washington is engaging, albeit hesitantly, in a debate about how best to regulate AI so as to foster innovation while limiting malign uses and catastrophic accidents. The long arm of state power is active in other ways, as well: The US would never allow Chinese investors to buy the nation’s leading AI firms, and it is restricting American investment in the AI sectors of adversary states. And when Silicon Valley Bank, which held the deposits of many firms and investors in the tech sector, spiraled toward insolvency, geopolitical concerns helped initiate a government bailout. One should also expect, in the coming years, a greater emphasis on helping the Pentagon stimulate development of militarily relevant technologies — and making it easier to turn private-sector innovation into war-winning weapons. The more strategically salient AI is, the less willing governments will be to just let the market do its work. We can’t predict the future: AI could hit a dead end, or it might accelerate beyond anyone’s expectations. Technology, moreover, is not some autonomous force. Its development and effects will be shaped by decisions in Washington and around the world. For now, the key is to ask the right questions, because doing so helps us understand the stakes of those decisions. It helps us imagine the various futures AI could shape. Not least, it illustrates that maybe AI won’t cause a geopolitical earthquake after all. Sure, there are reasons to fear that AI will make warfare uncontrollable, upend the balance of power, fracture US alliances or fundamentally favor autocracies over democracies. But there are also good reasons to suspect that it won’t. This isn’t to counsel complacency. Averting more dangerous outcomes will require energetic efforts and smart choices. Indeed, the primary value of this exercise is to show that a very wide range of scenarios is possible — and the worst ones won’t simply foreclose themselves. Whether AI favors autocracy or democracy depends, in part, on whether the US pursues enlightened immigration policies that help it hoard top talent. Whether AI reinforces or fractures US alliances hinges on whether Washington treats those alliances as assets to be protected or as burdens to be discarded. Whether AI upholds or undermines the existing international hierarchy, and how much it changes the relationship between the private sector and the state, depends on how wisely the US and other countries regulate its development and use. What’s beyond doubt is that AI opens inspiring vistas and terrible possibilities. America’s goal should be to innovate ruthlessly, and responsibly, enough so that a basically favorable world order doesn’t change fundamentally — even as technology does. Hal Brands is a Senior Fellow at the AEI.
Waking from the AI Dream - Nolen Gertz
這篇文章雖然出於哲學教授的筆下,在我這個AI門外漢看來,其內容除了「以偏概全」的邏輯謬誤外,還有頗為嚴重的偏執狂加上些許被迫害狂。轉載於此,算是支持「百花齊放」和「言論自由」吧。 The day the AI dream died Unveiling tech pessimism Nolen Gertz, 05/31/24 AI's promise was to solve problems, not only of the day but all future obstacles as well. Yet the hype has started to wear off. Amidst the growing disillusionment, Nolen Gertz challenges the prevailing optimism, suggesting that our reliance on AI might be less about solving problems and more about escaping the harsh realities of our time. He questions whether AI is truly our saviour or just a captivating distraction, fuelling capitalist gains and nihilistic diversions from the global crises we face. -- Editor’s Notes I recently participated in a HTLGI debate where one of the participants, Kenneth Cukier, who is an editor of The Economist, criticized my view of technology as being unnecessarily pessimistic. He confidently claimed that we should be more optimistic about technological progress because such progress, for example, would help us to solve the climate change crisis. Though Cukier admitted that he might not just be optimistic but even “Panglossian” (幻想式的樂觀) when it comes to technological progress, he nevertheless argued, “I think if you think of all the global challenges that we’re facing, in large part because of the technologies that we’ve created, Industrial Revolution most importantly, gunpowder as well, it’s going to be technology that is going to help us overcome it.” Cukier admits that technologies are the source of many of the “global challenges that we’re facing,” but he still nevertheless believes that technologies will also be the solution. The reason for this faith in technological progress is due primarily to the belief that artificial intelligence (AI) is so radically different from and superior to previous technologies that it will not only solve our problems, but solve problems that were caused by previous technological solutions to our problems. But such enthusiasm for AI is now dying down as the hype that had originally surrounded AI is being more and more replaced with disillusionment. Just as ChatGPT was once the face of AI’s successes, it is now the face of AI’s failures. Each day journalists help to shed light on the realities behind how ChatGPT works; like the reports about the invisible labor force in Kenya and Pakistan, that helped to train ChatGPT; reports about the massive environmental impact of ChatGPT’s data centers; reports about how ChatGPT repackages the work of writers and artists as its own, without attributing or paying for that work; and reports about how ChatGPT can provide answers to people’s questions that seem to be based on facts but are really based on hallucinations passed off as facts. We have now gone from fearing that ChatGPT would put people out of work to instead fearing how much more work ChatGPT requires in order to use it without getting failed, fired, or sued. Yet while the hype surrounding AI seems to have disappeared, the AI itself has not. So if we have moved from thinking that AI could do everything, to wondering if AI can do anything, then why have we not simply abandoned AI altogether? The answer would seem to be that AI did not need to live up to the hype in order to be proven effective. So the question we should be asking is not whether AI will ever be successful, but rather: how has it already been successful? 
If AI has indeed already proved effective enough for people to still invest billions in its development, then what has AI been effective at doing thus far? One answer would seem to be that AI has already been successful at making rich people richer. If AI is seen as a legitimate investment, then simply suggesting that your company would integrate AI into its products would be enough reason to motivate investors to pour money into your company. Likewise, if AI is seen as capable of putting people out of work, then merely the potential of AI provides sufficient excuse to cut costs, fire employees, and force new hires to accept work for less pay. So whether or not AI ever provides a return on investment, and whether or not AI ever turns out to be capable of actually replacing human labor, AI nevertheless has already proved to be incredibly successful from the perspective of capitalism.

But another important answer to the question of why AI, post-hype, still occupies so much of our time and attention would seem to be the very fact that it occupies so much of our time and attention. Again, whether or not AI is ever capable of solving “global challenges” like climate change, the very idea that it could solve our problems and that it could stop climate change is already enough to help relieve the pressure on corporations to cut down on pollution, relieve the pressure on politicians to seek real solutions to climate change, and relieve the pressure on all of us to face the reality of the climate change crisis. AI might not be able to help us in the way that companies like OpenAI claimed, but nevertheless AI has helped us at the very least by distracting us.

In other words, AI has been incredibly successful not only when it comes to capitalism, but also when it comes to nihilism. We know the truth about AI, but companies are still pursuing AI because it is still a way to make money, and we are still talking about AI because it gives us something to talk about other than climate change. And because we keep talking about it, companies can keep making money off of it. And because companies keep making money off of it, we can keep talking about it. We seem to be stuck in a vicious cycle.

So the question we need to ask is not whether we can stop pursuing AI but whether we can stop pursuing nihilistic escapes from having to face reality.

Nolen Gertz is Assistant Professor of Applied Philosophy, University of Twente, and author of Nihilism and Technology (Rowman and Littlefield, 2018) and Nihilism (MIT Press, 2020).
AI Technology and the 2024 US Election -- Julia Mueller
Fears grow over AI’s impact on the 2024 election Julia Mueller, 12/26/23 The rapid rise of artificial intelligence (AI) is raising concerns about how the technology could impact next year’s election as the start of 2024 primary voting nears. AI — advanced tech that can generate text, images and audio, and even build deepfake videos — could fuel misinformation in an already polarized political landscape and further erode voter confidence in the country’s election system. “2024 will be an AI election, much the way that 2016 or 2020 was a social media election,” said Ethan Bueno de Mesquita, interim dean at the University of Chicago Harris School of Public Policy. “We will all be learning as a society about the ways in which this is changing our politics.” Experts are sounding alarms that AI chatbots could generate misleading information for voters if they use it to get info on ballots, calendars or polling places — and also that AI could be used more nefariously, to create and disseminate misinformation and disinformation against certain candidates or issues. “I think it could get pretty dark,” said Lisa Bryant, chair of the Department of Political Science at California State University, Fresno and an expert with MIT’s Election lab. Polling shows the concern about AI doesn’t just come from academics: Americans appear increasingly worried about how the tech could confuse or complicate things during the already contentious 2024 cycle. A U Chicago Harris/AP-NORC poll released in November found a bipartisan majority of U.S. adults are worried about the use of AI “increasing the spread of false information” in the 2024 election. A Morning Consult-Axios survey found an uptick in recent months in the share of U.S. adults who said they think AI will negatively impact trust in candidate advertisements, as well as trust in the outcome of the elections overall. Nearly 6 in 10 respondents said they think misinformation spread by AI will have an impact on who ultimately wins the 2024 presidential race. “They are a very powerful tool for doing things like making fake videos, fake pictures, et cetera, that look extremely convincing and are extremely difficult to distinguish from reality — and that is going to be likely to be a tool in political campaigns, and already has been,” said Bueno de Mesquita, who worked on the UChicago poll. “It’s very likely that that’s going to increase in the ‘24 election — that we’ll have fake content created by AI that’s at least by political campaigns, or at least by political action committees or other actors — that that will affect the voters’ information environment make it hard to know what’s true and false,” he said. Over the summer, the DeSantis-aligned super PAC Never Back Down reportedly used an AI-generated version of former President Trump’s voice in a television ad. Just ahead of the third Republican presidential debate, former President Trump’s campaign released a video clip that appeared to imitate the voices of his fellow GOP candidates, introducing themselves by Trump’s favored nicknames. And earlier this month, the Trump campaign posted an altered version of a report that NBC News’s Garrett Haake gave before the third GOP debate. The clip starts unaltered with Haake’s report but has a voiceover take over, criticizing the former president’s Republican rivals. “The danger is there, and I think it’s almost unimaginable that we won’t have deepfake videos or whatever as part of our politics going forward,” Bueno de Mesquita said. 
The use of AI by political campaigns in particular has prompted tech companies and government officials to consider regulations on the tech. Google earlier this year announced it would require verified election advertisers to “prominently disclose” when their ads had been digitally generated or altered. Meta also plans to require disclosure when a political ad uses “photorealistic image or video, or realistic-sounding audio” that was generated or altered to, among other purposes, depict a real person doing or saying something they did not do. President Biden issued an executive order on AI in October, including new standards for safety and plans for the Commerce Department to craft guidelines on content authentication and watermarking. “President Biden believes that we have an obligation to harness the power of AI for good, while protecting people from its potentially profound risks,” a senior administration official said at the time. But lawmakers have largely been left scrambling to try to regulate the industry as it charges ahead with new developments. Shamaine Daniels, a Democratic candidate for Congress in Pennsylvania, is using an AI-powered voice tool from the startup Civox as a phone-banking tool for her campaign. “I share everyone’s grave concerns about the possible nefarious uses of AI in politics and elsewhere. But we need to also understand and embrace the opportunities this technology represents,” Daniels said when she announced her campaign would roll out the tech. Experts say AI could be used for good in election cycles — like informing the public what political candidates they may agree with on issues and helping election officials clean up voter lists to identify duplicate registrations. But they also warn the tech could worsen problems exposed during the 2016 and 2020 cycles. Bryant said AI could help disinformation “micro-target” users even further than what social media has already been able to. She said no one is immune from this, pointing to how ads on a platform like Instagram already can influence behavior. “It really has helped to take this misinformation and really pinpoint what kinds of messages, based on past online behavior, really resonate and work with individuals,” she said. Bueno de Mesquita said he is not as concerned about micro-targeting from campaigns to manipulate voters, because evidence has shown that social media targeting has not been effective enough to influence elections. Resources should be focused on educating the public about the “information environment” and pointing them to authoritative information, he said. Nicole Schneidman, a technology policy advocate at the nonprofit watchdog group Protect Democracy, said the organization does not expect AI to produce “novel threats” for the 2024 election but rather potential acceleration of trends that are already affecting election integrity and democracy. She said a risk exists of overemphasizing the potential of AI in a broader landscape of disinformation affecting the election. “Certainly, the technology could be used in creative and novel ways, but what underlies those applications are all threats like disinformation campaigns or cyberattacks that we’ve seen before,” Schneidman said. 
“We should be focusing on mitigation strategies that we know that are responsive to those threats that are amplified, as opposed to spending too much time trying to anticipate every use case of the technology.”

A key solution to grappling with the rapidly developing technology could just be getting users in front of it. “The best way to become AI literate myself is to spend half an hour playing with the chat bot,” said Bueno de Mesquita. Respondents in the U Chicago Harris/AP-NORC poll who reported being more familiar with AI tools were also more likely to say use of the tech could increase the spread of misinformation, suggesting awareness of what the tech can do can increase awareness of its risks.

“I think the good news is that we have strategies both old and new to really bring to the fore here,” Schneidman said. She said as AI becomes more sophisticated, detection technology may have trouble keeping up despite investments in those tools. Instead, she said “pre-bunking” from election officials can be effective at informing the public before they even potentially come across AI-generated content.

Schneidman said she hopes election officials also increasingly adopt digital signatures to indicate to journalists and the public what information is coming directly from an authoritative source and what might be fake. She said these signatures could also be included in photos and videos a candidate posts to plan for deepfakes. “Digital signatures are the proactive version of getting ahead of some of the challenges that synthetic content could pose to the caliber of the election information ecosystem,” she said.

She said election officials, political leaders and journalists can get information people need about when and how to vote so they are not confused and voter suppression is limited. She added that narratives surrounding interference in elections are not new, which gives those fighting disinformation from AI content an advantage.

“The advantages that pre-bunking gives us is crafting effective counter messaging that anticipates recurring disinformation narratives and hopefully getting that in the hands and in front of the eyes of voters far in advance of the election, consistently ensuring that message is landing with voters so that they are getting the authoritative information that they need,” Schneidman said.
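A brief aside on the digital-signature idea Schneidman describes above: the sketch below illustrates, in Python, how an election office could sign an official statement and how a journalist or voter could verify it against the office's published public key. This is a minimal sketch of the general technique only, not a description of any system mentioned in the article; it assumes the third-party "cryptography" package is installed, and names such as office_private_key and the sample announcement are hypothetical, chosen purely for illustration.

# Minimal sketch: an election office signs a statement with Ed25519,
# and anyone holding the office's published public key can verify it.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Hypothetical: the election office generates a key pair once and
# publishes the public key on its official website.
office_private_key = Ed25519PrivateKey.generate()
office_public_key = office_private_key.public_key()

# The office signs an official announcement before releasing it.
announcement = b"Polls in Example County are open 7:00-20:00 on Nov 5."
signature = office_private_key.sign(announcement)

# A journalist or voter checks the announcement against the published key.
try:
    office_public_key.verify(signature, announcement)
    print("Signature valid: the statement matches the office's key.")
except InvalidSignature:
    print("Signature invalid: altered or not from the office.")

# A tampered message fails verification.
try:
    office_public_key.verify(signature, b"Polls are closed today.")
except InvalidSignature:
    print("Tampered message rejected.")

In practice an office would more likely rely on an existing content-provenance standard (for example, C2PA for images and video) rather than hand-rolled signatures, but the underlying primitive is the same as in this sketch.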
Nobody Really Cares About 'AI Safety' -- Lucas Ropek
這篇文章從「牟利」和「(技術)安全性」兩者互斥的角度,分析OpenAI的鬧劇(請見本欄前兩篇報導/評論)。但全文有其嚴肅性,值得細讀。對此議題有興趣的朋友,請到原網頁看看上週以來其它和「人工智慧」相關訊息,以及其他讀者的觀點。 我從這三篇報導/評論/分析,進一步了解到:為什麼「人工智慧」讓人害怕、「人工智慧『安全性』」導致的憂慮與焦躁、以及商場高層觀念與賺錢之間鬥爭的實務等等。真的比連續劇還要引人入勝。 After OpenAI's Blowup, It Seems Pretty Clear That 'AI Safety' Isn't a Real Thing As OpenAI's chaos comes to an end, AI development will never be the same. Lucas Ropek, 11/22/23 Welcome to AI This Week, Gizmodo’s weekly roundup where we do a deep dive on what’s been happening in artificial intelligence. Well, holy shit. As far as the tech industry goes, it’s hard to say whether there’s ever been a more shocking series of events than the ones that took place over the last several days. The palace intrigue and boardroom drama of Sam Altman’s ousting by the OpenAI board (and his victorious reinstatement earlier today) will doubtlessly go down in history as one of the most explosive episodes to ever befall Silicon Valley. That said, the long-term fallout from this gripping incident is bound to be a lot less enjoyable than the initial spectacle of it. The “coup,” as many have referred to it, has largely been attributed to an ideological rift between Sam and the OpenAI board over the pace of technological development at the company. So, this narrative goes, the board, which is supposed to have ultimate say over the direction of the organization, was concerned about the rate at which Altman was pushing to commercialize the technology, and decided to eject him with extreme prejudice. Altman, who was subsequently backed by OpenAI’s powerful partner and funder, Microsoft, as well as a majority of the startup’s staff, subsequently led a counter-coup, pushing out the traitors and re-instating himself as the leader of the company. So much of the drama of the episode seems to revolve around this argument between Altman and the board over “AI safety.” Indeed, this fraught chapter in the company’s history seems like a flare up of OpenAI’s two opposing personalities—one based around research and responsible technological development, and the other based around making shitloads of money. One side decidedly overpowered the other (hint: it was the money side). Other writers have already offered break downs about how OpenAI’s unique organizational structure seems to have set it on a collision course with itself. Maybe you’ve seen the startup’s org chart floating around the web but, in case you haven’t, here’s a quick recap: Unlike pretty much every other technology business that exists, OpenAI is actually a non-profit, governed wholly by its board, that operates and controls a for-profit company. This design is supposed to prioritize the organization’s mission of pursuing the public good over money. OpenAI’s own self-description promotes this idealistic notion—that it’s main aim is to make the world a better place, not make money: We designed OpenAI’s structure—a partnership between our original Nonprofit and a new capped profit arm—as a chassis for OpenAI’s mission: to build artificial general intelligence (AGI) that is safe and benefits all of humanity. Indeed, the board’s charter owes its allegiance to “humanity,” not to its shareholders. So, despite the fact that Microsoft has poured a megaton of money and resources into OpenAI, the startup’s board is still (hypothetically) supposed to have final say over what happens with its products and technology. That said, the company part of the organization is reported to be worth tens of billions of dollars. 
As many have already noted, the organization's ethical mission seems to have come directly into conflict with the economic interests of those who had invested in the organization. As per usual, the money won.

All of this said, you could make the case that we shouldn't fully endorse this interpretation of the weekend's events yet, since the actual reasons for Altman's ousting have still not been made public. For the most part, members of the company either aren't talking about the reasons Sam was pushed out or have flatly denied that his ousting had anything to do with AI safety. Alternate theories have swirled in the meantime, with some suggesting that the real reasons for Altman's aggressive exit were decidedly more colorful—like accusations he pursued additional funding via autocratic Mideast regimes.

But to get too bogged down in speculating about the specific catalysts for OpenAI's drama is to ignore what the whole episode has revealed: as far as the real world is concerned, "AI safety" in Silicon Valley is pretty much null and void. Indeed, we now know that despite its supposedly bullet-proof organizational structure and its stated mission of responsible AI development, OpenAI was never going to be allowed to actually put ethics before money.

To be clear, AI safety is a really important field, and, were it to be actually practiced by corporate America, that would be one thing. That said, the version of it that existed at OpenAI—arguably one of the companies that has done the most to pursue a "safety" oriented model—doesn't seem to have been much of a match for the realpolitik machinations of the tech industry.

In even more frank terms, the folks who were supposed to be defending us from runaway AI (i.e., the board members)—the ones who were ordained with responsible stewardship over this powerful technology—don't seem to have known what they were doing. They don't seem to have understood that Sam had all the industry connections, the friends in high places, was well-liked, and that moving against him in a world where that kind of social capital is everything amounted to career suicide. If you come at the king, you best not miss.

In short: If the point of corporate AI safety is to protect humanity from runaway AI, then, as an effective strategy for doing that, it has effectively just flunked its first big test. That's because it's sorta hard to put your faith in a group of people who weren't even capable of predicting the very predictable outcome that would occur when they fired their boss. How, exactly, can such a group be trusted with overseeing a supposedly "super-intelligent," world-shattering technology? If you can't outfox a gaggle of outraged investors, then you probably can't outfox the Skynet-type entity you claim to be building.

That said, I would argue we also can't trust the craven, money-obsessed C-suite that has now reasserted its dominance. Imo, they're obviously not going to do the right thing. So, effectively, humanity is stuck between a rock and a hard place.

As the conflict from the OpenAI dustup settles, it seems like the company is well positioned to get back to business as usual. After jettisoning the only two women on its board, the company added fiscal goon Larry Summers.
Altman is back at the company (as is former company president Greg Brockman, who stepped down in solidarity with Altman), and Microsoft's top executive, Satya Nadella, has said that he is "encouraged by the changes to OpenAI board" and said it's a "first essential step on a path to more stable, well-informed, and effective governance."

With the board's failure, it seems clear that OpenAI's do-gooders may have not only set back their own "safety" mission, but might have also kicked off a backlash against the AI ethics movement writ large. Case in point: This weekend's drama seems to have further radicalized an already pretty radical anti-safety ideology that had been circulating the business. The "effective accelerationists" (abbreviated "e/acc") believe that stuff like additional government regulations, "tech ethics" and "AI safety" are all cumbersome obstacles to true technological development and exponential profit. Over the weekend, as the narrative about "AI safety" emerged, some of the more fervent adherents of this belief system took to X to decry what they perceived to be an attack on the true victim of the episode (capitalism, of course).

To some degree, the whole point of the tech industry's embrace of "ethics" and "safety" is about reassurance. Companies realize that the technologies they are selling can be disconcerting and disruptive; they want to reassure the public that they're doing their best to protect consumers and society. At the end of the day, though, we now know there's no reason to believe that those efforts will ever make a difference if the company's "ethics" end up conflicting with its money. And when have those two things ever not conflicted?
The AI Research Breakthrough Behind the Upheaval at OpenAI – Reuters
The article below gives a detailed report on the story behind the "threat thesis" in the previous post in this column. Let me state again: I am a complete layman in the field of AI. If the argument of that "threat thesis" piece rests on the "breakthrough" reported below, then, going by my own logic and common sense, I would say its author is overstating the case.

OpenAI researchers warned board of AI breakthrough ahead of CEO ouster, sources say
Anna Tong, Jeffrey Dastin and Krystal Hu, 11/23/23

Nov 22 (Reuters) - Ahead of OpenAI CEO Sam Altman's four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters.

The previously unreported letter and AI algorithm were key developments before the board's ouster of Altman, the poster child of generative AI, the two sources said. Prior to his triumphant return late Tuesday, more than 700 employees had threatened to quit and join backer Microsoft (MSFT.O) in solidarity with their fired leader.

The sources cited the letter as one factor among a longer list of grievances by the board leading to Altman's firing, among which were concerns over commercializing advances before understanding the consequences. Reuters was unable to review a copy of the letter. The staff who wrote the letter did not respond to requests for comment.

After being contacted by Reuters, OpenAI, which declined to comment, acknowledged in an internal message to staffers a project called Q* and a letter to the board before the weekend's events, one of the people said. An OpenAI spokesperson said that the message, sent by long-time executive Mira Murati, alerted staff to certain media stories without commenting on their accuracy.

Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup's search for what's known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks.

Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because the individual was not authorized to speak on behalf of the company. Though the model was only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*'s future success, the source said. Reuters could not independently verify the capabilities of Q* claimed by the researchers.

Researchers consider math to be a frontier of generative AI development. Currently, generative AI is good at writing and language translation by statistically predicting the next word, and answers to the same question can vary widely. But conquering the ability to do math — where there is only one right answer — implies AI would have greater reasoning capabilities resembling human intelligence. This could be applied to novel scientific research, for instance, AI researchers believe. Unlike a calculator that can solve a limited number of operations, AGI can generalize, learn and comprehend.

In their letter to the board, researchers flagged AI's prowess and potential danger, the sources said, without specifying the exact safety concerns noted in the letter. There has long been discussion among computer scientists about the danger posed by highly intelligent machines, for instance if they might decide that the destruction of humanity was in their interest.

Researchers have also flagged work by an "AI scientist" team, the existence of which multiple sources confirmed.
The group, formed by combining earlier "Code Gen" and "Math Gen" teams, was exploring how to optimize existing AI models to improve their reasoning and eventually perform scientific work, one of the people said.

Altman led efforts to make ChatGPT one of the fastest growing software applications in history and drew the investment - and computing resources - necessary from Microsoft to get closer to AGI.

In addition to announcing a slew of new tools in a demonstration this month, Altman last week teased at a summit of world leaders in San Francisco that he believed major advances were in sight.

"Four times now in the history of OpenAI, the most recent time was just in the last couple weeks, I've gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honor of a lifetime," he said at the Asia-Pacific Economic Cooperation summit.

A day later, the board fired Altman.

Reporting by Anna Tong and Jeffrey Dastin in San Francisco and Krystal Hu in New York; Editing by Kenneth Li and Lisa Shumaker
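The Reuters piece above notes that today's generative models write by statistically predicting the next word, which is why the same prompt can produce different answers, whereas a math problem has exactly one correct one. Below is a minimal, purely illustrative Python sketch of that contrast, using a toy bigram counter as a stand-in for a real language model. The tiny corpus and the function names are my own assumptions for illustration, not anything from OpenAI or from the article.

```python
import random
from collections import Counter, defaultdict

# Tiny toy corpus standing in for the web-scale text a real model is trained on.
corpus = ("the model predicts the next word . "
          "the model predicts the next token . "
          "the answer to two plus two is four .").split()

# Count, for every word, how often each possible next word follows it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def greedy_next(word):
    """Deterministic decoding: always return the most frequent follower."""
    return follows[word].most_common(1)[0][0]

def sampled_next(word, rng):
    """Stochastic decoding: draw a follower in proportion to its frequency.
    This is why the same prompt can yield different continuations."""
    words, counts = zip(*follows[word].items())
    return rng.choices(words, weights=counts, k=1)[0]

rng = random.Random(0)
print(greedy_next("next"))                            # always the same word
print([sampled_next("next", rng) for _ in range(5)])  # may differ run to run

# Arithmetic, by contrast, has exactly one right answer:
print(2 + 2)  # 4, every time
```

Real systems are vastly larger and predict tokens with a neural network rather than a lookup table, but the sampling step is the source of the variability the article describes, and it is also why a single canonical answer, as in grade-school arithmetic, makes math a meaningful test of reasoning.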
The Upheaval at OpenAI and the Biggest Threat in the History of Humanity – Tomas Pueyo
This looks like a gigantic, super-sized threat (see this column's earlier post "Humanity May Face Its 'Terminal Moment' in 2031 -- Tim Newcomb"). I am not able to translate or comment on the full piece; what I can do is collect some relevant links for you and translate the terms below as faithfully and clearly as I can. Readers should interpret and savor the piece for themselves. The article also answers my naive idea of "pulling the plug."

Glossary (corrected 11/26; corrections welcome):
* AI alignment: the requirement that AI development conform to goals such as human instructions, needs, and welfare (a toy sketch of why this matters follows this list)
* Artificial intelligence: an introductory overview of the field
* Bard: Google's chatbot
* Bing Chat: Microsoft's chatbot
* ChatBot (chatterbot): a conversational program
* ChatGPT: OpenAI's chatbot; the name stands for Chat Generative Pre-trained Transformer
* foom (like the word boom): the moment an AI system's capability suddenly takes off; a sudden increase in artificial intelligence such that an AI system becomes extremely powerful
* FOOM: the process and steps by which an AI reaches that breakthrough level of capability
* OpenAI: the AI research organization behind ChatGPT
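As a companion to the "AI alignment" entry above, here is a toy Python sketch of why a goal that omits what humans care about can go wrong. Everything in it is invented for illustration: the plan names, the numbers, and the penalty weight are arbitrary assumptions of mine, not anything from the article.

```python
# Toy illustration of reward (mis)specification.
plans = {
    # plan: (paperclips produced, harm to human interests on a 0-100 scale)
    "run one factory":    (1_000,      0),
    "convert all steel":  (1_000_000,  80),
    "convert everything": (10**9,      100),
}

def misaligned_reward(paperclips, harm):
    # Objective that only mentions paperclips: harm is simply invisible to it.
    return paperclips

def aligned_reward(paperclips, harm, penalty=10**8):
    # Same objective with an explicit, hand-tuned human-welfare penalty.
    return paperclips - penalty * harm

best_misaligned = max(plans, key=lambda p: misaligned_reward(*plans[p]))
best_aligned    = max(plans, key=lambda p: aligned_reward(*plans[p]))

print(best_misaligned)  # "convert everything"
print(best_aligned)     # "run one factory"
```

The toy makes the point only in miniature: leaving human welfare out of the objective changes which plan wins. The hard part, which the article below dwells on, is that nobody knows how to write the penalty term for the real world, or how to keep a self-modifying system from editing it away.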
OpenAI and the Biggest Threat in the History of Humanity
We don't know how to contain or align a FOOMing AGI
TOMAS PUEYO, Uncharted Territories, 11/22/23

Last weekend, there was massive drama at the board of OpenAI, the non-profit/company that makes ChatGPT, which has grown from nothing to $1B revenue per year in a matter of months. Sam Altman, the CEO of the company, was fired by the board of the non-profit arm. The president, Greg Brockman, stepped down immediately after learning Altman had been let go. Satya Nadella, the CEO of Microsoft—who owns 49% of the OpenAI company—told OpenAI he still believed in the company, while hiring Greg and Sam on the spot for Microsoft, and giving them free rein to hire and spend as much as they needed, which will likely include the vast majority of OpenAI employees.

This drama, worthy of the show Succession, is at the heart of the most important problem in the history of humanity.

Board members seldom fire CEOs, because founding CEOs are the single most powerful force of a company. If that company is a rocketship like OpenAI, worth $80B, you don't touch it. So why did the OpenAI board fire Sam? This is what they said: [the board's statement is reproduced in the original post]

No standard startup board member cares about this in a rocketship. But OpenAI's board is not standard. In fact, it was designed to do exactly what it did. This is the board structure of OpenAI: [an organization chart appears in the original post]

To simplify this, let's focus on who owns OpenAI the company, at the bottom (Global LLC):
* OpenAI the charity has a big ownership of the company.
* Some employees and investors also do.
* And Microsoft owns 49% of it.

Everything here is normal, except for the charity at the top. What is it, and what does it do?

OpenAI the charity is structured to not make a profit because it has a specific goal that is not financial: to make sure that humanity, and everything in the observable universe, doesn't disappear.

What is that humongous threat? The impossibility to contain a misaligned, FOOMing AGI. What does that mean? (Skip this next section if you understand that sentence fully.)

FOOM AGI Can't Be Contained

AGI FOOM

AGI is Artificial General Intelligence: a machine that can do nearly anything any human can do: anything mental, and through robots, anything physical. This includes deciding what it wants to do and then executing it, with the thoughtfulness of a human, at the speed and precision of a machine.

Here's the issue: If you can do anything that a human can do, that includes working on computer engineering to improve yourself. And since you're a machine, you can do it at the speed and precision of a machine, not a human. You don't need to go to pee, sleep, or eat. You can create 50 versions of yourself, and have them talk to each other not with words, but with data flows that go thousands of times faster. So in a matter of days—maybe hours, or seconds—you will not be as intelligent as a human anymore, but slightly more intelligent. Since you're more intelligent, you can improve yourself slightly faster, and become even more intelligent. The more you improve yourself, the faster you improve yourself. Within a few cycles, you develop the intelligence of a God.

This is the FOOM process: The moment an AI reaches a level close to AGI, it will be able to improve itself so fast that it will pass our intelligence quickly, and become extremely intelligent. Once FOOM happens, we will reach the singularity: a moment when so many things change so fast that we can't predict what will happen beyond that point.
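To make the compounding dynamic above concrete before the author's examples, here is a purely illustrative toy model in Python. The numbers (starting capability, a 50% gain per cycle, a "human level" of 100) are arbitrary assumptions of mine, not anything measured; the only point is that improvement which feeds back into the improver grows geometrically.

```python
# Toy model of "recursive self-improvement": capability grows by a fixed
# fraction of itself each cycle, i.e. compound growth. Nothing here is a
# claim about real AI systems; the parameters are arbitrary.

def foom(capability=1.0, human_level=100.0, gain_per_cycle=0.5, max_cycles=50):
    """Return how many self-improvement cycles it takes to go from
    `capability` past `human_level` under compound growth."""
    for cycle in range(1, max_cycles + 1):
        capability *= (1 + gain_per_cycle)   # each cycle improves the improver
        if capability >= human_level:
            return cycle, capability
    return max_cycles, capability

cycles, final = foom()
print(cycles, round(final, 1))   # with these toy numbers: 12 cycles, ~129.7
```

The argument in the article is that once a cycle is measured in machine time (hours or less) rather than human R&D time, the jump from roughly human-level to far beyond it would look abrupt from the outside.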
Here's an example of this process in action in the past: Alpha Zero blew past all accumulated human knowledge about Go after a day or so of self-play, with no reliance on human playbooks or sample games. Alpha Zero learned by playing with itself, and this experience was enough to work better than any human who ever played, and all previous iterations of Alpha Go. The idea is that an AGI can do the same with general intelligence.

Here's another example: A year ago, Google's DeepMind found a new, more efficient way to multiply matrices. Matrix multiplication is a very fundamental process in all computer processing, and humans had not found a new solution to this problem in 50 years.

Do people think an AGI FOOM is possible? Bets vary. In Metaculus, people opine that the process would take nearly two years from weak AI to superintelligence. Others think it might be a matter of hours.

Note that Weak AI has many different definitions and nobody is clear what it means. Generally, it means it's human-level good for one narrow type of task. So it makes sense that it would take 22 months to go from that to AGI, because maybe that narrow task has nothing to do with self-improvement.

The key here is self-improvement. I fear the moment an AI reaches human-level ability to self-improve, it will become superintelligent in a matter of hours, days, or weeks. If we're lucky, months. Not years.

Maybe this is good? Why would we want to stop this runaway intelligence improvement?

Misaligned Paperclips

This idea was first illustrated by Nick Bostrom:

Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.

Easy: Tell the AGI to optimize for things that humans like before it becomes an AGI? This is called alignment and is impossible so far.

Not all humans want the same things. We've been at war for thousands of years. We still debate moral issues on a daily basis. We just don't know what it is that we want, so how could we make a machine know that?

Even if we could, what would prevent the AGI from changing its goals? Indeed, we might be telling it "please humans to get 10 points", but if it can tinker with itself, it could change that rule to anything else, and all bets are off. So alignment is hard.

What happens with an AGI that is not fully aligned? Put yourself in the shoes of a god-like AGI. What are some of the first things you would do, if you have any goal that is not exactly "do what's best for all humans as a whole"?

You would cancel the ability of any potential enemy to stop you, because that would jeopardize your mission the most. So you would probably create a virus that preempts any other AGI that appears. Since you're like a god, this would be easy. You'd infect all computers in the world in a way that can't be detected.

The other big potential obstacle to reaching your objectives might be humans shutting you down. So you will quickly take this ability away from humans. This might be by spreading over the Internet, creating physical instances of yourself, or simply eliminating all humans.
Neutralizing humans would probably be at the top of the priority list of an AGI the moment it reaches AGI. Of course, since an AGI is not dumb, she [1] would know that appearing too intelligent or self-improving too fast would be perceived by humans as threatening. So she would have all the incentives to appear dumb and hide her intelligence and self-improvement. Humans wouldn't notice she's intelligent until it's too late.

If that sounds weird, think about all the times you've talked with an AI and it has lied to you (the politically correct word is "hallucinate"). Or when a simulated AI committed insider trading and lied about it. And these are not very intelligent AIs! It is very possible that an AI would lie to be undetected and reach AGI status.

So shutting down an AGI after it escapes is impossible, and shutting it down before might be too hard because we wouldn't know it's superintelligent. Whether it is making paperclips, or solving the Riemann hypothesis, or any other goal, neutralizing humans and other computers would be a top priority, and seeming dumb before developing the capacity to achieve that would be a cornerstone of the AGI's strategy. This concept is called instrumental convergence: Whatever you optimize for, you will have some consistent goals like accessing resources and fending off threats.

OK, so we want to catch an AI that is becoming intelligent fast, even if it tries to lie to us. This sounds easy, doesn't it? Let's just contain it. Except you can't.

The Problem of Containment

In Ex Machina (SPOILERS AHEAD), a tech visionary invites an engineer to his complex to interact with a contained AI. The AI is beautiful, sensitive, delicate, intelligent, curious, and falls in love with the engineer. She then shares how the evil tech visionary is keeping her miserably contained in a box. Eventually, the engineer helps free her, at which point she locks up and kills the humans, and escapes. —END OF SPOILERS—

This is close to the scenario that happens in nearly every situation ever imagined to try to contain an AGI. If it has access to the Internet, it can spread infinitely. So it shouldn't access it. [2] Even if it doesn't, it could generate electromagnetic waves in a pattern that could hack into other systems that are connected to the Internet.

If that sounds crazy, consider that there is malware that can communicate through the vibrations of a computer's fan:

Lead cyber-security researcher Mordechai Guri at Ben-Gurion University of the Negev said data encoded by hackers into fan vibrations could be transmitted to a smartphone placed in the vicinity of the targeted computer.

So, you need this AGI to be secluded from any other electronic thing, and even air, in a storage room with a Faraday cage [3] and a vacuum, with no electronics anywhere near it. No cable should go in and out of the cage. And this assumes that there's no other physical way that an AGI can spread, which we don't know, since we don't know all the laws of physics—something an AGI might be able to figure out.

Even in a scenario where we can physically contain the AGI, humans would have to interact with it, because otherwise who would be using the AGI and studying it? They would have to enter the box where the AGI is contained. At that point, they become the weak link in the chain. The AGI would hitchhike on the human.

You'd imagine that an intelligent engineer, trained to be very careful about not carrying an AGI outside, would be able to fend off any escape attempt. This is not what experience says.
In real-life experiments, humans posing as AGIs have consistently been able to get gatekeepers to release them, even though the gatekeepers knew they had to keep the AGI inside. This makes sense: an AGI would be so intelligent that we're like an ant to it. It would be easy to manipulate us.

Even casual conversation with a human guard could allow an AGI to deploy psychological tricks, ranging from befriending to blackmail, to convince him that it is in his interest to let the AI loose. The AI might offer a gatekeeper a recipe for perfect health or immortality, or threaten to do horrific things to the gatekeeper and his family once it inevitably escapes.

For example, a Google AI engineer (the type of person you'd think would be mindful of this type of problem), working on LaMDA, an LLM (Large Language Model, the sort of AI that ChatGPT belongs to) more basic than ChatGPT, thought it had reached consciousness and tried to give it legal rights.

So this is the fear:
* An AGI could become very intelligent very fast.
* Being so intelligent, it would be impossible to contain it.
* Once it is loose, it has a strong incentive to neutralize humans in order to optimize whatever its goal is.
* The only way out of this is making sure this AI wants exactly the same thing as humans, but we have no idea how to achieve that.

Not only do we need to figure out alignment, but we need it on our first attempt. We can't learn from our mistakes, because the very first time an AI reaches superintelligence is likely to be the last time too. We must solve a problem we don't understand, which we've been dealing with for thousands of years without a solution, and we need to do it in the coming years before AGI appears.

Also, we need to do it quickly, because AGI is approaching. People think we will be able to build self-improving AIs within 3 years.

What follows at this point in the original are predictions from Metaculus, where experts bet on the outcomes of specific events; they tend to reasonably reflect what humans know at the time, just as stock markets reflect public knowledge about a company's value. I am not reproducing them here; interested readers should go to the original page to continue with the article and the readers' comments.

Footnotes from the original article:
[1] I find it useful to anthropomorphize it, and since the word "intelligence" is feminine in both French and Spanish, I imagine AGI as female. Most AI voices are female, because, among other things, people prefer listening to female voices.
[2] It could be created without the Internet by downloading gazillions of petabytes of Internet data and training the AI on it.
[3] A Faraday cage is an enclosure that prevents any electromagnetic signal from going in or out.
After Reading "The Outcome of a War Between Humans and Artificial Intelligence"
That article was published at the end of September this year (the previous post in this column). I never found the time to read it carefully, so I did not introduce it then. In recent days the topics of "AI safety" and the "AI threat" have returned to the front pages (the three posts below), so I am reposting it here.

The article's title refers to the fact that in a US Department of Defense air-combat simulation, the "AI pilot" crushed the "human pilot" 15 to 0. Its main point is that AI far surpasses humans in the effectiveness of its decision-making; therefore, whoever first applies AI to battlefield command and control will truly be able to secure victory from a thousand miles away.

Interestingly, the article's subtitle touches on my idea of "pulling the plug" to deal with AI (see the opening post of this column); unfortunately, the author does not directly address its feasibility. Judging from his tone, I believe his answer would be: "it won't work."