Computer Program Is First to Pass the "Turing Test" - A. Griffin

胡卜凱
Computer becomes first to pass Turing Test in artificial intelligence milestone, but academics warn of dangerous future

 

Eugene Goostman, a computer programme pretending to be a young Ukrainian boy, successfully duped enough humans to pass the iconic test

 

Andrew Griffin, 06/08/14

 

A programme that convinced humans that it was a 13-year-old boy has become the first computer ever to pass the Turing Test. The test — which requires that computers are indistinguishable from humans — is considered a landmark in the development of artificial intelligence, but academics have warned that the technology could be used for cybercrime.

 

Computing pioneer Alan Turing said that a computer could be understood to be thinking if it passed the test, which requires that a computer dupes 30 per cent of human interrogators in five-minute text conversations.
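
For readers who want the criterion as an explicit check, here is a minimal Python sketch of the pass condition as the article states it. It is illustrative only, not the organisers' scoring code; the function name and the judge counts in the example are invented.

```python
# A minimal sketch of the Turing Test pass criterion described above.
# Illustrative only, not the scoring code used at the Royal Society;
# the function name and the judge counts below are invented.

def passes_turing_test(judges_fooled: int, total_judges: int,
                       threshold: float = 0.30) -> bool:
    """True when the share of interrogators fooled in the five-minute
    text conversations exceeds Turing's 30 per cent threshold."""
    return judges_fooled / total_judges > threshold

# 10 of 30 judges fooled is roughly the 33 per cent reported for
# Eugene Goostman, just above the 30 per cent bar:
print(passes_turing_test(judges_fooled=10, total_judges=30))  # True
```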

 

Eugene Goostman, a computer programme made by a team based in Russia, succeeded in a test conducted at the Royal Society in London. It convinced 33 per cent of the judges that it was human, said academics at the University of Reading, which organised the test.
 

It is thought to be the first computer to pass the iconic test. Though other programmes have claimed successes, those earlier tests set the topics or questions in advance.

 

A version of the computer programme, which was created in 2001, is hosted online for anyone to talk to. (“I feel about beating the turing test in quite convenient way. Nothing original,” said Goostman, when asked how he felt after his success.)

 

The computer programme claims to be a 13-year-old boy from Odessa in Ukraine.

 

"Our main idea was that he can claim that he knows anything, but his age also makes it perfectly reasonable that he doesn't know everything," said Vladimir Veselov, one of the creators of the programme. "We spent a lot of time developing a character with a believable personality."

 

The programme's success is likely to prompt some concerns about the future of computing, said Kevin Warwick, a visiting professor at the University of Reading and deputy vice-chancellor for research at Coventry University.

 

"In the field of Artificial Intelligence there is no more iconic and controversial milestone than the Turing Test, when a computer convinces a sufficient number of interrogators into believing that it is not a machine but rather is a human," he said. "Having a computer that can trick a human into thinking that someone, or even something, is a person we trust is a wake-up call to cybercrime.
 

"The Turing Test is a vital tool for combatting that threat. It is important to understand more fully how online, real-time communication of this type can influence an individual human in such a way that they are fooled into believing something is true... when in fact it is not."

 

The test, organised at the Royal Society on Saturday, featured five programmes in total. Judges included Robert Llewellyn, who played robot Kryten in Red Dwarf, and Lord Sharkey, who led the successful campaign for Alan Turing's posthumous pardon last year.

 

Alan Turing created the test in a 1950 paper, 'Computing Machinery and Intelligence'. In it, he said that because 'thinking' was difficult to define, what matters is whether a computer could imitate a real human being. It has since become a key part of the philosophy of artificial intelligence.
 

The success came on Saturday, the 60th anniversary of Turing's death.
 

http://www.independent.co.uk/life-style/gadgets-and-tech/computer-becomes-first-to-pass-turing-test-in-artificial-intelligence-milestone-but-academics-warn-of-dangerous-future-9508370.html
 

References

 

1. Alan Turing (a short biography), http://en.wikipedia.org/wiki/Alan_Turing

2. Turing Test, http://en.wikipedia.org/wiki/Turing_test



Replies
Do We Need Asimov's Robot Laws? - the arXiv


胡卜凱

Do We Need Asimov's Laws?                

 

the arXiv, 05/16/14

 

In 1942, the science fiction author Isaac Asimov published a short story called Runaround in which he introduced three laws that governed the behaviour of robots. The three laws are as follows:

 

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

 

He later introduced a fourth or zeroth law that outranked the others:

 

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
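
Because the laws read most naturally as an ordered veto list, here is a minimal Python sketch of that priority structure. It is a toy model under invented names, not anything from the arXiv paper discussed below, and it deliberately ignores conflicts between the laws.

```python
# A toy sketch, not taken from the arXiv paper discussed below: the
# laws form a strict priority order, so the first law (in rank order)
# that objects to a proposed action vetoes it. The Action fields and
# the law predicates are invented here purely for illustration.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    harms_humanity: bool = False   # checked by the Zeroth Law
    harms_human: bool = False      # checked by the First Law
    disobeys_human: bool = False   # checked by the Second Law
    destroys_self: bool = False    # checked by the Third Law

# Lower index = higher priority; each predicate returns True when the
# corresponding law forbids the action.
LAWS = [
    ("Zeroth", lambda a: a.harms_humanity),
    ("First",  lambda a: a.harms_human),
    ("Second", lambda a: a.disobeys_human),
    ("Third",  lambda a: a.destroys_self),
]

def first_objection(action: Action) -> Optional[str]:
    """Name the highest-priority law the action violates, or None."""
    for name, forbids in LAWS:
        if forbids(action):
            return name
    return None

# An order to injure a human is vetoed by the First Law before the
# Second Law (obedience to orders) is even consulted:
print(first_objection(Action(harms_human=True)))  # First
```

A real system would also have to resolve conflicts between the laws, for instance an order that can only be obeyed by endangering the robot; this linear veto list cannot express that.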

 

Since then, Asimov’s laws of robotics have become a key part of a science fiction culture that has gradually become mainstream.

 

In recent years, roboticists have made rapid advances in the technologies that are bringing closer the kind of advanced robots that Asimov envisaged. Increasingly, robots and humans are working together on factory floors, driving cars, flying aircraft and even helping around the home.

 

And that raises an interesting question: do we need a set of Asimov-like laws to govern the behaviour of robots as they become more advanced?

 

Today, we get an answer of sorts from Ulrike Barthelmess and Ulrich Furbach at the University of Koblenz in Germany. These guys review the history of robots in society and argue that our fears over their potential to destroy us are unfounded. Because of this, Asimov’s laws aren’t needed, they say.

 

The word robot comes from the Czech word robota, meaning forced labour, and first appeared in R.U.R., a 1920 play by the Czech author Karel Capek. The anglicised version spread rapidly after this, along with the idea that these machines could all too easily destroy their creators, a theme that has been common in science fiction ever since.

 

But Barthelmess and Furbach argue that this fear of machines is rooted far more deeply in our culture. While science fiction stories often use plots in which robots destroy their creators, this is a theme that has a long history in literature.

 

For example, in Mary Shelley’s Frankenstein, a monster made from human body parts turns against Frankenstein, its creator, because he refuses to make a mate for the monster.

 

Then there is the 16th century Jewish Golem narrative, in one version of which a Rabbi constructs a creature out of clay to protect the community while promising to deactivate it after the Sabbath. But the Rabbi forgets and the golem turns into a monster that has to be destroyed.

 

Barthelmess and Furbach argue that the religious undertone in both these stories is that it is forbidden for humans to act like God, and that any attempt to do so will always be punished by the Creator.

 

Similar episodes appear in Greek mythology, where figures such as Prometheus and Niobe who demonstrate arrogance towards the gods are likewise punished. Stories of this kind have thus been part of our culture for thousands of years, and it is this deep-rooted fear that science fiction authors play on in stories about robots.

 

Of course, there are real conflicts between humans and machines. During the industrial revolution in Europe, for example, there was a great fear of machines and their manifest ability to change the world in ways that had a profound influence on many people.

 

Barthelmess and Furbach point out that in late 18th and early 19th century England, people began a movement to destroy weaving machines, one that became so serious that Parliament made demolishing machines a capital crime. The group known as the Luddites even battled the British army over these issues. “There was a kind of technophobia which resulted in fights against machines,” they say.

 

Of course, it’s not beyond the realms of possibility that a similar kind of antagonism could develop towards the new generation of robots that are set to take over the highly repetitive tasks that human workers currently perform in factories all over the world and in particular in Asia.

 

However, there is a very different attitude towards robots in Asia. Countries such as Japan lead the world in the development of robots for automated factories and as human helpers, partly because of Japan’s ageing population and the well-known health care problems it will produce in the not too distant future.

 

That attitude is perhaps embodied by Astro Boy, a fictional robot who in 2007 was named by Japan’s Ministry of Foreign Affairs as the Japanese envoy for safe overseas travel.

 

For these reasons, Barthelmess and Furbach argue that what we fear about robots is not the possibility that they will take over and destroy us but the possibility that other humans will use them to destroy our way of life in ways we cannot control.

 

In particular, they point out that many robots will protect us by design. For example, automated vehicles and planes are being designed to drive and fly more safely than human operators ever can. So we will be safer using them than not using them.

 

An important exception is the growing number of robots specifically designed to kill humans. The US, in particular, is using drones for targeted killings in foreign countries. The legality, not to mention the morality, of these actions is still being ferociously debated.

 

But Barthelmess and Furbach imply that humans are still ultimately responsible for these killings, and that international law, rather than Asimov’s laws, should be able to cope with the issues that arise, or be adapted to do so.

 

They end their discussion by considering the potential convergence between humans and robots in the near future. The idea here is that humans will incorporate various technologies into their own bodies, such as extra memory or processing power, and so will eventually fuse with robots. At that point, everyday law will have to cope with the behaviour and actions of ordinary people and Asimov’s laws will be obsolete.

 

It is an interesting debate, and one that is unlikely to be settled any time soon.

 

Ref: arxiv.org/abs/1405.0961 : Do we need Asimov’s Laws?

 

http://www.technologyreview.com/view/527336/do-we-need-asimovs-laws/



Prejudice Kills


胡卜凱

In the short biography of Alan Turing (reference 1 above), we see that Turing was convicted of homosexuality in 1952 and, offered a choice between prison and "chemical castration", chose the latter. Two years later he died of cyanide poisoning at the age of 41. Many people (including the coroner) believed he committed suicide; others (including Turing's mother) disagreed.

 

Whether Turing killed himself deliberately or was poisoned by accident, his conviction and the punishment he received may well have been contributing causes. Given Turing's talent and the contributions he had already made, had he been able to continue his research for another 20 years, today's computers and/or artificial intelligence would be on an entirely different level.

 

"Prejudice" does not merely kill; it also retards human progress. The petty, narrow-minded, self-righteous "guardians of morality" among us would do well to think twice!


