

Surveillance-based manipulation: How Facebook or Google could tilt elections

From Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World

Someone who knows things about us has some measure of control over us, and someone who knows everything about us has a lot of control over us. Surveillance facilitates control.

Manipulation doesn’t have to involve overt advertising. It can be product placement that makes sure you see pictures that have a certain brand of car in the background. Or just increasing how often you see those cars. This is, essentially, the business model of search engines. In their early days, there was talk about how an advertiser could pay for better placement in search results. After public outcry and subsequent guidance from the FTC, search engines visually differentiated between “natural” results by algorithm and paid results. So now you get paid search results in Google framed in yellow and paid search results in Bing framed in pale blue. This worked for a while, but recently the trend has shifted back. Google is now accepting money to insert particular URLs into search results, and not just in the separate advertising areas. We don’t know how extensive this is, but the FTC is again taking an interest.

When you’re scrolling through your Facebook feed, you don’t see every post by every friend; what you see has been selected by an automatic algorithm that’s not made public. But someone can pay to increase the likelihood that their friends or fans will see their posts. Corporations paying for placement is a big part of how Facebook makes its money. Similarly, a lot of those links to additional articles at the bottom of news pages are paid placements.
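The mechanics of paid placement can be sketched as a toy scoring function. This is purely a hypothetical illustration: Facebook's real ranking algorithm is not public, and every field name and weight below is invented.

```python
# Toy feed ranker: organic relevance multiplied by a purchased boost.
# All names and numbers are invented for illustration.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    affinity: float      # how close the viewer is to the author (0..1)
    recency: float       # decays toward 0 as the post ages (0..1)
    paid_boost: float    # 1.0 = organic; >1.0 = paid placement

def score(post: Post) -> float:
    # A paid boost multiplies whatever organic score the post earned.
    return post.affinity * post.recency * post.paid_boost

feed = [
    Post("close friend", affinity=0.9, recency=0.5, paid_boost=1.0),
    Post("brand page",   affinity=0.2, recency=0.9, paid_boost=4.0),
    Post("acquaintance", affinity=0.4, recency=0.8, paid_boost=1.0),
]

ranked = sorted(feed, key=score, reverse=True)
print([p.author for p in ranked])
# The low-affinity brand page outranks a close friend's post.
```

The point of the sketch is that the viewer never sees the `paid_boost` factor; the reordering is indistinguishable from an organic ranking decision.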

The potential for manipulation here is enormous. Here’s one example. During the 2012 election, Facebook users had the opportunity to post an “I Voted” icon, much like the real stickers many of us get at polling places after voting. There is a documented bandwagon effect with respect to voting; you are more likely to vote if you believe your friends are voting, too. This manipulation had the effect of increasing voter turnout 0.4% nationwide. So far, so good. But now imagine if Facebook manipulated the visibility of the “I Voted” icon based on either party affiliation or some decent proxy of it: ZIP code of residence, blogs linked to, URLs liked, and so on. It didn’t, but if it did, it would have had the effect of increasing voter turnout in one direction. It would be hard to detect, and it wouldn’t even be illegal. Facebook could easily tilt a close election by selectively manipulating what posts its users see. Google might do something similar with its search results.
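Back-of-the-envelope arithmetic shows how far that documented 0.4% lever goes in a close race. The numbers below are my own illustrative assumptions, not figures from the book.

```python
# Hypothetical sketch: a turnout boost shown to only one side of a
# near-tied electorate. All numbers are illustrative assumptions.

electorate = 10_000_000          # eligible voters
share_a, share_b = 0.501, 0.499  # underlying support: a near tie
turnout = 0.60                   # baseline turnout for both sides
boost = 0.004                    # the ~0.4% bandwagon effect

# Baseline: both sides turn out at the same rate; A wins narrowly.
votes_a = electorate * share_a * turnout
votes_b = electorate * share_b * turnout
margin_before = votes_a - votes_b

# Selective manipulation: only side B's supporters see the icon.
votes_b_boosted = electorate * share_b * (turnout + boost)
margin_after = votes_a - votes_b_boosted

print(f"margin before: {margin_before:+,.0f}")  # A ahead by 12,000
print(f"margin after:  {margin_after:+,.0f}")   # B ahead by 7,960
```

Under these assumptions the trailing side flips a 12,000-vote deficit into an 8,000-vote win, without changing a single opinion, only who turns out.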

A truly sinister social networking platform could manipulate public opinion even more effectively. By amplifying the voices of people it agrees with, and dampening those of people it disagrees with, it could profoundly distort public discourse. China does this with its 50 Cent Party: people hired by the government to post comments on social networking sites supporting party positions and to challenge comments opposing them. Samsung has done much the same thing.

Many companies manipulate what you see based on your user profile: Google search, Yahoo News, even online newspapers like The New York Times. This is a big deal. The first listing in a Google search result gets a third of the clicks, and if you’re not on the first page, you might as well not exist. The result is that the Internet you see is increasingly tailored to what your profile indicates your interests are. This leads to a phenomenon that political activist Eli Pariser has called the “filter bubble”: an Internet optimized to your preferences, where you never have to encounter an opinion you don’t agree with. You might think that’s not too bad, but on a large scale it’s harmful. We don’t want to live in a society where everybody only ever reads things that reinforce their existing opinions, where we never have spontaneous encounters that enliven, confound, confront, and teach us.
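The filter-bubble dynamic is a feedback loop, and a toy model makes the narrowing visible. This is my own construction, not Pariser's; the topics and the reinforcement factor are arbitrary.

```python
# Toy personalization loop: each click reinforces the clicked topic,
# so the recommender collapses onto a single topic. Illustrative only.

topics = ["politics-left", "politics-right", "sports", "science"]
profile = {t: 1.0 for t in topics}   # start with no preference

def recommend() -> str:
    # Show whichever topic the profile currently scores highest
    # (ties broken by list order).
    return max(topics, key=lambda t: profile[t])

# Suppose the user clicks whatever is shown; each click feeds back
# into the profile and strengthens that topic's weight.
for _ in range(10):
    shown = recommend()
    profile[shown] *= 1.5            # engagement feedback

print(recommend())
print({t: round(w, 1) for t, w in profile.items()})
```

After ten iterations one topic's weight has grown roughly 58-fold while the others never surface again: an "Internet optimized to your preferences" emerges from nothing more than a maximizing recommender plus engagement feedback.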

In 2012, Facebook ran an experiment in control. It selectively manipulated the newsfeeds of 680,000 users, showing them either happier or sadder status updates. Because Facebook constantly monitors its users (that's how it turns them into advertising revenue), it was easy to track the experimental subjects and collect the results. It found that people who saw happier posts tended to write happier posts, and vice versa. I don't want to make too much of this result. Facebook only did this for a week, and the effect was small. But once sites like Facebook figure out how to do this effectively, the effects will be profitable. Marketing research has found, for example, that women feel less attractive on Mondays, and even less so when they feel lonely, fat, or depressed. We're already seeing the beginnings of systems that analyze people's voices and body language to determine mood; companies want to better determine when customers are getting frustrated and when they can be most profitably upsold. Manipulating those emotions to better market products is the sort of thing that's acceptable in the advertising world, even if it sounds pretty horrible to the rest of us.

This is all made easier because of the centralized architecture of so many of our systems. Companies like Google and Facebook sit at the center of our communications. This gives them enormous power to manipulate and control.

There are unique harms that come from using surveillance data in politics. Election politics is very much a type of marketing, and politicians are starting to use personalized marketing’s capability to discriminate as a way to track voting patterns and better “sell” a candidate or policy position. Candidates and advocacy groups can create ads and fundraising appeals targeted to particular categories: people who earn more than $100,000 a year, gun owners, people who have read news articles on one side of a particular issue, unemployed veterans... anything you can think of. They can target outraged ads to one group of people, and thoughtful policy-based ads to another. They can also finely tune their get-out-the-vote campaigns on Election Day and more efficiently gerrymander districts between elections. This will likely have fundamental effects on democracy and voting.

Psychological manipulation—based both on personal information and control of the underlying systems—will get better and better. Even worse, it will become so good that we won’t know we’re being manipulated.





