Someone who knows things about us has some measure of control over us, and someone who knows everything about us has a lot of control over us. Surveillance facilitates control.
Manipulation doesn’t have to involve overt advertising. It can be product placement that makes sure you see pictures that have a certain brand of car in the background, or simply an increase in how often you see those cars. This is, essentially, the business model of search engines. In their early days, there was talk about how an advertiser could pay for better placement in search results. After public outcry and subsequent guidance from the FTC, search engines visually differentiated between “natural” results generated by algorithm and paid results: paid results in Google are framed in yellow, and in Bing in pale blue. This worked for a while, but recently the trend has shifted back. Google is now accepting money to insert particular URLs into search results, and not just in the separate advertising areas. We don’t know how extensive this is, but the FTC is again taking an interest.
When you’re scrolling through your Facebook feed, you don’t see every post by every friend; what you see has been selected by an automatic algorithm that’s not made public. But someone can pay to increase the likelihood that their friends or fans will see their posts. Corporations paying for placement is a big part of how Facebook makes its money. Similarly, a lot of those links to additional articles at the bottom of news pages are paid placements.
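Facebook’s actual ranking system is not public, but a minimal sketch shows the mechanism; every field, weight, and name below is hypothetical. The point is that paid placement can enter the same scoring formula as predicted relevance, invisibly to the reader:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    engagement: float  # predicted interest for this user, 0..1
    paid_boost: float  # 0.0 for organic posts; > 0 if placement was purchased

def rank_feed(posts, boost_weight=0.5):
    """Order posts by a blended score. Weights and fields are invented;
    what matters is that money and relevance feed one formula."""
    return sorted(posts,
                  key=lambda p: p.engagement + boost_weight * p.paid_boost,
                  reverse=True)

feed = [
    Post("friend_a", engagement=0.9, paid_boost=0.0),
    Post("friend_b", engagement=0.7, paid_boost=0.0),
    Post("brand_x",  engagement=0.4, paid_boost=1.0),  # paid: 0.4 + 0.5 = 0.9
]
print([p.author for p in rank_feed(feed)])
# ['friend_a', 'brand_x', 'friend_b'] -- the sponsor outranks a closer friend
```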
The potential for manipulation here is enormous. Here’s one example. During the 2012 election, Facebook users had the opportunity to post an “I Voted” icon, much like the real stickers many of us get at polling places after voting. There is a documented bandwagon effect with respect to voting: you are more likely to vote if you believe your friends are voting, too. This manipulation had the effect of increasing voter turnout by 0.4% nationwide. So far, so good. But now imagine if Facebook had manipulated the visibility of the “I Voted” icon based on either party affiliation or some decent proxy of it: ZIP code of residence, blogs linked to, URLs liked, and so on. It didn’t, but if it had, it would have increased voter turnout in one direction only. It would be hard to detect, and it wouldn’t even be illegal. Facebook could easily tilt a close election by selectively manipulating what posts its users see. Google could do something similar with its search results.
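The arithmetic behind that claim is worth making explicit. Here is a toy calculation, assuming (purely for illustration) a race split 50.1% to 49.9% and treating the 0.4% figure above as a flat turnout bump applied to only one side’s likely supporters:

```python
# Toy model of a selectively applied turnout nudge. All numbers except
# the 0.4% effect cited above are invented for illustration.
electorate = 10_000_000
support_a, support_b = 0.499, 0.501   # candidate A trails slightly
base_turnout = 0.55                   # hypothetical baseline turnout
nudge = 0.004                         # the documented bandwagon effect

votes_a = electorate * support_a * (base_turnout + nudge)  # A's side sees the icon
votes_b = electorate * support_b * base_turnout            # B's side doesn't

print(f"A: {votes_a:,.0f}  B: {votes_b:,.0f}  margin: {votes_a - votes_b:+,.0f}")
# A: 2,764,460  B: 2,755,500  margin: +8,960 -- the trailing candidate wins
```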
A truly sinister social networking platform could manipulate public opinion even more effectively. By amplifying the voices of people it agrees with and dampening those of people it disagrees with, it could profoundly distort public discourse. China does this with its 50 Cent Party: people hired by the government to post comments on social networking sites supporting party positions and to challenge comments opposing them. Samsung has done much the same thing.
Many companies manipulate what you see based on your user profile: Google search, Yahoo News, even online newspapers like The New York Times. This is a big deal. The first listing in a Google search result gets a third of the clicks, and if you’re not on the first page, you might as well not exist. The result is that the Internet you see is increasingly tailored to what your profile indicates your interests are. This leads to a phenomenon that political activist Eli Pariser has called the “filter bubble”: an Internet optimized to your preferences, where you never have to encounter an opinion you don’t agree with. You might think that’s not too bad, but on a large scale it’s harmful. We don’t want to live in a society where everybody only ever reads things that reinforce their existing opinions, where we never have spontaneous encounters that enliven, confound, confront, and teach us.
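The feedback loop driving the filter bubble is easy to sketch. In the toy simulation below (all numbers and labels invented), a recommender serves viewpoints in proportion to past engagement; a user only slightly more likely to click one side ends up with a profile, and hence a feed, that skews heavily toward it:

```python
import random

random.seed(1)
viewpoints = ["left", "center", "right"]
profile = {v: 1.0 for v in viewpoints}  # no inferred preference yet

def recommend():
    # Serve viewpoints in proportion to past engagement: the core of any
    # preference-weighted personalization.
    return random.choices(viewpoints, weights=[profile[v] for v in viewpoints])[0]

for _ in range(1000):
    shown = recommend()
    # A mild bias: 60% click rate on one side, 40% on the others.
    if random.random() < (0.6 if shown == "left" else 0.4):
        profile[shown] += 1.0  # engagement feeds straight back into the profile

total = sum(profile.values())
for v in viewpoints:
    print(f"{v:>6}: {profile[v] / total:.0%} of what the feed now serves")
```

The small initial bias compounds: the more one viewpoint is shown, the more it is clicked, and the more it is shown again.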
In 2012, Facebook ran an experiment in control. It selectively manipulated the news feeds of 680,000 users, showing them either happier or sadder status updates. Because Facebook constantly monitors its users (that’s how it turns them into advertising revenue), it was easy to track the experimental subjects and collect the results. It found that people who saw happier posts tended to write happier posts, and vice versa. I don’t want to make too much of this result: Facebook did this for only a week, and the effect was small. But once sites like Facebook figure out how to do this effectively, manipulating emotions will be profitable. Marketers already pay attention to emotional timing: not only do women feel less attractive on Mondays, they also feel less attractive when they feel lonely, fat, or depressed. We’re already seeing the beginnings of systems that analyze people’s voices and body language to determine mood; companies want to better detect when customers are getting frustrated and when they can be most profitably upsold. Manipulating those emotions to better market products is the sort of thing that’s acceptable in the advertising world, even if it sounds pretty horrible to us.
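The structure of that experiment is simple to sketch. In the toy version below, every number, filter, and the contagion model itself are hypothetical; it only shows the shape of the method: filter each group’s feed by sentiment, then measure the sentiment of what each group posts afterward.

```python
import random

random.seed(42)

def make_feed(n, sentiment_filter):
    """Raw feed items carry a sentiment score in [-1, 1]; the platform
    shows only those passing the (hypothetical) filter."""
    raw = [random.uniform(-1, 1) for _ in range(n)]
    return [s for s in raw if sentiment_filter(s)]

def user_posts(feed, susceptibility=0.2):
    """Toy contagion model: a user's post sentiment drifts toward the
    average sentiment of what they were shown."""
    baseline = random.uniform(-0.1, 0.1)
    exposure = sum(feed) / len(feed)
    return baseline + susceptibility * exposure

happier = [user_posts(make_feed(200, lambda s: s > -0.5)) for _ in range(500)]
sadder  = [user_posts(make_feed(200, lambda s: s <  0.5)) for _ in range(500)]

print(f"avg post sentiment, happier-feed group: {sum(happier)/len(happier):+.3f}")
print(f"avg post sentiment, sadder-feed group:  {sum(sadder)/len(sadder):+.3f}")
```

As in the real study, the measured effect is small but consistent in direction.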
This is all made easier because of the centralized architecture of so many of our systems. Companies like Google and Facebook sit at the center of our communications. This gives them enormous power to manipulate and control.
There are unique harms that come from using surveillance data in politics. Election politics is very much a type of marketing, and politicians are starting to use personalized marketing’s capability to discriminate as a way to track voting patterns and better “sell” a candidate or policy position. Candidates and advocacy groups can create ads and fundraising appeals targeted to particular categories: people who earn more than $100,000 a year, gun owners, people who have read news articles on one side of a particular issue, unemployed veterans... anything you can think of. They can target outraged ads to one group of people, and thoughtful policy-based ads to another. They can also finely tune their get-out-the-vote campaigns on Election Day and more efficiently gerrymander districts between elections. This will likely have fundamental effects on democracy and voting.
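As a sketch of the mechanism (every profile field, threshold, and ad variant below is invented), micro-targeting is little more than a routing function from profile attributes to message variants:

```python
# Hypothetical voter profiles and ad variants, to show how micro-targeting
# routes different messages to different segments of the same electorate.
def pick_ad(profile):
    if profile.get("gun_owner"):
        return "outrage_second_amendment"
    if profile.get("income", 0) > 100_000:
        return "policy_tax_plan"
    if profile.get("unemployed_veteran"):
        return "fundraising_veterans_appeal"
    return "generic_get_out_the_vote"

voters = [
    {"income": 150_000},
    {"gun_owner": True, "income": 60_000},
    {"unemployed_veteran": True},
    {},
]
for v in voters:
    print(v, "->", pick_ad(v))
```

No voter ever sees the message crafted for a different segment, which is what makes this kind of campaigning hard to scrutinize.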
Psychological manipulation—based both on personal information and control of the underlying systems—will get better and better. Even worse, it will become so good that we won’t know we’re being manipulated.
Surveillance-Based Manipulation: How Facebook or Google Could Tilt an Election
From Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World