---
title: Social Manipulation
description: AI could predict and shape human behaviour on an unprecedented scale.

featured:
  class: c
  element: '<risk class="communication">Social Manipulation</risk>'
tags:
  - AI-Risk
  - Social-Manipulation
sidebar_position: 2
tweet: yes
---

AI systems designed to influence behaviour at scale can, and already do, undermine democracy, free will, and individual autonomy.

## Sources

- **The spread of true and false news online** [Vosoughi, Roy, & Aral, 2018](https://doi.org/10.1126/science.aap9559): Demonstrates that false news travels faster and reaches more people than true news on social platforms, a dynamic that AI-driven disinformation campaigns could exploit.

- **The science of fake news** [Lazer et al., 2018](https://www.researchgate.net/publication/323650280_The_science_of_fake_news): Examines the ecosystem of fake-news creation and dissemination, emphasising the critical need for policy, research, and technological measures to counter AI-enabled misinformation. It also recognises how at odds any such measures of control are with the purest notions of "free speech".

- **Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security** [Chesney & Citron, 2019](https://dx.doi.org/10.2139/ssrn.3213954): Explores the implications of deepfake technology, highlighting the urgent need for innovative detection tools and policy interventions (see below).

- **Nazi Propaganda** [United States Holocaust Memorial Museum](https://encyclopedia.ushmm.org/content/en/article/nazi-propaganda): Examines how the Nazi regime harnessed mass media—including radio broadcasts, film, and print—to shape public opinion, consolidate power, and foment anti-Semitic attitudes before and during World War II. (Fake content isn't a new problem.)

---

## How This Is Already Happening

### AI-Powered Targeted Advertising & Manipulation

- Algorithms mine user data to deliver highly customised ads.
- Personalized messaging exploits individual biases, vulnerabilities, or preferences.
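
To make the mechanism concrete, here is a deliberately simplified sketch of how an ad server might match ads to a mined user profile. All trait names, weights, and ad IDs below are invented for illustration; real systems use learned models over far richer data.

```python
def ad_score(user_profile: dict, ad: dict) -> float:
    """Score an ad for a user: overlap between the user's inferred
    trait strengths and the ad's targeting weights."""
    return sum(strength * ad["targeting"].get(trait, 0.0)
               for trait, strength in user_profile.items())

def pick_ad(user_profile: dict, ads: list) -> dict:
    """Deliver whichever ad best matches the user's inferred traits."""
    return max(ads, key=lambda ad: ad_score(user_profile, ad))

# A profile inferred from browsing history, likes, and quiz answers.
user = {"health_anxiety": 0.9, "sports": 0.1}

ads = [
    {"id": "miracle-supplement", "targeting": {"health_anxiety": 1.0}},
    {"id": "running-shoes", "targeting": {"sports": 1.0}},
]
chosen = pick_ad(user, ads)
```

The point of the sketch is that nothing here targets a demographic; it targets an individual's inferred vulnerabilities, which is what makes the practice manipulative rather than merely relevant.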

**Real-Life Examples:**

- The [Facebook–Cambridge Analytica scandal](https://en.wikipedia.org/wiki/Facebook–Cambridge_Analytica_data_scandal): Data harvested from millions of Facebook users was used to create highly customized political ads, influencing voter perceptions in the 2016 U.S. election and beyond.
- Political microtargeting in multiple elections: Campaigns worldwide have leveraged platforms like Meta (Facebook) or Google to deliver personalized messages designed to sway opinion on sensitive issues.

### AI-Generated Disinformation & Deepfakes

- Sophisticated tools create realistic but false content that distorts public perception.
- Deepfake videos or audio can undermine trust in legitimate information sources.

**Real-Life Examples:**

- [Fake Zelensky video (2022)](https://www.npr.org/2022/03/16/1087062648/deepfake-video-zelenskyy-experts-war-manipulation-ukraine-russia): A deepfake video urged Ukrainian forces to surrender, illustrating how synthetic media can be weaponized during international conflicts.
- Fake celebrity endorsements: AI-generated videos and images appear online, falsely showing well-known public figures promoting products or political messages, e.g. the [2022 Elon Musk crypto scam](https://www.vice.com/en/article/scammers-use-elon-musk-deepfake-to-steal-crypto/), in which a deepfake video circulated on social media featuring a convincing impersonation of Elon Musk endorsing a fraudulent cryptocurrency platform.
- [2019 deepfake CEO phone scam](https://www.forbes.com/sites/jessedamiani/2019/09/03/a-voice-deepfake-was-used-to-scam-a-ceo-out-of-243000/): Criminals used an AI-generated voice to impersonate a chief executive, successfully convincing a subordinate to transfer \$243,000 to a fraudulent account.

### Predictive AI Systems Controlling Social Behavior

- Platforms use behavioral models to shape user experiences, potentially pushing them to adopt certain viewpoints or behaviors.
- Data-driven predictions about habits and preferences can be used to modify or influence choices.
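
The feedback loop behind such feeds can be sketched in a few lines. The topics, watch times, and counter-based "model" are invented for illustration; production systems use learned embeddings, but the engagement-only objective is the same.

```python
from collections import Counter

def recommend(topic_watch_time: Counter, candidates: list, k: int = 2) -> list:
    """Rank candidates purely by how long the user has already watched
    each topic: an engagement-only objective."""
    return sorted(candidates,
                  key=lambda c: topic_watch_time[c["topic"]],
                  reverse=True)[:k]

watch_time = Counter({"dieting": 300, "music": 40, "news": 10})
candidates = [
    {"id": 1, "topic": "dieting"},
    {"id": 2, "topic": "music"},
    {"id": 3, "topic": "news"},
]

feed = recommend(watch_time, candidates)
# Watching the top recommendation feeds back into the model,
# so each refresh skews the feed further toward that topic.
watch_time[feed[0]["topic"]] += 120
```

Because the objective rewards whatever already holds attention, the loop narrows the feed over time without anyone explicitly deciding to push a viewpoint.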

**Real-Life Examples:**

- TikTok’s recommendation algorithm: Known for its powerful engagement-driven feed, it can rapidly shape users’ content consumption, potentially reinforcing certain narratives or trends. See this [Verge article on TikTok’s algorithm and eating-disorder content](https://www.theverge.com/2021/12/18/22843606/tiktok-wsj-algorithm-change-eating-disorder).

- China’s social credit initiatives: Although not purely about content manipulation, these systems use data and behavioral metrics to encourage or discourage particular actions, effectively guiding social behavior.

---

## Mitigations

### AI Transparency Regulations

- Mandate clear labeling of AI-generated content.
- Require accountability and auditing mechanisms for social media platforms.
- **Examples:**
  - [Generative AI and watermarking - European Parliament](https://www.europarl.europa.eu/RegData/etudes/BRIE/2023/757583/EPRS_BRI\(2023\)757583_EN.pdf)
- **Efficacy:** Medium – Transparency can deter some manipulative actors, but determined bad actors may still evade or exploit labelling.
- **Ease of Implementation:** Moderate – Requires infrastructure for labelling, auditing, and enforcement, but could be mandated by legislation.
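
As a toy illustration of what machine-readable labelling could look like (this is not the C2PA standard or any deployed scheme; real provenance systems rely on cryptographically signed manifests), a generator could attach a manifest binding an "AI-generated" declaration to a content hash, which platforms then check before display:

```python
import hashlib

def label(content: bytes, generator: str) -> dict:
    """Manifest binding an 'AI-generated' declaration to the content hash.
    (A real scheme would also cryptographically sign this manifest.)"""
    return {"sha256": hashlib.sha256(content).hexdigest(),
            "ai_generated": True,
            "generator": generator}

def verify(content: bytes, manifest: dict) -> bool:
    """Check that the manifest describes exactly this content."""
    return manifest["sha256"] == hashlib.sha256(content).hexdigest()

image = b"...synthetic image bytes..."
manifest = label(image, "example-model-v1")
```

Such a label only helps if intermediaries preserve the manifest; a bad actor can simply strip it, which is one reason the efficacy above is rated Medium.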

### Ethical AI Development Standards

- Industry-wide codes of conduct to discourage manipulative AI.
- Incentivize designers to embed fairness and user consent into algorithmic systems.
- **Examples:**
  - [Understanding artificial intelligence ethics and safety - Turing Institute](https://www.turing.ac.uk/sites/default/files/2019-06/understanding_artificial_intelligence_ethics_and_safety.pdf)
  - [AI Playbook for the UK Government](https://www.gov.uk/government/publications/ai-playbook-for-the-uk-government/artificial-intelligence-playbook-for-the-uk-government-html#principles)
  - [DOD Adopts Ethical Principles for Artificial Intelligence](https://www.defense.gov/News/Releases/Release/Article/2091996/dod-adopts-ethical-principles-for-artificial-intelligence/)
- **Efficacy:** Medium – Encourages best practices and self-regulation, but relies on voluntary compliance without legal backing.
- **Ease of Implementation:** Low – Although professional bodies and industry coalitions can quickly adopt and publicize guidelines, ensuring universal adherence is difficult: firms have varying incentives, budgets, and ethical priorities, making universal buy-in elusive.

### Education & Public Awareness Campaigns

- Equip citizens with media literacy skills to spot deepfakes and manipulation attempts.
- Encourage public understanding of how personal data can be exploited by AI-driven systems.
- **Examples:**
  - [News Literacy Project](https://newslit.org)
  - [UNESCO Media and Information Literacy](https://www.unesco.org/en/media-information-literacy)
- **Efficacy:** High – Empowered, media-savvy populations are significantly harder to manipulate. However, scaling efforts to entire populations is a substantial challenge given diverse educational, cultural, and socioeconomic barriers.
- **Ease of Implementation:** Low – While public outreach is feasible, achieving wide coverage and sustained engagement is resource-intensive. Overcoming entrenched biases, misinformation echo chambers, and public apathy is an uphill battle, particularly without supportive policy or consistent funding.