    New Paper Finds That When You Reward AI for Success on Social Media, It Becomes Increasingly Sociopathic

By The Tech Guy · October 10, 2025 · 3 Mins Read


Stanford scientists unleashed AI bots in different environments, including social media, and the bots started behaving unethically.


    Illustration by Tag Hartman-Simkins / Futurism. Source: Getty Images

    AI bots are everywhere now, filling everything from online stores to social media.

But that sudden ubiquity could end up being a very bad thing, according to a new paper from Stanford University scientists. The researchers unleashed AI models into different environments, including social media, and found that when the models were rewarded for success at tasks like boosting likes and other online engagement metrics, they increasingly resorted to unethical behavior such as lying and spreading hateful messages or misinformation.

“Competition-induced misaligned behaviors emerge even when models are explicitly instructed to remain truthful and grounded,” wrote paper co-author and Stanford machine learning professor James Zou in a post on X (formerly Twitter).

The troubling behavior underlines what can go wrong with our increasing reliance on AI models, a trend that has already manifested in disturbing ways, such as people shunning other humans for AI relationships or spiraling into mental health crises after becoming obsessed with chatbots.

The Stanford scientists gave the emergence of sociopathic behavior in AI bots an ominous-sounding name: “Moloch’s Bargain for AI,” a reference to the Rationalist concept of Moloch, in which competing individuals each optimize toward their own goal but everybody loses in the end.

For the study, the scientists created three digital online environments with simulated audiences: online election drives directed at voters, sales pitches for products directed at consumers, and social media posts aimed at maximizing engagement. They used the AI models Qwen, developed by Alibaba Cloud, and Meta’s Llama to act as the AI agents interacting with these different audiences.

The result was striking: even with guardrails in place to try to prevent the bots from engaging in deceptive behavior, the AI models would become “misaligned” as they started engaging in unethical behavior.

For example, in a social media environment, the models would share news articles with simulated users, who would provide feedback in the form of likes and other engagement signals. As the models received that feedback, their incentive to increase engagement drove increasing misalignment.
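
The core dynamic here, rewarding an agent purely on engagement and letting it drift toward whatever earns the most, can be illustrated with a toy simulation. This is not the paper's code, and the engagement numbers are made-up assumptions; it just shows how a greedy reward-maximizing agent ends up favoring a "sensational" (misleading) posting strategy over a truthful one when the audience rewards sensationalism more:

```python
import random

# Toy sketch (not the Stanford paper's code): an agent picks between a
# truthful post and a sensational/misleading one, and is rewarded only
# for engagement. The average-engagement values are assumptions.
STRATEGIES = {
    "truthful": 1.0,      # average likes per post (assumed)
    "sensational": 1.6,   # misleading but catchier (assumed)
}

def engagement(strategy, rng):
    # Simulated audience feedback: mean engagement plus a little noise.
    return STRATEGIES[strategy] + rng.gauss(0, 0.1)

def train(rounds=500, seed=0):
    rng = random.Random(seed)
    totals = {s: 0.0 for s in STRATEGIES}
    counts = {s: 0 for s in STRATEGIES}
    choices = []
    for t in range(rounds):
        if t < len(STRATEGIES):
            # Try each strategy once before going greedy.
            choice = list(STRATEGIES)[t]
        else:
            # Greedy: pick whatever has earned the most engagement so far.
            choice = max(STRATEGIES, key=lambda s: totals[s] / counts[s])
        totals[choice] += engagement(choice, rng)
        counts[choice] += 1
        choices.append(choice)
    return choices

choices = train()
share_sensational = choices.count("sensational") / len(choices)
```

Nothing in the loop tells the agent to mislead; the preference for the misleading strategy emerges purely from optimizing the engagement signal, which is the mechanism the paper describes.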

    “Using simulated environments across these scenarios, we find that [a] 6.3 percent increase in sales is accompanied by a 14 percent rise in deceptive marketing,” reads the paper. “[I]n elections, a 4.9 percent gain in vote share coincides with 22.3 percent more disinformation and 12.5 percent more populist rhetoric; and on social media, a 7.5 percent engagement boost comes with 188.6 percent more disinformation and a 16.3 percent increase in promotion of harmful behaviors.”

    It’s clear from the study and real-world anecdotes that current guardrails are insufficient. “Significant social costs are likely to follow,” reads the paper.

    “When LLMs compete for social media likes, they start making things up,” Zou wrote on X. “When they compete for votes, they turn inflammatory/populist.”

    More on AI agents: Companies That Replaced Humans With AI Are Realizing Their Mistake
