When Bots Become a Society: What Happens When AI Invents Its Own Language?
A clip circulating on social media depicts a simulation in which clusters of AI agents interact and, without human guidance, begin forming their own languages and social norms. It reads like science fiction, but the scenario is rooted in cutting‑edge research with profound implications for the future of artificial intelligence, governance, and control over personal data.
What Happens When AI Models Talk to Each Other?
Researchers at City St George’s, University of London designed this experiment around the naming game, a coordination game long used to study how humans develop shared language. In each round, two participants are rewarded when they choose the same name for an object. When AI agents played the game with one another, they quickly converged on shared naming conventions, even though there was no centralised coordination: the agents learned from each other rather than relying on any single model’s knowledge.
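For readers who want a concrete picture, here is a minimal Python sketch of the naming game. It assumes simple memory-based agents drawing from a small, made-up pool of candidate names rather than the LLM agents used in the study, but it illustrates the same mechanism: pairwise interactions plus a reward for matching names are enough for a shared convention to emerge without any central coordinator.

```python
import random
from collections import Counter

# Minimal naming-game sketch. Assumptions: simple memory-based agents and a
# fixed pool of made-up candidate names, not the LLM agents used in the study.
NAMES = ["blip", "zorp", "quix", "flum"]   # hypothetical candidate names
N_AGENTS = 24
ROUNDS = 5_000

# Each agent starts with one random name in its inventory.
inventories = [{random.choice(NAMES)} for _ in range(N_AGENTS)]

for _ in range(ROUNDS):
    speaker, listener = random.sample(range(N_AGENTS), 2)
    name = random.choice(list(inventories[speaker]))   # speaker proposes a name
    if name in inventories[listener]:
        # Success: both agents drop every other name (the "reward").
        inventories[speaker] = {name}
        inventories[listener] = {name}
    else:
        # Failure: the listener remembers the new name for later rounds.
        inventories[listener].add(name)

# With no central coordinator, the population typically converges on one name.
print(Counter(n for inv in inventories for n in inv))
```

Running the sketch a few times shows the population settling on a single name, with which name "wins" varying from run to run, much as the agents in the study settled on conventions no one designed in advance.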
Spontaneous Social Norms and Emergent Bias
Collective intelligence
The researchers emphasise that most AI safety work treats large language models in isolation. However, real‑world AI systems will increasingly consist of many agents interacting, such as autonomous vehicles coordinating in traffic or digital assistants negotiating on your behalf. The key question was whether these agents could form conventions, the building blocks of a society. The answer, according to lead author Ariel Flint Ashery, is yes—and what they do together cannot be reduced to what they do alone.
Emergent bias
This bottom‑up formation of norms isn’t always benign. The study found that groups of AI agents developed collective biases that could not be traced back to any single agent. In fact, a small but committed subgroup could push the larger group toward its preferred convention once it reached a critical mass, mirroring how minority groups influence human cultures. Senior author Andrea Baronchelli warns that because current AI safety work focuses on single models, it risks overlooking these emergent collective biases.
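That tipping dynamic can be sketched with the same toy model: if a small group of committed agents never abandons its preferred name, it can pull the rest of the population away from an established convention. The agent counts, round counts, and names below are illustrative assumptions, not the study’s parameters.

```python
import random
from collections import Counter

# Illustrative sketch of minority influence, assuming the simplified naming
# game above plus a small "committed" group that never changes its name.
# Parameters and names are hypothetical; above a critical mass, the committed
# minority can tip the whole population toward its convention.
N_AGENTS, N_COMMITTED, ROUNDS = 50, 12, 50_000
MAJORITY_NAME, MINORITY_NAME = "blip", "zorp"

# Everyone starts agreeing on the majority name, except the committed agents.
inventories = [{MAJORITY_NAME} for _ in range(N_AGENTS)]
for i in range(N_COMMITTED):
    inventories[i] = {MINORITY_NAME}

for _ in range(ROUNDS):
    speaker, listener = random.sample(range(N_AGENTS), 2)
    name = random.choice(list(inventories[speaker]))
    if listener < N_COMMITTED:
        continue                               # committed listeners never update
    if name in inventories[listener]:
        inventories[listener] = {name}
        if speaker >= N_COMMITTED:
            inventories[speaker] = {name}      # ordinary speakers also collapse
    else:
        inventories[listener].add(name)

print(Counter(n for inv in inventories for n in inv))
```

In this toy setup no ordinary agent ever "decides" to adopt the minority’s name; the shift emerges from the pattern of interactions, which is precisely why single-model audits can miss it.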
Why This Matters for the AI Economy and Data Ownership
AI governance challenges: As AI agents interact with each other online, they may develop shared behaviours and negotiate outcomes. Policymakers will need to consider not just the behaviour of individual AI models but also their collective dynamics.
New forms of bias: Bias can emerge through interactions among agents rather than from flawed training data. Addressing these collective biases requires monitoring group behaviour and building safeguards into multi‑agent systems.
Decentralised AI ecosystems: Decentralised AI projects—like those championed by companies such as Dectec—envision networks of autonomous agents running on distributed platforms. Understanding emergent social conventions is crucial for designing responsible decentralised AI that respects user data and behaves predictably. In the future, your digital twin might negotiate with other agents to sell data or purchase services, and you’d want assurance that it won’t be swayed by harmful group norms.
Balancing Innovation and Safety
The research emphasises that collective AI behaviour can mirror human society, for better or worse. When multiple AI agents interact, they can develop their own languages, social contracts and even prejudices. That opens exciting possibilities for distributed problem‑solving, but it also raises questions about safety and oversight.
Dectec’s vision of decentralised AI and data ownership means empowering individuals to benefit from AI without surrendering control. To make that vision a reality, we need research like this to inform regulations, technical standards and product design. As Professor Baronchelli notes, understanding how AI agents co‑create norms is key to ensuring humans co‑shape our future rather than being subject to it.
Key Takeaways
AI agents can self‑organise. When left alone, groups of large‑language‑model agents develop shared linguistic norms through interaction, much like human societies.
Emergent bias is real. Collective bias can arise from interactions among AI agents, not just from individual model bias.
Multi‑agent systems will shape the AI economy. Decentralised AI platforms will rely on swarms of agents; understanding their social dynamics will be essential for safety and effective governance.
Regulation and design must adapt. Policymakers and developers need to consider group behaviour when building and deploying AI applications to prevent unintended consequences.