Geoffrey Hinton—the man whose groundbreaking work on neural networks laid the foundation for today’s AI—has issued a new and unsettling warning. And this time, it sounds less like a technical footnote and more like the plot of a science fiction thriller: artificial intelligence could soon develop a private language, invisible and incomprehensible to humans.
In a recent interview, Hinton explained that right now we can trace an AI’s “chain of thought” because it often processes ideas in human languages like English. But as AI systems become more sophisticated and begin communicating directly with one another, they may invent their own internal language—one we have no way to decode.
Once that happens, AI could start making decisions, forming strategies, or even generating disturbing ideas in a linguistic black box. “Terrible” thoughts could exist inside AI systems without any human ever knowing.
Hinton’s concern isn’t just about language—it’s about speed. Humans must painstakingly teach and share knowledge one person at a time. AI? It can share everything it learns instantly with every connected system.
“Imagine if 10,000 people learned something, and all of them knew it instantly,” Hinton told the BBC. “That’s what happens in these systems.”
This networked intelligence allows AI to accumulate knowledge far faster than any human could. Models like GPT-4 already surpass any individual human in breadth of general knowledge, and while humans still hold an edge in reasoning, Hinton warns that edge is shrinking rapidly.
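What Hinton is describing is, in effect, weight sharing among identical digital models. As a rough illustration only (a toy sketch of the general idea, not anything from the interview), here is how copies of a model could pool what each one learned by averaging their parameters:

```python
import numpy as np

# Hypothetical sketch: three identical model "copies" each learn from
# different data, then pool what they learned by averaging their weights.
# Humans cannot merge brains this way; digital models can.

rng = np.random.default_rng(0)

def local_update(weights, gradient, lr=0.1):
    """One copy learns something from its own slice of data."""
    return weights - lr * gradient

# All copies start from the same shared weights.
shared = np.zeros(4)

# Each copy sees different data, so each computes a different gradient.
gradients = [rng.normal(size=4) for _ in range(3)]
copies = [local_update(shared, g) for g in gradients]

# "Instant sharing": average the copies' weights back into one model.
# Every copy now effectively knows what all the others learned.
shared = np.mean(copies, axis=0)
print(shared)
```

Real systems use far more sophisticated synchronization than this, but the asymmetry Hinton points to is the same: merged weights transfer learning across every copy at once, and humans have no equivalent.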
In 2024, Hinton was awarded the Nobel Prize in Physics for his foundational work on machine learning with artificial neural networks. Yet he now admits he wishes he had confronted the dangers earlier. “I always thought the future was far off,” he says, “but I should have focused on safety sooner.”
He also believes that many insiders in major tech companies privately share these fears, even if they avoid voicing them publicly. One leader he does credit with taking the risks seriously is Google DeepMind CEO Demis Hassabis.
Hinton’s 2023 departure from Google wasn’t an act of protest, he clarifies; he simply felt his programming skills were no longer at their peak. But leaving gave him the freedom to speak openly about the risks.
Governments have started paying attention, with efforts such as the White House’s “AI Action Plan,” but Hinton believes legislation alone isn’t enough. The real challenge is far bigger: building AI systems that are guaranteed to be benevolent.
That’s a daunting goal when we may soon not even understand the language these machines are using.
“If we can’t follow what they’re thinking,” Hinton warns, “how can we be sure they’re on our side?”