A landmark global survey released this month by researchers from AI Impacts and the Universities of Oxford and Bonn has captured unprecedented insights into how top artificial intelligence (AI) experts view the pace, potential, and risks of their field. Published in the Journal of Artificial Intelligence Research (JAIR), the study surveyed 2,778 researchers who had presented at leading AI conferences — making it the largest expert survey on AI progress ever conducted.
Human-Level AI May Arrive More Than a Decade Earlier Than Expected
The findings reveal a striking shift in expert expectations. The survey's aggregate forecast now gives a 50% probability that AI systems capable of performing every task better and more cheaply than humans will exist by 2047, a full 13 years earlier than the 2060 date produced by the 2022 edition of the same survey. Respondents also gave a 10% chance that such systems could exist as early as 2027.
In practical terms, respondents gave at least even odds that within the next decade advanced AI could autonomously:
- Fine-tune and train large language models without human supervision
- Build fully functional online services, such as e-commerce or payment platforms
- Compose original songs indistinguishable from those of top-charting human artists
Yet the researchers were careful to distinguish technical achievement from societal impact. Even if machines reach human-level cognitive performance, the aggregate forecast gives a 50% chance of full automation of labor across all occupations only by 2116, suggesting that adapting economies and institutions to such capabilities may take far longer.
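Headline dates like 2047 and 2116 are not single predictions but aggregates over thousands of individual probability judgments. The Python sketch below illustrates one simple way such a crossing year can be read off a pooled forecast; every number in it is fabricated for illustration, and the study itself uses its own elicitation and distribution-fitting procedure rather than this shortcut.

```python
import numpy as np

# Minimal illustration of deriving a headline year like "50% by 2047"
# from many individual forecasts. All numbers below are fabricated;
# none of this is the survey's data or methodology.

rng = np.random.default_rng(0)
years = np.arange(2024, 2200)

# Each fabricated respondent is summarized by a logistic CDF over years:
# the probability they assign to human-level AI existing by each year.
midpoints = rng.normal(2065, 20, size=1000)  # respondent's 50% year
scales = rng.uniform(4, 12, size=1000)       # how fast belief rises

cdfs = 1.0 / (1.0 + np.exp(-(years[None, :] - midpoints[:, None])
                           / scales[:, None]))

# Pool by averaging the individual curves, then read off the first
# year at which the pooled probability crosses a given threshold.
pooled = cdfs.mean(axis=0)

def crossing_year(threshold: float) -> int:
    """First year where the pooled forecast reaches `threshold`."""
    return int(years[np.searchsorted(pooled, threshold)])

print("pooled 10% year:", crossing_year(0.10))
print("pooled 50% year:", crossing_year(0.50))
```

Mean-pooling individual curves is only one of several defensible aggregation choices; medians or fitted parametric mixtures can shift the headline year by years, which is one reason such forecasts carry wide error bars.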
Confidence and Concern: A Divided Community
The AI community remains both hopeful and anxious about where this trajectory leads. About 68% of respondents believed that positive outcomes from advanced AI are more likely than negative ones. Yet nearly half of these optimists (48%) still assigned at least a 5% probability to extremely bad outcomes, such as human extinction.
Depending on how the question was framed, between 38% and 51% of respondents gave at least a 10% chance that advanced AI could cause outcomes as bad as human extinction or the severe, permanent disempowerment of humanity.
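That 13-point spread is largely an artifact of wording: differently framed questions about the same underlying risk elicit different answer distributions. As a rough sketch of how such a range gets tabulated, again with entirely fabricated responses:

```python
import numpy as np

# Fabricated response sets for three hypothetical question framings of
# the same underlying risk; none of these arrays are survey data.
rng = np.random.default_rng(1)
framings = {
    "extinction": rng.beta(0.5, 3.0, size=500),
    "extinction or severe disempowerment": rng.beta(0.8, 3.0, size=500),
    "extinction from loss of control": rng.beta(0.6, 3.0, size=500),
}

# The reported statistic: the share of respondents who assign at least
# a 10% probability, computed separately for each framing.
for label, answers in framings.items():
    share = float(np.mean(answers >= 0.10))
    print(f"{label}: {share:.0%} give >= 10%")
```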
When asked about nearer-term risks, respondents were overwhelmingly alarmed by societal threats:
- 86% cited misinformation (e.g., deepfakes) as a “substantial” or “extreme” concern
- 79% highlighted manipulation of public opinion
- 73% warned of authoritarian misuse
- 71% feared growing global inequality due to uneven AI access
Transparency, a long-standing challenge in AI development, also drew skepticism. Only 5% of experts believed that by 2028, leading AI systems would be able to explain their reasoning in ways that humans could fully understand.
Preparing for the Next Phase of AI Evolution
The JAIR survey arrives amid a chorus of institutional warnings about the need for AI safety, governance, and oversight.
- The Stanford HAI AI Index 2025 reported record-breaking investments and performance milestones but underscored that governance frameworks are failing to keep pace with technological progress.
- The World Economic Forum’s Global Future Council on Artificial General Intelligence (AGI) has called for early international frameworks to address cross-border risks.
- Bloomberg Law recently pointed out that vague definitions of “AGI” complicate both regulation and public discourse.
- The WEF’s “Artificial Intelligence in Financial Services 2025” white paper warned that as AI permeates financial systems, the need for auditability and systemic resilience has become urgent.
- A PYMNTS report found that while 70% of executives say AI has boosted productivity, the same percentage believe it has increased their digital risk exposure, and only 39% have a formal AI governance framework in place.
Together, these findings paint a picture of technological acceleration outpacing societal readiness — a convergence of opportunity and vulnerability.
A Call for Responsible Development
The most unifying takeaway from the JAIR survey was a growing consensus that AI safety research should be prioritized more than it is today. About 70% of experts now agree, up sharply from 49% in 2016.
However, despite this alignment in concern, there is still deep division over how AI alignment and oversight should be implemented. Should AI be governed like nuclear technology, with strict international treaties? Or should it be treated like the internet, guided by open collaboration and ethical norms?
As the world approaches the mid-21st century, one truth is clear: AI is evolving faster than humanity’s systems for managing it. If human-level AI does arrive by 2047, it will mark not just a technological milestone but a civilizational turning point — forcing societies to redefine work, ethics, and even human identity.
The JAIR survey offers both a warning and a roadmap: our window to prepare is narrowing, and the future of AI — whether it becomes humanity’s greatest tool or gravest risk — will depend on the choices made today.