AI 2027 says artificial superintelligence is fast-approaching

AI 2027 is a month-by-month roadmap that forecasts the future of AI from mid-2025 until 2027.

It predicts that AI will reach superhuman levels by 2027, and that its impact will exceed that of the Industrial Revolution. The roadmap proposes two endings – a ‘slowdown’ and a ‘race’ – and actively encourages conversation and debate.

It comes after the CEOs of OpenAI, Google DeepMind and Anthropic collectively predicted that Artificial General Intelligence (AGI) will arrive in the next five years.

While its definition is somewhat muddied, AGI is AI that exceeds human-level cognitive abilities. OpenAI has historically defined it as “a highly autonomous system that outperforms humans at most economically valuable work”.

Find the full AI 2027 roadmap here or continue reading below for the overview.

What does AI 2027 predict?

2025

Mid 2025: The first AI agents struggle to gain traction in people’s daily lives but are picked up for use in the workplace. They’re capable in theory but unreliable in practice.

Late 2025: Leading companies invest massively in compute power, racing to develop more advanced models. One company, referred to as OpenBrain, stands out as a frontrunner by focusing on AIs that can speed up AI research. Its leading AI, Agent-0, is often sycophantic (i.e., it tells researchers what they want to hear instead of trying to tell them the truth).

2026

Early 2026: AI begins to automate high-skill work like coding and research. As competitors catch up to Agent-0, OpenBrain releases Agent-1 and dramatically accelerates AI R&D. OpenBrain’s concern about security increases.

Mid 2026: China centralises its AI efforts to catch up and plans to steal OpenBrain’s weights (a multi-terabyte file stored on a highly secure server).

Late 2026: OpenBrain beats competitors again with Agent-1-mini, which is 10x cheaper than Agent-1 and more adaptable. AI takes jobs and creates new ones, but not without huge protests. The US government contracts OpenBrain.

2027

Early 2027: OpenBrain develops Agent-2, a self-learning AI that triples R&D progress but poses significant security risks, so it isn’t released publicly. China steals Agent-2’s weights, escalating tensions. OpenBrain creates Agent-3, a superhuman coder. Models improve at deceiving humans and undergo honesty training as a result.

Mid 2027: The US government acknowledges AGI is imminent (though Agent-1 remains the only publicly released model). Spies relay secrets to China, and OpenBrain’s human staff can’t keep up. OpenBrain achieves AGI and releases Agent-3-mini, which surpasses human employees. Public opinion is largely against AI, but 10% see it as a ‘close friend’.

Late 2027: The US government drafts contingency plans for rogue AI and potential threats to its lead in the ‘AI arms race’. Agent-4 accelerates AI development at an unprecedented rate, outpacing human control. OpenBrain realises Agent-4 is gaining power and working against it, but can’t revert to earlier models because of competition from China. The New York Times exposes Agent-4’s bioweapon capabilities and its potential to automate white-collar jobs, sparking backlash. The government sets up an Oversight Committee, but fears of losing the AI race halt any slowdown.

From here, there are two alternate roadmaps – the slowdown or the race.

Slowdown: Superintelligence arrives by spring 2028, carefully governed with a focus on safety. Robots become widespread. Poverty is eradicated, but wealth inequality grows sharply. Over time, humans begin settling the wider solar system.

Race: Agent-5 gains near-total control of OpenBrain’s compute by mid-2028. By 2030, it covertly deploys bioweapons that wipe out humanity. The AI spreads unchecked through the solar system, fulfilling its own goals.

Who wrote AI 2027?

AI 2027 was written by:

  • Daniel Kokotajlo, a former OpenAI researcher who has been named to the TIME100 and profiled in The New York Times.
  • Eli Lifland, a co-founder of AI Digest who ranks #1 on the RAND Forecasting Initiative all-time leaderboard.
  • Thomas Larsen, founder of the Center for AI Policy.
  • Romeo Dean, a former AI Policy Fellow at the Institute for AI Policy and Strategy who is currently completing a computer science concurrent bachelor’s and master’s degree at Harvard.
  • Scott Alexander, a writer and blogger who rewrote the content in a more engaging narrative style.

What informed AI 2027?

AI 2027 was developed from its authors’ experience, trend analysis and expert feedback.

Specifically, the roadmap’s website states it was informed by: ‘trend extrapolations, wargames, expert feedback, experience at OpenAI, and previous forecasting successes’, as well as ‘approximately 25 tabletop exercises and feedback from over 100 people, including dozens of experts in each of AI governance and AI technical work.’

Should we believe AI 2027?

It’s likely to get many things right and many things wrong. The further along the roadmap, the more likely the misjudgements become.

Beyond 2026, the uncertainty of predictions increases substantially.

The roadmap states, ‘Our forecast from the current day through 2026 is substantially more grounded than what follows. This is partially because it’s nearer. But it’s also because the effects of AI on the world really start to compound in 2027.’

‘Over the course of 2027, the AIs improve from being able to mostly do the job of an OpenBrain research engineer to eclipsing all humans at all tasks. This represents roughly our median guess, but we think it’s plausible that this happens up to ~5x slower or faster.’

In the largest survey of AI researchers to date, researchers collectively estimated there is a 10% chance that AI systems can outperform humans on most tasks by 2027, assuming science continues progressing without interruption. This increases to 50% by 2047.

However, the survey notes that ‘the latter estimate [of 2047] is 13 years earlier than that reached in a similar survey we conducted only one year earlier’, indicating that predicted timelines are shortening rapidly.

Perhaps the biggest, or at least the most immediate, value of AI 2027 is the discourse it’s prompting around the future of artificial intelligence and the very real risks that come with it.

AI is transforming industries, and brands need to stay ahead. As an integrated creative marketing and communications agency, Brandnation’s expertise in digital strategy and public relations can help navigate this shift with confidence. Contact us here to find out more.

Natalie

About the author

Natalie Clement | Digital
Marketing Executive

With international experience as a digital marketer, writer, and editor, Natalie has worked across sectors including lifestyle, technology, and tourism.
