A new paper from Google DeepMind has made a bold and chilling prediction: artificial intelligence could reach human-level intelligence as early as 2030, and if not handled carefully, it might even lead to the destruction of mankind. This next stage of AI is called Artificial General Intelligence, or AGI. Unlike regular AI, which is trained for specific tasks, AGI would think, learn, and reason across a wide range of topics, just like a human. And that’s exactly what has experts both excited and terrified.
The research, co-authored by DeepMind co-founder Shane Legg, doesn’t go into detail on how AGI might destroy humanity. Instead, it focuses on the serious risks it could bring and how we might prevent them. According to the paper, dangers include things like people misusing the technology, AI making mistakes, or systems becoming misaligned with human values.
The paper also warns that the risk of “permanently destroying humanity” is very real and must be taken seriously. It argues that it’s not just up to Google or any one company to define what’s too risky; this is something the entire world needs to decide together.
To prevent disaster, DeepMind has laid out a plan to reduce risks, especially around misuse. This means putting strict safeguards in place to stop people from using AI to cause harm.
Earlier this year, DeepMind CEO Demis Hassabis also spoke out about the need for global cooperation. He believes that AGI could appear within 5 to 10 years and says the world needs something like a “technical United Nations” to manage it.
Referring to the global scientific effort behind particle physics, Hassabis said: “I’d support a CERN-like research collaboration for AGI. We also need something like the IAEA for AI, an organization that can watch over dangerous projects. This should be part of a bigger global system where many countries help decide how AGI is developed and used.”
As AI grows more advanced, many experts believe the next few years are critical: the choices made now could shape whether AGI helps humanity or ends it.