Humanoid AI powered robot AILA of DFKI displayed at Futurium in Berlin. Maximalfocus/Unsplash

Cogito Machina - Investigating the emergence of artificial general intelligence

Is AGI emergent? To find out, several questions need to be answered, and this project aims to provide the answers: What is AGI? What is required for a system to have it? And how might we know whether AGI is emergent in a system?

‘Cogito machina’ – Latin for ‘I, a machine, think’ – was suggested by ChatGPT as a title for this project. That gave me pause. Though thinking machines might once have seemed a long way off, recent advances in the development of Large Language Models (LLMs) such as ChatGPT have brought about a sea change in their capabilities, leading some to suggest that they show sparks of artificial general intelligence (AGI) and a genuine understanding of language. Others argue that these systems merely mimic the behaviour of intelligent beings, without genuine understanding or intelligence.

Addressing the question of whether AGI is emergent in current or near-future AI systems is a matter of urgency. If AI systems are approaching human-level intelligence, they may soon have the ability to set their own goals and seek power over humans. Given the rapid pace of technological progress, some worry that artificial super-intelligent beings will soon emerge that render us extinct. Are these concerns alarmist? Or would we be naïve to set them aside? On the flip side, if AGI is emergent in AI systems in the near term, we may soon be faced with AI systems with beliefs, desires, hopes and fears – for many moral intents and purposes much like humans, with well-being, rights, and responsibilities.

The trouble is that there is currently no consensus on what AGI is, what is required for a system to have it, or how we might know whether AGI is emergent in a system. Many existing benchmarks are shallow, failing to distinguish mimicry from true AGI. This project fills this gap in our knowledge and understanding of AGI by addressing the following research questions:

  1. What is AGI?
  2. What is required for a system to possess AGI?
  3. What is it for AGI to emerge in a system?
  4. How can we know whether AGI is emergent in an AI system?
  5. Is AGI emergent in current generation or near future AI systems?
  6. What lessons can we learn about the risks associated with emergent AGI?

Duration

2025–2028

Principal Investigator

Anandi Hattiangadi, Professor of Philosophy

Funding

Marcus and Amalia Wallenberg Foundation