The title of the research project sounds like a case file: »Clarification of the Suspicion of Ascending Consciousness in Artificial Intelligence«. Behind it lies the question of whether AI could become dangerous to humanity.
Alexa, Sophia, Watson: The age-old idea of a machine resembling humans but equipped with superhuman abilities has gained new momentum through the progress of AI research. Some actors promise the development of a »superintelligence« that will become aware of itself. But how realistic is this? In the project »Clarification of the Suspicion of Ascending Consciousness in Artificial Intelligence«, funded by the Federal Ministry of Education and Research (BMBF), technology assessment researchers at the Karlsruhe Institute of Technology (KIT) are getting to the bottom of this little-researched question.
When the robot »Sophia« stepped up to the lectern at a conference in Riyadh, Saudi Arabia, in October 2017 and explained to a half-amused, half-astonished audience its self-image as a learning and communicating machine in human form, it marked a milestone in the public perception of a seemingly ever-shorter path to an artificial intelligence that »awakens« to individuality and reflects on its internal states.
AI-supported systems such as the smart speaker »Alexa«, IBM's dialogue-capable super-brain »Watson« or Google's self-learning chess giant »AlphaZero« also fuel the vision of a »superintelligence« that could be realized in the foreseeable future and would eclipse everything that has come before. Renowned figures in science and the arts warn of the consequences of such an epochal change; others point to its almost utopian opportunities. The fundamental question of what »conscious AI« actually is, and what substance there might be to scenarios of machines with an existence of their own, is surprisingly seldom asked.
This is where the two-year research project »Clarification of the Suspicion of Ascending Consciousness in Artificial Intelligence (AI Consciousness)« comes in. »Our aim is to demystify the topic of AI consciousness«, says project leader Professor Karsten Wendland from the Institute for Technology Assessment and Systems Analysis (ITAS) at KIT. »Some consider it impossible for machines, especially AI systems, ever to become 'conscious'«, the computer scientist and technology assessment researcher explains. »Others claim that conscious AI systems have existed for a long time and are still hiding from us. Here we want to develop a transdisciplinarily sound understanding and feed the results into public discourse.«
Using a mixed-methods approach, the project team will first examine the status quo of AI consciousness by systematically recording the debates in the specialist disciplines and in public discourse, through expert interviews and bibliometric media analysis. For the first time, typical positions, arguments and ways of speaking will be identified and described. »As analysts«, says project leader Wendland, »we adopt a neutral stance on content. At the same time, we do not shy away from including quite bold positions on AI consciousness in our research, positions that are rather rare in Europe, such as the view that there is no need to differentiate between humans and machines, since there is a piece of creation in everything.«
In a second step, the scientists want to bring together experts on central research questions in the field of »AI consciousness« who have not yet spoken to each other. These transdisciplinary bridges, deepened in workshops and symposia, are intended to build and consolidate an interdisciplinary understanding of this mysterious and sensitive topic.