Artificial Intelligence: EU Wants to Develop Ethical Rules for AI

Flags in front of the EU Parliament in Brussels.

In contrast to China and the United States, the EU Commission wants to develop an ethical set of rules for the use of artificial intelligence.

The Brussels authority presented corresponding recommendations on Monday. By 2020, companies, research institutes and authorities are to test the guidelines and report on their experiences. Specific legislative proposals will then follow if necessary.

Specifically, the EU Commission recommends that artificial intelligence should generally contribute to strengthening fundamental rights and not restrict human autonomy. People should also retain full control over their data, which must not be used to harm or discriminate against them. Clear responsibility and accountability should also be established for autonomous machine decisions.

The initiative is part of the EU Commission's AI strategy. According to the strategy, at least 20 billion euros in private and public investment in this area are to be raised by the end of 2020. The Brussels authority wants to provide an additional 1.5 billion euros in public funds.

The definition of artificial intelligence is still under discussion. Generally speaking, it is about machine learning and the ability of computers to work independently on problems. The EU Commission also wants to promote the ethical development of AI on a global level, working in particular with like-minded partners such as Japan and Canada.

»The ethical dimension of artificial intelligence is not a luxury or an add-on,« said EU Commissioner Andrus Ansip. »Only with trust can our society fully benefit from technology. Ethical AI is a win-win proposal that can become a competitive advantage for Europe«.

Lobbyists warn against over-regulation

The VDMA (German Mechanical Engineering Industry Association) takes a fundamentally positive view of developing ethical guidelines. »The successful use of artificial intelligence in industry presupposes a broad acceptance of this technology in society,« a statement says. At the same time, however, the association warns against overshooting the mark. Any planned regulation would have to depend on the field of application, because humans are often not affected by AI at all – for example in quality assurance or predictive maintenance. The race against countries such as the USA and China should not be restricted by »red lines«.

Oliver J. Süme, CEO of eco, the Association of the Internet Industry, also welcomes the EU Commission's initiative: »Trustworthy AI applications should be developed and used in such a way that they respect human autonomy«. He notes that »many companies are already assuming responsibility for ethical challenges in connection with digital transformation and are successfully contributing to compliance with ethical standards, for example through voluntary initiatives«. For Süme, transparency is the key to the responsible use of digital technologies: »Only a transparent approach to artificial intelligence can strengthen people's trust in an autonomously deciding system«.