Morality for Artificial Intelligence

AI Technology Must Master Ethics

21 February 2022, 10:04 | Michael Finkler, uh

Do everything that is technically feasible? AI should benefit companies without harming or disadvantaging people. What companies need to watch out for when introducing artificial intelligence.


Researchers believe that AI will one day match, or even exceed, human-like cognitive abilities. The question is not "if" but "when." Already today, AI delivers faster results and new insights. AI learns autonomously on the one hand - i.e. without constant supervision - and adaptively on the other: depending on what it is shown, it will find and learn different things.

The Problem: AI Reinforces Prejudices

What sounds quite positive, however, can also have negative effects. A well-known example of this is the so-called AMS algorithm, which divides job seekers into three classes: high, medium and low chances of finding a permanent job within the next six months. Critics accuse this system, among other things, of entrenching existing social ills.

In other words, the AI bases its decisions on a worldview that is widely considered outdated. AI thus touches on ethical issues that, if not given appropriate attention, can quickly lead to problems and even public criticism.

Ethics & AI: The Biggest Challenges

The topic of AI and ethics is currently reflected in the following three areas in particular.

  • Bias: Put simply, these are prejudices and resentments cast into algorithms. One consequence, for example, is that an AI system suggests only men, or only people with light skin, for vacancies even when candidates have the same qualifications (see the sketch after this list).
  • Lack of transparency: There are situations in which it is not possible to determine how algorithms arrive at certain outputs given certain data inputs. To get a handle on this problem, researchers are working on "Explainable AI."
  • Data protection: This is both a valuable asset and a right that must be protected, especially in connection with AI. Technical solutions such as anonymization, as well as regulatory approaches, can help here.
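As a minimal sketch of how such a bias can at least be made measurable, one could compare selection rates between groups on hypothetical recommendation data. The records, group labels, and the 0.8 rule-of-thumb threshold below are assumptions for illustration, not part of any specific product or regulation.

```python
# Minimal sketch: measuring selection-rate disparity in hypothetical
# hiring recommendations. Data and threshold are illustrative assumptions.

from collections import defaultdict

# Each record: (group attribute, whether the AI shortlisted the candidate)
recommendations = [
    ("male", True), ("male", True), ("male", False), ("male", True),
    ("female", True), ("female", False), ("female", False), ("female", False),
]

selected = defaultdict(int)
total = defaultdict(int)
for group, shortlisted in recommendations:
    total[group] += 1
    selected[group] += int(shortlisted)

rates = {g: selected[g] / total[g] for g in total}
print("Selection rates per group:", rates)

# Disparate impact ratio: lowest selection rate divided by the highest.
# A common rule of thumb (assumed here) flags values below 0.8 for review.
ratio = min(rates.values()) / max(rates.values())
print(f"Disparate impact ratio: {ratio:.2f}",
      "-> review for bias" if ratio < 0.8 else "-> within rule of thumb")
```

A check like this does not prove fairness, but it makes one facet of the bias problem visible and reviewable in a regular process.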


How "Ethically Correct AI" Works

To minimize the risk of an AI solution delivering morally undesirable results, companies can take three simple approaches.

  • Educate: Awareness and understanding of the issue are needed. This also paves the way for a strategic approach to using AI, and it prevents blindly implementing AI first and then having to put out fires later.
  • Take a stand: There needs to be a management-backed statement, a manifesto of sorts, of where the company stands on ethics and AI and which ground rules, beyond legal requirements, it commits itself to.
  • Set up processes: Processes are needed to systematically and regularly review the risk potential of AI applications, from conception through planning and development to productive use.

 

AI Solutions for Industry & SMEs

Once the foundations have been laid in terms of AI and ethics, the next step is concrete planning. Here, alignment with corporate strategy is at the top of the list. The crucial question is: How does AI fit into our organization and how can it help us achieve our goals? Planning doesn't just involve technical aspects: the business and human dimensions are equally important. The latter concerns the impact of AI decisions on users.

In a further step, a good roadmap is needed. In practice, it has been shown that companies should ideally start with quick wins - namely, where there is enough data of sufficient quality and people have already been able to draw useful conclusions from this data without much effort.  
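As an illustration of what such a quick check for a quick-win candidate might look like, the following sketch screens a hypothetical tabular data set for obvious quality problems; the file name, the columns, and the pandas-based approach are assumptions, not a prescribed method.

```python
# Illustrative sketch: quick screening of a candidate data set before an
# AI quick-win project. File name and columns are hypothetical.

import pandas as pd

df = pd.read_csv("machine_log.csv")  # hypothetical production data export

report = {
    "rows": len(df),
    "duplicate_rows": int(df.duplicated().sum()),
    "missing_share_per_column": df.isna().mean().round(3).to_dict(),
    "constant_columns": [c for c in df.columns if df[c].nunique(dropna=True) <= 1],
}

for key, value in report.items():
    print(f"{key}: {value}")
```

If such a report already shows large gaps, heavy duplication, or columns that carry no information, the use case is probably not the right place to start.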

Last but not least, AI-competent partners are needed in implementation to ensure that AI projects are successful not only in terms of technical organization, but also with regard to ethical issues.

Criticism and Dwindling Credibility

Those who work with data-driven AI, for example to classify produced goods into 'OK' and 'Not OK', always make a very strong generalization claim: the underlying assumption is that accurate statements about future cases can be made on the basis of observed examples. This is what allows AI solutions to scale well. At the same time, however, errors and ethically questionable results scale with them.
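A small back-of-the-envelope sketch makes this scaling effect concrete; the error rate and production volume below are purely assumed numbers for illustration.

```python
# Back-of-the-envelope sketch: how a classifier's residual error rate scales
# with production volume. All figures are assumptions for illustration.

error_rate = 0.02          # assumed 2 % misclassification rate
parts_per_day = 50_000     # assumed daily production volume
production_days = 250      # assumed production days per year

wrong_per_day = error_rate * parts_per_day
print(f"Expected misclassified parts per day:  {wrong_per_day:,.0f}")
print(f"Expected misclassified parts per year: {wrong_per_day * production_days:,.0f}")
```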

As a result, AI projects need to be very well planned. Embedding AI into the existing organization means a serious and long-term change process that not only involves technical aspects, but also calls into question many a fundamental corporate belief.

Companies that do not approach the topic with the necessary caution run the risk of losing their »license to operate« through public criticism and a sustained loss of credibility. For customers today, added value is no longer defined exclusively by functionality, but increasingly also by ethical aspects and consideration of social expectations. (uh)
