Do whatever is feasible? AI should benefit companies without harming or disadvantaging people. What companies need to watch out for when introducing artificial intelligence.
Researchers believe that AI will one day match, or even surpass, human-like cognitive abilities. The question is not "if" but "when." Even today, AI delivers faster results and new insights. AI learns autonomously on the one hand, i.e. without constant supervision, and adaptively on the other: depending on what it is shown, it will find and learn different things.
What sounds quite positive can, however, also have negative effects. A well-known example is the so-called AMS algorithm, which divides job seekers into three classes: high, medium, and low chances of finding a permanent job within the next six months. Critics accuse this system, among other things, of entrenching existing social disadvantages.
In other words, the AI makes decisions based on a worldview that is considered outdated. AI thus touches on ethical issues that, if not given appropriate attention, can quickly lead to problems and even public criticism.
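To make the critique concrete, here is a purely hypothetical sketch, not the actual AMS model: all coefficients, the `group_penalty` parameter, and the class thresholds are invented for illustration. It shows how a score-based three-class split can reproduce bias when a penalty learned from historically skewed outcomes is baked into the score.

```python
def placement_score(years_experience, unemployment_months, group_penalty):
    """Hypothetical linear score; 'group_penalty' stands in for any
    coefficient a model might learn from historically biased outcomes."""
    return 0.5 + 0.05 * years_experience \
               - 0.02 * unemployment_months \
               - group_penalty

def classify_chances(score):
    """Split into the three classes described in the text (invented cutoffs)."""
    if score >= 0.66:
        return "high"
    if score >= 0.25:
        return "medium"
    return "low"

# Two otherwise identical applicants: the penalized group lands in a
# lower class purely because of the learned penalty.
print(classify_chances(placement_score(10, 6, group_penalty=0.0)))   # high
print(classify_chances(placement_score(10, 6, group_penalty=0.30)))  # medium
```

The point of the sketch is that the discrimination is invisible at the interface: both applicants receive a plausible-looking class label, and only a comparison across groups reveals the penalty.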
The topic of AI and ethics is currently reflected in the following three areas in particular.
To minimize the risk of an AI solution delivering morally undesirable results, companies can take three simple approaches.
Once the foundations for AI and ethics have been laid, the next step is concrete planning. Alignment with corporate strategy tops the list. The crucial question is: how does AI fit into our organization, and how can it help us achieve our goals? Planning involves more than technical aspects: the business and human dimensions are equally important. The latter addresses the impact of AI decisions on users.
The next step is a solid roadmap. In practice, companies do best to start with quick wins: areas where there is enough data of sufficient quality and where people have already been able to draw useful conclusions from that data without much effort.
Last but not least, implementation requires AI-competent partners to ensure that AI projects succeed not only technically and organizationally, but also with regard to ethical issues.
Anyone who works with data-driven AI, for example to classify produced goods as 'OK' or 'Not OK', makes a very strong generalization claim: that accurate statements about future events can be made on the basis of observed examples. This means AI solutions scale well. At the same time, however, errors and ethically questionable results scale with them.
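A short illustrative sketch of this scaling effect, with hypothetical numbers (the 2% error rate, the threshold, and both function names are invented for illustration): a classifier's relative error seems small, but the absolute number of misclassified items grows linearly with production volume.

```python
def classify(measurement, threshold=1.0):
    """Toy quality check: items above the threshold are flagged 'Not OK'."""
    return "Not OK" if measurement > threshold else "OK"

ERROR_RATE = 0.02  # assumed: the model misjudged 2% of past inspections

def expected_errors(items_inspected, error_rate=ERROR_RATE):
    """The same relative error, generalized to future items:
    absolute errors grow with volume."""
    return int(items_inspected * error_rate)

print(expected_errors(100))        # 2 misclassified items per 100
print(expected_errors(1_000_000))  # 20000 at production scale
```

The same arithmetic applies to ethically questionable decisions: a bias affecting 2% of cases is two people at pilot scale and twenty thousand once the solution is rolled out.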
As a result, AI projects need to be planned very carefully. Embedding AI in an existing organization is a serious, long-term change process that involves not only technical aspects but also challenges fundamental corporate beliefs.
Companies that do not approach the topic with the necessary caution risk losing their "license to operate" through public criticism and a lasting loss of credibility. For customers today, added value is no longer defined solely by functionality, but increasingly also by ethical considerations and social expectations. (uh)