Artificial intelligence (AI) offers companies immense opportunities, many of which remain untapped. Only slowly are companies beginning to understand the many benefits of this technology. Responsible use of AI systems rests on control, traceability and transparency.
Companies are discovering the potential of AI to develop new business models, analyse vast quantities of data and automate business processes. As with many new technologies, however, it remains a controversial topic. Precisely because the potential of AI appears to be limitless, we need to consider how it will impact our society and address questions of morality and risk in this context. Only when companies fully understand the AI they are using, and the decisions it makes, can they maintain control over their algorithms.
Algorithm-based models are never completely unbiased, because the data they are based on is judged and categorized according to predetermined criteria. For example, a person's hometown is, in and of itself, just a fragment of data. But if the AI uses this fragment to discriminate against a customer, say because he or she is presumed to have a lower income and is therefore less profitable, this is unfair and unethical discrimination. If the same characteristic is instead used to play hold music that reminds the customer of home, it may have a positive impact on the customer experience. In both cases, the AI has used the same feature to trigger an action. So it is not necessarily the bias that is unethical – it is the action triggered by the bias.
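The distinction above can be made concrete in a few lines of code. This is a minimal illustrative sketch, not a real system; all names (the `Customer` type, the hometown field, both action functions) are assumptions invented for the example. The point is that the feature itself is neutral, and only the action built on top of it is ethical or unethical.

```python
from dataclasses import dataclass


@dataclass
class Customer:
    """Hypothetical customer record with a single demographic feature."""
    name: str
    hometown: str


def deprioritize_by_hometown(customer: Customer) -> str:
    # Unethical action: the hometown is used as a proxy for income
    # and the customer is treated as less profitable.
    return f"deprioritize {customer.name}"


def pick_hold_music(customer: Customer) -> str:
    # Benign action: the very same feature personalizes the
    # customer experience instead of discriminating.
    return f"play regional music for {customer.hometown}"
```

Both functions read exactly the same feature; reviewing an AI system therefore means auditing the actions it triggers, not just the data it stores.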
It is essential, therefore, that companies harnessing AI make its use as open and understandable as possible for stakeholders. This helps reduce fears and prejudices about the use of algorithms and data-based analytics. If you want to retain the trust of both your customers and your employees, you should first create general acceptance for these systems. To this end, it makes sense to involve all relevant departments in this process, as AI has potential benefits for many activities, from sales and marketing to employee engagement and customer service.
For example, the legal department can create standards and check internal regulations on data use and analysis for compatibility with government guidelines and laws such as GDPR. A key role is played by the HR department as an intermediary between management and employees. It can review appraisal processes and training proposals and should be consulted on the basis on which personnel decisions are made. Finally, sales can leverage the value of AI to create a competitive product offering. To do so, it should be trained to understand what opportunities and challenges arise from the use of AI in sales and marketing.
Once the AI systems are ready, it is important to work continuously on optimizing the algorithms. These should be based on a sufficiently broad data set to reduce bias. This is because companies remain responsible for the decisions made on the basis of their algorithms throughout the entire lifecycle of an AI system. This implies not only a legal responsibility but also an ethical one.
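One way such ongoing optimization can look in practice is a periodic fairness check over the decisions a system has made. The sketch below, with hypothetical data shapes (a list of `(group, approved)` records), computes a simple demographic-parity gap: the largest difference in approval rates between any two groups. It is one possible monitoring metric under stated assumptions, not a complete audit.

```python
from collections import defaultdict


def approval_rates(records):
    """Approval rate per group from (group, approved) decision records."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}


def parity_gap(records):
    """Largest difference in approval rates between any two groups."""
    rates = approval_rates(records)
    return max(rates.values()) - min(rates.values())
```

A team could run such a check on every retraining cycle and flag the model for human review whenever the gap exceeds an agreed threshold, keeping responsibility for the algorithm's decisions where it belongs: with the company.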
A responsible approach to AI systems thus rests on three pillars: control, traceability and transparency.
No company can afford to use AI without questioning it. Working with AI is an ongoing development process and requires ethical guidelines. Or, as Cathy O'Neil puts it in her book Weapons of Math Destruction, "Big Data processes don't define the future – human imagination does that." She calls on us as humans to use our moral imagination to embed better values into our algorithms and to create Big Data models that follow a consistent ethical framework. Let's answer that call – and take our mission seriously.