Commission of enquiry

Artificial intelligence — chances and limits

13 November 2018, 10:38 a.m. | Iris Stroh
Prof. Katharina Zweig, Kaiserslautern Technical University: »We have algorithms that a human has programmed with a human-chosen figure of merit that can be sensible or not. If such algorithms are used to judge humans, we must set a limit. AI should rather be used for descriptive statistics, without calculating indicators for individuals by which other individuals are then assessed.«
© Felix Schmitt

In June 2018 the German Bundestag instituted a commission of enquiry, »Artificial intelligence — social responsibility and economic potential«, made up of 19 members of parliament and 19 appointed experts.


The Official Daily spoke to Prof. Katharina Zweig, head of the Algorithm Accountability Lab at Kaiserslautern Technical University and a member of the commission, about AI and its limits.

Official Daily: The commission of enquiry has already convened. What was your impression?

Prof. Zweig: To date we have only had the constitutive session and a first round of introductions. In any case it was impressive how much expertise has come together in the two groups. The initial discussions showed that everyone had engaged with the subject. That makes me optimistic about our cooperation, and I am keen to see how our findings are taken up in the further political decision-making process.

There’s a lot of discussion about Germany, and Europe of course, investing much less in AI than the USA and China. That isn’t likely to change through the commission of enquiry either. What do you think are the major points this commission can achieve, and how can we prevent Germany and Europe from lagging behind the USA and China?

The intent of the commission, as I understand it, is to explore how well AI can be integrated in society: what are the prerequisites, how can the common good benefit, how can technologies be transferred from science into the economy, and so on. We’re very well positioned in Germany as far as the scientific fundamentals are concerned. But now it’s also a question of which AI we want to implement, and where. And there we are bound to take a different path in Germany and Europe than in the USA and China.

In the automobile sector, in medicine and in industry there is veritable hype about AI: it stands for autonomous driving, for better diagnostics, for greater productivity. Do you think there are limits to AI, things it cannot achieve, or at least not yet?

Yes, I believe there are such limits, namely when machines are to decide about humans in complex social situations. Examples are risk assessment in US courtrooms, the classification of the unemployed in Poland, or the terrorist identification the USA has considered. Here it is difficult for both human and machine to make decisions, and machine decisions are often not comprehensible enough.
Nothing speaks against preparing data-based decisions with data science and machine learning. The question is only whether, for important decisions about persons, a machine should ultimately reduce a very complex situation to a single number.

Ethics also plays a role in the commission of enquiry. Currently you often read, for example, that algorithms take racism to a new dimension. Can you make AI ethical?

Of course. There are methods that are verifiably non-discriminatory, but they may take a long time to compute a solution.
Naturally, these algorithms aren’t always used. If you want an algorithm that decides whether or not an applicant is invited for an interview, and this algorithm took a year to compute the answer, it’s obviously the wrong approach.

What are the basic requirements, then, so that an algorithm doesn’t discriminate?

Well, you must know the purpose of an algorithm and what the database looks like. For example, an algorithm that assesses someone’s creditworthiness, their financial standing, based on data from the past may produce discriminatory results. It is possible that in the past it was typically a man who applied for a loan, and a woman did so mostly when she was single. The latter group carries a higher risk of not repaying a loan. If an algorithm learns people’s creditworthiness from such data, it can obviously lead to wrong results.
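The mechanism described here can be sketched in a few lines: a model that learns repayment rates from a skewed historical sample will reproduce the skew when judging new applicants. Everything below — the data, the group rates, the 0.5 approval threshold — is invented for illustration and is not taken from the interview.

```python
# Toy sketch (hypothetical data): a naive scoring model trained on biased
# historical loan records reproduces that bias for new applicants.

from collections import defaultdict

# Historical records: (gender, marital_status, repaid_loan).
# The sample reflects the skew described above: men applied routinely,
# while women in this data set applied mostly when single.
history = [
    ("m", "married", True), ("m", "single", True), ("m", "married", True),
    ("m", "single", False), ("m", "married", True),
    ("f", "single", False), ("f", "single", True), ("f", "single", False),
]

def train_group_rates(records):
    """Learn a repayment rate per gender -- a crude stand-in for a model
    that picks up gender as a proxy feature from historical data."""
    counts = defaultdict(lambda: [0, 0])  # gender -> [repaid, total]
    for gender, _, repaid in records:
        counts[gender][1] += 1
        if repaid:
            counts[gender][0] += 1
    return {g: repaid / total for g, (repaid, total) in counts.items()}

rates = train_group_rates(history)

def approve(gender, threshold=0.5):
    """Approve a loan if the applicant's group repayment rate clears
    an (arbitrary) threshold."""
    return rates[gender] >= threshold

# A married woman applying today is judged by the skewed historical
# sample, although her situation differs from the women in that sample.
print(rates)          # men 0.8, women ~0.33 in this toy data
print(approve("m"))   # True
print(approve("f"))   # False
```

The point of the sketch is that no variable called »discriminate« appears anywhere: the disparity comes entirely from the composition of the training data, which is why knowing the database matters as much as knowing the algorithm.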
