Commission of enquiry

Artificial intelligence — chances and limits

13 November 2018, 10:38 | Iris Stroh

Continuation of the article from Part 1

There is no »either-or«, but many options

But in that case someone at the bank can still decide whether to grant the loan despite a poor rating by the algorithm…

I think that’s highly improbable, for a simple reason. If the machine refuses the loan and the clerk overrides that decision and grants the loan anyway, and it then isn’t repaid, the clerk is really in for trouble. And nobody wants that. By contrast, they don’t have to take any blame for refusing a loan to an applicant who would in fact have repaid it. So there’s no motivation for the clerk to question the machine’s decision.
Of course there are many other examples of how machine decisions can go wrong. Take an algorithm for diagnosing skin cancer that was trained only on data from white persons: it could have difficulty detecting carcinomas in dark-skinned persons.

That’s a surprising example, because in medicine especially, artificial intelligence is credited with being able to diagnose much faster thanks to its ability to evaluate image data. Can wrong results come about because wrong diagnoses go into the datasets?

I don’t think so. I assume that for cancer diagnosis there are histological findings available as datasets, and these are as a rule unambiguous. My example is meant to show that the underlying data can sometimes become a problem. Because in AI too: garbage in, garbage out.
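The skin-cancer example above can be made concrete with a toy model. The single »contrast« feature, the threshold rule, and all numbers below are invented purely for illustration; they are not how real diagnostic systems work, but they show how a detector fitted only on one group can systematically fail on another:

```python
# Toy illustration of "garbage in, garbage out": a threshold "detector"
# fitted only on light-skin examples. All values are invented.

light_skin_lesions = [0.80, 0.85, 0.90]   # high lesion/skin contrast
light_skin_healthy = [0.10, 0.15, 0.20]

# Learn a midpoint threshold from the biased training set only.
threshold = (max(light_skin_healthy) + min(light_skin_lesions)) / 2  # 0.5

def predict(contrast):
    """Classify a sample by the contrast threshold learned above."""
    return "lesion" if contrast > threshold else "healthy"

# On darker skin the same lesion may yield lower contrast and be missed.
print(predict(0.85))  # lesion  (in-distribution, correct)
print(predict(0.35))  # healthy (out-of-distribution, false negative)
```

The model is internally consistent with its training data; the failure comes entirely from what the data left out.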

In my opinion the quality of data can almost always be improved considerably, provided it’s checked in the first place. There’s the so-called entity recognition problem, for instance: deciding whether two names held in a database refer to the same person. A name change may not be known to the system, which of course leads to erroneous results. How things are interpreted can also become a problem. Perhaps a news item on the internet is counted as relevant for a person whenever it’s clicked. That too can lead to entirely wrong results, because if I’m tired I might click on a tabloid story even though it’s certainly not relevant to me.
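The name-change problem described above can be sketched in a few lines. The records, field names, and matching rules are invented for illustration; the point is only that matching on the name itself silently breaks when the name changes, while a stable identifier does not:

```python
# Minimal sketch of the entity recognition problem: two records that
# refer to the same person, before and after a name change.
# All fields and values are invented for illustration.

def naive_match(a, b):
    """Treat records as the same person only if the name matches exactly."""
    return a["name"] == b["name"]

def keyed_match(a, b):
    """Match on a stable identifier instead of the mutable name."""
    return a["customer_id"] == b["customer_id"]

rec_before = {"customer_id": 4711, "name": "Anna Schmidt"}
rec_after  = {"customer_id": 4711, "name": "Anna Meyer"}  # name changed

print(naive_match(rec_before, rec_after))  # False: duplicate undetected
print(keyed_match(rec_before, rec_after))  # True: same entity recognized
```

Real record linkage is far messier (typos, transliterations, shared names), which is exactly why unchecked data so easily produces wrong results.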

Then it can be very difficult to classify things so that a computer can process them further. How do you classify a successful employee? How do you map social concepts like loyalty, teamwork, or reliability? Operationalizing something that can hardly be grasped in words, let alone in figures, is a fundamental problem. So the datasets used must be examined in one way or another for their suitability, and you must ensure that the data introduces no discrimination. That’s the job of the data scientists: data cleaning accounts for about 70 percent of a data scientist’s work. It’s also important that data scientists are aware of their responsibility, which is why the German Informatics Society is working on a curriculum in which ethical and social aspects also play a role.
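A small sketch of the data-cleaning work mentioned above, assuming an invented »loyalty score« field of the kind the interview questions in the first place. The audit only flags defects a model would otherwise silently ingest: missing values, non-numeric entries, duplicates:

```python
# Toy data audit: find obvious defects before any model sees the data.
# The field names and records are invented for illustration.

records = [
    {"employee": "A", "loyalty_score": 0.9},
    {"employee": "B", "loyalty_score": None},    # missing value
    {"employee": "C", "loyalty_score": "high"},  # non-numeric, unclear scale
    {"employee": "A", "loyalty_score": 0.9},     # exact duplicate
]

def audit(rows):
    """Return (index, issue) pairs for defective rows."""
    issues = []
    seen = set()
    for i, row in enumerate(rows):
        key = tuple(sorted(row.items()))
        if key in seen:
            issues.append((i, "duplicate"))
        seen.add(key)
        value = row["loyalty_score"]
        if value is None:
            issues.append((i, "missing"))
        elif not isinstance(value, (int, float)):
            issues.append((i, "non-numeric"))
    return issues

print(audit(records))  # [(1, 'missing'), (2, 'non-numeric'), (3, 'duplicate')]
```

Even this trivial check illustrates the harder question the interview raises: flagging that »high« is non-numeric is easy, but deciding what a valid loyalty score should even look like is the operationalization problem itself.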

But I’m convinced that many of the problems discussed around today’s AI systems really can be avoided, or could at least be discovered quickly through constant scrutiny. Nevertheless, as already mentioned, in some cases AI shouldn’t decide directly, i.e. compute a single figure as the result. Here too, data-science methods and machine-learning methods, in other words AI tools, should be used to support a data-based decision by humans.

To what extent can AI be controlled? It’s already very difficult to comprehend its results.

There are AI methods whose results are wonderfully easy to comprehend, but these are often less efficient, take too long to compute, or aren’t flexible enough. Others have the latter qualities but are less comprehensible. It’s our decision alone which characteristic we consider more important in a given case.

There might be situations in which AI systems are so useful that comprehensibility is less important, as in the medical field. If a machine can make medication suggestions that on average enable a 10 percent longer life without pain, we can let it make them, and at the same time investigate scientifically what precisely the criteria are that let it decide better. On this basis it’s possible to develop a system that always makes good decisions and is at the same time comprehensible. So there’s no »either-or«, but many options.
