Good algorithm, bad algorithm

Algorithms are increasingly dictating our lives, but are they a blessing or a curse? Germany’s leading computer scientist explains.

Can artificial intelligence solve our problems? phonlamaiphoto - stock.adobe.com

Professor Zweig, you work in the field of social informatics and study algorithms. Could you perhaps briefly explain first what algorithms actually are?

Algorithms are general instructions for solving problems. A good example is long multiplication: your teacher will have explained how to tackle multiplication problems in principle, regardless of which specific numbers are involved or how long they are. This generality is characteristic of a good algorithm.
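The long-multiplication procedure mentioned above can be written down as code. The following is a minimal sketch (the function name and digit-string interface are my own choices, not something from the interview) that works column by column with carries, just like the paper-and-pencil method:

```python
def long_multiply(a: str, b: str) -> str:
    """School-style long multiplication of two non-negative
    integers given as decimal digit strings."""
    result = [0] * (len(a) + len(b))
    # Multiply every digit of b by every digit of a, right to left,
    # accumulating into the correct column as on paper.
    for i, da in enumerate(reversed(a)):
        for j, db in enumerate(reversed(b)):
            result[i + j] += int(da) * int(db)
            # Carry anything above 9 into the next column.
            result[i + j + 1] += result[i + j] // 10
            result[i + j] %= 10
    # Strip leading zeros and convert back to a string.
    digits = "".join(map(str, reversed(result))).lstrip("0")
    return digits or "0"
```

The point of the example is exactly the generality described in the interview: the same instructions work for any pair of numbers, however long.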

Katharina Zweig, a computer science professor at TU Kaiserslautern TU Kaiserslautern

When does it make good sense to use algorithms?

Mostly, what people refer to as “algorithms” these days are not in fact algorithms at all. An algorithm in the strict sense is extremely useful because it comes with a guarantee that it will find a solution – that guarantee is part of what defines an algorithm. However, when people talk of “algorithms” nowadays they tend to mean processes of machine learning. These processes use statistics to search for patterns in data and then create decision-making rules on that basis. For example, if many successful applicants in past data were between the ages of 25 and 30, the system may adopt this as one of the criteria the next time it decides whether to recruit a particular job applicant.
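The age example can be made concrete with a toy sketch of how such a rule emerges from data. Everything here – the records, the field names, the crude “best age band” search – is invented for illustration; real machine-learning systems are far more complex, but the principle is the same:

```python
# Invented historical hiring records (illustrative only).
past_applicants = [
    {"age": 27, "hired": True},
    {"age": 29, "hired": True},
    {"age": 26, "hired": True},
    {"age": 45, "hired": False},
    {"age": 52, "hired": False},
    {"age": 28, "hired": True},
]

def learn_age_rule(records, band=5):
    """Find the 'band'-year age range with the highest hire rate
    in the data and turn it into a decision criterion."""
    best_rate, best_lo = -1.0, None
    for lo in range(18, 66):
        in_band = [r for r in records if lo <= r["age"] < lo + band]
        if not in_band:
            continue
        rate = sum(r["hired"] for r in in_band) / len(in_band)
        if rate > best_rate:
            best_rate, best_lo = rate, lo
    # The learned rule: favour applicants inside the best-scoring band.
    return lambda applicant: best_lo <= applicant["age"] < best_lo + band

rule = learn_age_rule(past_applicants)
```

The sketch shows why such rules carry a risk: the pattern in the data (younger applicants were hired more often) becomes a criterion applied to future applicants, regardless of whether age was ever a legitimate reason.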

It is not a problem of the decision-making system, but of the way in which it is used.

Katharina Zweig, a computer science professor at TU Kaiserslautern

When is the use of algorithms problematic?

As can be seen from my example from the world of work, this rule entails the risk of discrimination. Wherever this may pose a possible risk – i.e. especially when it comes to accessing state benefits or in the world of work – such decision-making systems that have learnt from data need to be checked for possible discrimination. Incidentally, this does not require any knowledge of the code – it’s enough to look at the results for people with different characteristics, such as age or religious affiliation. What is more, this is not necessarily a problem of the decision-making system, but of the way in which it is used. The patterns that have been identified could be used just as well to address inequalities.
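The observation that no knowledge of the code is needed can be sketched as a black-box audit: feed the system applicants who differ in one characteristic and compare the outcomes. The decision function below is an invented stand-in for a system we cannot inspect, and all names are my own assumptions:

```python
def opaque_decision(applicant):
    # Stand-in for a decision system whose code we cannot see;
    # here it (problematically) keys on age.
    return 25 <= applicant["age"] <= 30

def selection_rate(applicants, group_value):
    group = [a for a in applicants if a["group"] == group_value]
    return sum(opaque_decision(a) for a in group) / len(group)

# Two groups of otherwise comparable applicants.
applicants = (
    [{"age": 27, "group": "younger"} for _ in range(50)]
    + [{"age": 48, "group": "older"} for _ in range(50)]
)
rate_young = selection_rate(applicants, "younger")
rate_old = selection_rate(applicants, "older")
# A large gap between the selection rates signals possible
# discrimination, without ever looking at the system's code.
disparate_impact = rate_old / rate_young if rate_young else float("inf")
```

Only the system’s outputs are examined – exactly the kind of check the interview describes for people with different characteristics such as age or religious affiliation.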

The aim of your latest book “Ein Algorithmus hat kein Taktgefühl” (“An Algorithm Has No Tact”) is to empower non-experts “to remain in control”. Can this work?

Yes, I believe that anyone can achieve this – after all, we computer scientists can take care of the technical side of things. However, the question of which patterns an algorithm should hunt for in which data and what is to be optimised in the process is not something we can answer on our own. So if a system is being planned that is designed to predict whether a criminal is likely to reoffend, society needs to decide whether it really wants this. And if so, it needs to decide whether it is better for the system to detect almost all reoffending criminals or whether it should wrongly suspect as few rehabilitated criminals as possible. To decide this, it needs a rough idea of how such machine learning processes function – plus a healthy dose of common sense. And this is something that anyone who goes around with their eyes open has.
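The trade-off described above – catching almost all reoffenders versus wrongly suspecting as few rehabilitated people as possible – can be illustrated with two decision thresholds applied to the same risk scores. The scores and labels below are invented; `True` marks a person who actually reoffended:

```python
# Invented (score, reoffended) pairs for illustration.
cases = [(0.9, True), (0.8, True), (0.7, False), (0.6, True),
         (0.4, False), (0.3, False), (0.2, True), (0.1, False)]

def evaluate(threshold):
    """Flag everyone at or above the threshold and count outcomes."""
    flagged = [(score >= threshold, reoffended) for score, reoffended in cases]
    caught = sum(1 for f, r in flagged if f and r)          # reoffenders detected
    missed = sum(1 for f, r in flagged if not f and r)      # reoffenders missed
    falsely_suspected = sum(1 for f, r in flagged if f and not r)
    return caught, missed, falsely_suspected

# A low threshold catches almost every reoffender but wrongly
# suspects more rehabilitated people; a high threshold does the opposite.
low = evaluate(0.15)   # lenient flagging
high = evaluate(0.75)  # strict flagging
```

With the invented data, the lenient threshold misses no reoffenders but falsely suspects three people, while the strict threshold falsely suspects no one but misses two reoffenders. Which balance is acceptable is precisely the decision the interview says society, not computer scientists, must make.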

Professor Katharina Zweig heads the department of “Graph Theory and Analysis of Complex Networks” at TU Kaiserslautern and was responsible for creating the degree course in “social informatics”, which explores the effects of information technology on society. Her honours include the Communicator Prize for Science Communication and the Theodor Heuss Medal.

Interview: Martin Orth

© www.deutschland.de
