Are manufacturers and AI developers also interested in educating their customers in this way? In Till and Müller-Brehm's experience, the industry is broadly supportive of the ZVKI's activities: "Our experience has been very good. Of course, not all companies cooperate, but a number of important partners are already involved. They definitely see the advantages of building trust in new technologies."
One measure under consideration, for example, is the certification of AI solutions. According to the two ZVKI colleagues, this would be an important step toward making the trustworthiness of AI software easier to assess. Such certificates can act as a seal of approval and give consumers greater confidence. Whether this actually succeeds depends on how the certification processes are designed, including, for instance, whether the applications in question are tested by external experts. At the same time, standards, certificates, and seals are only one of many components in designing AI systems so that they do not harm individuals or society.
What does "trustworthy AI" mean?
In order to flesh out the idea of certification and other measures, it is first necessary to clarify what "trustworthy AI" actually means. "Clearly defining the term trust cannot be a task for us as the ZVKI. A wide variety of disciplines have been discussing this for centuries," explains Verena Till. "We need a practical approach. Our question is: What does trustworthiness mean in connection with AI technology?" To answer this question, the experts explain, the ZVKI breaks the concept of trust down into individual aspects. Among others, it has identified "fairness," "reliability," and "transparency" as essential building blocks of trustworthy AI.
To assess an AI solution's trustworthiness, it is necessary to examine how the software was built, explains Verena Till. In other words, one checks whether the solution is set up to deliver fair, reliable, and transparent results for its particular use case.
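As a rough illustration of what one such check could look like in practice, the following sketch compares a model's positive-decision rates across demographic groups, a common fairness probe known as demographic parity. This is not the ZVKI's actual test procedure; the data, group labels, and threshold are all hypothetical.

```python
# Hypothetical sketch of a narrow fairness check (demographic parity).
# Not the ZVKI's test procedure; data, groups, and threshold are illustrative.

def positive_rate(decisions: list[int]) -> float:
    """Share of positive (1) decisions among 0/1 outcomes."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(outcomes_by_group: dict[str, list[int]]) -> float:
    """Largest difference in positive-decision rates between any two groups."""
    rates = [positive_rate(d) for d in outcomes_by_group.values()]
    return max(rates) - min(rates)

# Illustrative model decisions (1 = approved, 0 = rejected) per group.
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],
}

gap = demographic_parity_gap(outcomes)
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.2:  # tolerance chosen purely for illustration
    print("Warning: positive-decision rates differ noticeably between groups.")
```

A single metric like this is, of course, only one narrow probe; which checks are appropriate depends on the use case.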
According to the experts, the example of reliability makes clear that AI procedures cannot be assessed independently of their application contexts. An AI-supported production system, for instance, does not tire and can therefore, in theory, operate at a fairly constant output around the clock. "This, together with the evaluation of production processes, can reduce scrap and material waste," says Jaana Müller-Brehm. On the other hand, she adds, an AI solution built on faulty assumptions can automate, and thereby multiply, unreliability. Moreover, societal stereotypes and biases can enter AI applications at various points of development and deployment, and they are then reproduced again and again each time such an application is used. "Developing and making visible methods to uncover such mechanisms is one goal of our work," explain Verena Till and Jaana Müller-Brehm.
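To make this reproduction mechanism concrete, the following deliberately simplified sketch (hypothetical data, not a method described by the ZVKI) shows how a model that learns the majority outcome from skewed historical decisions applies exactly that skew to every future case.

```python
# Deliberately simplified sketch of bias reproduction: a model trained on
# skewed historical decisions applies that skew to every new case.
# Hypothetical data; not a method described by the ZVKI.
from collections import defaultdict

# Historical decisions (group, outcome), skewed against group "b".
training_data = [
    ("a", 1), ("a", 1), ("a", 1), ("a", 0),
    ("b", 0), ("b", 0), ("b", 0), ("b", 1),
]

# "Training": count outcomes observed per group.
counts = defaultdict(lambda: [0, 0])  # group -> [negatives, positives]
for group, outcome in training_data:
    counts[group][outcome] += 1

def predict(group: str) -> int:
    """Return the historically most frequent outcome for the group."""
    negatives, positives = counts[group]
    return 1 if positives > negatives else 0

# On new cases, the automated system repeats the historical skew every time.
applicants = ["a", "b", "a", "b", "b", "a"]
print([(g, predict(g)) for g in applicants])
# [('a', 1), ('b', 0), ('a', 1), ('b', 0), ('b', 0), ('a', 1)]
```

The point of the sketch is only the mechanism: once a skewed rule is automated, it is applied identically at every decision, which is what multiplies the original bias.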