Current Human-in-the-Loop

Existing approaches to AI governance focus mainly on establishing general principles and creating transparency. These solutions reach their limits when it comes to defining, processing, and monitoring the specific powers of attorney and scope of action granted to an AI in individual cases. The current Human-in-the-Loop approach suggests that AI only supports humans, while humans take the final decisions. This approach, however, limits the potential of AI to act autonomously. It also carries the risk that the accountable human becomes accustomed to relying on the AI and stops questioning its output. Conversely, wherever AI does act autonomously without proper governance, it can create risks of organizational fault and/or damage to trust.
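
As a rough illustration of the Human-in-the-Loop pattern described above, the following sketch shows an approval gate in which the AI system only proposes an action and the accountable human must explicitly confirm it before execution. The names (`Proposal`, `human_review`, `execute_with_oversight`) and the sample data are hypothetical and chosen only for this example, not taken from any specific framework.

```python
from dataclasses import dataclass

# Hypothetical sketch of a Human-in-the-Loop gate: the AI only proposes
# an action; the accountable human must approve before anything is executed.

@dataclass
class Proposal:
    action: str        # action suggested by the AI system
    rationale: str     # explanation shown to the human reviewer
    confidence: float  # model's self-reported confidence (0.0 - 1.0)

def human_review(proposal: Proposal) -> bool:
    """Present the proposal to the accountable human and record the decision."""
    print(f"AI proposes: {proposal.action}")
    print(f"Rationale:   {proposal.rationale}")
    print(f"Confidence:  {proposal.confidence:.0%}")
    answer = input("Approve this action? [y/N] ").strip().lower()
    return answer == "y"

def execute_with_oversight(proposal: Proposal) -> None:
    """Execute only after explicit human approval; otherwise log the rejection."""
    if human_review(proposal):
        print(f"Executing approved action: {proposal.action}")
    else:
        print("Action rejected by human reviewer; nothing executed.")

if __name__ == "__main__":
    execute_with_oversight(
        Proposal(
            action="Issue a refund for a damaged order",   # illustrative only
            rationale="Support ticket reports the item arrived damaged.",
            confidence=0.87,
        )
    )
```

The design choice this sketch highlights is exactly the trade-off noted above: the gate guarantees that a human takes the final decision, but if the reviewer habitually approves without scrutiny, the oversight becomes nominal while the accountability remains with the human.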
