Why OAuth isn’t good enough

AI systems such as digital agents, agentic AI, and humanoid robots can perform complex tasks autonomously, e.g., entering into transactions, making decisions, and taking actions. Humanoid robots are a physical manifestation of digital agents. The capabilities of such AI pose challenges, particularly regarding control of and accountability for the transactions, decisions, and actions of these systems. AI governance aims to create frameworks and processes that ensure the ethical, safe, and lawful use of AI.

A central aspect of AI governance is the authorization and legitimization of AI. This means clearly defining and documenting the powers and authority granted to an AI, the permitted scope of its transactions, decisions, or actions, and on whose behalf it acts. This is particularly relevant where AI acts on behalf of humans or organizations and makes potentially far-reaching decisions.

Existing approaches to AI governance focus mainly on establishing general principles and creating transparency. These solutions reach their limits when it comes to defining, processing, and monitoring the specific powers and scope of action of an AI in individual cases. The prevailing human-in-the-loop approach holds that AI should only support humans, with humans taking the final decision. This approach, however, limits the potential of AI to act autonomously, and it carries the risk that the accountable human grows accustomed to relying on the AI and stops questioning its output. Conversely, the more an AI acts autonomously without proper governance, the greater the risk of organizational fault and damage to trust.

Current authorization protocols such as OAuth 2.0 (OAuth) offer access-control options, but they were not designed to meet the requirements of advanced AI and its governance. They primarily address whether a system is allowed to access certain resources; they do not cover the more complex question of the decision-making powers and authority of independently acting AI. While OAuth is typically combined with the OpenID Connect standard to verify the identity of the authorizing party, the focus remains on system access.
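To make the limitation concrete, here is a minimal sketch of an OAuth-style scope check. The scope names are hypothetical and not taken from any real API; the point is that a token's scopes only answer "may this client access this resource?" and say nothing about the agent's mandate or decision-making authority.

```python
# Minimal sketch of an OAuth-style scope check (hypothetical scope names).
# A bearer token carries scopes; the resource server only checks whether
# the required scope is present -- nothing about the agent's authority.

REQUIRED_SCOPE = "payments:write"  # hypothetical scope for a payment API


def token_permits(token_scopes: set[str], required: str) -> bool:
    """Return True if the access token's scopes cover the required scope."""
    return required in token_scopes


# An AI agent's token may well carry this scope ...
agent_scopes = {"payments:read", "payments:write"}

# ... so the resource server grants access, although the scope encodes
# neither transaction limits, nor on whose behalf the agent acts,
# nor the mandate under which it acts.
assert token_permits(agent_scopes, REQUIRED_SCOPE)
```

Everything an OAuth deployment can enforce ultimately reduces to such a set-membership test on scopes; the governance dimensions discussed above simply have no place in this model.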

In this context, the Model Context Protocol (MCP) was developed by Anthropic together with a developer community. It is an open standard that enables developers to establish bidirectional connections between data sources and AI-powered tools. Although MCP is a step forward in AI integration, it does not comprehensively address governance, in particular the question of authorizing and legitimizing an AI for its decisions or actions. MCP applications typically rely on OAuth together with OpenID Connect or comparable standards.

For adequate AI governance, therefore, the combination of MCP, OAuth, and OpenID Connect (or comparable standards) reaches its limits. It is not sufficient to reduce AI authorization to access rights, because access rights only answer the question: "is this subject allowed to perform this action with this resource?"
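The subject-action-resource question quoted above can be sketched as a policy lookup. All names below are illustrative, not drawn from any real policy engine; the sketch shows both what this model can express and the vocabulary it lacks.

```python
# Minimal sketch of the access-rights question:
# "is this subject allowed to perform this action with this resource?"
# Subjects, actions, and resources below are purely illustrative.

from typing import NamedTuple


class Permission(NamedTuple):
    subject: str
    action: str
    resource: str


# The entire policy is a set of (subject, action, resource) triples.
POLICY = {
    Permission("ai-agent-42", "read", "customer-records"),
    Permission("ai-agent-42", "write", "order-book"),
}


def is_allowed(subject: str, action: str, resource: str) -> bool:
    """Answer the access-rights triple -- and nothing more."""
    return Permission(subject, action, resource) in POLICY


# The model can express access ...
assert is_allowed("ai-agent-42", "write", "order-book")

# ... but it has no vocabulary for authority: on whose behalf the agent
# acts, up to what transaction value, or under which mandate -- the very
# questions AI governance needs answered.
```

However the triples are encoded (scopes, ACLs, roles), the expressible statements stay within this yes/no access model, which is precisely why it falls short for legitimizing autonomous AI decisions.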
