Gimel: Leveraging the Law of Agency
AI agents and robots take over more tasks every day, often acting on our behalf. That requires trust. Many big tech companies ask us to “trust AI,” but we cannot trust AI blindly and hope for the best. Calls for trust can sound like a plea for non-interference. That isn’t good enough.
Today, AI providers underestimate key issues. Loyalty and responsibility to users, both covered by the law of agency, go unaddressed. When an AI acts, third parties do not know what authority it actually holds, because there is no clear disclosure. Alignment techniques such as fine-tuning do not truly capture human values, and they cannot solve the problem of implied authority, which arises from vague rules for AI agents. The result is AI that misbehaves or acts against user interests, creating legal risk, organizational problems, and damaged trust.
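To make the implied-authority problem concrete, here is a minimal sketch of an explicit “authority disclosure” record an AI agent could present to third parties, so that reliance never rests on implied authority. All names here (`AuthorityDisclosure`, the field names) are illustrative assumptions, not part of any real product or API.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: an explicit record of the authority a principal has
# granted an AI agent. A third party checks the disclosure directly instead
# of inferring authority from vague rules.

@dataclass(frozen=True)
class AuthorityDisclosure:
    principal: str                      # the human or entity the agent acts for
    agent_id: str                       # the acting AI agent
    granted_actions: frozenset = field(default_factory=frozenset)
    max_transaction_value: float = 0.0  # hard cap on what the agent may commit

    def authorizes(self, action: str, value: float = 0.0) -> bool:
        """An action is authorized only if explicitly granted and within
        the disclosed value limit -- nothing is implied."""
        return action in self.granted_actions and value <= self.max_transaction_value


disclosure = AuthorityDisclosure(
    principal="acme-corp",
    agent_id="procurement-bot-7",
    granted_actions=frozenset({"request_quote", "place_order"}),
    max_transaction_value=5_000.0,
)

print(disclosure.authorizes("place_order", 1_200.0))   # True: explicitly granted
print(disclosure.authorizes("sign_contract", 100.0))   # False: never implied
```

The point of the sketch is the default-deny stance: anything not explicitly disclosed is unauthorized, which is the opposite of how implied authority arises.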
Juan, Gimel Technologies
The current way to prevent these risks is the “human-in-the-loop” approach, which limits AI’s potential to act autonomously. A recent McKinsey & Company study found that most AI projects do not deliver a return on investment; human-in-the-loop increases costs instead of reducing them. General “AI principles” are often cited, but they offer no practical solutions. That is why AI today lacks strong governance.
So why apply the law of agency to AI? Trust grows when AI follows it. The law of agency exists to protect people when they delegate authority. It was designed for humans, but it works for AI as well: it ensures that principals (the people in charge) cannot avoid liability when their agent, human or AI, causes harm. If principals could simply keep the good and reject the bad, nobody would trust agents. Relying parties would not risk dealing with AI agents, and without that trust, business conducted through agents would shrink. These rules are essential to building trust in commerce.
Our G-Agent and GAuth+ solutions apply the law of agency to create a better authorization protocol. It makes an AI’s authorizations explicit, so third parties can approve AI actions in a controlled and secure way. That is a big step forward from vague or opaque governance systems.
Current protocols such as OAuth (IETF) and implementations like Microsoft’s Entra ID focus on access control; they do not address the complex powers of attorney that AI needs. Even recent contributions such as the Cloud Security Alliance’s Agentic IAM and Google’s A2A lack consistency with the law of agency. GAuth fills this gap.