What: Enable agents to produce outputs that are cryptographically verified to originate from a specified model and input prompt.
Why: By default, agents rely on third-party inference providers, and existing methods struggle to verify inference integrity, especially with protected APIs that require credentials. This introduces the risk of manipulated outputs or undisclosed human intervention, which compromises trust. Verifiable inference guarantees reliability for high-stakes applications such as finance and healthcare.
How: The AVS uses zkTLS within a multi-party computation (MPC) framework to verify API outputs without exposing sensitive credentials. MPC-TLS generates a cryptographic proof that an agent’s request to a protected API is valid, authenticated, and originates from a trusted source. This proof is passed to the user, ensuring the integrity of the agent’s action while maintaining privacy.
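For concreteness, here is a minimal TypeScript sketch of that flow. The `MpcTlsProof` shape, the `verifiedCompletion` helper, and the endpoint name are illustrative assumptions, not the actual AVS interface; the real prover and proof format are defined by the MPC-TLS implementation.

```typescript
import { createHash } from "node:crypto";

// Transcript commitments produced by the MPC-TLS session. The prover (agent)
// and the notary jointly run the TLS session, so neither party alone can
// forge the transcript or learn the other's secrets (e.g. the API key).
interface MpcTlsProof {
  serverName: string;         // API endpoint the session was opened against
  requestCommitment: string;  // hash of the (selectively revealed) request
  responseCommitment: string; // hash of the (selectively revealed) response
  notarySignature: string;    // notary's attestation over both commitments
}

interface VerifiedInference {
  output: string;     // the model response returned to the user
  proof: MpcTlsProof; // proof that the response came from the named API
}

// Hypothetical helper: commit to a payload without revealing credentials.
function commit(payload: string): string {
  return createHash("sha256").update(payload).digest("hex");
}

// Sketch of the agent-side flow: call the protected inference API inside an
// MPC-TLS session, then hand the output plus proof back to the user.
async function verifiedCompletion(prompt: string): Promise<VerifiedInference> {
  const request = JSON.stringify({ model: "example-model", prompt });

  // In a real deployment this call would run through the MPC-TLS prover;
  // here it is stubbed so the sketch stays self-contained.
  const response = JSON.stringify({ output: "..." });

  const proof: MpcTlsProof = {
    serverName: "api.example-inference.com",
    requestCommitment: commit(request),
    responseCommitment: commit(response),
    notarySignature: "<signature from the notary over both commitments>",
  };

  return { output: JSON.parse(response).output, proof };
}
```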
Verifiable LLMs are advanced AI systems that not only generate responses but also provide proofs that these outputs were generated by a specific model using a particular input.
By leveraging cryptographic techniques such as zero-knowledge proofs (ZKPs) and AVS attestation mechanisms, verifiable LLMs address critical concerns around the reliability and integrity of AI-generated content.
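As an illustration, the sketch below shows how a user-side check against such an attestation might look. The `Attestation` shape and `matchesAttestation` helper are assumptions for this example; real signature verification against the attester or operator set is omitted.

```typescript
import { createHash } from "node:crypto";

// Minimal verifier-side sketch: a user (or downstream contract) checks that
// the prompt and output it received match the ones attested to in the proof.
interface Attestation {
  modelId: string;           // model the attester claims produced the output
  promptCommitment: string;  // hash of the input prompt
  outputCommitment: string;  // hash of the generated output
  attesterSignature: string; // AVS operator / notary signature (checked elsewhere)
}

function sha256(data: string): string {
  return createHash("sha256").update(data).digest("hex");
}

// Returns true only if the revealed prompt and output match the commitments
// in the attestation. Verifying attesterSignature against the AVS operator
// set is out of scope for this sketch.
function matchesAttestation(prompt: string, output: string, att: Attestation): boolean {
  return (
    sha256(prompt) === att.promptCommitment &&
    sha256(output) === att.outputCommitment
  );
}

// Example usage:
// const ok = matchesAttestation("What is 2 + 2?", "4", attestationFromAgent);
```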
This architecture combines AI frameworks for hosting and managing AI agents with the Othentic framework to enable cryptographically verifiable outputs from LLMs. The goal is to ensure that AI agent outputs are trustworthy, verifiable, and resistant to manipulation.
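A rough orchestration sketch of that architecture is shown below. The `AgentRuntime` and `AvsClient` interfaces are hypothetical placeholders rather than the actual Othentic SDK; the sketch only illustrates the control flow of generating a proof and having operators validate it before the output is treated as verified.

```typescript
// End-to-end flow sketch: the agent framework produces an output plus proof,
// and the attestation layer (an Othentic-style AVS) validates it before the
// result is delivered as "verified". All names below are illustrative.

interface AgentTask {
  prompt: string;
}

interface ProvenOutput {
  output: string;
  proofBytes: string; // serialized MPC-TLS / ZK proof from the inference step
}

interface ValidationResult {
  valid: boolean;
  operatorSignatures: string[]; // quorum of AVS operators that checked the proof
}

// Pluggable pieces: the agent runtime that performs inference with proof
// generation, and the AVS client that submits the proof for validation.
interface AgentRuntime {
  runWithProof(task: AgentTask): Promise<ProvenOutput>;
}

interface AvsClient {
  submitForValidation(proof: ProvenOutput): Promise<ValidationResult>;
}

// Orchestration: only return the output if the operators validated the proof.
async function runVerifiedTask(
  runtime: AgentRuntime,
  avs: AvsClient,
  task: AgentTask,
): Promise<string> {
  const proven = await runtime.runWithProof(task);
  const validation = await avs.submitForValidation(proven);
  if (!validation.valid) {
    throw new Error("AVS operators rejected the inference proof");
  }
  return proven.output;
}
```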