Define a single attesta.yaml and use it with LangChain, the OpenAI Agents SDK, and Anthropic Claude simultaneously. All three frameworks share the same risk scoring, challenge policies, and tamper-proof audit log.
Why Multi-Framework Matters
When your application uses multiple AI frameworks, each framework typically has its own tool execution path. Without a shared governance layer:
- Risk policies are duplicated and drift apart
- Trust scores are fragmented — an agent trusted in LangChain starts from zero in OpenAI
- Audit logs are scattered in different formats, making compliance audits painful
- Rubber-stamp detection cannot correlate approval patterns across frameworks
Attesta solves all of this with one shared attesta.yaml.
Configuration
This config works identically across all three frameworks (attesta.yaml):
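The original configuration listing is not preserved in this excerpt. The fragment below is an illustrative sketch that maps the shared components described later in this page (risk overrides, regex amplifiers, challenge map, review times, audit path) onto a plausible layout; the exact key names are assumptions, not the published schema.

```yaml
# Illustrative attesta.yaml sketch — key names are assumptions, not the real schema.
agent_id: support-agent            # shared across frameworks for one trust profile
risk:
  overrides:
    delete_user: 0.9               # applies to every framework's tool calls
  amplifiers:
    - pattern: "^(drop|delete)_"   # regex amplifier on function names
      factor: 1.5
challenges:
  high: human_approval             # risk-to-challenge mapping
  medium: justification
  min_review_seconds: 30           # minimum review time for all challenges
audit:
  path: .attesta/unified-audit.jsonl
```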
The key insight is that attesta.yaml is framework-agnostic. It defines what your policies are; the integrations handle how they are enforced in each framework's tool execution path.
Architecture
Framework agents execute tool calls through their own integration point, but all of them route through the same Attesta core:
- LangChain Agent — uses AttestaToolWrapper
- OpenAI Agent — uses attesta_approval_handler
- Anthropic Claude — uses AttestaToolGate
Shared Tools Module
First, define the tool functions that all frameworks will use. These are plain Python functions with Attesta gating (shared_tools.py).
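The original shared_tools.py listing is not preserved in this excerpt. The sketch below shows the pattern under stated assumptions: gate_tool is a hypothetical stand-in for Attesta's gating decorator (its real name and signature are not shown here), simulated locally so the example is self-contained.

```python
import functools
import re

# Hypothetical stand-in for Attesta's gating decorator; the real API is not
# shown in this excerpt. It scores each call by name and challenges risky ones.
HIGH_RISK = re.compile(r"^(delete|drop|refund)_")  # illustrative amplifier pattern

def gate_tool(func):
    """Challenge calls whose function name matches a high-risk pattern."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        if HIGH_RISK.match(func.__name__):
            return {"status": "challenged", "tool": func.__name__}
        return {"status": "ok", "result": func(*args, **kwargs)}
    return wrapper

# Plain Python functions, callable from any framework integration.
@gate_tool
def lookup_order(order_id: str) -> str:
    return f"Order {order_id}: shipped"

@gate_tool
def refund_order(order_id: str) -> str:
    return f"Refunded {order_id}"
```

With this shape, lookup_order runs normally while refund_order is held for a challenge, regardless of which framework invoked it.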
Framework 1: LangChain
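The LangChain listing is not preserved in this excerpt. AttestaToolWrapper's real interface is not shown, so the pure-Python sketch below only imitates the pattern: a wrapper that adapts a gated callable to a LangChain-style tool interface (a name plus a run method) without importing LangChain itself.

```python
# Pure-Python sketch of the AttestaToolWrapper pattern; LangChain is not
# imported so the example stays self-contained. A LangChain tool exposes a
# name and a run(tool_input) method, which is mimicked here.
class AttestaToolWrapper:
    """Wrap a callable so every invocation passes through a shared gate."""

    def __init__(self, name, func, gate):
        self.name = name
        self._func = func
        self._gate = gate  # shared gate callable (hypothetical interface)

    def run(self, tool_input: str) -> str:
        verdict = self._gate(self.name, tool_input)
        if verdict != "allow":
            return f"[blocked: {verdict}]"
        return self._func(tool_input)

def demo_gate(tool_name, _tool_input):
    # Illustrative policy: refunds require a challenge.
    return "challenge" if tool_name.startswith("refund") else "allow"

lookup = AttestaToolWrapper("lookup_order", lambda s: f"Order {s}: shipped", demo_gate)
refund = AttestaToolWrapper("refund_order", lambda s: f"Refunded {s}", demo_gate)
```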
Framework 2: OpenAI Agents SDK
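The OpenAI Agents SDK listing is not preserved in this excerpt. The SDK lets a callback decide whether a tool call proceeds; the sketch below simulates that flow with an illustrative attesta_approval_handler whose signature is an assumption, not the real handler's.

```python
# Illustrative approval-handler sketch; the real attesta_approval_handler
# signature is not shown in this excerpt. It returns True to approve a tool
# call and False to hold it for review.
def attesta_approval_handler(tool_name: str, arguments: dict) -> bool:
    risky_name = tool_name.startswith(("delete_", "refund_"))
    large_amount = arguments.get("amount", 0) > 100
    return not (risky_name or large_amount)

def execute_tool_call(tool_name: str, arguments: dict, handler) -> dict:
    """Run the handler before the tool, as the SDK's approval flow would."""
    if not handler(tool_name, arguments):
        return {"status": "pending_review", "tool": tool_name}
    return {"status": "executed", "tool": tool_name}
```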
Framework 3: Anthropic Claude
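The Claude listing is not preserved in this excerpt. Claude's Messages API returns content blocks, and tool requests arrive as blocks with type "tool_use" carrying a name and an input; the sketch below shows how a gate in the spirit of AttestaToolGate (whose real interface is not shown) might split those blocks into approved and challenged sets.

```python
# Sketch of gating Claude tool_use blocks; AttestaToolGate's real interface
# is an assumption. Content blocks of type "tool_use" carry a tool name
# and an input dict, which is what the gate inspects.
class AttestaToolGate:
    def __init__(self, blocked_prefixes=("delete_",)):
        self.blocked_prefixes = blocked_prefixes

    def filter_blocks(self, content_blocks):
        """Split tool_use blocks into (approved, challenged) lists."""
        approved, challenged = [], []
        for block in content_blocks:
            if block.get("type") != "tool_use":
                continue  # text blocks pass through untouched
            if block["name"].startswith(self.blocked_prefixes):
                challenged.append(block)
            else:
                approved.append(block)
        return approved, challenged

blocks = [
    {"type": "text", "text": "I'll look that up."},
    {"type": "tool_use", "name": "lookup_order", "input": {"order_id": "A1"}},
    {"type": "tool_use", "name": "delete_user", "input": {"user_id": "u9"}},
]
```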
TypeScript: All Three Frameworks
The same pattern works in TypeScript: a single Attesta instance is shared across all framework integrations (multi-framework.ts).
Running All Three Together
Create a unified runner that exercises all three frameworks against the same Attesta instance (run_all_frameworks.py):
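The original run_all_frameworks.py listing is not preserved in this excerpt. The sketch below uses a minimal stand-in Attesta class (its real constructor and method names are assumptions) to show the point of the runner: because one instance is shared, every framework appends to the same audit trail with a metadata.source tag.

```python
import json

# Minimal stand-in for the Attesta class; real constructor and method names
# are assumptions. One shared instance means one unified audit trail.
class Attesta:
    def __init__(self, agent_id):
        self.agent_id = agent_id
        self.audit = []  # stands in for .attesta/unified-audit.jsonl

    def record(self, source, tool, decision):
        self.audit.append({
            "agent_id": self.agent_id,
            "tool": tool,
            "decision": decision,
            "metadata": {"source": source},  # which framework originated it
        })

attesta = Attesta(agent_id="support-agent")

# Each framework integration reports through the same shared instance.
attesta.record("langchain", "lookup_order", "allow")
attesta.record("openai", "refund_order", "challenge")
attesta.record("anthropic", "lookup_order", "allow")

unified = [json.dumps(entry) for entry in attesta.audit]
```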
Unified Audit Trail
After running all three frameworks, a single audit log at .attesta/unified-audit.jsonl contains every action from every framework. Each entry includes a metadata.source field identifying which framework originated the action:
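The original log excerpt is not preserved here. One quick way to inspect such a log is to group entries by metadata.source; the JSONL lines below are illustrative samples in the shape described above, not real Attesta output.

```python
import json
from collections import Counter

# Illustrative JSONL lines in the shape described above — not real output.
sample = """\
{"tool": "lookup_order", "metadata": {"source": "langchain"}}
{"tool": "refund_order", "metadata": {"source": "openai"}}
{"tool": "lookup_order", "metadata": {"source": "anthropic"}}
"""

# Count actions per originating framework, as a compliance audit might.
by_source = Counter(json.loads(line)["metadata"]["source"]
                    for line in sample.splitlines())
```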
Cross-Framework Trust
When multiple frameworks use the sameagent_id, the trust engine maintains a unified trust profile. Trust earned in LangChain carries over to OpenAI and Claude.
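The carry-over can be sketched in a few lines. The TrustEngine below keys trust solely on agent_id, so it does not matter which framework reports the action; the scoring rule here is illustrative, not Attesta's actual trust algorithm.

```python
# Sketch of a unified trust profile keyed by agent_id; the flat +0.1 scoring
# rule is illustrative, not Attesta's actual trust algorithm.
class TrustEngine:
    def __init__(self):
        self._scores = {}  # agent_id -> trust score

    def record_success(self, agent_id, source):
        # source is informational only: trust accrues to the agent_id,
        # regardless of which framework performed the action.
        self._scores[agent_id] = self._scores.get(agent_id, 0.0) + 0.1

    def score(self, agent_id):
        return round(self._scores.get(agent_id, 0.0), 2)

engine = TrustEngine()
engine.record_success("support-agent", source="langchain")
engine.record_success("support-agent", source="openai")
# Trust earned via LangChain carries over when the same agent_id acts via OpenAI.
```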
What Gets Shared
| Component | Shared Across Frameworks | Details |
|---|---|---|
| Risk scorer | Yes | Same 5-factor scorer + domain patterns |
| Challenge map | Yes | Same risk-to-challenge mapping |
| Trust engine | Yes | Unified per-agent trust across all frameworks |
| Audit trail | Yes | Single JSONL file with hash chain |
| Risk overrides | Yes | Same overrides apply to all tool calls |
| Amplifiers | Yes | Same regex amplifiers for all function names |
| Domain profile | Yes | Same domain-specific patterns and escalation rules |
| Review times | Yes | Same minimum review times for all challenges |
Next Steps
- LangChain Integration — Full LangChain and LangGraph reference
- OpenAI Agents SDK — Approval handlers and guardrails
- Anthropic Claude — Gate Claude tool_use blocks
- Audit Trail — Tamper-proof logging across frameworks