`attesta_approval_handler()` returns an async handler that matches the `approval_handler` signature expected by `Runner.run()`. Every tool call during the run is evaluated through Attesta before execution.
```python
from openai.agents import Agent, Runner

from attesta import Attesta
from attesta.integrations.openai_sdk import attesta_approval_handler

# Configure Attesta
attesta = Attesta.from_config("attesta.yaml")

# Define your agent with tools
agent = Agent(
    name="DevOps Assistant",
    instructions="Help manage infrastructure deployments.",
    tools=[deploy_tool, rollback_tool, status_tool],
)

# Run with Attesta as the approval handler
result = await Runner.run(
    agent,
    input="Deploy api-gateway v2.1.0 to production",
    approval_handler=attesta_approval_handler(attesta),
)
print(result.final_output)
```
When the agent calls `deploy_tool`, Attesta:

1. Builds an `ActionContext` from the tool name and arguments
2. Scores the risk (the word “deploy” plus “production” in the arguments will score HIGH)
3. Presents the appropriate challenge to the human operator
4. Returns `True` (approved) or `False` (denied) to the SDK
The `**kwargs` passed to the handler are forwarded as `hints` in the `ActionContext`. This means any extra metadata the SDK provides is available to the risk scorer.
`AttestaGuardrail` is a callable class that matches the `tool_guardrails` interface on the `Agent` class. It evaluates each tool invocation and returns `None` to allow, or a dict with an `"error"` key to deny.
```python
from openai.agents import Agent

from attesta import Attesta
from attesta.integrations.openai_sdk import AttestaGuardrail

attesta = Attesta.from_config("attesta.yaml")

agent = Agent(
    name="Data Pipeline Agent",
    instructions="Manage ETL pipelines and data transformations.",
    tools=[run_query_tool, export_data_tool, delete_records_tool],
    tool_guardrails=[AttestaGuardrail(attesta)],
)

# When the agent tries to call delete_records_tool, the guardrail
# evaluates the risk. If denied, the SDK receives:
# {"error": "Denied by Attesta (risk: high)"}
```
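For illustration, here is a minimal callable that follows the same allow/deny contract. The call signature and the hard-coded `DENY` set are assumptions made for this sketch; the real `AttestaGuardrail` scores risk rather than matching tool names:

```python
from typing import Optional


class ToyGuardrail:
    """Illustrative guardrail honoring the tool_guardrails contract:
    return None to allow a tool call, or a dict with an "error" key
    to deny. The denial logic below is a stand-in, not Attesta's."""

    DENY = {"delete_records_tool"}

    def __call__(self, tool_name: str, arguments: dict) -> Optional[dict]:
        if tool_name in self.DENY:
            # Denial: the SDK surfaces this dict to the agent.
            return {"error": "Denied by Attesta (risk: high)"}
        return None  # Allow: the tool call proceeds.
```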
You can use both integration points simultaneously. The approval handler provides a global gate, while guardrails provide per-agent control:
```python
from openai.agents import Agent, Runner

from attesta import Attesta
from attesta.integrations.openai_sdk import (
    attesta_approval_handler,
    AttestaGuardrail,
)

attesta = Attesta.from_config("attesta.yaml")

# Agent-level guardrail for this specific agent's tools
agent = Agent(
    name="Finance Agent",
    instructions="Process financial transactions.",
    tools=[transfer_tool, audit_tool],
    tool_guardrails=[AttestaGuardrail(attesta)],
)

# Runner-level approval for all agents in the run
result = await Runner.run(
    agent,
    input="Transfer $50,000 to account ACC-789",
    approval_handler=attesta_approval_handler(attesta),
)
```
When both are active, tool calls are evaluated twice — once by the guardrail and once by the approval handler. For most use cases, choose one or the other. Use the approval handler for broad coverage across all tools, or guardrails for fine-grained per-agent control.
Approval handler returns `False`. The OpenAI Agents SDK skips the tool execution entirely; the agent receives no output for that tool call.
Guardrail returns `{"error": "Denied by Attesta (risk: <level>)"}`. The SDK passes this error back to the agent, which can then decide how to proceed (retry with different parameters, suggest alternatives, or inform the user).
```python
# Example denial from guardrail:
{"error": "Denied by Attesta (risk: critical)"}
```
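The difference between the two denial modes can be sketched with a toy dispatcher. `run_tool` and its parameters are hypothetical illustrations, not SDK API:

```python
from typing import Callable, Optional


def run_tool(tool: Callable[[], object], approved: bool,
             guardrail_result: Optional[dict]) -> object:
    """Toy dispatcher contrasting the two denial modes (not SDK code)."""
    if not approved:
        # Approval-handler denial: execution is skipped outright;
        # the agent receives no output for this tool call.
        return None
    if guardrail_result is not None:
        # Guardrail denial: the error dict is surfaced to the agent,
        # which can retry, adjust parameters, or inform the user.
        return guardrail_result
    return tool()
```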
Guardrails are generally preferred over approval handlers because they provide the agent with an explanation of why the action was denied. This allows the agent to suggest alternatives to the user rather than silently failing.