Gate CrewAI task outputs with AttestaHumanInput as a task callback
Attesta integrates with CrewAI through AttestaHumanInput, a callable that replaces (or augments) CrewAI’s built-in human_input mechanism. When a task completes, Attesta evaluates the output through the approval pipeline before the workflow continues.
AttestaHumanInput takes the configured Attesta instance and an optional default_risk: a default risk level (e.g. "high") applied when no other risk information is available.

Callable signature:

```python
async def __call__(task_output) -> str
```
| Return Value | Meaning |
| --- | --- |
| `"approved"` | The task output passed the Attesta gate |
| `"denied"` | The task output was denied |
When default_risk is set, it is passed to the risk scorer as hints["risk_override"]. This is useful for tasks where the automated scorer cannot infer the true risk from the task output string alone.
```python
from crewai import Agent, Task, Crew

from attesta import Attesta
from attesta.integrations.crewai import AttestaHumanInput

# Configure Attesta
attesta = Attesta.from_config("attesta.yaml")
gk_input = AttestaHumanInput(attesta, default_risk="high")

# Define agents
researcher = Agent(
    role="Security Researcher",
    goal="Identify vulnerabilities in the codebase",
    backstory="You are an expert security auditor.",
)
deployer = Agent(
    role="Deployment Engineer",
    goal="Deploy approved patches to production",
    backstory="You manage production deployments.",
)

# Define tasks with Attesta gating
research_task = Task(
    description="Scan the application for SQL injection vulnerabilities",
    expected_output="A list of vulnerabilities with severity ratings",
    agent=researcher,
)
deploy_task = Task(
    description="Deploy the security patch to production servers",
    expected_output="Deployment confirmation with rollback plan",
    agent=deployer,
    human_input=True,   # Enable human-in-the-loop
    callback=gk_input,  # Use Attesta instead of raw input()
)

# Run the crew
crew = Crew(
    agents=[researcher, deployer],
    tasks=[research_task, deploy_task],
)
result = crew.kickoff()
```
When deploy_task completes:

1. The task output is stringified and sent to AttestaHumanInput.
2. An ActionContext is built with function_name="crewai_task".
3. The output text (truncated to 200 characters) is used as the function_doc.
4. Attesta evaluates the risk and presents the appropriate challenge.
For every task evaluation, AttestaHumanInput constructs the following context:
```python
ActionContext(
    function_name="crewai_task",
    kwargs={"output": "<stringified task output>"},
    function_doc="<first 200 chars of task output>",
    hints={"risk_override": "high"},  # if default_risk is set
)
```
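The construction above can be sketched as a plain function. This is a simplified illustration of the behavior described here (the real implementation lives in attesta.integrations.crewai); it returns a dict rather than an ActionContext to stay self-contained.

```python
# Simplified sketch of how the context is assembled from a task output.
# Returns a plain dict instead of an ActionContext for illustration.
def build_context(task_output, default_risk=None):
    text = str(task_output)
    hints = {"risk_override": default_risk} if default_risk else {}
    return {
        "function_name": "crewai_task",
        "kwargs": {"output": text},
        "function_doc": text[:200],  # first 200 chars serve as the docstring
        "hints": hints,
    }
```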
The risk scorer analyzes:

- Function name — "crewai_task" scores as a generic mutating verb
- Arguments — the full task output is scanned for sensitive patterns (credentials, SQL, shell commands, URLs)
- Docstring — the first 200 characters provide additional risk signals
- Hints — the default_risk override, if provided, forces a specific level
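A heuristic in the spirit of the scorer described above can be sketched as follows. The patterns and level mapping here are assumptions for illustration, not Attesta's actual rules; the one grounded behavior is that a risk_override hint wins over any heuristic.

```python
import re

# Illustrative sensitive patterns; Attesta's real rules may differ.
SENSITIVE_PATTERNS = {
    "credential": re.compile(r"(?i)\b(password|api[_-]?key|secret|token)\b"),
    "sql": re.compile(r"(?i)\b(drop|delete|truncate|alter)\s+table\b"),
    "shell": re.compile(r"(?i)(\brm\s+-rf\b|\bsudo\b)"),
    "url": re.compile(r"https?://\S+"),
}

def score_output(output, hints=None):
    # A risk_override hint forces the level, bypassing the heuristic.
    if hints and hints.get("risk_override"):
        return hints["risk_override"]
    hits = [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(output)]
    if "credential" in hits or "shell" in hits:
        return "high"
    if hits:
        return "medium"
    return "low"
```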
For tasks where the output is a deployment plan, database migration, or infrastructure change, set default_risk="high" or default_risk="critical" to ensure the appropriate challenge is presented regardless of the scorer’s heuristic.
When a task output is denied, AttestaHumanInput returns the string "denied". CrewAI’s behavior on receiving this response depends on how you configure the task and crew:
```python
# The callback returns "denied" -- CrewAI treats this as the
# human input response. The task's agent receives this feedback
# and can be instructed to revise its output.
deployer = Agent(
    role="Deployment Engineer",
    goal="Deploy approved patches to production",
    backstory=(
        "You manage production deployments. If a deployment plan is denied, "
        "revise it to address the concerns and try again."
    ),
)
```
The callback mechanism in CrewAI passes the return value back to the agent as feedback. Make sure your agent's backstory or instructions explain how to handle a "denied" response; otherwise the agent may not know how to proceed.
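The revise-on-denial loop can be sketched without CrewAI. The functions and retry limit below are hypothetical, illustrating only the feedback cycle: the gate's verdict is handed back to the producer, which revises its output and tries again.

```python
import asyncio

# Illustrative feedback loop (not CrewAI internals): the "agent" revises
# its output when the gate returns "denied", up to a retry limit.
async def run_with_gate(produce, gate, max_retries=2):
    feedback = None
    for _ in range(max_retries + 1):
        output = produce(feedback)
        verdict = await gate(output)
        if verdict == "approved":
            return output
        feedback = verdict  # "denied" is fed back so the agent can revise
    raise RuntimeError("output denied after retries")

# Hypothetical gate: approve only plans that include a rollback plan.
async def strict_gate(output):
    return "approved" if "rollback plan" in output else "denied"

# Hypothetical producer: revise the plan after a denial.
def produce(feedback):
    base = "Deploy patch v2 to staging"
    return base + " with rollback plan" if feedback == "denied" else base

print(asyncio.run(run_with_gate(produce, strict_gate)))
# first attempt is denied; the revised output is approved
```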