The langflow-attesta package provides the Attesta Approval component for Langflow: a Python component that evaluates AI agent actions for risk and returns a structured Data object with the verdict, risk score, and audit information.
Package: langflow-attesta | Language: Python | Dependencies: attesta >=0.1.0 | Runtime: Langflow component system (lfx.custom.custom_component.component.Component)

Installation
The Attesta Approval component can be installed in two ways: as a contribution to the Langflow source tree, or as a custom component loaded at runtime.

- Langflow Source Contribution
- Custom Component (LANGFLOW_COMPONENTS_PATH)
Follow the Langflow contributing components guide:
Component Configuration
The component exposes four inputs in the Langflow canvas:

| Input | Type | Default | Required | Advanced | Description |
|---|---|---|---|---|---|
| Function Name | MessageTextInput | — | Yes | No | Name of the action being gated (e.g., send_email, delete_record). |
| Risk Level | DropdownInput | auto | No | No | Risk level override: auto, low, medium, high, critical. |
| Action Arguments | MessageTextInput | {} | No | No | JSON string of arguments to evaluate (e.g., {"to": "user@example.com"}). |
| Risk Hints | MessageTextInput | {} | No | Yes | JSON string of risk hints (e.g., {"destructive": true, "pii": true}). |
The component exposes a single output:

| Output | Display Name | Method | Description |
|---|---|---|---|
| result | Approval Result | evaluate_gate | Structured Data object with verdict, risk score, and audit metadata. |
How It Works
Parse Inputs
The component parses Action Arguments and Risk Hints from JSON strings into Python dictionaries using the _parse_json() helper. Invalid JSON is replaced with an empty dict, and a warning is logged via self.log().

Configure Risk Override
If Risk Level is set to anything other than auto, the component creates a RiskLevel enum value (e.g., RiskLevel.HIGH) and passes it as risk_override to the Attesta instance. When set to auto, risk_override is None and the built-in scorer determines the level.

Evaluate
The ActionContext is passed to attesta.evaluate() (async). The Attesta pipeline runs risk scoring, challenge selection, and verification.

Output Format
The Approval Result output is a Langflow Data object with the following fields:
| Field | Type | Description |
|---|---|---|
| verdict | string | approved, denied, modified, timed_out, or escalated |
| risk_score | float | Numeric risk score between 0 and 1 |
| risk_level | string | low, medium, high, or critical |
| denied | bool | true if verdict is denied, timed_out, or escalated |
| audit_entry_id | string | Unique audit log entry ID |
| review_time_seconds | float | Time spent in human review |
| function_name | string | Echo of the configured function name |
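As an illustration of the fields above, a denied result might look like the following (all values are hypothetical, not actual output), with the denied flag derived from the verdict:

```python
# Hypothetical Approval Result payload; every value here is illustrative.
result = {
    "verdict": "denied",
    "risk_score": 0.87,
    "risk_level": "high",
    "denied": True,                   # derived from the verdict, see below
    "audit_entry_id": "audit-0001",   # illustrative ID, not a real entry
    "review_time_seconds": 14.2,
    "function_name": "send_email",
}

# The convenience flag follows directly from the verdict:
assert result["denied"] == (result["verdict"] in {"denied", "timed_out", "escalated"})
```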
The denied field is a convenience boolean that is True when the verdict is denied, timed_out, or escalated. Use this for simple conditional routing in your pipeline.

Pipeline Examples
Example: Gate a Deployment Action
- Open a pipeline in Langflow.
- Drag the Attesta Approval component onto the canvas.
- Configure:
  - Function Name: deploy_service
  - Risk Level: high
  - Action Arguments: {"service": "api-gateway", "version": "2.1.0"}
  - Risk Hints: {"production": true}
- Connect the Approval Result output to a conditional component or downstream tool.
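The two JSON fields configured above must parse as objects; a quick sanity check using only the example strings from this walkthrough (plain Python, no Langflow required):

```python
import json

# Example configuration values from the steps above.
action_arguments = '{"service": "api-gateway", "version": "2.1.0"}'
risk_hints = '{"production": true}'

# Both fields must be JSON objects, mirroring what the component expects.
args = json.loads(action_arguments)
hints = json.loads(risk_hints)

assert isinstance(args, dict) and isinstance(hints, dict)
assert args["service"] == "api-gateway"
assert hints["production"] is True
```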
Example: Dynamic Arguments from Upstream
Connect the output of an upstream component (e.g., a Text Input or LLM) to the Action Arguments field. The upstream text (e.g., {"to": "ceo@company.com", "body": "..."}) is passed as the action arguments for risk evaluation.
Pipeline Patterns
Pattern: Conditional Execution
Use the output Data object's denied field in a conditional component:
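A minimal sketch of that check, assuming the result is available as a dict-like payload (the route function and its branch names are illustrative, not part of the component):

```python
def route(result: dict) -> str:
    """Route downstream based on the Approval Result's denied flag."""
    if result.get("denied"):
        return "blocked"   # stop the pipeline or escalate to a human
    return "proceed"       # continue to the downstream tool

# denied covers denied, timed_out, and escalated verdicts alike.
assert route({"verdict": "approved", "denied": False}) == "proceed"
assert route({"verdict": "timed_out", "denied": True}) == "blocked"
```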
Pattern: Chained Evaluation
Evaluate multiple actions in sequence, each with appropriate risk levels.

Pattern: High-Risk Action with Hints
For actions that are inherently dangerous, set explicit risk hints:

- Set Function Name to drop_database_table.
- Set Risk Level to critical.
- Set Risk Hints to:
- The risk scorer will combine the destructive verb, the critical override, and the hints to produce a very high risk score, triggering multi-party approval.
JSON Parsing Behavior
Both Action Arguments and Risk Hints accept JSON strings. The _parse_json() helper handles edge cases gracefully:
| Input | Parsed Result | Behavior |
|---|---|---|
{"key": "value"} | {"key": "value"} | Normal parsing |
"" or empty | {} | Empty dict |
"not valid json{{" | {} | Warning logged via self.log(), empty dict used |
[1, 2, 3] (array) | {} | Non-dict JSON is treated as empty |
null or None | {} | Empty dict |
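A plausible implementation of the behavior in this table (a sketch, not the component's actual source; the logging callback stands in for self.log):

```python
import json

def parse_json(value, log=print):
    """Parse a JSON string into a dict, falling back to {} on any edge case."""
    if not value:                      # "" or None -> empty dict
        return {}
    try:
        parsed = json.loads(value)
    except (json.JSONDecodeError, TypeError):
        log(f"Invalid JSON input, using empty dict: {value!r}")
        return {}
    # Arrays, null, and scalars are valid JSON but not dicts -> empty dict.
    return parsed if isinstance(parsed, dict) else {}

assert parse_json('{"key": "value"}') == {"key": "value"}
assert parse_json("") == {}
assert parse_json("not valid json{{") == {}
assert parse_json("[1, 2, 3]") == {}
assert parse_json("null") == {}
```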
Source Code Reference
The component extends Langflow's Component base class. The evaluate_gate method is async and handles the full Attesta pipeline; the _parse_json private method provides safe JSON parsing with logging.
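The overall input-handling flow can be sketched in dependency-free Python (a simplified sketch only: the real component subclasses the lfx Component base class and calls the async attesta API, neither of which is modeled here; build_gate_request is a hypothetical name):

```python
import json

RISK_LEVELS = {"low", "medium", "high", "critical"}

def build_gate_request(function_name, risk_level="auto",
                       action_arguments="{}", risk_hints="{}"):
    """Mirror the component's input handling: parse the JSON fields
    and resolve the optional risk override before evaluation."""
    def parse(value):
        try:
            parsed = json.loads(value) if value else {}
        except json.JSONDecodeError:
            parsed = {}                 # invalid JSON -> empty dict
        return parsed if isinstance(parsed, dict) else {}

    return {
        "function_name": function_name,
        "arguments": parse(action_arguments),
        "hints": parse(risk_hints),
        # "auto" means no override: the built-in scorer decides the level.
        "risk_override": risk_level if risk_level in RISK_LEVELS else None,
    }

req = build_gate_request("deploy_service", "high",
                         '{"service": "api-gateway"}', '{"production": true}')
assert req["risk_override"] == "high"
assert req["arguments"]["service"] == "api-gateway"
```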
Related Pages
- n8n Integration: Workflow node for n8n data pipelines
- Flowise Integration: Tool component for Flowise chatflows
- Dify Integration: Plugin tool for the Dify platform
- No-Code Overview: Compare all no-code platforms