The langflow-attesta package provides the Attesta Approval component for Langflow. It is a Python component that evaluates AI agent actions for risk and returns a structured Data object with the verdict, risk score, and audit information.
Package: langflow-attesta | Language: Python | Dependencies: attesta >=0.1.0 | Runtime: Langflow component system (lfx.custom.custom_component.component.Component)

Installation

The Attesta Approval component can be installed in two ways: as a contribution to the Langflow source tree, or as a custom component loaded at runtime. The steps below cover the source-tree route; follow the Langflow contributing components guide:
1. Copy the component file

cp attesta_gate.py /path/to/langflow/src/lfx/src/lfx/components/tools/attesta_gate.py
2. Register in __init__.py

Add the import to the Tools category init file:
# In src/lfx/src/lfx/components/tools/__init__.py
from .attesta_gate import AttestaGate
3. Add the dependency

Add attesta to the Langflow pyproject.toml:
[project.optional-dependencies]
attesta = ["attesta>=0.1.0"]
4. Restart Langflow

langflow run

After the restart, the Attesta Approval component appears in the Tools category on the canvas.

Component Configuration

The component exposes four inputs in the Langflow canvas:
Input            | Type             | Default | Required | Advanced | Description
-----------------|------------------|---------|----------|----------|------------
Function Name    | MessageTextInput |         | Yes      | No       | Name of the action being gated (e.g., send_email, delete_record).
Risk Level       | DropdownInput    | auto    | No       | No       | Risk level override: auto, low, medium, high, critical.
Action Arguments | MessageTextInput | {}      | No       | No       | JSON string of arguments to evaluate (e.g., {"to": "user@example.com"}).
Risk Hints       | MessageTextInput | {}      | No       | Yes      | JSON string of risk hints (e.g., {"destructive": true, "pii": true}).
The component has one output:
Output | Display Name    | Method        | Description
-------|-----------------|---------------|------------
result | Approval Result | evaluate_gate | Structured Data object with verdict, risk score, and audit metadata.
The Risk Hints input is marked as advanced=True, meaning it is hidden by default in the Langflow UI. Click “Show Advanced” on the component to reveal it. For most use cases, the automatic risk scorer combined with the Function Name provides sufficient accuracy.

How It Works

1. Parse Inputs

The component parses Action Arguments and Risk Hints from JSON strings into Python dictionaries using the _parse_json() helper. Invalid JSON does not raise an error: it is replaced with an empty dict, and a warning is logged via self.log().
2. Configure Risk Override

If Risk Level is set to anything other than auto, the component creates a RiskLevel enum value (e.g., RiskLevel.HIGH) and passes it as risk_override to the Attesta instance. When set to auto, risk_override is None and the built-in scorer determines the level.
3. Build ActionContext

The component creates an ActionContext:
ActionContext(
    function_name="send_email",
    kwargs={"to": "user@example.com"},
    hints={"pii": True},
    environment="production",
    metadata={"source": "langflow"},
)
4. Evaluate

The ActionContext is passed to attesta.evaluate() (async). The Attesta pipeline runs risk scoring, challenge selection, and verification.
5. Return Data

The component returns a Langflow Data object containing the full evaluation result, including review_time_seconds and the echoed function_name.
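The shape of that payload can be sketched in plain Python. `build_result_payload` is a hypothetical helper, and the input dict stands in for the attesta result object; the real component wraps these fields in a Langflow Data object.

```python
# Verdicts that set the convenience "denied" flag, per the output format below.
DENYING_VERDICTS = {"denied", "timed_out", "escalated"}


def build_result_payload(result: dict, function_name: str) -> dict:
    """Assemble the fields of the Approval Result payload.

    `result` is assumed to be the evaluation outcome as a plain dict;
    the real component reads these fields from the attesta result object.
    """
    verdict = result["verdict"]
    return {
        "verdict": verdict,
        "risk_score": result["risk_score"],
        "risk_level": result["risk_level"],
        "denied": verdict in DENYING_VERDICTS,
        "audit_entry_id": result["audit_entry_id"],
        "review_time_seconds": result["review_time_seconds"],
        "function_name": function_name,  # echoed back for downstream routing
    }
```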

Output Format

The Approval Result output is a Langflow Data object with the following fields:
Field               | Type   | Description
--------------------|--------|------------
verdict             | string | approved, denied, modified, timed_out, or escalated
risk_score          | float  | Numeric risk score between 0 and 1
risk_level          | string | low, medium, high, or critical
denied              | bool   | true if verdict is denied, timed_out, or escalated
audit_entry_id      | string | Unique audit log entry ID
review_time_seconds | float  | Time spent in human review
function_name       | string | Echo of the configured function name
{
  "verdict": "approved",
  "risk_score": 0.2,
  "risk_level": "low",
  "denied": false,
  "audit_entry_id": "audit-abc123",
  "review_time_seconds": 0.5,
  "function_name": "send_email"
}
The denied field is a convenience boolean that is True when the verdict is denied, timed_out, or escalated. Use this for simple conditional routing in your pipeline.
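A hypothetical router illustrating that convention (in a real pipeline this logic would be wired into a Langflow conditional component):

```python
def route_on_approval(data: dict) -> str:
    """Pick the downstream branch from an Approval Result payload.

    Fails closed: a missing or truthy "denied" field routes to the
    denial branch rather than executing the action.
    """
    if data.get("denied", True):
        return "notify_user"
    return "execute_action"
```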

Pipeline Examples

Example: Gate a Deployment Action

  1. Open a pipeline in Langflow.
  2. Drag the Attesta Approval component onto the canvas.
  3. Configure:
    • Function Name: deploy_service
    • Risk Level: high
    • Action Arguments: {"service": "api-gateway", "version": "2.1.0"}
    • Risk Hints: {"production": true}
  4. Connect the Approval Result output to a conditional component or downstream tool.

Example: Dynamic Arguments from Upstream

Connect the output of an upstream component (e.g., a Text Input or LLM) to the Action Arguments field:
[User Input] --> [LLM] --> [Parse Output] --> [Attesta Approval] --> [Execute Tool]
                                               function_name: "send_email"
                                               action_args: {{parse_output.text}}
The parsed LLM output (e.g., {"to": "ceo@company.com", "body": "..."}) is passed as the action arguments for risk evaluation.
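LLM output often arrives wrapped in Markdown code fences or as non-JSON text, so a normalization step between the LLM and the gate is worth sketching. `extract_action_args` is a hypothetical helper, not part of the package:

```python
import json


def extract_action_args(llm_text: str) -> str:
    """Normalize raw LLM output into the JSON string expected by the
    Action Arguments input.

    Strips surrounding Markdown code fences, validates the JSON, and
    falls back to "{}" so the component's own parser never receives
    malformed input.
    """
    text = llm_text.strip()
    if text.startswith("```"):
        # Drop the opening fence (with optional language tag) and the closing fence.
        lines = [ln for ln in text.splitlines() if not ln.startswith("```")]
        text = "\n".join(lines).strip()
    try:
        parsed = json.loads(text)
    except json.JSONDecodeError:
        return "{}"
    # Only JSON objects are meaningful as action arguments.
    return json.dumps(parsed) if isinstance(parsed, dict) else "{}"
```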

Pipeline Patterns

Pattern: Conditional Execution

Use the output Data object’s denied field in a conditional component:
[Attesta Approval] --> [Conditional: data.denied == false] --> [Execute Action]
                                        |
                                        --> [Notify User: "Action denied"]

Pattern: Chained Evaluation

Evaluate multiple actions in sequence, each with appropriate risk levels:
[Data Fetch]     --> [Attesta: read_data, low]     --> [Process Data]
[Process Data]   --> [Attesta: transform, medium]  --> [Write Results]
[Write Results]  --> [Attesta: deploy, critical]   --> [Deploy]

Pattern: High-Risk Action with Hints

For actions that are inherently dangerous, set explicit risk hints:
  1. Set Function Name to drop_database_table.
  2. Set Risk Level to critical.
  3. Set Risk Hints to:
    {"destructive": true, "irreversible": true, "production": true}
    
  4. The risk scorer will combine the destructive verb, the critical override, and the hints to produce a very high risk score, triggering multi-party approval.

JSON Parsing Behavior

Both Action Arguments and Risk Hints accept JSON strings. The _parse_json() helper handles edge cases gracefully:
Input              | Parsed Result    | Behavior
-------------------|------------------|---------
{"key": "value"}   | {"key": "value"} | Normal parsing
"" or empty        | {}               | Empty dict
"not valid json{{" | {}               | Warning logged via self.log(); empty dict used
[1, 2, 3] (array)  | {}               | Non-dict JSON is treated as an empty dict
null or None       | {}               | Empty dict
Invalid JSON does not stop the pipeline. The component logs a warning ("Warning: invalid JSON in {field_name}, using empty dict") but does not fail. This means risk scoring may be less accurate if arguments are malformed. Check Langflow’s logs if you suspect a parsing issue.
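The behavior in the table above can be sketched in plain Python. `parse_json_input` is a stand-in for the private _parse_json method; the real helper logs through self.log() rather than a passed-in callable.

```python
import json


def parse_json_input(raw: str, field_name: str, log=print) -> dict:
    """Sketch of the component's _parse_json behavior.

    Empty, null, or None-like input yields an empty dict; invalid JSON
    logs a warning and yields an empty dict; non-dict JSON (arrays,
    numbers, strings) is also treated as empty.
    """
    if not raw or raw.strip() in ("", "null", "None"):
        return {}
    try:
        parsed = json.loads(raw)
    except json.JSONDecodeError:
        log(f"Warning: invalid JSON in {field_name}, using empty dict")
        return {}
    return parsed if isinstance(parsed, dict) else {}
```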

Source Code Reference

The component extends Langflow’s Component base class:
from lfx.custom.custom_component.component import Component
from lfx.io import DropdownInput, MessageTextInput, Output
from lfx.schema import Data
from attesta.core.gate import Attesta
from attesta.core.types import ActionContext, RiskLevel, Verdict


class AttestaGate(Component):
    display_name = "Attesta Approval"
    description = "Human-in-the-loop approval that evaluates AI agent actions for risk before execution"
    documentation = "https://attesta.dev"
    icon = "shield-check"
    name = "attesta_gate"
The evaluate_gate method is async and handles the full Attesta pipeline. The _parse_json private method provides safe JSON parsing with logging.

Related Integrations

n8n Integration: workflow node for n8n data pipelines.
Flowise Integration: tool component for Flowise chatflows.
Dify Integration: plugin tool for the Dify platform.
No-Code Overview: compare all no-code platforms.