The dify-attesta package provides an Attesta plugin for the Dify platform. It adds an Attesta Approval tool that evaluates AI agent actions for risk through the full Attesta pipeline, with provider-level credentials for configuring the risk threshold.
Package:
dify-attesta v0.0.1 | Language: Python | Type: Dify plugin | Dependencies: attesta, dify_plugin
Installation
Build the plugin archive
From the dify-attesta directory, build the .difypkg file following the Dify plugin packaging guide.
Configure the provider
After installation, go to Tools > Attesta > Authorize and configure the optional Risk Threshold credential (see Provider Credentials below).
Provider Credentials
The Attesta provider (AttestaProvider) defines one optional credential that applies to all Attesta tools in the workspace:
| Credential | Type | Required | Default | Description |
|---|---|---|---|---|
| risk_threshold | Number | No | 0.5 | Default risk threshold (0–1). Actions scoring above this value are flagged for review. |
The _validate_credentials() method validates that risk_threshold is a number between 0 and 1. Values outside this range raise a ToolProviderCredentialValidationError.
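The validation logic can be sketched as follows. The exception class is stubbed locally so the snippet is self-contained (in the plugin it comes from the dify_plugin SDK), and the helper name is illustrative, not the plugin's actual code:

```python
# Sketch of the provider's threshold validation. The exception class is
# stubbed here for self-containment; in the plugin it is imported from
# the dify_plugin SDK.
class ToolProviderCredentialValidationError(Exception):
    pass


def validate_risk_threshold(credentials: dict) -> None:
    raw = credentials.get("risk_threshold")
    if raw is None:
        return  # optional credential; the 0.5 default applies
    try:
        value = float(raw)
    except (TypeError, ValueError):
        raise ToolProviderCredentialValidationError(
            "risk_threshold must be a number"
        )
    if not 0 <= value <= 1:
        raise ToolProviderCredentialValidationError(
            "risk_threshold must be between 0 and 1"
        )
```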
To configure:
- Go to Tools > Attesta > Authorize
- Set the Risk Threshold value (placeholder: 0.5)
- Save
Tool Configuration
The Attesta Approval tool (AttestaGateTool) has three parameters, defined in tools/attesta_gate.yaml:
| Parameter | Type | Required | Form | Description |
|---|---|---|---|---|
| function_name | String | Yes | LLM | Name of the action being evaluated (e.g., send_email, delete_record). The LLM provides this value based on context. |
| risk_level | Select | No | Form | Override the risk level. Options: Auto (Score-based), Low, Medium, High, Critical. Defaults to Auto. |
| action_args | String | No | LLM | JSON string of arguments for the action being evaluated. The LLM provides this based on the action context. |
Parameters with form type LLM are filled by the language model at runtime based on the conversation context. Parameters with form type Form are configured by the user in the tool settings. This means function_name and action_args are dynamically determined by the LLM, while risk_level is set once during configuration.
The tool describes itself as a human-in-the-loop approval tool that evaluates AI agent actions for risk before execution; it is used to check whether a proposed action should be allowed.
How It Works
LLM Invokes the Tool
During a conversation or workflow execution, the LLM decides to call the Attesta Approval tool. It provides the function_name and action_args parameters based on the action it wants to evaluate.
Parse Arguments
The tool parses action_args. If the value is a string, it attempts JSON parsing. Failed parses are wrapped as {"raw_input": <value>}. If the value is already a dict (from a workflow node), it is used directly.
Apply Credentials
The risk_threshold from the provider credentials is read via self.runtime.credentials.get("risk_threshold") and added to the risk hints as {"threshold": <value>}.
Build ActionContext
The tool creates an ActionContext with:
- function_name: from the LLM parameter
- kwargs: from the parsed action arguments
- hints: risk hints including the threshold from credentials
- environment: "production"
- metadata: {"source": "dify"}
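The parsing and context-building steps above can be sketched as follows. ActionContext is modeled as a plain dict here, and build_action_context is an illustrative helper, not the plugin's actual code:

```python
import json


# Illustrative sketch of the parse-and-build steps. The real
# ActionContext class comes from the attesta package.
def build_action_context(function_name, action_args, risk_threshold=0.5):
    if isinstance(action_args, str):
        try:
            kwargs = json.loads(action_args)
        except json.JSONDecodeError:
            # Failed parses are wrapped rather than rejected.
            kwargs = {"raw_input": action_args}
    else:
        # Workflow nodes may pass a dict directly; use it as-is.
        kwargs = action_args or {}
    return {
        "function_name": function_name,
        "kwargs": kwargs,
        "hints": {"threshold": risk_threshold},
        "environment": "production",
        "metadata": {"source": "dify"},
    }
```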
Evaluate
The ActionContext is passed to attesta.evaluate(). The tool handles both async and sync execution contexts (see Async Execution below).
Output Format
The tool yields a JSON message with the following fields:
| Field | Type | Description |
|---|---|---|
| verdict | string | approved, denied, modified, timed_out, or escalated |
| risk_score | float | Numeric risk score between 0 and 1 |
| risk_level | string | low, medium, high, or critical |
| denied | bool | true if verdict is denied, timed_out, or escalated |
| audit_entry_id | string | Unique audit log entry ID |
| function_name | string | Echo of the evaluated function name |
| message | string | Human-readable summary (e.g., "Action 'send_email' was approved (risk: low, score: 0.20)") |
The message field provides a human-readable summary that the LLM can relay to the user. The denied boolean provides a simple check for downstream branching in workflows.
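For illustration, an approved result might look like the following. Field names follow the table above; the score and audit ID values here are made up:

```python
# Example output for an approved action. Values are illustrative;
# real audit IDs are generated per log entry.
approved_output = {
    "verdict": "approved",
    "risk_score": 0.20,
    "risk_level": "low",
    "denied": False,
    "audit_entry_id": "example-audit-id",
    "function_name": "send_email",
    "message": "Action 'send_email' was approved (risk: low, score: 0.20)",
}
```

A denied result would carry verdict "denied" and denied set to true, which is what an IF/ELSE workflow node can branch on.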
Usage Examples
Example: Chatbot with Gated Actions
In a Dify chatbot app, add the Attesta Approval tool so the LLM can check approval before performing actions:
- Create a new Chatbot app in Dify.
- Under Tools, add the Attesta Approval tool.
- Set Risk Level to Auto (Score-based).
- In the system prompt, instruct the LLM to call the Attesta Approval tool before performing any risky action.
- The LLM will call the tool with the function name and arguments before executing any risky action.
Example: Workflow with Conditional Branching
In a Dify workflow app, use the Attesta tool output to branch execution:
- Create a Workflow app.
- Add an LLM node that determines the action to take.
- Add the Attesta Approval tool node.
- Add an IF/ELSE node that checks the denied field from the Attesta output.
- Route to the action node on approval, or to a denial response on denial.
Example: Multi-Tool Evaluation
The LLM can call the Attesta tool multiple times in one conversation to evaluate different actions at different risk levels.
Async Execution
The Dify tool uses a synchronous _invoke() generator method (as required by the dify_plugin.Tool base class), but Attesta's evaluate() is async. The tool handles this automatically:
- No event loop running: uses asyncio.run() to execute the evaluation synchronously.
- Event loop already running (common in Dify's async runtime): creates a task on the existing loop and waits for the result via a concurrent.futures.Future with a 30-second timeout.
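A minimal sketch of that bridge, assuming the evaluation coroutine is passed in. run_evaluation and evaluate_fn are illustrative names (evaluate_fn stands in for attesta.evaluate), and the running-loop branch assumes the synchronous caller is not on the loop's own thread:

```python
import asyncio


# Illustrative sync-to-async bridge; not the plugin's actual method.
def run_evaluation(evaluate_fn, context, timeout: float = 30.0):
    try:
        loop = asyncio.get_running_loop()
    except RuntimeError:
        # No event loop running: safe to block with asyncio.run().
        return asyncio.run(evaluate_fn(context))
    # A loop is already running: schedule the coroutine on it and wait
    # on the resulting concurrent.futures.Future, with a timeout.
    # (Assumes this synchronous caller is not on the loop's thread.)
    future = asyncio.run_coroutine_threadsafe(evaluate_fn(context), loop)
    return future.result(timeout=timeout)
```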
Manifest Details
The plugin manifest (manifest.yaml) declares the plugin metadata and resource requirements.
Related Pages
- n8n Integration: Workflow node for n8n data pipelines
- Flowise Integration: Tool component for Flowise chatflows
- Langflow Integration: Python component for Langflow pipelines
- No-Code Overview: Compare all no-code platforms