Implement the RiskScorer protocol to create domain-specific, ML-based, or rule-based risk scoring strategies
Attesta’s risk scoring system is fully pluggable. Any class that implements the RiskScorer protocol — a score(ctx) method and a name property — can be used wherever a scorer is expected. No inheritance or registration is required.

This guide walks through building custom scorers from scratch, testing them, and wiring them into your Attesta configuration.
A rule-based scorer evaluates the ActionContext against a set of predefined rules. This is the most common approach for organizations with well-defined security policies.
1. Define Your Rules
Start by mapping your organization’s security policies to scoring rules. Each rule inspects a specific aspect of the action context and contributes to the total score.
```python
from dataclasses import dataclass


@dataclass
class ScoringRule:
    """A single rule that contributes to the risk score."""

    name: str
    weight: float  # 0.0 - 1.0
    description: str

    def evaluate(self, ctx) -> float:
        """Return a raw score in [0.0, 1.0] for this rule."""
        raise NotImplementedError
```
2. Implement Rule Logic
Create concrete rule classes for each security concern.
```python
from attesta import ActionContext


class EnvironmentRule(ScoringRule):
    """Production actions are inherently riskier."""

    def evaluate(self, ctx: ActionContext) -> float:
        if ctx.environment == "production":
            return 0.8
        if ctx.environment == "staging":
            return 0.4
        return 0.1


class DataClassificationRule(ScoringRule):
    """Score based on data sensitivity hints."""

    def evaluate(self, ctx: ActionContext) -> float:
        classification = ctx.hints.get("data_classification", "public")
        scores = {
            "public": 0.1,
            "internal": 0.3,
            "confidential": 0.6,
            "restricted": 0.9,
        }
        return scores.get(classification, 0.5)


class BlastRadiusRule(ScoringRule):
    """Score based on how many systems are affected."""

    def evaluate(self, ctx: ActionContext) -> float:
        affected = ctx.hints.get("affected_systems", 1)
        if affected >= 10:
            return 0.9
        if affected >= 5:
            return 0.6
        if affected >= 2:
            return 0.3
        return 0.1
```
3. Assemble the Scorer
Combine the rules into a scorer that satisfies the RiskScorer protocol.
```python
class PolicyRiskScorer:
    """Rule-based scorer driven by organizational security policies."""

    def __init__(self):
        self.rules = [
            EnvironmentRule(
                name="environment",
                weight=0.3,
                description="Production environment risk",
            ),
            DataClassificationRule(
                name="data_classification",
                weight=0.4,
                description="Data sensitivity classification",
            ),
            BlastRadiusRule(
                name="blast_radius",
                weight=0.3,
                description="Number of affected systems",
            ),
        ]

    @property
    def name(self) -> str:
        return "policy"

    def score(self, ctx) -> float:
        total_weight = sum(r.weight for r in self.rules)
        weighted_sum = sum(r.evaluate(ctx) * r.weight for r in self.rules)
        return max(0.0, min(1.0, weighted_sum / total_weight))
```
4. Register and Use
Pass the scorer to @gate or the Attesta constructor.
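A minimal sketch of the wiring pattern. The `risk_scorer=` parameter name is an assumption about Attesta's API, and a stand-in `gate` decorator plus a stub scorer are defined inline so the sketch runs without Attesta installed; in real code the decorator comes from `attesta`.

```python
# Illustrative wiring only. In real code: `from attesta import gate`.
# The decorator below is a minimal stand-in with an assumed signature,
# defined here so the sketch is self-contained.

def gate(risk_scorer, threshold: float = 0.7):
    """Stand-in for attesta's @gate: block calls scored above threshold."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            ctx = {"function_name": fn.__name__}  # simplified stand-in context
            if risk_scorer.score(ctx) > threshold:
                raise PermissionError(f"blocked by scorer '{risk_scorer.name}'")
            return fn(*args, **kwargs)
        return wrapper
    return decorator


class StubScorer:
    """Stand-in with the same protocol shape as PolicyRiskScorer."""

    @property
    def name(self) -> str:
        return "stub"

    def score(self, ctx) -> float:
        return 0.1


@gate(risk_scorer=StubScorer())
def deploy_service(service: str) -> str:
    return f"deployed {service}"
```

With the real library you would pass `PolicyRiskScorer()` in the same position, or supply it once via the `Attesta(...)` constructor so every gated action uses it.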
For organizations with historical approval data, a machine-learning scorer can learn risk patterns from past decisions.
1. Prepare the Feature Extractor
Extract features from the ActionContext that your model can consume.
```python
import hashlib

from attesta import ActionContext


def extract_features(ctx: ActionContext) -> dict:
    """Extract ML features from an action context."""
    fn_tokens = ctx.function_name.lower().split("_")
    arg_str = " ".join(str(a) for a in ctx.args)
    return {
        "has_destructive_verb": any(
            t in {"delete", "drop", "destroy", "purge"} for t in fn_tokens
        ),
        "has_mutating_verb": any(
            t in {"write", "update", "create", "send", "deploy"}
            for t in fn_tokens
        ),
        "is_production": ctx.environment == "production",
        "arg_length": len(arg_str),
        "has_pii_hint": bool(ctx.hints.get("pii")),
        "has_financial_hint": bool(ctx.hints.get("financial")),
        "function_name_hash": int(
            hashlib.md5(ctx.function_name.encode()).hexdigest()[:8], 16
        ),
    }
```
2. Build the Scorer Class
Wrap your trained model in a class that satisfies the RiskScorer protocol.
```python
import pickle
from pathlib import Path

from attesta import ActionContext


class MLRiskScorer:
    """Risk scorer backed by a trained scikit-learn model."""

    def __init__(self, model_path: str | Path):
        with open(model_path, "rb") as f:
            self._model = pickle.load(f)
        self._feature_names = [
            "has_destructive_verb",
            "has_mutating_verb",
            "is_production",
            "arg_length",
            "has_pii_hint",
            "has_financial_hint",
            "function_name_hash",
        ]

    @property
    def name(self) -> str:
        return "ml-classifier"

    def score(self, ctx: ActionContext) -> float:
        features = extract_features(ctx)
        feature_vector = [
            [features[name] for name in self._feature_names]
        ]
        # predict_proba returns [[p_low, p_high]]
        proba = self._model.predict_proba(feature_vector)[0]
        risk_score = proba[1]  # probability of "high risk" class
        return max(0.0, min(1.0, risk_score))
```
3. Add a Fallback
ML models can fail. Wrap the scorer with a fallback for robustness.
```python
from attesta.core.risk import DefaultRiskScorer


class RobustMLScorer:
    """ML scorer with automatic fallback to the default heuristic."""

    def __init__(self, model_path: str):
        # Load lazily: a missing or corrupt model file should trigger
        # the fallback at scoring time, not crash at construction.
        self._model_path = model_path
        self._ml: MLRiskScorer | None = None
        self._fallback = DefaultRiskScorer()

    @property
    def name(self) -> str:
        return "ml-with-fallback"

    def score(self, ctx) -> float:
        try:
            if self._ml is None:
                self._ml = MLRiskScorer(self._model_path)
            return self._ml.score(ctx)
        except Exception:
            # Log the error in production
            return self._fallback.score(ctx)
```
4. Use with CompositeRiskScorer
Blend ML predictions with heuristic scores for defense in depth.
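The mechanics of blending can be sketched as follows. Attesta's own CompositeRiskScorer handles this for you; the standalone class below only illustrates the weighted-average idea, and its `(scorer, weight)` constructor is an assumption, not CompositeRiskScorer's actual API.

```python
class WeightedBlend:
    """Illustrative weighted average of several scorers.

    Not attesta's CompositeRiskScorer: a self-contained sketch of the
    blending idea, with an assumed (scorer, weight)-pairs constructor.
    """

    def __init__(self, scorers):
        self._scorers = list(scorers)  # list of (scorer, weight) pairs

    @property
    def name(self) -> str:
        return "blend(" + ",".join(s.name for s, _ in self._scorers) + ")"

    def score(self, ctx) -> float:
        # Weighted average, clamped to [0.0, 1.0] like the other scorers.
        total = sum(w for _, w in self._scorers)
        blended = sum(s.score(ctx) * w for s, w in self._scorers) / total
        return max(0.0, min(1.0, blended))
```

Weighting the ML scorer at 0.6 against the default heuristic at 0.4, for example, lets the heuristic pull obviously dangerous actions upward even when the model is underconfident.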
Never deploy an ML scorer without a fallback. If the model fails to load or predict, the fallback ensures that actions are still scored rather than silently passing through.
Always test custom scorers in isolation before deploying them.
```python
from attesta import ActionContext, RiskScorer


def test_policy_scorer_protocol():
    """Verify the scorer satisfies the RiskScorer protocol."""
    scorer = PolicyRiskScorer()
    assert isinstance(scorer, RiskScorer)
    assert scorer.name == "policy"


def test_production_scores_higher():
    scorer = PolicyRiskScorer()
    dev_ctx = ActionContext(
        function_name="deploy",
        environment="development",
    )
    prod_ctx = ActionContext(
        function_name="deploy",
        environment="production",
    )
    assert scorer.score(prod_ctx) > scorer.score(dev_ctx)


def test_score_is_bounded():
    """Scores must always be in [0.0, 1.0]."""
    scorer = PolicyRiskScorer()
    ctx = ActionContext(
        function_name="extreme_action",
        environment="production",
        hints={
            "data_classification": "restricted",
            "affected_systems": 100,
        },
    )
    score = scorer.score(ctx)
    assert 0.0 <= score <= 1.0


def test_ml_scorer_fallback():
    """ML scorer should fall back to heuristic on failure."""
    scorer = RobustMLScorer("nonexistent_model.pkl")
    ctx = ActionContext(function_name="deploy")
    # Should not raise, should return a valid score
    score = scorer.score(ctx)
    assert 0.0 <= score <= 1.0
```
Use DefaultRiskScorer.reset_novelty() in tests to clear the internal call counter between test cases. The novelty factor is stateful and can cause score drift if not reset.