Agent Traceability

Kōdo is the first programming language with built-in support for tracking AI agent authorship and enforcing trust policies. This enables organizations to maintain accountability for AI-generated code.

Annotations

@authored_by

Declares who wrote a function — human or AI agent:

@authored_by(agent: "claude")
fn ai_generated() -> Int {
    return 42
}

@confidence

Declares how confident the author is in the correctness of the code, on a scale from 0.0 to 1.0:

@confidence(0.95)
fn well_tested() -> Int {
    return 42
}

@reviewed_by

Declares that a human has reviewed the code:

@reviewed_by(human: "alice")
fn human_approved() -> Int {
    return 42
}

@security_sensitive

Marks a function as security-sensitive, requiring formal contracts:

@security_sensitive
fn validate_input(value: Int) -> Bool
    requires { value > 0 }
{
    return true
}

Note: Contract expressions currently support integer and boolean comparisons. String comparisons in requires/ensures are not yet supported.

Trust Policies

Low Confidence Review (E0260)

Functions with @confidence(X) where X < 0.8 must have @reviewed_by(human: "..."):

// ERROR: @confidence(0.5) < 0.8 without review
@confidence(0.5)
fn risky() -> Int { return 42 }

// OK: low confidence but reviewed
@confidence(0.5)
@reviewed_by(human: "alice")
fn reviewed_risky() -> Int { return 42 }

Security-Sensitive Contracts (E0262)

Functions marked @security_sensitive must have at least one requires or ensures clause.
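
For example (a sketch; transfer is a hypothetical function reusing the syntax shown above):

// ERROR E0262: security-sensitive function with no contract clause
@security_sensitive
fn transfer(amount: Int) -> Bool {
    return true
}

// OK: carries a requires clause
@security_sensitive
fn transfer_checked(amount: Int) -> Bool
    requires { amount > 0 }
{
    return true
}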

Confidence Propagation

Kōdo computes transitive confidence for each function. A function’s computed confidence is the minimum of:

  • Its own declared @confidence (defaults to 1.0 if not specified)
  • The computed confidence of every function it calls

This means confidence propagates through the call chain — a function is only as trustworthy as its least trustworthy dependency.
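
For example, a sketch with two hypothetical functions:

@confidence(0.7)
fn helper() -> Int { return 1 }

@confidence(0.95)
fn caller() -> Int {
    return helper()  // caller: declared 0.95, computed min(0.95, 0.7) = 0.7
}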

Module Confidence Threshold (E0261)

Set min_confidence in the meta block to enforce a minimum confidence level:

module secure_app {
    meta {
        purpose: "A security-critical application"
        min_confidence: "0.9"
    }

    @confidence(0.5)
    @reviewed_by(human: "alice")
    fn weak_link() -> Int { return 1 }

    fn main() -> Int {
        return weak_link()  // ERROR E0261: module confidence 0.50 < threshold 0.90
    }
}
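
One way to clear the error (a sketch): raise the weakest function's declared confidence once it has been reviewed, so the module minimum meets the threshold:

    @confidence(0.95)
    @reviewed_by(human: "alice")
    fn weak_link() -> Int { return 1 }

    fn main() -> Int {
        return weak_link()  // OK: module confidence 0.95 >= threshold 0.90
    }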

Confidence Report

Use kodoc confidence-report to inspect confidence across a module:

kodoc confidence-report my_module.ko

# Output:
# Confidence Report for module `my_module`
# ============================================================
# Overall confidence: 0.50
#
# Function                       Declared   Computed
# ------------------------------------------------------------
# weak_link                          0.50       0.50
# main                               1.00       0.50

For JSON output (suitable for AI agent consumption):

kodoc confidence-report my_module.ko --json

Preventing Reviewer Forgery

A fundamental challenge with AI-generated code is that an LLM could simply write @reviewed_by(human: "alice") into the source itself, bypassing trust enforcement. Kōdo addresses this with trust identity verification via kodo.toml.

Configuring the [trust] Section

Add a [trust] section to your project’s kodo.toml:

[trust]
# Names of known AI agents — never permitted as human reviewers.
known_agents = ["claude", "gpt-4", "copilot", "gemini"]

# Optional: allowlist of authorized human reviewers.
# When set, only listed names are accepted in @reviewed_by(human: "...").
human_reviewers = ["alice", "bob", "rfunix"]

Both fields are opt-in. A project without [trust] behaves exactly as before.

What Gets Rejected

E0263 — Agent claims human review: If a function has @reviewed_by(human: "claude") and "claude" is in known_agents, the compiler rejects it:

error[E0263]: function `process_payment`: reviewer `claude` is a known AI agent
              and cannot claim human review
  --> src/main.ko:5:1

The auto-fix changes human: to agent:.
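
The rewrite looks like this (a sketch; the function signature and body are hypothetical):

// Before: rejected with E0263
@reviewed_by(human: "claude")
fn process_payment() -> Int { return 0 }

// After the auto-fix: recorded as an agent review, which does not
// satisfy policies that require human review
@reviewed_by(agent: "claude")
fn process_payment() -> Int { return 0 }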

E0264 — Reviewer not in allowlist: If human_reviewers is set and the reviewer is not listed:

error[E0264]: function `process_payment`: reviewer `unknown` is not in the
              `human_reviewers` allowlist
  --> src/main.ko:5:1
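
The offending and corrected annotations look like this (a sketch; the function body is hypothetical, and the allowlist is the one configured above):

// ERROR E0264: "unknown" is not in human_reviewers
@reviewed_by(human: "unknown")
fn process_payment() -> Int { return 0 }

// OK: "alice" is listed in human_reviewers
@reviewed_by(human: "alice")
fn process_payment() -> Int { return 0 }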

Combining with kodoc audit

Use trust=verified in the audit policy for CI/CD gating:

kodoc audit src/main.ko \
  --policy "min_confidence=0.8,reviewed=all,trust=verified"

This exits with code 1 if any @reviewed_by(human: "X") annotation names a known agent.
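
A CI gate can key off that exit code; a minimal sketch for a POSIX shell step:

if kodoc audit src/main.ko \
    --policy "min_confidence=0.8,reviewed=all,trust=verified"; then
  echo "trust audit passed"
else
  echo "trust audit failed: blocking merge" >&2
  exit 1
fi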

Security Model

Mechanism                            Protection                                    Limitation
------------------------------------------------------------------------------------------------------------------------
known_agents allowlist               Rejects obvious LLM identity strings          Gameable if agent uses an unlisted name
human_reviewers allowlist            Restricts valid reviewer identities           Gameable if attacker controls kodo.toml
CI git-blame check                   Detects commits from bot accounts             Requires protected CI pipeline
Cryptographic signatures (roadmap)   Cryptographically binds review to a GPG key   Requires key management

The recommended production setup: configure both lists in kodo.toml, protect it via branch protection rules, and add a CI step that verifies @reviewed_by annotations were committed by a human account.
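
A sketch of such a CI step, assuming a POSIX shell with git grep and git blame available; the "[bot]" author pattern is an assumption you should adapt to your organization's accounts:

# Flag @reviewed_by annotations whose last committer looks like a bot.
violations=$(
  git grep -n '@reviewed_by' -- '*.ko' | while IFS=: read -r file line _rest; do
    author=$(git blame -L "$line,$line" --porcelain "$file" \
      | sed -n 's/^author //p' | head -n 1)
    case "$author" in
      *"[bot]"*) echo "$file:$line last touched by $author" ;;
    esac
  done
)
if [ -n "$violations" ]; then
  printf 'Bot-committed @reviewed_by annotations:\n%s\n' "$violations"
  exit 1
fi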