VERA: Verified Evidence & Reasoning Architecture
“A claim without evidence is an opinion. A chain of reasoning without verification is a story. VERA closes the gap between assertion and knowledge.”
What Is VERA?
VERA — Verified Evidence & Reasoning Architecture — is a structured framework for ensuring that every significant claim is traceable to evidence, every conclusion is the product of explicit reasoning, and every individual or organization retains sovereignty over the knowledge they generate and depend upon.
VERA is not a software system, though it can be implemented in one. It is not a certification standard, though it enables meaningful certification. It is not an AI system, though it is designed to work alongside AI. VERA is an architecture — a principled way of organizing how claims are made, how evidence is managed, how reasoning is constructed, and how verification is conducted.
Why VERA Exists
Three convergent forces make a framework like VERA necessary today.
The evidence crisis. Digital networks have made it trivially easy to circulate claims without evidence, to manufacture synthetic evidence, and to separate a conclusion from the chain of reasoning that produced it. Institutions that were once reliable filters — journalism, academia, government — are under structural pressure. The result is an epistemic environment in which it is genuinely difficult to know what to believe.
The AI amplification problem. Artificial intelligence systems can generate fluent, confident, detailed reasoning at massive scale. When that reasoning is correct, AI is transformative. When it is wrong — when it hallucinates, over-generalizes, or reflects training biases — the errors are just as fluent and just as confident. Without a framework for verification, AI-assisted reasoning is indistinguishable from AI-generated fiction.
The sovereignty gap. Individuals and organizations increasingly outsource their reasoning to opaque systems: search algorithms, recommendation engines, large language models, consultant reports. The conclusions they receive are disconnected from the evidence chains that would allow them to evaluate or challenge those conclusions. They become dependent on systems they cannot audit, reasoning they cannot trace, and evidence they cannot access.
VERA addresses all three forces. It provides the vocabulary, protocols, patterns, and maturity model needed to reason in a way that is verifiable, traceable, and sovereign.
Core Concepts at a Glance
| Concept | Definition |
|---|---|
| Claim | A structured assertion with explicit provenance and verification status |
| Evidence | Primary source material that supports or refutes a claim |
| Reasoning Chain | The explicit logical steps connecting evidence to a conclusion |
| Verification | The process of confirming that a claim is supported by its stated evidence |
| Pattern | A reusable, documented solution to a recurring reasoning or verification challenge |
| Domain | One of the six capability areas measured by the VERA Maturity Model |
| Sovereignty | Maintained authority over one’s data, reasoning processes, and conclusions |
| Maturity Level | One of five stages of VERA capability, from Aware to Sovereign |
How to Use This Documentation
This documentation is organized into four major sections.
Foundations establishes the philosophical and terminological bedrock of VERA. Read this first — especially the Lexicon, which defines terms precisely and prevents the drift in meaning that makes cross-team reasoning so difficult.
Maturity Model provides a five-level, six-domain model for assessing and growing VERA capability. Use it to assess your current state, set targets, and plan development work. The model is calibrated so that Level 3 (Practicing) represents a sustainable, high-value operating level for most teams.
Patterns Library is a catalog of reusable solutions to recurring reasoning and verification challenges. Patterns are not abstract — each one includes implementation steps, evidence requirements, and verification criteria.
Implementation provides practical guidance for adopting VERA: starting small, building habits, scaling to teams, and integrating with existing tools and workflows.
Design Principles
VERA was designed around several commitments that distinguish it from other epistemological frameworks.
Precision over completeness. VERA prefers a small number of precisely defined terms over a large vocabulary of loosely defined ones. The Lexicon is authoritative. When a term is defined there, it means exactly what the Lexicon says.
Process over judgment. Verification should be reproducible. Two people following the same VERA protocol on the same claim and evidence should reach the same verification state. Where judgment is required, VERA makes the judgment criteria explicit.
Traceability over efficiency. VERA asks you to slow down and document. This is a cost. The benefit is that you can audit any conclusion, return to it months or years later, update it when evidence changes, and hand it to someone else without losing the reasoning that produced it.
Sovereignty over convenience. VERA does not ask you to trust any external authority — including VERA itself. The framework is designed so that every protocol can be executed with tools you control, evidence you hold, and reasoning you can inspect and challenge.
Relationship to Other Frameworks
VERA is not designed to replace existing frameworks. It is designed to interoperate with them:
- Scientific method: VERA formalizes the hypothesis-evidence-verification cycle into repeatable, documented patterns usable outside the scientific publishing context.
- CMMI / ISO 9001: VERA’s maturity model follows the same five-level progression as CMMI. Organizations already operating at CMMI Level 3 will find the governance concepts familiar.
- Argumentation theory / logic: VERA is compatible with formal argumentation frameworks (Toulmin, IBIS) but does not require them. Its Reasoning Chain construct is simpler and more practical for organizational use.
- AI governance frameworks (EU AI Act, NIST AI RMF): VERA’s verification and sovereignty principles map naturally onto AI governance requirements. The Standards Alignment chapter documents these mappings.
Begin with Philosophy & Principles to understand the epistemic commitments that underlie every other VERA construct.
About the Author
Daniel Flügger is an applied AI engineer and founder based in New York. He builds GCP-native AI systems including geospatial intelligence pipelines, RAG architectures, and multi-agent workflows. He is currently in the Google Startups Accelerator and the author of The Proof Economy, a thesis on value capture in the intelligence age.
VERA is the practical methodology that grew out of that work.
→ danielflugger.com · GitHub · LinkedIn
Philosophy & Principles
“We do not reason to find truth so much as to find reasons. VERA is the discipline of closing the gap between the two.”
The Epistemic Problem VERA Addresses
Every decision — personal, organizational, scientific, political — rests on claims about the world. Those claims are supported, to varying degrees, by evidence. The reasoning connecting evidence to claim is often invisible, implicit, or absent altogether.
This is not a new problem. Epistemologists have grappled with it for millennia. But three developments have made it a practical crisis:
Scale. A single person can now broadcast a claim to millions. A single AI system can generate millions of claims in minutes. The volume of reasoning that flows through the world has outpaced any individual’s ability to evaluate it.
Opacity. The reasoning chains inside AI systems, institutional processes, and expert recommendations are typically not exposed to the people who depend on them. You receive the conclusion. The evidence and reasoning are invisible.
Decoupling. Claims travel far from their origins. An assertion that was hedged, contextualized, and conditional at the source arrives stripped of those qualifications. Verification status does not travel with claims; conclusions do.
VERA’s design philosophy holds that these problems are structural, not behavioral. You cannot fix them by asking people to be more careful. You fix them by building a structure in which claims, evidence, reasoning, and verification are bound together and cannot be separated.
Foundational Commitments
VERA rests on three philosophical commitments. They are not provable axioms — they are reasoned choices about how to structure a practical epistemology. Adopting VERA means accepting them as working principles.
Commitment 1: Epistemic Humility
No claim is beyond question. No source is beyond error. No reasoning chain, however elegant, is immune to revision when new evidence arrives.
This does not mean all claims are equally uncertain. It means that certainty is a gradient, not a binary — and that the appropriate response to a claim’s uncertainty is to document it explicitly rather than round it to “true” or “false.”
In VERA, the Verification State construct operationalizes this commitment. A claim is not “verified” or “unverified” in a binary sense. It exists in one of six states, each of which carries specific implications for how the claim may be used in further reasoning.
Commitment 2: Traceability as a First-Order Value
If you cannot trace a conclusion to the evidence and reasoning that produced it, you cannot evaluate it, update it, or responsibly act on it.
Traceability is costly. It requires documentation discipline, structured formats, and time. VERA accepts this cost and asks practitioners to accept it as well. The cost of traceability is paid once, when a claim is documented. The cost of its absence is paid repeatedly — every time someone needs to evaluate the claim and cannot, every time the claim is updated without propagating the change, every time an error propagates because no one can trace it back to its source.
Commitment 3: Sovereignty as Non-Negotiable
The person or organization whose decisions are affected by a claim must retain the ability to evaluate, challenge, and reject that claim. This is not merely a legal or political right — it is an epistemic requirement. A conclusion you cannot evaluate is, for practical purposes, a belief imposed from outside.
This commitment has radical implications. It means VERA cannot be implemented in a way that makes verification dependent on opaque external authorities. It means AI tools used within VERA must expose their reasoning, not just their conclusions. It means organizational VERA implementations must ensure that the people making decisions have access to the evidence and reasoning that inform them.
The VERA Ontology
VERA works with a small set of precisely defined objects. Understanding their relationships is essential to understanding the framework.
```
Evidence ──── supports ──→ Claim
    │                        │
    ▼                        ▼
Evidence Quality      Reasoning Chain
                             │
                             ▼
                    Verification State
                             │
                             ▼
                    Verified Conclusion
```
Evidence is the raw material. It is always external to the claim — a document, a measurement, an observation, a prior verified claim. Evidence has quality (rated on a four-tier scale) and provenance (a chain of custody from source to application).
Claims are structured assertions. They reference their supporting evidence, document the reasoning chain connecting evidence to conclusion, and carry a verification state. A claim without these elements is an assertion — a weaker epistemic object that VERA treats with appropriate caution.
Reasoning Chains are the explicit logical steps connecting evidence to a claim. They are not summaries or conclusions. They are step-by-step arguments: “Evidence E1 establishes X. Evidence E2 establishes Y. X and Y together, by principle P, imply claim C.” The quality of a reasoning chain is evaluated on four dimensions: validity (the logic is sound), relevance (the evidence addresses the claim), completeness (no crucial premise is hidden), and independence (the evidence items are not all proxies for the same underlying source).
Verification is the process of evaluating whether the stated evidence and reasoning actually support the claim. It is separate from the act of making a claim — ideally performed by someone other than the claimant.
Verification State is the current status of a claim in the verification process. It changes as evidence is gathered, reasoning is scrutinized, and review is performed.
The Six Principles of VERA
These six principles constitute the operational philosophy of VERA. They are ordered by priority: when principles conflict, earlier principles take precedence.
Principle 1: Evidence Primacy
Every claim that matters must be traceable to evidence.
“Matters” here means: informs a decision, guides an action, supports another claim, or is communicated beyond its originator. Personal beliefs and casual assertions are outside VERA’s scope. Claims with practical stakes are not.
Evidence Primacy does not require that all evidence be primary-source empirical data. Testimony, expert judgment, and prior verified claims are all valid evidence types — but they must be documented, sourced, and quality-rated. “Because I said so” and “everyone knows” are not evidence.
Evidence Primacy also requires documentation of the absence of expected evidence. If a claim would be strengthened by evidence type X, and evidence type X is absent, that absence must be noted and explained. Selective citation — mentioning only supporting evidence — violates Evidence Primacy.
Principle 2: Reasoning Transparency
The reasoning connecting evidence to conclusion must be made explicit.
Implicit reasoning is the most common source of reasoning error. Conclusions that feel obvious usually conceal a chain of reasoning that, when made explicit, reveals assumptions, gaps, or logical leaps. VERA requires that reasoning chains be documented in full — not summarized, not paraphrased, but step by step.
Reasoning Transparency extends to AI-assisted reasoning. When a claim is generated or substantially shaped by an AI system, the reasoning that system provided must be included in the claim’s documentation. “The AI said so” is not a reasoning chain. The AI’s stated reasoning — with its assumptions, its evidence references, and its caveats — must be captured and evaluated.
Principle 3: Verification Independence
Verification is more reliable when conducted by someone other than the claimant.
This principle does not prohibit self-verification. In early stages of VERA adoption, and for low-stakes claims, self-verification is acceptable. But the framework explicitly grades verification based on independence: a claim verified by its author carries less epistemic weight than one verified by a peer, which carries less weight than one verified by an expert in the relevant domain.
Verification Independence also means that the verification process must be capable of producing a negative result. A verification regime that never fails to verify is not verification — it is endorsement. VERA verification must be conducted against explicit criteria that can be failed.
Principle 4: Sovereignty Preservation
Individuals and organizations must retain ultimate authority over their reasoning and conclusions.
VERA is a tool for reasoning better. It is not a mechanism for transferring epistemic authority to external validators. When an organization adopts VERA, it does not outsource its judgment — it structures its judgment so that it is more defensible and auditable.
Sovereignty Preservation has implications for how VERA is implemented. Verification processes must be designed so that the people whose decisions depend on a claim can access its evidence and reasoning. Tooling must be open enough to be audited. Verification authorities must be accountable to the communities they serve.
Principle 5: Progressive Maturity
VERA capability grows incrementally, and incomplete adoption is better than no adoption.
The Maturity Model documents five levels of VERA capability. It is not a pass/fail system. An organization operating at Level 2 (Exploring) is doing something valuable. An organization at Level 1 (Aware) that is moving toward Level 2 is doing something even more valuable.
Progressive Maturity also means that VERA is applied selectively at first. Not every claim needs full VERA treatment. Starting with the claims that matter most — the ones informing significant decisions — builds the habit and the infrastructure that eventually support broader application.
Principle 6: Pattern Reusability
Solutions to recurring reasoning challenges should be documented as reusable patterns.
Reinventing the wheel is costly. Every organization that tries to document evidence systematically rediscovers the same problems: how to rate evidence quality, how to handle conflicting evidence, how to document negative results, how to version claims when evidence changes. VERA’s Patterns Library captures solutions to these problems so they do not need to be rediscovered each time.
Pattern Reusability also serves as a check against local rationalization. A pattern that works in one context but cannot be generalized is a warning sign. A pattern that has been applied in multiple organizations and contexts, and documented each time, accumulates a kind of empirical verification that single-use solutions cannot achieve.
What VERA Is Not
Clarifying boundaries prevents misapplication.
VERA is not a fact-checking service. Fact-checking evaluates specific claims against available evidence. VERA is a framework for how to evaluate claims — the process, standards, and documentation that make any evaluation reproducible and trustworthy. A fact-checking service might use VERA. VERA is not that service.
VERA is not an AI system. AI tools can implement VERA protocols, assist with evidence collection, structure reasoning chains, and flag potential verification failures. But VERA’s commitments — especially Sovereignty Preservation — require that no single AI system be the authoritative arbiter of VERA compliance.
VERA is not a substitute for domain expertise. Evaluating whether evidence supports a claim in molecular biology requires expertise in molecular biology. VERA provides the structure for that evaluation, not the substance. Domain knowledge remains essential; VERA ensures that domain knowledge is documented and applied systematically.
VERA is not a certification program (in the credentialing sense). Organizations and individuals can be described as operating at a given Maturity Level. That description is a diagnostic tool, not a credential. It has value only to the extent that the maturity assessment is honest and rigorous.
The Relationship Between VERA and Truth
VERA does not claim to be a truth-finding machine. Evidence is imperfect. Reasoning is fallible. Verification is conducted by humans. A claim can pass full VERA verification and still be wrong.
What VERA provides is not truth — it provides epistemic accountability. When a verified VERA claim turns out to be wrong, the error is traceable. The evidence that was cited can be re-examined. The reasoning that connected evidence to conclusion can be scrutinized. The verification process that passed the claim can be audited. This traceability is the foundation of learning from error rather than merely being hurt by it.
The goal of VERA is not a world without false beliefs. It is a world where false beliefs are harder to propagate, easier to detect, and possible to correct.
Proceed to Lexicon: Canonical Definitions to establish the precise vocabulary that makes VERA’s protocols unambiguous.
VERA is authored by Daniel Flügger and licensed under CC BY 4.0. Free to use, share, and build on — attribution required. danielflugger.com
Lexicon: Canonical Definitions
This lexicon defines terms as they are used in VERA. When a term appears in VERA documentation, it carries the meaning given here — not a colloquial meaning, not a discipline-specific meaning from outside VERA, and not a meaning imported from another framework without explicit annotation.
Definitions are listed in dependency order: terms that appear in a definition are defined earlier in the list.
Core Objects
Assertion
An assertion is a statement made by an agent without formal evidence attachment, reasoning chain, or verification status. Assertions are the raw material of claims. They are not inherently invalid — they are epistemically unprocessed.
VERA does not prohibit assertions. It distinguishes them clearly from claims and tracks them separately in documentation. Assertions used as evidence in a reasoning chain must be flagged as such, and their use degrades the quality rating of that evidence set accordingly.
Compare: Claim.
Claim
A claim is a structured epistemic object consisting of:
- A statement — the proposition asserted
- A claim identifier — a unique, stable reference (format: VERA-C-[YYYY]-[NNNN])
- A provenance record — who made the claim, when, and in what context
- An evidence set — one or more Evidence Items cited in support
- A reasoning chain — the explicit logical steps connecting the evidence set to the statement
- A verification state — the current status of the claim in the verification process
- A confidence rating — a quantified assessment of epistemic confidence given the evidence and verification state
A claim that lacks an evidence set, a reasoning chain, or a verification state is an Assertion, not a claim, regardless of how its originator characterizes it.
Claims are the primary unit of VERA. All VERA protocols operate on claims.
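The claim structure above can be sketched as a data object. This is a minimal illustration, not a normative VERA schema: the class, field names, and the identifier regex are assumptions derived from the documented format VERA-C-[YYYY]-[NNNN].

```python
import re
from dataclasses import dataclass

# Hypothetical pattern for the documented identifier format VERA-C-[YYYY]-[NNNN].
CLAIM_ID_PATTERN = re.compile(r"^VERA-C-\d{4}-\d{4}$")

@dataclass
class Claim:
    """Illustrative sketch of a VERA claim; field names are not normative."""
    statement: str            # the proposition asserted
    claim_id: str             # unique, stable reference
    provenance: dict          # who made the claim, when, in what context
    evidence_set: list        # cited Evidence Items
    reasoning_chain: list     # ordered reasoning steps
    verification_state: str = "Unverified"
    confidence: float = 0.0

    def is_well_formed(self) -> bool:
        # A claim lacking evidence, reasoning, or a verification state
        # is an Assertion, not a claim.
        return bool(
            CLAIM_ID_PATTERN.match(self.claim_id)
            and self.evidence_set
            and self.reasoning_chain
            and self.verification_state
        )
```

A structured check like `is_well_formed` makes the claim/assertion distinction mechanical rather than a matter of characterization.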
Evidence Item
An evidence item is a discrete piece of source material cited in support of a claim. Each evidence item has:
- Source reference: A citation sufficient to locate the original material (URL with access date, document ID, physical location, etc.)
- Evidence type: One of Primary, Secondary, Tertiary, or Testimonial (see Evidence Quality)
- Relevance statement: An explicit description of what aspect of the claim this evidence supports
- Chain of custody: How the evidence passed from its original source to the claimant
- Access date: When the claimant accessed the evidence (important for time-sensitive material)
Evidence items are not arguments. They do not, by themselves, support a claim. They are the raw material that a Reasoning Chain processes into a conclusion.
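The five attributes above can be modeled directly. A hedged sketch, with field names mirroring the documented attributes and the four evidence types from the quality scale; the class itself is illustrative, not part of VERA:

```python
from dataclasses import dataclass
from enum import Enum

class EvidenceType(Enum):
    """The four documented evidence types; values follow the quality tiers."""
    PRIMARY = 1
    SECONDARY = 2
    TERTIARY = 3
    TESTIMONIAL = 4

@dataclass
class EvidenceItem:
    """Illustrative evidence item; shapes are assumptions, not a schema."""
    source_reference: str        # citation sufficient to locate the original
    evidence_type: EvidenceType
    relevance: str               # what aspect of the claim this supports
    chain_of_custody: str        # how the evidence reached the claimant
    access_date: str             # when the claimant accessed it (ISO date)
```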
Evidence Quality
Evidence quality is a four-tier rating applied to each evidence item, assessing its reliability as a basis for claims.
| Tier | Label | Definition |
|---|---|---|
| 1 | Primary | Original source: direct observation, raw data, original document, firsthand testimony from the event described |
| 2 | Secondary | Derived from primary sources: synthesis, analysis, peer-reviewed interpretation of primary data |
| 3 | Tertiary | Derived from secondary sources: textbooks, encyclopedias, review articles, journalism interpreting secondary sources |
| 4 | Testimonial | Assertion by a domain expert or credible witness, where the underlying evidence is not directly accessible |
Evidence quality is not a judgment of the evidence’s relevance or accuracy — it is a structural rating of how close the evidence is to its original source. A Tier 1 evidence item can be wrong. A Tier 4 evidence item can be accurate.
When multiple evidence items of different quality tiers support the same claim element, the reasoning chain must note the variation and justify reliance on lower-tier evidence when higher-tier evidence is unavailable.
Evidence Set
An evidence set is the complete collection of Evidence Items cited in a claim. A well-formed evidence set:
- Includes all evidence consulted, not only evidence that supports the conclusion (see Selective Citation)
- Identifies conflicting evidence explicitly and explains how it was weighed
- Notes the absence of expected evidence types when that absence is material
- Rates each item by Evidence Quality
The quality of an evidence set is calculated from the quality distribution of its items and the degree of independence among them (see Evidence Independence).
Evidence Independence
Evidence independence measures the degree to which items in an evidence set derive from genuinely separate sources rather than from a single underlying source that appears multiple times.
Three news articles all citing the same press release are not three independent pieces of evidence — they are one. A reasoning chain that treats them as three independent items is exhibiting a form of reasoning error VERA calls source collapse.
Independence is assessed on a three-point scale:
- Independent: Items trace to demonstrably separate primary sources
- Correlated: Items share a common upstream source but add independent interpretation or transformation
- Dependent: Items are proxies for the same underlying source
A claim whose evidence set contains only dependent items is treated as resting on a single evidence item, regardless of how many items are cited.
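The source-collapse rule lends itself to a simple check: items that trace to the same upstream source count once. The pairing of item to upstream source identifier is an illustrative assumption about how provenance might be recorded, not a prescribed VERA data model.

```python
def effective_evidence_count(items):
    """Count genuinely separate sources in an evidence set.

    'items' is a list of (item_id, upstream_source_id) pairs. Dependent
    items (same upstream source) collapse to a single piece of evidence.
    This grouping rule is a sketch, not a normative VERA algorithm.
    """
    return len({upstream for _, upstream in items})

# Three news articles all citing the same press release are one piece
# of evidence, not three:
articles = [
    ("article-1", "press-release-7"),
    ("article-2", "press-release-7"),
    ("article-3", "press-release-7"),
]
```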
Reasoning Chain
A reasoning chain is the explicit, step-by-step argument connecting an evidence set to a claim’s statement. Each step in the reasoning chain consists of:
- Premises: Evidence items or prior conclusions accepted as inputs
- Inference rule: The logical principle applied (deductive, inductive, abductive, or analogical)
- Intermediate conclusion: The output of this step, which may serve as a premise for the next
A reasoning chain is complete when its final conclusion is identical to the claim’s statement, and every premise is either a referenced evidence item or a prior step’s intermediate conclusion.
Reasoning chain quality is evaluated on four dimensions:
| Dimension | Definition |
|---|---|
| Validity | The logical structure is sound — the conclusion follows from the premises |
| Relevance | The evidence items address the claim and not merely adjacent topics |
| Completeness | No material premise is hidden or assumed without acknowledgment |
| Independence | The evidence items are not all proxies for the same underlying source |
A reasoning chain that fails on any dimension is noted in the verification record, and the claim’s verification state is downgraded accordingly.
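The completeness rule, that every premise must be a cited evidence item or a prior step's intermediate conclusion and the final conclusion must match the claim's statement, can be checked mechanically. The step dictionaries below are an assumed representation, not a VERA-specified format:

```python
def chain_is_complete(steps, evidence_ids, claim_statement):
    """Sketch of the documented completeness rule for reasoning chains.

    Each step is {"premises": [...], "conclusion": "..."}. A premise that
    is neither a cited evidence item nor a prior step's conclusion is a
    reasoning gap, and the chain fails the check.
    """
    prior_conclusions = set()
    for step in steps:
        for premise in step["premises"]:
            if premise not in evidence_ids and premise not in prior_conclusions:
                return False  # hidden premise: a reasoning gap
        prior_conclusions.add(step["conclusion"])
    # The final conclusion must be identical to the claim's statement.
    return bool(steps) and steps[-1]["conclusion"] == claim_statement
```

This mirrors the canonical form "E1 establishes X; E2 and X together imply C": each intermediate conclusion becomes a legal premise for later steps.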
Verification
Verification is the process of evaluating whether a claim’s evidence set and reasoning chain adequately support its statement. Verification:
- Is conducted against explicit, pre-stated criteria (the Verification Protocol)
- Is performed by a designated verifier, who may or may not be the claimant
- Produces a verification record documenting what was assessed and how
- Results in an updated Verification State
- Is time-stamped and attributed — “verified” is not a permanent status
Verification is not proof. A verified claim is one that has been evaluated by the VERA process and found to meet the specified criteria. It may still be wrong. Verification establishes epistemic accountability, not truth.
Verification State
Verification state is the current status of a claim in the VERA verification lifecycle. There are six defined states:
| State | Symbol | Meaning |
|---|---|---|
| Unverified | ○ | Claim has not entered the verification process |
| Pending | ◐ | Verification has been initiated; evidence review is in progress |
| Partial | ◑ | Verification of some claim components is complete; others remain pending |
| Verified | ● | All claim components have been verified against stated criteria |
| Contested | ◈ | Verification has been challenged; the claim is under active dispute |
| Refuted | ✗ | Verification has concluded that the evidence does not support the claim |
State transitions require explicit triggers and documentation. A Verified claim does not become Contested automatically when new evidence appears — the challenge must be formally lodged through the VERA dispute process, which initiates re-evaluation.
Claims in Contested state may continue to be used in reasoning, but must be marked as Contested in any downstream reasoning chain. Refuted claims must not be used as supporting evidence.
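The lifecycle can be expressed as an explicit transition table. The six states come from the table above, but the lexicon does not specify the full transition graph, so the edges below are illustrative assumptions (for example, that a Verified claim can only reach Refuted through a Contested dispute):

```python
# Hypothetical transition graph; states are documented, edges are assumed.
ALLOWED_TRANSITIONS = {
    "Unverified": {"Pending"},
    "Pending":    {"Partial", "Verified", "Refuted"},
    "Partial":    {"Verified", "Refuted"},
    "Verified":   {"Contested"},            # requires a formally lodged dispute
    "Contested":  {"Verified", "Refuted"},  # re-evaluation resolves the dispute
    "Refuted":    set(),                    # terminal; not usable as evidence
}

def can_transition(current, target):
    """True if the (assumed) lifecycle permits moving current -> target."""
    return target in ALLOWED_TRANSITIONS.get(current, set())
```

Making the graph explicit enforces the rule that state changes require triggers: a transition absent from the table simply cannot happen.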
Verification Record
A verification record is the documentation produced by a verification event. It contains:
- Claim identifier being verified
- Verifier identity (individual or team) and their relevant qualifications
- Verification date and the version of the claim verified
- Criteria applied (reference to Verification Protocol version)
- Findings for each criterion: met, not met, or not applicable
- Resulting verification state
- Notes on marginal cases, interpretation decisions, or concerns
- Confidence rating (see Epistemic Confidence) assigned by the verifier
Verification records are immutable once completed. If re-verification is needed, a new record is created and linked to the previous one.
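The "immutable once completed, new records link to previous ones" rule maps naturally onto a frozen record type. A sketch under assumed field names; the frozen dataclass is one way to model immutability, not a prescribed implementation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class VerificationRecord:
    """Illustrative verification record; frozen models 'immutable once completed'."""
    claim_id: str
    verifier: str                 # individual or team, with qualifications elsewhere
    date: str                     # verification date (ISO)
    criteria_version: str         # Verification Protocol version applied
    findings: tuple               # per-criterion: ("criterion", "met"/"not met"/"n/a")
    resulting_state: str
    confidence: float             # verifier-assigned Epistemic Confidence
    notes: str = ""
    previous_record: "VerificationRecord | None" = None  # re-verification link
```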
Epistemic Confidence
Epistemic confidence is a numerical rating, from 0.0 to 1.0, representing the verifier’s assessment of how strongly the available evidence supports the claim, independent of the claim’s formal verification state.
Confidence is not probability. It is an explicit declaration of uncertainty that travels with the claim. A claim may be Verified (state) with a confidence of 0.65 — meaning it has passed the verification criteria, but the verifier judges the evidence to be only moderately strong.
The confidence rating is used to weight claims when they appear as evidence items in subsequent reasoning chains. A chain of high-confidence verified claims supports a stronger conclusion than a chain of low-confidence verified claims.
Standard confidence bands:
| Band | Range | Interpretation |
|---|---|---|
| High | 0.85 – 1.0 | Strong evidence, well-formed reasoning, independent verification |
| Moderate | 0.65 – 0.84 | Adequate evidence, sound reasoning, some gaps acknowledged |
| Low | 0.40 – 0.64 | Limited evidence, reasoning has notable gaps, use with caution |
| Speculative | 0.00 – 0.39 | Evidence is thin or circumstantial; claim requires further investigation |
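The band lookup is a straightforward threshold function. The boundaries follow the table above; the function itself is an illustrative helper, not part of the VERA specification:

```python
def confidence_band(score):
    """Map a 0.0-1.0 epistemic confidence rating to its documented band."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("confidence must be in [0.0, 1.0]")
    if score >= 0.85:
        return "High"
    if score >= 0.65:
        return "Moderate"
    if score >= 0.40:
        return "Low"
    return "Speculative"
```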
Structural Concepts
Pattern
A pattern is a reusable, documented solution to a recurring reasoning or verification challenge. Patterns in VERA follow the canonical Pattern Template and must themselves be verified claims — not informal recommendations.
Patterns are not prescriptions. They document what has worked in identified contexts. Applying a pattern without verifying that the current context matches the pattern’s context is a reasoning error.
Domain
A domain is one of the six capability areas measured by the VERA Maturity Model. The six domains are:
| Domain | Description |
|---|---|
| Evidence | How evidence is identified, collected, classified, and maintained |
| Reasoning | How reasoning chains are constructed, documented, and evaluated |
| Verification | How claims are submitted to, processed through, and resolved by verification |
| Governance | How VERA practices are mandated, resourced, audited, and improved |
| Sovereignty | How authority over data, reasoning, and conclusions is maintained |
| Integration | How VERA practices connect to existing systems, workflows, and tools |
An organization’s VERA maturity level may differ by domain. It is common — and expected — for Governance and Integration to lag behind Evidence and Reasoning during early adoption.
Maturity Level
Maturity level is one of five stages in the VERA Maturity Model, representing increasing sophistication in VERA practice:
| Level | Name | Defining characteristic |
|---|---|---|
| 1 | Aware | VERA concepts are understood; no systematic practice exists |
| 2 | Exploring | VERA is applied to selected claims; practice is inconsistent |
| 3 | Practicing | VERA is applied systematically to all significant claims; results are auditable |
| 4 | Governing | VERA practices are institutionalized, measured, and continuously improved |
| 5 | Sovereign | VERA enables full epistemic sovereignty; the framework itself is subject to VERA review |
Maturity levels are assessed per domain and per organization (or individual). A single number cannot fully characterize a complex organization’s VERA maturity.
Sovereignty
Sovereignty in VERA means the state in which an individual or organization retains ultimate authority over:
- The data they rely on (they can access, audit, and export it)
- The reasoning they use (it is explicit, documented, and challengeable)
- The conclusions they reach (they are not imposed by opaque external systems)
- The verification processes they trust (they can audit those processes)
Sovereignty is not isolation. A sovereign VERA practitioner can and should use external evidence, expert testimony, and AI-assisted reasoning. What they cannot do — while maintaining sovereignty — is accept conclusions without access to the evidence and reasoning that produced them.
Error Taxonomy
Selective Citation
Selective citation is the practice of including only evidence that supports a claim while omitting evidence that complicates or contradicts it. It is a violation of Evidence Primacy and renders a claim’s evidence set invalid.
Selective citation includes: cherry-picking favorable studies while omitting unfavorable ones; citing the summary of a document while omitting its caveats; referencing a prior claim’s conclusion while omitting its uncertainty rating.
Source Collapse
Source collapse is the reasoning error of treating multiple evidence items as independent when they derive from the same underlying source. It inflates apparent confidence by counting one piece of evidence multiple times.
See also: Evidence Independence.
Reasoning Gap
A reasoning gap is a logical step in a reasoning chain that is assumed without documentation. Reasoning gaps are often the location of an argument’s most consequential assumptions. VERA’s Verification Protocol requires that all reasoning gaps be identified and either filled (with a documented reasoning step) or flagged (as an acknowledged assumption).
Chain Laundering
Chain laundering is the practice of using a weakly supported or unverified claim as if it were well-supported evidence in a subsequent reasoning chain. It is the epistemic equivalent of money laundering — the weak evidence is obscured by being embedded in a chain that appears solid.
Chain laundering occurs when a Testimonial-quality assertion is cited as Secondary evidence; when an Unverified claim is used as a premise in a reasoning chain without disclosure; or when a Contested claim is cited without noting its contested status.
Verification Capture
Verification capture occurs when the verification process is structurally unable to produce a negative result — when the verifier is too close to the claimant, too dependent on the claim being true, or operating under criteria calibrated to pass rather than evaluate. A captured verification process produces compliance theater rather than epistemic accountability.
Notation Conventions
| Symbol | Meaning |
|---|---|
| VERA-C-[YYYY]-[NNNN] | Claim identifier |
| VERA-P-[NNNN] | Pattern identifier |
| VERA-V-[NNNN] | Verification record identifier |
| ○ ◐ ◑ ● ◈ ✗ | Verification states (Unverified, Pending, Partial, Verified, Contested, Refuted) |
| E1, E2, … | Evidence item references within a reasoning chain |
| C1 → C2 | Claim C2 uses verified Claim C1 as evidence |
| [conf: 0.78] | Inline confidence rating |
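The six verification-state symbols map one-to-one onto named states, so tooling can translate between them mechanically. A minimal sketch, assuming an implementation in Python (the enum itself is a convenience, not part of the notation):

```python
# Illustrative mapping of the notation symbols to verification states.
# Symbols and state names come from the table above; the enum is a
# hypothetical implementation convenience.
from enum import Enum

class VerificationState(Enum):
    UNVERIFIED = "○"
    PENDING = "◐"
    PARTIAL = "◑"
    VERIFIED = "●"
    CONTESTED = "◈"
    REFUTED = "✗"

def state_for_symbol(symbol: str) -> VerificationState:
    """Resolve a notation symbol to its verification state."""
    for state in VerificationState:
        if state.value == symbol:
            return state
    raise ValueError(f"unknown verification symbol: {symbol!r}")
```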
Proceed to Verification Protocol to understand how claims move through the VERA lifecycle.
Verification Protocol
The Verification Protocol is VERA’s procedural core. It specifies, step by step, how a raw assertion becomes a documented claim, moves through verification, and reaches a final epistemic state. Following the protocol produces reproducible results: two verifiers working independently on the same claim and evidence should reach the same verification state.
This document describes VERA Verification Protocol version 1.0. The version is significant — claims record the protocol version under which they were verified, so that verification standards can evolve without retroactively changing the status of existing records.
Protocol Overview
The protocol comprises five phases executed in sequence. Each phase has defined inputs, required activities, and outputs. A phase cannot be marked complete until its outputs satisfy the stated requirements.
Phase 1: Claim Formulation
│
▼
Phase 2: Evidence Assembly
│
▼
Phase 3: Reasoning Construction
│
▼
Phase 4: Verification Assessment
│
▼
Phase 5: Documentation & Publication
Phases 1–3 are the claimant’s responsibility. Phase 4 is the verifier’s responsibility. Phase 5 is shared. For self-verification, a single person or team executes all five phases, but must treat Phase 4 as a deliberate change of role — actively looking for weaknesses rather than defending the claim.
Phase 1: Claim Formulation
Input: A raw assertion — something believed to be true and worth documenting.
Objective: Transform the assertion into a precisely stated, uniquely identified claim ready for evidence assembly.
Step 1.1 — State the Assertion Precisely
Write the assertion in a single declarative sentence. Vague, compound, or hedged assertions must be decomposed before proceeding.
Test: Could two people read this statement and disagree on what would count as evidence for or against it? If yes, the statement is not yet precise enough.
Common problems at this step:
- Compound assertions: “Our process is efficient and our output quality is high” contains two claims. Separate them.
- Weasel words: “Generally,” “usually,” “often,” and similar hedges embed ambiguity into the claim itself. State the actual scope: “In 87% of cases observed between January and June 2025.”
- Tautologies: “AI systems that produce inaccurate outputs are unreliable” is not a claim about the world — it is a definitional statement. Remove it from the VERA pipeline.
Step 1.2 — Decompose Compound Claims
If the assertion cannot be stated in a single precise sentence without losing essential meaning, decompose it into multiple claims. Each claim proceeds through the protocol independently. Compound conclusions are reconstructed in Phase 3 by using the verified sub-claims as evidence items.
Step 1.3 — Assign a Claim Identifier
Every claim receives a unique identifier before proceeding: VERA-C-[YYYY]-[NNNN], where YYYY is the four-digit year and NNNN is a sequential number within that year. Individual practitioners without an organizational registry insert their initials after the C segment: VERA-C-DJF-2025-0001.
The identifier is permanent. If the claim is revised, the revision is documented as a new version of the same claim (e.g., VERA-C-2025-0001 v2), not as a new claim, so that downstream claims that cite this one can identify the version they used.
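The identifier forms described above can be validated mechanically. The following sketch derives a pattern from the two examples given (organizational and initials-prefixed); the exact grammar is an assumption, since this chapter does not define one normatively.

```python
# Hedged sketch: validating the two claim-identifier forms above.
# The regex is inferred from the examples (VERA-C-2025-0001 and
# VERA-C-DJF-2025-0001, optionally " v2"), not a normative grammar.
import re

CLAIM_ID = re.compile(
    r"^VERA-C-(?:[A-Z]{2,4}-)?"   # optional practitioner initials
    r"(\d{4})-"                   # four-digit year
    r"(\d{4})"                    # sequential number within the year
    r"(?: v(\d+))?$"              # optional version suffix, e.g. " v2"
)

def parse_claim_id(claim_id: str):
    """Return (year, sequence, version); version defaults to 1."""
    m = CLAIM_ID.match(claim_id)
    if not m:
        raise ValueError(f"malformed claim id: {claim_id!r}")
    year, seq, version = m.groups()
    return int(year), int(seq), int(version or 1)
```

Parsing the version out explicitly supports the rule that revisions are new versions of the same claim, not new claims.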
Step 1.4 — Record Claim Context
Document:
- Claimant: Individual or team making the claim
- Date initiated: When Phase 1 was begun
- Purpose: Why this claim is being documented (what decision it informs, what argument it supports)
- Scope: The conditions under which the claim is intended to hold (domain, time period, population, geography, etc.)
- Prior art: Related prior claims in the VERA registry that this claim extends, contradicts, or supersedes
Output: A Claim Record stub containing the claim identifier, precise statement, and context metadata. Verification state is set to Unverified (○).
Phase 2: Evidence Assembly
Input: The Claim Record stub from Phase 1.
Objective: Identify, locate, document, and rate all relevant evidence — including evidence that complicates or contradicts the claim.
Step 2.1 — Identify Evidence Sources
Before retrieving evidence, list the types of evidence that would be relevant to evaluating the claim. This prospective list is the foundation of the evidence search strategy. It must be documented — a search strategy recorded only after evidence has been found is vulnerable to unconscious selectivity.
For each evidence type on the list, note:
- What form would this evidence take? (dataset, document, experiment, testimony, etc.)
- Where is this evidence likely to exist?
- What would it mean for the claim if this evidence is absent?
Step 2.2 — Conduct the Evidence Search
Retrieve evidence using the pre-documented search strategy. For each item found:
- Create an Evidence Item Record with source reference, access date, and a brief description
- Note whether this item was on the prospective list (anticipated evidence) or discovered during search (unanticipated evidence)
- Note the relevance of the item: does it support, complicate, or contradict the claim?
Do not filter at this stage. Contradicting evidence is recorded identically to supporting evidence. Filtering happens in Phase 3.
Step 2.3 — Rate Evidence Quality
Apply the four-tier quality rating to each evidence item:
| Tier | Questions to ask |
|---|---|
| Primary (Tier 1) | Is this the original source? Did the claimant directly observe or measure this? Is this an original document rather than a copy or summary? |
| Secondary (Tier 2) | Has this been interpreted by a domain expert? Is it a synthesis of primary sources? Has it been peer-reviewed or editorially reviewed? |
| Tertiary (Tier 3) | Is this a textbook, encyclopedia, or journalistic account? Does it reference secondary sources? |
| Testimonial (Tier 4) | Is this an expert’s assertion without underlying data? Is this a firsthand account whose underlying evidence is inaccessible? |
When evidence spans multiple tiers (e.g., a peer-reviewed meta-analysis that includes some original data), rate it at the tier that describes its dominant character.
Step 2.4 — Assess Evidence Independence
Group the evidence items and assess which ones share upstream sources. Mark items as Independent, Correlated, or Dependent (see Lexicon: Evidence Independence).
Dependent items are not discarded — they are consolidated into a single item with a note explaining why they are treated as one. The consolidation is documented so that the verifier can evaluate it.
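The consolidation step can be sketched as a grouping by upstream source. This is an illustrative helper only: the field names are assumptions, and for brevity it models only the Independent and Dependent classes, not the intermediate Correlated class.

```python
# Illustrative consolidation of dependent evidence items (Step 2.4).
# Items sharing an upstream source are merged into one, with a note,
# so the verifier can audit the grouping. Field names are assumptions,
# and the Correlated class is omitted for brevity.
from collections import defaultdict

def consolidate(items: list) -> list:
    """items: dicts with 'id' and 'upstream' (shared-source key)."""
    by_source = defaultdict(list)
    for item in items:
        by_source[item["upstream"]].append(item)
    consolidated = []
    for source, group in by_source.items():
        if len(group) == 1:
            consolidated.append({**group[0], "independence": "Independent"})
        else:
            # Dependent items are merged, not discarded; the note
            # documents the consolidation for the verifier.
            consolidated.append({
                "id": "+".join(i["id"] for i in group),
                "upstream": source,
                "independence": "Dependent",
                "note": f"{len(group)} items consolidated: shared "
                        f"upstream source {source!r}",
            })
    return consolidated
```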
Step 2.5 — Document Absent Evidence
Return to the prospective list from Step 2.1. For each evidence type that was listed but not found:
- Note that it was not found
- Assess what this absence implies for the claim: neutral, weakening, or significantly weakening
- If significantly weakening: note this explicitly in the Claim Record
Output: A complete Evidence Set — all items rated, independence assessed, and absent evidence noted. The evidence set is attached to the Claim Record. Verification state remains Unverified (○).
Phase 3: Reasoning Construction
Input: The Claim Record with attached Evidence Set.
Objective: Build an explicit reasoning chain connecting the evidence set to the claim’s statement.
Step 3.1 — Map Evidence to Claim Components
Decompose the claim’s statement into its constituent logical components. For each component, identify which evidence items are relevant to it. The mapping does not need to be one-to-one: multiple evidence items may support one component, and one evidence item may support multiple components.
This mapping reveals:
- Components that are well-supported (multiple independent evidence items)
- Components that are weakly supported (single evidence item, or only low-quality items)
- Components that are unsupported (no evidence items address them)
Unsupported components require one of three responses: gather additional evidence (return to Phase 2), reduce the scope of the claim to exclude the unsupported component, or explicitly flag the unsupported component as an assumption.
Step 3.2 — Identify the Logical Structure
Determine how the evidence connects to the conclusion. VERA recognizes four inference types:
| Type | Description | Strength |
|---|---|---|
| Deductive | Conclusion follows necessarily from premises | Strongest, but requires premises to be certain |
| Inductive | Conclusion generalizes from observed instances | Strong when instances are numerous and representative |
| Abductive | Conclusion is the best explanation of the evidence | Moderate; requires ruling out alternative explanations |
| Analogical | Conclusion infers from similarity to another case | Weakest; requires strong, well-documented similarity |
Most real-world reasoning chains combine inference types. The chain must document which type is used at each step.
Step 3.3 — Write the Reasoning Chain
Write the chain step by step, using this format for each step:
Step N:
Premises: [E1, E2] / [Step N-1 conclusion]
Inference: [Deductive / Inductive / Abductive / Analogical]
Reasoning: [The argument in plain language]
Conclusion: [The intermediate conclusion this step produces]
Confidence: [0.0–1.0, with explanation]
The chain is complete when its final conclusion matches the claim’s statement exactly. If the final conclusion is stronger or weaker than the stated claim, revise the claim statement (returning to Phase 1, Step 1.1) or revise the reasoning chain until they match.
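The step format above maps directly onto a small record type. A sketch, assuming a Python implementation; the field names mirror the template, and the validation rules simply restate constraints already given in this chapter.

```python
# Sketch of the reasoning-step format above as a record type.
# Field names mirror the template; the class is illustrative.
from dataclasses import dataclass

INFERENCE_TYPES = {"Deductive", "Inductive", "Abductive", "Analogical"}

@dataclass
class ReasoningStep:
    number: int
    premises: list        # evidence refs ("E1") or prior-step conclusions
    inference: str        # one of INFERENCE_TYPES (Step 3.2)
    reasoning: str        # the argument in plain language
    conclusion: str       # intermediate conclusion this step produces
    confidence: float     # 0.0-1.0, with explanation in `reasoning`

    def __post_init__(self):
        if self.inference not in INFERENCE_TYPES:
            raise ValueError(f"unknown inference type: {self.inference}")
        if not 0.0 <= self.confidence <= 1.0:
            raise ValueError("confidence must be in [0.0, 1.0]")
```

Recording steps as structured data rather than free text makes the Phase 4 checks (R1, R3) mechanically auditable.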
Step 3.4 — Identify and Document Assumptions
Every reasoning chain rests on assumptions — things taken as true without direct evidence. Identify all assumptions stated explicitly in the chain and any hidden assumptions that the chain requires.
For each assumption:
- State the assumption precisely
- Assess whether it is: widely accepted (minimal documentation needed), domain-specific (cite authority), or contested (must be flagged prominently)
- If contested, consider whether the claim should be scoped to conditions under which the assumption holds
Step 3.5 — Address Contradicting Evidence
Return to the evidence items marked as complicating or contradicting. The reasoning chain must address each one. Valid responses include:
- Outweigh: The contradicting evidence is weaker (lower tier, less independent) than the supporting evidence; explain why
- Distinguish: The contradicting evidence applies to a different scope than the claim; explain the distinction
- Qualify: The claim is revised to acknowledge the contradiction as a genuine limitation
- Concede: The contradicting evidence is decisive; the claim cannot be maintained
Any response other than “Concede” must be argued, not asserted.
Output: A complete Reasoning Chain attached to the Claim Record. Verification state is updated to Pending (◐).
Phase 4: Verification Assessment
Input: The complete Claim Record (statement, evidence set, reasoning chain) in Pending state.
Objective: Evaluate whether the evidence set and reasoning chain meet VERA verification criteria. Produce a Verification Record.
This phase is executed by the verifier — an individual or team distinct from the Phase 1–3 claimant. For self-verification, the practitioner must adopt a genuinely adversarial stance toward the claim, actively seeking to refute it.
Step 4.1 — Independence Check
Verify that the verifier meets the independence requirements for this claim:
| Verification level | Independence requirement |
|---|---|
| Foundational | Verifier is different from claimant |
| Peer | Verifier has relevant domain competency but no stake in the claim’s outcome |
| Expert | Verifier has recognized expertise in the claim’s domain and is institutionally independent |
The independence level achieved is recorded in the Verification Record. It affects the claim’s epistemic weight.
Step 4.2 — Apply Verification Criteria
Evaluate the claim against each criterion. For each criterion, record: Met, Not Met, or Not Applicable with supporting notes.
Evidence Criteria:
| # | Criterion | Met if… |
|---|---|---|
| E1 | Evidence Set Completeness | No significant evidence types from the prospective list are absent without explanation |
| E2 | Evidence Quality Adequacy | The claim’s conclusion is supported by evidence at the appropriate quality tier for the stakes involved |
| E3 | Independence Adequacy | Supporting evidence is not all dependent on the same underlying source |
| E4 | Contrary Evidence Addressed | All identified contrary evidence is explicitly addressed in the reasoning chain |
| E5 | Chain of Custody | Source references are sufficient to locate the original evidence |
Reasoning Criteria:
| # | Criterion | Met if… |
|---|---|---|
| R1 | Logical Validity | Each reasoning step’s conclusion follows from its premises under the stated inference type |
| R2 | Relevance | Evidence items cited in each step actually address the component they are claimed to support |
| R3 | Completeness | No material premise is hidden or assumed without documentation |
| R4 | Proportionality | The strength of the conclusion does not exceed what the evidence supports |
| R5 | Assumption Disclosure | All significant assumptions are identified and their status assessed |
Formal Criteria:
| # | Criterion | Met if… |
|---|---|---|
| F1 | Claim Precision | The claim statement is free of vague language that makes it untestable |
| F2 | Identifier Present | A valid VERA claim identifier is assigned |
| F3 | Scope Defined | The conditions under which the claim holds are documented |
Step 4.3 — Assess Overall Verification State
Based on the criteria assessment:
- All criteria Met → Verified (●), or Partial (◑) if some criteria were assessed Not Applicable
- Any Evidence or Reasoning criterion Not Met → Partial (◑) if the deficiency is minor; otherwise, document the specific findings for resubmission
- Multiple significant criteria Not Met → claim returns to the claimant with findings; state remains Pending (◐) until the claim is revised and resubmitted
- Evidence or reasoning is actively contradicted → Refuted (✗) with full documentation
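The decision rules above can be sketched as a small function. This is a hedged illustration: the protocol leaves "minor deficiency" and "multiple significant criteria" to verifier judgment, so the count-based thresholds below are assumptions for the sketch.

```python
# Hedged sketch of the Step 4.3 decision logic. Criterion results are
# "Met", "Not Met", or "N/A"; `contradicted` flags active contradiction.
# The count-based thresholds stand in for verifier judgment and are
# assumptions, not part of the protocol.
def assess_state(criteria: dict, contradicted: bool = False) -> str:
    if contradicted:
        return "Refuted"            # ✗ — evidence actively contradicts
    results = list(criteria.values())
    not_met = results.count("Not Met")
    if not_met == 0:
        # All applicable criteria met; N/A items downgrade to Partial
        return "Partial" if "N/A" in results else "Verified"
    if not_met == 1:
        return "Partial"            # minor deficiency, documented
    return "Pending"                # returned to claimant with findings
```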
Step 4.4 — Assign Epistemic Confidence
Assign a confidence rating (0.0–1.0) with a written justification. The confidence rating reflects:
- Quality distribution of the evidence set
- Independence of evidence items
- Strength of the reasoning chain’s inference types
- Presence and significance of unresolved assumptions
- Independence level of the verification
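VERA does not prescribe a confidence formula; the rating is a judged value with written justification. Still, one illustrative way to combine the listed factors is a weighted sum with an assumption penalty. All weights, factor scales, and names below are assumptions for the sketch, not a normative calculation.

```python
# Illustrative only: VERA does not prescribe a confidence formula.
# Each factor is pre-scaled to 0-1 by the verifier; the weights are
# arbitrary assumptions chosen to mirror the factor list above.
def confidence_rating(evidence_quality: float,    # quality-tier distribution
                      independence: float,        # evidence independence
                      inference_strength: float,  # chain's inference mix
                      assumption_penalty: float,  # unresolved assumptions
                      verifier_level: float) -> float:  # verification independence
    base = (0.30 * evidence_quality +
            0.25 * independence +
            0.25 * inference_strength +
            0.20 * verifier_level)
    # Unresolved assumptions discount the whole rating multiplicatively.
    return round(base * (1.0 - assumption_penalty), 2)
```

Whatever aggregation is used, the written justification remains mandatory: a number without its reasoning is exactly the kind of unsupported conclusion VERA exists to prevent.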
Step 4.5 — Complete the Verification Record
Create a Verification Record (VERA-V-[NNNN]) containing all findings from Steps 4.1–4.4. The record is signed (digitally or physically) by the verifier and dated. It is immutable once completed.
Output: A completed Verification Record. The Claim Record is updated with the resulting Verification State and confidence rating. The Verification Record is linked to the Claim Record.
Phase 5: Documentation & Publication
Input: Verified (or otherwise resolved) Claim Record and Verification Record.
Objective: Ensure the claim is recorded in a durable, accessible, and auditable form.
Step 5.1 — Format the Claim Record
Complete the Claim Record in the canonical VERA format (see Pattern Template for the full template). Ensure all mandatory fields are populated and all linked documents (evidence items, verification record) are accessible via their references.
Step 5.2 — Register the Claim
Enter the claim in the applicable claim registry — organizational, team, or personal. The registry provides:
- A searchable catalog of existing claims (prevents duplicate effort)
- A mechanism for downstream claims to reference upstream claims
- An audit trail of the claim’s version history
Step 5.3 — Notify Downstream Claims
If this claim has already been cited as evidence in other claims, notify the maintainers of those downstream claims. A change in verification state, confidence rating, or claim scope may require re-evaluation of downstream claims.
Step 5.4 — Establish Review Cadence
Not all claims remain valid indefinitely. For time-sensitive domains, establish a review schedule:
| Domain sensitivity | Recommended review interval |
|---|---|
| High (regulatory, medical, financial) | 6 months or upon material change |
| Moderate (organizational policy, technical standards) | 12–18 months |
| Low (historical, conceptual) | 36 months or upon challenge |
A claim that has not been reviewed within its cadence is flagged as Stale — not Refuted, but requiring attention before use in new reasoning.
Output: The claim is registered, formatted, linked to its verification record, and scheduled for review. The claim is now available for use as evidence in other claims.
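The Stale flag from Step 5.4 reduces to a date comparison against the cadence table. A minimal sketch, assuming Python; the Moderate interval is pinned to the table's 18-month upper bound, and the 30-day month approximation is a simplification.

```python
# Sketch of the Stale flag (Step 5.4): a claim unreviewed past its
# cadence is flagged for attention, not Refuted. Intervals follow the
# table above; Moderate uses the 18-month upper bound, and months are
# approximated as 30 days for simplicity.
from datetime import date, timedelta

REVIEW_INTERVAL_MONTHS = {"High": 6, "Moderate": 18, "Low": 36}

def is_stale(last_review: date, sensitivity: str, today: date) -> bool:
    months = REVIEW_INTERVAL_MONTHS[sensitivity]
    deadline = last_review + timedelta(days=months * 30)
    return today > deadline
```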
Special Situations
AI-Assisted Claims
When a large language model or other AI system contributes substantially to claim formulation, evidence identification, or reasoning chain construction, the involvement must be documented:
- Model and version used
- Prompts issued (or a representative summary)
- Outputs used verbatim versus interpreted
- Human review performed on each AI output before incorporation
AI-assisted claims are subject to the same verification criteria as human-generated claims. AI involvement does not add to or subtract from evidence quality ratings, but the reasoning chain must document where human judgment verified AI-generated reasoning steps.
Contested Claims
When a Verified claim is formally challenged:
- The challenge is logged in the Verification Record, with the challenger’s identity and the basis for the challenge
- The claim transitions to Contested (◈) state
- An independent verifier is assigned to re-evaluate the specific points of challenge
- The re-evaluation produces a new Verification Record linked to the original
- Based on re-evaluation: the claim returns to Verified, or is downgraded to Partial or Refuted
Challenges must be substantive — they must identify specific criteria that the original verification failed to assess correctly. A challenge that merely asserts disagreement without identifying a verification failure is recorded but does not trigger re-evaluation.
Urgent Verification
In time-sensitive situations, a modified verification path is permitted:
- The claim is marked Pending-Urgent (◐!) with a documented rationale for urgency
- Verification criteria E1, R1, R2, and F1 are assessed immediately (the minimum viable set)
- The claim is published with verification state Partial-Urgent (◑!) and a confidence rating calculated only from criteria assessed
- Full verification proceeds concurrently
- The claim is updated to its full verification state within 72 hours, or escalated to the team’s governance process if this is not possible
Urgent verification must not become routine. A registry showing more than 15% of claims in Urgent status indicates a governance problem, not a verification problem.
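The 15% governance threshold above is straightforward to monitor. A sketch of such a registry check, assuming urgent states are recorded with an "-Urgent" suffix as in the state names above (the helper itself is hypothetical):

```python
# Illustrative monitor for the 15% governance threshold above.
# Assumes urgent-path states carry an "-Urgent" suffix, matching
# Pending-Urgent and Partial-Urgent as named in the text.
def urgent_ratio_alert(states: list, threshold: float = 0.15) -> bool:
    """states: verification-state strings for all registered claims."""
    if not states:
        return False
    urgent = sum(1 for s in states if s.endswith("-Urgent"))
    return urgent / len(states) > threshold
```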
Proceed to Pattern Template to learn how verified claims and reasoning are packaged into reusable patterns.
Pattern Template
A VERA pattern is a reusable, documented solution to a recurring challenge in evidence management, reasoning construction, or verification practice. Patterns are not informal tips or best practices — they are structured knowledge objects that follow the canonical template defined in this chapter, and they are themselves subject to VERA verification.
This chapter defines the Pattern Template in full, explains each field, provides guidance on when and how to use existing patterns, and describes how new patterns are proposed, documented, and verified.
Why Patterns
Every organization that systematically applies VERA will encounter the same recurring challenges: How do you handle evidence from a source that has a conflict of interest? How do you reason about absence of evidence? How do you construct a reasoning chain when the best available evidence is Testimonial-tier? How do you verify a claim when the domain expert and the claimant are the same person?
These challenges have been encountered before. In most cases, they have been worked through — imperfectly at first, then with increasing clarity — and the solutions are transferable. VERA patterns capture those solutions so that each team doesn’t rediscover them from scratch.
Patterns also serve a normative function. When a VERA community agrees that a pattern represents the right way to handle a recurring situation, deviations from that pattern become visible and require explicit justification. This is not rigidity — it is accountability. If your context genuinely requires a different approach, document why.
The Canonical Pattern Template
The following template is mandatory for all VERA patterns. Optional fields may be omitted if genuinely not applicable, but their absence must be intentional — not accidental.
Field-by-Field Reference
Pattern Header
# Pattern: [Name]
The pattern name should be:
- Descriptive of the situation or solution, not abstractly titled
- Consistent with VERA naming conventions (Title Case, no jargon)
- Unique within the Patterns Library
Good names: “Conflicted Source Disclosure,” “Absence of Evidence Assessment,” “AI-Assisted Claim Documentation”
Poor names: “Pattern 7,” “The Hard Case,” “Evidence Thing”
Pattern ID
Pattern ID: VERA-P-[NNNN]
Assigned by the pattern registry upon submission. Format is VERA-P- followed by a four-digit sequential number. The ID is permanent and does not change if the pattern is revised.
Classification
Domain: [Evidence | Reasoning | Verification | Governance | Sovereignty | Integration]
Applicability: [Individual | Team | Organization | All]
Complexity: [Simple | Moderate | Complex]
Maturity Level: [1 | 2 | 3 | 4 | 5]
- Domain: The primary VERA domain this pattern addresses. A pattern may span domains; list the primary one.
- Applicability: Whether this pattern is most relevant to individual practitioners, teams, or organizational systems.
- Complexity: A rough guide to implementation effort. Simple = < 30 minutes to implement correctly. Moderate = requires planning. Complex = requires architectural decisions or significant tooling.
- Maturity Level: The minimum VERA maturity level at which this pattern is typically applied. A Level 2 pattern can be applied by a Level 2 or higher practitioner; it is unlikely to be useful at Level 1.
Context
## Context
[2–4 paragraphs describing the situation in which this pattern applies. Include:
- The type of claim or reasoning scenario
- The organizational or individual circumstances
- What makes this situation distinctive enough to warrant a documented pattern
- Any prerequisites (tools, practices, or prior VERA maturity) needed]
The context section is not a problem statement — it is a situation description. It answers: When does someone find themselves needing this pattern?
Problem
## Problem
[1–2 paragraphs stating the specific challenge the pattern addresses.
Use precise VERA language. Reference Lexicon terms by name.
State the problem as it actually appears — the symptom the practitioner notices —
not as the underlying theoretical issue.]
Example of a well-stated problem: “When a claim’s most relevant evidence comes from a single source that also has a financial interest in the claim’s truth, the verification criterion E3 (Independence Adequacy) cannot be met in the conventional sense. Verifiers are uncertain whether to return the claim as unverifiable or to proceed with an acknowledged limitation.”
Example of a poorly stated problem: “Evidence quality is hard to assess when sources are biased.”
Forces
## Forces
- [Force 1: a competing concern or constraint that the solution must navigate]
- [Force 2: ...]
- [Force 3: ...]
- [Force 4 (optional): ...]
Forces are the tensions that make this a genuine problem rather than a simple question with an obvious answer. They explain why the pattern is needed — if there were no competing concerns, the right answer would be obvious.
Example forces for the conflicted source problem:
- The evidence from the conflicted source may be the only available evidence of this type
- Discarding it entirely loses real information
- Using it without disclosure violates Evidence Primacy
- Flagging it too prominently may make the claim appear weaker than warranted
Solution
## Solution
[3–6 paragraphs describing the solution. The solution must be specific enough
to implement without judgment calls that are not themselves documented.
It should address each Force listed above, explaining how the solution
navigates the tension.]
The solution section is the pattern’s core. It should read as clear, implementable guidance — not general advice. If the solution requires making a judgment call, it must specify what the judgment criteria are.
Implementation Steps
## Implementation
1. [Step 1 — specific action with clear completion criterion]
2. [Step 2]
3. [...]
Implementation steps are numbered and sequential. Each step has a completion criterion — something observable that tells you the step is done. Steps should be atomic enough that they can be delegated and checked.
Evidence Requirements
## Evidence Requirements
[What evidence must be assembled before or during application of this pattern.
Specify quality tier minimums where applicable.
Note any evidence types that are specifically required or specifically prohibited.]
This section distinguishes VERA patterns from general methodology: every pattern specifies what evidence standards it requires.
Verification Criteria
## Verification Criteria
[How to verify that the pattern has been correctly applied.
List observable outcomes — not activities. "The verifier reviewed the claim"
is an activity. "The Verification Record contains a documented independence
assessment for each evidence item" is an observable outcome.]
Consequences
## Consequences
**Benefits:**
- [Benefit 1]
- [Benefit 2]
**Liabilities:**
- [Liability 1]
- [Liability 2]
Patterns have costs as well as benefits. Documenting liabilities is not a weakness — it is what distinguishes an honest pattern from marketing. A pattern with no liabilities is either trivial or poorly analyzed.
Known Uses
## Known Uses
- [Organization/context 1]: [Brief description of how pattern was applied and what was learned]
- [Organization/context 2]: [...]
Known uses are the empirical foundation of a pattern. A pattern documented without any known uses is a proposal, not a pattern. During review, the absence of known uses is noted; upon accumulation of two or more documented uses, the pattern is upgraded to Verified status.
Related Patterns
## Related Patterns
- [VERA-P-NNNN — Pattern Name]: [How this pattern relates]
- [...]
Verification Status
## Verification Status
State: [Proposed | Under Review | Verified | Deprecated]
Verifier: [Name / Team]
Verified on: [Date]
Record: [VERA-V-NNNN]
Confidence: [0.0–1.0]
Patterns, like claims, carry verification status. A Proposed pattern has been submitted but not reviewed. Under Review means active evaluation is in progress. Verified means the pattern has met VERA verification criteria and has at least two documented Known Uses. Deprecated means the pattern has been superseded or found to be ineffective.
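The pattern lifecycle implies a small state machine. The allowed transitions below are an assumption inferred from the prose (for example, that Verified requires at least two documented Known Uses, and that Deprecated is terminal); this chapter does not define them normatively.

```python
# Illustrative state machine for the pattern lifecycle above.
# The transition set is inferred from the text, not normative.
ALLOWED = {
    "Proposed": {"Under Review"},
    "Under Review": {"Verified", "Deprecated", "Proposed"},
    "Verified": {"Deprecated"},
    "Deprecated": set(),            # assumed terminal
}

def transition(state: str, new_state: str, known_uses: int = 0) -> str:
    if new_state not in ALLOWED.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {new_state}")
    if new_state == "Verified" and known_uses < 2:
        # Verified status requires two or more documented Known Uses.
        raise ValueError("Verified requires at least two Known Uses")
    return new_state
```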
Complete Template (Copy-Ready)
# Pattern: [Name]
**Pattern ID:** VERA-P-[NNNN]
| Field | Value |
|-------|-------|
| Domain | [Evidence / Reasoning / Verification / Governance / Sovereignty / Integration] |
| Applicability | [Individual / Team / Organization / All] |
| Complexity | [Simple / Moderate / Complex] |
| Maturity Level | [1–5] |
---
## Context
[Describe the situation in which this pattern applies.]
## Problem
[State the specific challenge.]
## Forces
- [Force 1]
- [Force 2]
- [Force 3]
## Solution
[Describe the solution in detail.]
## Implementation
1. [Step 1]
2. [Step 2]
3. [Step 3]
## Evidence Requirements
[Specify evidence standards required.]
## Verification Criteria
[List observable outcomes that confirm correct application.]
## Consequences
**Benefits:**
- [Benefit 1]
**Liabilities:**
- [Liability 1]
## Known Uses
- [Context 1]: [Description]
## Related Patterns
- [VERA-P-NNNN — Name]: [Relationship]
---
## Verification Status
| Field | Value |
|-------|-------|
| State | [Proposed / Under Review / Verified / Deprecated] |
| Verifier | [Name] |
| Verified on | [Date] |
| Record | [VERA-V-NNNN] |
| Confidence | [0.0–1.0] |
A Worked Example: The Absence-of-Evidence Pattern
To illustrate the template in use, the following is a partial example of a real VERA pattern.
Pattern: Absence-of-Evidence Assessment
Pattern ID: VERA-P-0001
| Field | Value |
|---|---|
| Domain | Evidence |
| Applicability | All |
| Complexity | Moderate |
| Maturity Level | 2 |
Context
In the course of assembling an evidence set (Verification Protocol Phase 2), practitioners regularly encounter a situation where evidence of an expected type is absent. This might mean the expected evidence genuinely does not exist, that it exists but was not found, or that it was found and selectively omitted.
The VERA requirement to document absent evidence (Step 2.5) creates a documentation obligation but does not specify how to reason about the significance of that absence. This pattern addresses the analytical work required at Step 2.5.
## Problem
The practitioner has completed an evidence search and finds that a category of evidence listed in the prospective search plan (Step 2.1) yielded no results. The claim’s verification status depends partly on how materially the absent evidence weakens the claim — but there is no standard method for assessing materiality.
Without guidance, practitioners tend toward one of two errors: dismissing the absence as unimportant (understating the evidential gap) or treating any absence as fatal to the claim (overstating it). Neither produces calibrated epistemic confidence.
## Forces
- The absence of evidence genuinely varies in significance: absent safety records for a medical device are more significant than absent sales records for the same period.
- The claim’s scope may be adjustable to exclude the area the absent evidence would have addressed.
- Documenting absent evidence as “neutral” when it is actually significant constitutes a form of selective citation.
- Marking every absence as “significantly weakening” would make most claims unverifiable.
## Solution
Assess the significance of each absent evidence item on three dimensions:
1. Expected vs. unexpected. Evidence that should exist given the claim’s scope, and whose absence is therefore unexplained, is more significant than evidence that might exist but whose absence is explicable.
2. Substitutability. Can the absent evidence be replaced by a different evidence type that addresses the same claim component? If yes, the absence is less significant — provided the substitute is assembled. If no, the absence leaves a genuine gap.
3. Directionality. If the absent evidence existed, would it more likely support or contradict the claim? When the absent evidence would likely support the claim and it is still absent, the gap is moderate. When the absent evidence would likely contradict the claim (the “file drawer problem”) and is absent, the gap is serious — and warrants explicit notation.
For each absent evidence item, apply this assessment and assign one of three materiality ratings:
- Non-material: Absence is explainable and substitutable; confidence impact < 0.05
- Moderate: Absence represents a genuine gap; note in reasoning chain; confidence impact 0.05–0.15
- Significant: Absence leaves a material claim component unsupported; must be acknowledged in the claim statement or the claim scope must be revised
## Implementation

1. List every evidence type on the prospective plan that was not found.
2. For each, answer the three assessment questions (Expected? Substitutable? Directionality?).
3. Assign a materiality rating (Non-material / Moderate / Significant).
4. For Moderate absences: document in the reasoning chain as an acknowledged limitation.
5. For Significant absences: either (a) revise the claim’s scope to exclude the unsupported component, or (b) downgrade the confidence rating by at least 0.15 and document the absence prominently in the claim record.
6. Record the complete absence assessment in the Evidence Set documentation.
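The assessment and the confidence adjustment above amount to a small decision procedure, sketched below in Python. This is illustrative only: the mapping from the three dimensions to a rating, and the 0.10 midpoint used for Moderate impacts, are assumptions layered on the pattern; only the 0.05 and 0.15 thresholds come from the Solution section, and the names (`AbsentEvidence`, `assess`, `adjust_confidence`) are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

class Materiality(Enum):
    NON_MATERIAL = "non-material"  # absence explainable and substitutable
    MODERATE = "moderate"          # genuine gap; note in reasoning chain
    SIGNIFICANT = "significant"    # material claim component unsupported

@dataclass
class AbsentEvidence:
    evidence_type: str
    expected: bool            # should this evidence exist, given the claim's scope?
    substitutable: bool       # can another evidence type cover the same component?
    likely_contradicts: bool  # would it more likely contradict the claim if it existed?

def assess(item: AbsentEvidence) -> Materiality:
    """Map the three assessment dimensions onto a materiality rating.

    This mapping is one plausible reading of the pattern, not a mandated rule.
    """
    if item.likely_contradicts:
        # "File drawer" case: likely-contradicting evidence that is absent.
        return Materiality.SIGNIFICANT
    if item.expected and not item.substitutable:
        return Materiality.SIGNIFICANT
    if item.expected or not item.substitutable:
        return Materiality.MODERATE
    return Materiality.NON_MATERIAL

def adjust_confidence(confidence: float, ratings: list) -> float:
    """Apply illustrative confidence impacts per rating.

    0.10 is an assumed midpoint of the 0.05-0.15 band for Moderate;
    0.15 is the pattern's minimum downgrade for Significant.
    """
    impact = {Materiality.NON_MATERIAL: 0.0,
              Materiality.MODERATE: 0.10,
              Materiality.SIGNIFICANT: 0.15}
    for rating in ratings:
        confidence -= impact[rating]
    return max(confidence, 0.0)
```

Under this sketch, the Forces example behaves as expected: absent safety records (expected, not substitutable) would rate Significant, while absent sales records for the same period may rate Non-material.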
## Verification Criteria
- The Verification Record confirms that the verifier reviewed the evidence search plan, not only the evidence collected.
- Each absent evidence item from the prospective plan appears in the Claim Record with an assigned materiality rating.
- No evidence type rated Significant is absent from the claim record’s limitations section.
- The confidence rating reflects adjustments for Moderate and Significant absences.
## Consequences

**Benefits:**
- Prevents selective citation by making the absence assessment explicit and auditable.
- Provides a calibrated method for adjusting confidence that avoids both dismissal and over-reaction.
- Creates a clear record for downstream verifiers and claim users.
**Liabilities:**
- Adds analytical time to Phase 2.
- Directionality assessment (step 3) requires domain judgment; practitioners without domain expertise may need consultation.
## Known Uses
- Internal policy team, financial services firm (2024): Applied this pattern to regulatory impact assessments; identified two Significant absences that led to scope revisions, preventing two subsequent claims from being challenged during regulatory review.
- Research team, academic institution (2025): Used during systematic review to distinguish non-publication bias (absent studies likely neutral) from selective non-reporting (absent data likely directional).
## How to Propose a New Pattern
Patterns emerge from practice. If you encounter a recurring challenge that is not addressed by an existing pattern:
1. **Document the first occurrence** using the evidence and reasoning of the situation itself — the claim record for the challenging situation becomes data for the eventual pattern.
2. **Look for recurrence.** Before proposing a pattern, verify that you’ve encountered the same challenge at least twice in distinct contexts.
3. **Draft the pattern** using the full template. Mark it as Proposed with no verifier.
4. **Submit to the Patterns Library** with your draft. The submission triggers a review process in which at least two experienced VERA practitioners evaluate the pattern against the template requirements.
5. **Accumulate Known Uses.** As the proposed pattern is applied (with appropriate documentation) in real situations, those uses are added to the Known Uses section.
6. **Verification.** Once the pattern has two documented Known Uses and passes template review, it is moved to Verified status.
Patterns are living documents. A Verified pattern that proves ineffective in practice is flagged, documented, and eventually Deprecated — with documentation of why it failed, which is itself valuable knowledge.
Proceed to Sovereignty Principles to understand the commitments that constrain how VERA is implemented and governed.
# Sovereignty Principles
“To reason well is not enough. You must reason in a way you can own — where the evidence is yours to inspect, the logic is yours to challenge, and the conclusion is yours to reach.”
## What Sovereignty Means in VERA
Sovereignty, as used in VERA, is not a political concept. It is an epistemic one.
Epistemic sovereignty is the condition of having genuine authority over the knowledge you act on. It requires four capacities:
- The ability to access the evidence underlying a claim
- The ability to inspect the reasoning chain connecting evidence to conclusion
- The ability to challenge both evidence and reasoning through a defined process
- The ability to reach your own conclusion rather than being required to accept one
These four capacities can be eroded in ways that are subtle and often invisible. An organization that delegates fact-finding to a consultant, accepts AI-generated reasoning without review, or builds critical decisions on evidence it cannot locate or audit has ceded epistemic sovereignty — even if all four capacities are nominally intact.
VERA’s Sovereignty Principles are a set of binding design constraints that prevent sovereignty erosion in VERA-compliant implementations. They are not recommendations. Any VERA implementation that violates them is not VERA-compliant, regardless of how faithfully it follows the rest of the framework.
## The Five Sovereignty Principles
### Principle S1: Data Sovereignty

**You retain ultimate ownership and access rights to every evidence item on which your claims depend.**
Evidence that you cannot access, cannot export, and cannot independently verify is not evidence you own — it is evidence you borrow from whoever controls it. Borrowed evidence is legitimate; undisclosed borrowed evidence is a sovereignty violation.
**What this requires:**

- **Access:** For every evidence item in a claim’s evidence set, the claimant must be able to locate and access the original source. A citation to a paywalled study, a proprietary dataset, or a deleted document must be accompanied by documentation of how access was obtained and what access limitations exist.
- **Export:** Evidence stored in a VERA tooling system must be exportable in a standard format. Vendor lock-in that prevents evidence migration violates this principle. If the tool that holds your evidence stops working, your evidence must remain accessible.
- **Audit trail:** The chain of custody for each evidence item — who accessed it, when, what transformation (if any) was applied — must be documented and retained alongside the evidence itself.
**What this does not require:**
Data Sovereignty does not require that evidence be stored locally or that all evidence be primary-tier. Using cloud-hosted evidence repositories, licensed databases, or third-party evidence services is consistent with S1, provided the access, export, and audit requirements are met.
**Common violations:**
- Citing AI-generated evidence summaries without retaining the underlying sources
- Building claims on evidence stored in a system where organizational access depends on a vendor contract that could lapse
- Accepting “the AI found evidence” as a substitute for documented, accessible evidence items
### Principle S2: Reasoning Sovereignty

**You retain the ability to inspect, trace, and challenge every step in any reasoning chain that informs your decisions.**
Reasoning you cannot see is reasoning you cannot own. When a reasoning chain — whether produced by a person, a team, or an AI system — is not exposed to the person whose decision it informs, that person is making a decision based on conclusions they have been handed rather than conclusions they have reached.
**What this requires:**

- **Transparency:** Reasoning chains must be written out in full (per the Verification Protocol, Phase 3). Summary reasoning — “after analysis, we concluded…” — is not a reasoning chain.
- **Traceability:** Every step in a reasoning chain must cite its premises, and every premise must trace to either a documented evidence item or a prior verified step. An orphaned premise — one that appears without sourcing — violates S2.
- **Challengeability:** A defined process must exist for any stakeholder affected by a claim to formally challenge its reasoning. The challenge process must be capable of producing a change in outcome — a process that acknowledges challenges without the capability to affect the claim is theater, not sovereignty.
**AI reasoning and S2:**
AI systems that produce reasoning — including large language models generating explanations, analytical systems producing recommendations, or decision-support tools producing conclusions — must expose that reasoning in a form that meets VERA’s traceability requirements. “The model produced this conclusion” is not a reasoning chain. The model’s stated reasoning must be captured, documented, and subjected to the same verification criteria as human-produced reasoning.
This does not mean AI reasoning must be trusted. It means it must be visible.
**Common violations:**
- Presenting AI-generated conclusions without the AI’s stated reasoning
- Using “black box” model outputs as premises in reasoning chains without documentation of what the model was asked and what it produced
- Organizational processes that require accepting expert conclusions without access to expert reasoning
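S2’s traceability requirement is mechanically checkable. The sketch below assumes a hypothetical claim-record shape in which evidence items and reasoning steps carry string IDs and each step lists the premises it cites; `orphaned_premises` is an illustration, not a VERA-defined function.

```python
def orphaned_premises(evidence_ids: set, chain: list) -> list:
    """Return (step_id, premise) pairs whose premise is neither a documented
    evidence item nor an earlier step in the chain: an S2 violation.

    `chain` is an ordered list of dicts like {"id": "R1", "premises": ["E1"]},
    a record shape assumed here for illustration.
    """
    known = set(evidence_ids)
    orphans = []
    for step in chain:
        for premise in step["premises"]:
            if premise not in known:
                orphans.append((step["id"], premise))
        known.add(step["id"])  # later steps may legitimately cite this one
    return orphans
```

A chain passes the check only when the returned list is empty; each entry names the step containing an unsourced premise, which is exactly the record a challenger needs.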
### Principle S3: Conclusion Sovereignty

**The person or organization whose decisions are affected by a claim retains the right to reach their own conclusion, including a conclusion that differs from the verified claim.**
Verification does not obligate acceptance. A claim may be fully Verified under VERA’s protocol and still be reasonably rejected by a stakeholder who has access to information, context, or values that affect how the claim should be weighted in a specific decision.
**What this requires:**

- **Non-coercion:** VERA verification is advisory, not prescriptive. A governance framework that requires acceptance of any verified claim as a condition of participation violates S3.
- **Documented disagreement:** When a stakeholder rejects a verified claim, they should document their reasoning. This documentation protects both the stakeholder (by making their position traceable) and the claim (by creating a record of challenges that may prompt re-verification).
- **No penalty for challenge:** The VERA process must be designed so that challenging a claim — formally and through the defined process — does not carry institutional penalties. If challenging claims is professionally risky, claims will not be challenged even when they should be.
**The limits of Conclusion Sovereignty:**
S3 does not mean that verified claims are irrelevant. In contexts with explicit governance structures — organizational policy, regulatory compliance, scientific publication — verified VERA claims carry weight that undocumented disagreement does not automatically override. What S3 prohibits is the structural impossibility of disagreement, not the consequence of minority positions in legitimate governance processes.
### Principle S4: Process Sovereignty

**The verification process that validates your claims must be auditable and must be capable of failing.**
A verification process you cannot audit is one you cannot trust. A verification process that never fails is one that is not actually verifying.
**What this requires:**

- **Auditability:** The criteria, procedure, and record of every verification event must be accessible to the claimant, to affected stakeholders, and (in organizational contexts) to the governance function. Verification records are not confidential.
- **Falsifiability:** Verification criteria must be stated in terms that allow claims to fail. Criteria that are designed to be satisfied by any well-formatted claim — regardless of the actual quality of evidence and reasoning — are not VERA-compliant criteria.
- **Independence from the verified:** Verification processes controlled exclusively by the people who want the claim verified are not independent. S4 requires that, at minimum, the verification criteria be set independently of the claimant, even if resource constraints limit full verifier independence.
- **Accountability for verifiers:** Verifiers who consistently approve weak claims, or who have persistent conflicts of interest, must be identifiable through the verification record and subject to review. Anonymous verification is not consistent with S4 in high-stakes contexts.
**Common violations:**
- Verification processes that score all submitted claims as Verified unless they have obvious errors
- Verification criteria that are not published before claim submission (criteria set after reviewing the claim are not independent)
- Organizational cultures in which “failed verification” is treated as a procedural failure rather than the expected outcome of rigorous review
### Principle S5: Temporal Sovereignty

**You retain authority over when claims are verified, when they are reviewed, and when they are retired — without pressure to accelerate or delay these processes for non-epistemic reasons.**
Epistemic quality is degraded by non-epistemic time pressure. A claim verified in an hour because a decision must be made today may be formally compliant while being practically inadequate. A claim left unreviewed for years because its review would be inconvenient is a different kind of failure. Both represent sovereignty violations: in the first case, decision pressure substitutes for epistemic rigor; in the second, institutional inertia prevents appropriate updating.
**What this requires:**

- **Documented urgency:** When Urgent Verification (see Verification Protocol) is invoked, the urgency must be documented and the resulting partial verification must be labeled transparently. Urgency is a legitimate reason to adjust process; it is not a legitimate reason to misrepresent a partial review as a full one.
- **Review cadence:** Claims must have documented review schedules (Verification Protocol Phase 5, Step 5.4), and reviews must occur on schedule regardless of convenience. A claim whose review falls inconveniently before a product launch, policy renewal, or contract negotiation cannot be exempted from its review schedule on that basis.
- **Retirement rights:** The claimant or the governance function may retire a claim — marking it as no longer in active use — without that retirement constituting a concession that the claim was wrong. Claims are retired because they are superseded, out of scope, or no longer referenced. Retirement is distinct from Refutation.
- **No false urgency:** Declaring urgency to bypass verification rigor and delaying review to protect a convenient claim are both governance violations, and both must be auditable through the review record.
## Sovereignty in Practice

The five principles interact. A common real-world pattern is the *sovereignty degradation cascade*: an organization loses Data Sovereignty (S1) because its evidence is locked in a proprietary system; this undermines Reasoning Sovereignty (S2) because reasoning chains cannot be traced to accessible evidence; which erodes Conclusion Sovereignty (S3) because stakeholders have no meaningful basis for challenge; which eventually compromises Process Sovereignty (S4) because the governance process has nothing auditable to work with.
Restoring sovereignty often requires working backwards: starting with Data Sovereignty — ensuring evidence is accessible — before attempting to improve reasoning transparency.
## Sovereignty vs. Convenience
Sovereignty is inconvenient. It requires documentation that takes time, transparency that creates vulnerability, and processes that can be challenged. Organizations that are serious about VERA must accept this cost as a deliberate choice.
The alternative is epistemic convenience: fast, smooth reasoning that feels authoritative and is difficult to audit. Epistemic convenience compounds. Each shortcut erodes the infrastructure needed to catch the next error. VERA’s sovereignty principles are designed to prevent that compounding.
## Sovereignty and AI
AI systems are the most significant current threat to epistemic sovereignty — not because they are malicious, but because they are designed for friction reduction. They produce conclusions without exposed reasoning. They summarize evidence without preserving sources. They provide answers without documenting the question-formulation process. They make reasoning fast and opaque simultaneously.
VERA’s sovereignty principles require that wherever AI touches the evidence-reasoning-verification pipeline, its contribution be made visible, traceable, and challengeable. This is not anti-AI. AI tools that are designed for VERA compliance can dramatically accelerate evidence assembly, reasoning chain construction, and pattern matching — while preserving the sovereignty that makes those outputs trustworthy.
The design principle for AI in a VERA context is: *AI does the work; humans own the record.*
## Organizational vs. Individual Sovereignty
VERA applies at both individual and organizational levels, and the sovereignty principles operate at both:
**Individual sovereignty:** An individual practitioner applies VERA to their own reasoning. Their sovereignty over claims they make and evidence they hold is personal. They are accountable to themselves (and, in professional contexts, to the standards of their practice).
**Organizational sovereignty:** An organization applies VERA to its institutional reasoning — policies, decisions, communications, research. Organizational sovereignty means the organization (not any particular vendor, consultant, or AI provider) retains the epistemic capacities described in the five principles. Individual members of the organization may delegate evidence access or reasoning support to external parties; the organization itself may not cede the sovereignty capacities.
## Sovereignty Assessment
The following questions are used to assess the degree of sovereignty in a VERA implementation. Each “No” answer identifies a specific sovereignty gap.
**Data Sovereignty (S1):**
- Can we access the original source of every evidence item in our active claims?
- Can we export all evidence from the systems that store it?
- Do we have a documented chain of custody for each evidence item?
**Reasoning Sovereignty (S2):**
- Are reasoning chains documented in full, step by step, for all significant claims?
- Can any stakeholder who is affected by a claim trace its reasoning chain to its evidence items?
- Is there a defined process for challenging reasoning, and is it capable of producing changed outcomes?
**Conclusion Sovereignty (S3):**
- Is it possible for someone to formally disagree with a verified claim without facing institutional penalties?
- Are documented disagreements retained in the claim record?
**Process Sovereignty (S4):**
- Are verification criteria published before claims are submitted for verification?
- Are verification records accessible to claimants and affected stakeholders?
- Can we identify which verifiers approved which claims, and audit their independence?
**Temporal Sovereignty (S5):**
- Are review cadences documented for all active claims?
- Are claims reviewed on schedule, regardless of institutional convenience?
- Is urgency-based verification labeled as such and followed up?
An implementation that answers “Yes” to all of these questions has achieved operational sovereignty. The Maturity Model’s Sovereignty domain measures progress toward this state across five maturity levels.
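As a worked illustration, the assessment can be represented as a checklist keyed by principle, where each “No” answer surfaces as a named gap. The question wording below is abbreviated from the lists above, and the data shapes (`ASSESSMENT`, the `(principle, question)` answer keys) are assumptions made for the sketch.

```python
# Abbreviated restatement of the Sovereignty Assessment questions.
ASSESSMENT = {
    "S1 Data": ["access original sources", "export all evidence",
                "chain of custody documented"],
    "S2 Reasoning": ["chains documented in full", "stakeholders can trace to evidence",
                     "challenge process can change outcomes"],
    "S3 Conclusion": ["disagreement without penalty",
                      "disagreements retained in claim record"],
    "S4 Process": ["criteria published before submission", "records accessible",
                   "verifiers identifiable and auditable"],
    "S5 Temporal": ["review cadences documented", "reviews occur on schedule",
                    "urgent verification labeled and followed up"],
}

def sovereignty_gaps(answers: dict) -> dict:
    """Each 'No' answer identifies a specific sovereignty gap, keyed by principle."""
    gaps = {}
    for principle, questions in ASSESSMENT.items():
        missing = [q for q in questions if not answers.get((principle, q), False)]
        if missing:
            gaps[principle] = missing
    return gaps

def operational_sovereignty(answers: dict) -> bool:
    """Operational sovereignty: 'Yes' to every question under every principle."""
    return not sovereignty_gaps(answers)
```

The gap report, not the boolean, is the useful output: it names the specific capacity to remediate, which is how the Maturity Model’s Sovereignty domain tracks progress.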
This concludes the Foundations section. Proceed to the Maturity Model Overview to understand how VERA capability is assessed and developed across organizations.
# Overview: Five Levels × Six Domains
The VERA Maturity Model is a structured framework for assessing and developing an organization’s or individual’s capability to apply VERA practices systematically. It answers two questions: Where are we now? and What does meaningful progress look like?
The model is organized as a grid: five levels of capability on one axis, six domains of practice on the other. Every cell in the grid describes observable, concrete indicators — not aspirations, not general principles, but the specific things that are present or absent at a given level in a given domain.
## Why This Structure
### Five Levels, Not More or Fewer
Three-level models (basic / intermediate / advanced) are too coarse to guide development. They are easy to talk about but provide no traction when you ask: “What exactly do we need to do to move from intermediate to advanced?”
Seven- or ten-level models provide granularity at the cost of usability. Most organizations cannot meaningfully distinguish between adjacent levels, and the cognitive overhead of the model begins to compete with the practice it describes.
Five levels is the calibration point where each level is genuinely distinct, the transitions between levels are actionable, and the model is small enough to hold in working memory. VERA’s five levels draw directly from this reasoning and from the track record of CMMI, which settled on the same number for the same reasons.
### Six Domains, Not One
A single overall maturity score conceals more than it reveals. Organizations routinely develop sophisticated Evidence practices while their Governance domain remains informal. Excellent Verification processes coexist with Sovereignty gaps that undermine the value of that verification. A composite score averaging across these domains would suggest moderate overall maturity while hiding the specific strengths and vulnerabilities that actually determine outcomes.
The six domains were chosen to cover the full VERA lifecycle — from raw evidence to governed, sovereign practice — with minimal overlap and maximal diagnostic value.
## The Six Domains
### 1. Evidence
The Evidence domain measures how well an organization identifies, retrieves, rates, documents, and maintains the evidence items that underlie its claims. It covers the entire evidence lifecycle: prospective search planning, quality rating, independence assessment, chain-of-custody documentation, absent-evidence recording, and long-term accessibility.
The Evidence domain is foundational. Weaknesses here propagate forward into every other domain — you cannot reason well from undocumented evidence, you cannot verify claims whose evidence is inaccessible, and you cannot be sovereign over knowledge you cannot audit.
### 2. Reasoning
The Reasoning domain measures how explicitly and rigorously an organization constructs the logical connection between evidence and conclusion. It covers reasoning chain documentation, inference type labeling, assumption disclosure, completeness, and the organizational practices (review, training, peer critique) that maintain reasoning quality over time.
Reasoning quality is the most cognitively demanding domain to develop. Most organizations have strong intuitions about what “good reasoning” looks like but significant difficulty making those intuitions explicit enough to be taught, reviewed, and improved systematically.
### 3. Verification
The Verification domain measures how consistently and rigorously claims are evaluated against the Verification Protocol, how well the verification process maintains independence, how records are kept, and how contested and refuted claims are managed. It also covers the feedback loop from verification back into claim and reasoning quality.
Verification is where VERA’s epistemic commitments meet organizational reality most directly. A team that documents claims carefully but never verifies them has built good habits that produce unreliable results. Verification closes the loop.
### 4. Governance
The Governance domain measures whether and how VERA practice is institutionalized — mandated, resourced, trained, audited, and improved at an organizational level. Governance is the difference between VERA as a personal practice of a few motivated individuals and VERA as an organizational capability that persists through personnel changes and competing priorities.
Governance typically lags the other domains by one or two levels. Organizations often reach Level 3 Practice in Evidence, Reasoning, and Verification through the efforts of motivated champions, only to find that the practices don’t scale or survive when those champions move on. The Governance domain explicitly tracks this lag.
### 5. Sovereignty
The Sovereignty domain measures how fully the five Sovereignty Principles are implemented in practice. It covers data accessibility and exportability, reasoning chain traceability and challengeability, conclusion sovereignty, process auditability, and temporal sovereignty.
Sovereignty is the most distinctive VERA domain — it has no direct equivalent in most organizational capability frameworks. It explicitly asks: does your VERA implementation preserve the epistemic authority of the people whose decisions depend on it, or does it create new dependencies while eliminating old ones?
### 6. Integration
The Integration domain measures how fully VERA practices are embedded in the organization’s existing workflows, tools, and processes — rather than sitting alongside them as a parallel overhead. Integration measures whether VERA is something people do instead of their work, or something that shapes how they do their work.
Integration is typically the last domain to mature, and appropriately so. It doesn’t make sense to integrate VERA into organizational workflows before those workflows are mature enough to sustain it. But at Level 4 and 5, insufficient Integration becomes a significant drag: sophisticated VERA practice that is disconnected from decision-making processes, reporting structures, and tool ecosystems produces knowledge that doesn’t reach the people who need it.
## The Five Levels
### Level 1 — Aware
VERA concepts are understood. Practitioners can describe the difference between an assertion and a claim, explain why evidence quality matters, and articulate the purpose of verification. What does not yet exist is systematic practice: no claims are formally documented, no evidence sets are assembled, no verification records are kept.
Awareness is not a trivial achievement. Many organizations operate at Level 0 — below Aware — where VERA concepts are entirely unknown and epistemic quality is managed (or not) through informal organizational culture. Level 1 represents genuine understanding; the gap to Level 2 is one of habit and process, not comprehension.
### Level 2 — Exploring
VERA is being applied to selected claims, inconsistently and without institutional mandate. Practice depends on motivated individuals. The quality of VERA work at Level 2 varies significantly — a practitioner who has internalized the Verification Protocol produces genuinely useful documented claims; a practitioner who has only skimmed the framework produces documentation that has VERA’s form without its substance.
Level 2 is valuable and should not be skipped. The experience of applying VERA to real claims — discovering what “prospective search plan” actually means in practice, encountering genuine evidence quality dilemmas, struggling with the first reasoning chain — is irreplaceable preparation for Level 3. Teams that try to jump directly to Level 3 institutionalization without Level 2 experience tend to institutionalize the wrong things.
### Level 3 — Practicing

VERA is applied systematically to all significant claims. The practice is reproducible — it does not depend on which practitioner is working on a given claim — and results are auditable. Anyone in the organization can locate a claim, find its evidence set and reasoning chain, and understand the verification state and confidence rating.

Level 3 is the primary target for most organizations. It delivers the core VERA value proposition — traceable, verifiable reasoning — without the organizational investment that Levels 4 and 5 require.
### Level 4 — Governing
VERA practices are institutionalized, measured, and subject to continuous improvement. A governance function exists that takes responsibility for VERA quality across the organization. Metrics are tracked, trends are analyzed, and the quality of VERA work is treated as a managed organizational capability — like code quality or security posture — rather than an individual responsibility.
Level 4 is appropriate for organizations for whom epistemic quality is strategically important: research institutions, policy-making bodies, organizations in regulated industries, and those whose reputation depends on the quality of their public claims.
### Level 5 — Sovereign
Full epistemic sovereignty is achieved across all five Sovereignty Principles. The VERA framework itself is subject to VERA-style review. The organization actively contributes to the broader VERA knowledge commons. Level 5 is both an organizational achievement and a responsibility: organizations at this level have the capability and the obligation to improve the framework for others.
## The 5×6 Grid
The following grid summarizes the state of each domain at each level. Individual level chapters provide the full detail behind each cell.
| Domain | L1: Aware | L2: Exploring | L3: Practicing | L4: Governing | L5: Sovereign |
|---|---|---|---|---|---|
| Evidence | Quality tiers understood conceptually; no ratings applied | Evidence rated for selected claims; informal search plans; inconsistent documentation | All significant claims have rated evidence sets; prospective search plans documented before search; absent evidence recorded | Evidence quality metrics tracked; trusted source library maintained; quality trends reviewed | Full data sovereignty: all evidence accessible, exportable, with documented chain of custody; evidence infrastructure audited |
| Reasoning | Distinction between assertion and reasoning chain is understood | Reasoning chains written for some claims; informal format; inference types not labeled | All significant claims have step-by-step reasoning chains; inference types labeled; assumptions documented | Reasoning quality assessed against criteria; common gaps tracked; peer reasoning review is routine | Reasoning chains institutionally owned and auditable; AI-assisted reasoning systematically documented; reasoning quality measured in aggregate |
| Verification | Verification concept understood; not distinguished from agreement | Ad hoc verification; verifier often the claimant; informal criteria | Verification Protocol applied consistently; independence assessed; verification records created for all significant claims | Verification quality measured (pass rates, contested rate, time-to-verify); verifier pool managed; criteria reviewed | Verification process auditable externally; criteria publicly documented; challenge process active and produces changed outcomes |
| Governance | VERA known to key staff; no mandate or structure | Informal champion; no budget, policy, or formal mandate | VERA is policy for significant claims; ownership clear; documentation requirements enforced | Governance committee or equivalent; metrics reviewed on cadence; budget allocated; VERA in onboarding | Governance function evaluates itself using VERA methods; organization contributes to community; VERA is leadership-level accountability |
| Sovereignty | Sovereignty concept understood; no assessment conducted | Informal awareness of gaps; S1 partially addressed for highest-priority claims | Full sovereignty assessment complete; all five principles rated; gaps documented with remediation plan | Sovereignty gaps remediated on schedule; vendor dependencies managed; AI tool sovereignty assessed | All five principles fully met; sovereignty assessment is continuous; sovereignty is board-level accountability |
| Integration | VERA seen as separate from existing work | VERA applied alongside existing tools as personal practice; not embedded in workflows | VERA notation in common tools; claims registry accessible and used; VERA visible in knowledge artifacts | VERA integrated into decision gates, reporting, and project management; VERA metrics in governance reporting | VERA is the epistemic layer of the organization; all significant knowledge work is VERA-native |
How to Self-Assess
Self-assessment using the Maturity Model produces a profile — a level rating for each domain — rather than a single score. The profile reveals both strengths and gaps.
Assessment Process
Step 1: Domain-by-domain review. For each of the six domains, read the level descriptions in the individual level chapters and identify the highest level at which all indicators are present. A domain is at Level N only if all Level N indicators are met; meeting only some of them places the domain at Level N-1.
Step 2: Evidence before judgment. The self-assessment must be grounded in evidence — specific, recent examples of actual practice — not general impressions or aspirations. “We generally try to document evidence” is not evidence of Level 3 Evidence practice. A specific claim record showing a prospective search plan, rated evidence items, and documented absent evidence is.
Step 3: Independent review. Self-assessment has an inherent optimism bias. At minimum, have someone outside the immediate team review the evidence you’re using to justify each level. At Level 4, external review is expected.
Step 4: Document the assessment. The assessment itself is a VERA claim: it should have an evidence set (the specific practice examples) and a reasoning chain (how those examples justify each level rating). An assessment that cannot be substantiated with specific examples is an aspiration, not an assessment.
Common Assessment Errors
Averaging across team members. If some practitioners in a team are at Level 3 and others are at Level 1, the team is at Level 1. Domain levels describe the floor of consistent practice, not the average.
Conflating understanding with practice. Knowing what good Evidence practice looks like, being able to describe the Verification Protocol, or having attended VERA training are Level 1 indicators. They do not constitute Level 2 or 3 practice, however fluently they can be articulated.
Selecting best examples. The evidence for a level assessment must be representative, not cherry-picked. If your best claim record shows Level 3 Evidence practice but 80% of your claims have no evidence ratings at all, you are at Level 2.
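The rating rules behind these errors — every indicator at a level must be met, and a team's level is the floor of its members' levels, not the average — can be stated mechanically. The sketch below is illustrative only (VERA is an architecture, not a software system), and the indicator representation is hypothetical:

```python
def domain_level(indicators_met: dict[int, bool]) -> int:
    """Highest level N (2-5) for which ALL indicators are met,
    starting from the Level 1 baseline. Meeting only some Level N
    indicators leaves the domain at Level N-1."""
    level = 1
    for n in range(2, 6):
        if indicators_met.get(n, False):
            level = n
        else:
            break  # partial or missing Level n indicators: stay at n-1
    return level

def team_level(member_levels: list[int]) -> int:
    """A team's domain level is the floor of consistent practice."""
    return min(member_levels)

# All L2 and L3 indicators met, L4 only partially -> Level 3
print(domain_level({2: True, 3: True, 4: False}))  # -> 3
# A team mixing L3 and L1 practitioners -> Level 1
print(team_level([3, 1, 3]))  # -> 1
```

Note that `domain_level` stops at the first unmet level: strong Level 4 practice cannot compensate for an unmet Level 3 indicator.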
Common Trajectories
Different types of organizations tend to follow recognizable patterns across the domain profile:
Research-first trajectory: Evidence and Reasoning develop quickly; Governance and Integration lag. Common in academic and analytical contexts where epistemic quality is culturally valued but institutional formality is resisted.
Compliance-first trajectory: Governance and Verification develop before Evidence and Reasoning, because documentation requirements precede the substance that documentation should capture. Common in regulated industries. Results in formal VERA compliance without epistemic depth; requires deliberate remediation of the substance gaps.
Champion-then-scale trajectory: Evidence, Reasoning, and Verification reach Level 3 through the efforts of motivated individuals; Governance remains at Level 2. The organization then faces a scale problem: the practices exist but are fragile. Governance investment is needed but feels unnecessary when everything is working.
Sovereignty-delayed trajectory: All domains reach Level 3 while Sovereignty remains at Level 1 or 2. Common when VERA is implemented using hosted tools or proprietary platforms without auditing the sovereignty implications. A sovereignty gap at Level 3 is relatively easy to remediate; at Level 4, dependencies have typically deepened.
Domain Interdependencies
Some domain transitions depend on progress in other domains:
- Governance (L3) requires at least Evidence and Reasoning at L2, so there is something substantive to mandate.
- Sovereignty (L3) requires Evidence at L3, because the sovereignty assessment cannot be meaningful if evidence documentation is incomplete.
- Integration (L3) requires Verification at L2, because integrating unverified claims into organizational workflows propagates uncontrolled assertions.
- Level 4 in any domain requires Governance at L3, because measurement and improvement at the domain level cannot be sustained without organizational mandate.
- Level 5 in any domain requires Sovereignty at L4, because an organization that has not achieved operational sovereignty cannot credibly claim epistemic sovereignty in any specific domain.
These dependencies do not prevent development from happening in parallel. They mean that certain level transitions will be blocked until prerequisite conditions are met.
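Assuming a profile is represented as a mapping from domain name to level, the five dependencies above can be sketched as a validation check. This is an illustrative sketch, not part of the framework itself; the domain keys are hypothetical:

```python
def blocked_transitions(profile: dict[str, int]) -> list[str]:
    """Return the dependency rules a six-domain maturity profile violates."""
    issues = []
    if profile["Governance"] >= 3 and (profile["Evidence"] < 2 or profile["Reasoning"] < 2):
        issues.append("Governance L3 requires Evidence and Reasoning at L2")
    if profile["Sovereignty"] >= 3 and profile["Evidence"] < 3:
        issues.append("Sovereignty L3 requires Evidence at L3")
    if profile["Integration"] >= 3 and profile["Verification"] < 2:
        issues.append("Integration L3 requires Verification at L2")
    for domain, level in profile.items():
        if level >= 4 and profile["Governance"] < 3:
            issues.append(f"{domain} L4 requires Governance at L3")
        if level >= 5 and profile["Sovereignty"] < 4:
            issues.append(f"{domain} L5 requires Sovereignty at L4")
    return issues

# A champion-then-scale profile: practices at L3, Governance lagging at L2.
profile = {"Evidence": 3, "Reasoning": 3, "Verification": 3,
           "Governance": 2, "Sovereignty": 2, "Integration": 3}
print(blocked_transitions(profile))  # -> [] (lagging, but no rule violated)
```

An empty result means the profile is internally consistent, not that it is mature: the example profile passes even though Governance investment is overdue.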
Read the individual level chapters for full detail on what each level looks like in practice, what moves you to the next level, and how to self-assess against observable indicators.
Level 1 — Aware
At Level 1, you understand the territory but have not yet entered it. The map exists; no footprints do.
What Level 1 Feels Like
Level 1 is the level of genuine comprehension without systematic practice. A practitioner at Level 1 can explain the difference between an assertion and a claim, articulate why evidence quality matters, and describe what a reasoning chain is for. They may have read this documentation, attended a workshop, or worked through a sample claim. What they have not yet done is apply VERA to claims that matter — in their actual work, on their actual decisions.
This is not a failure state. Level 1 is a genuine achievement over Level 0 (no awareness at all), and it is the necessary precondition for everything that follows. You cannot develop practice before developing understanding. What distinguishes Level 1 from Level 2 is not knowledge — it is application.
For organizations, Level 1 typically means that a handful of people understand VERA and may be advocating for its adoption, but no institutional practice has been established. There are no claim records, no evidence sets, no verification records in the organizational knowledge base. VERA is a future state, not a current one.
The Six Domains at Level 1
Evidence
At Level 1, the four-tier evidence quality rating (Primary, Secondary, Tertiary, Testimonial) is understood conceptually. The practitioner can correctly classify a given evidence item when asked to — a raw dataset is Primary, a peer-reviewed synthesis is Secondary, a textbook summary is Tertiary — but this classification does not happen as part of normal work. Evidence items are not rated in practice; they are simply used.
The concept of evidence independence is understood: multiple news articles citing the same press release are not independent evidence. But source collapse (the error of treating them as independent) continues to occur in practice because no process exists to check for it.
The prospective search plan — documenting what evidence you’re looking for before you look, so that absence can be noticed and recorded — is understood as a principle but not applied. Searches happen; they are not planned and documented.
The concept of absent evidence — the evidential significance of what you didn’t find — is understood at Level 1. It is not yet recorded.
Observable evidence that a domain is at Level 1:
- Staff can correctly rate evidence items when asked to do an exercise
- No claim records in the organizational knowledge base include evidence quality ratings
- When asked “what evidence supports this claim?” practitioners can retrieve evidence but cannot demonstrate a systematic search process
Reasoning
At Level 1, the distinction between a reasoning chain and a conclusion is understood. “We concluded X because Y” is recognized as an incomplete statement — it omits the logical steps connecting Y to X, the inference type used, and the assumptions the argument depends on. The practitioner understands that this incompleteness is a problem, not a stylistic choice.
The four inference types (deductive, inductive, abductive, analogical) are understood as distinct. The practitioner can identify which type is being used when one is pointed out.
What does not yet exist is the discipline of writing reasoning chains. When producing a conclusion, the practitioner typically produces the conclusion and perhaps a summary of supporting considerations — not a step-by-step argument with labeled inference types, documented assumptions, and intermediate conclusions. This is not evasion; it is habit. The habit of explicit reasoning chains develops in Level 2.
Observable evidence that a domain is at Level 1:
- When asked to explain their reasoning, practitioners provide conclusions and supporting observations, not explicit reasoning chains
- Staff recognize reasoning gaps when shown examples but do not catch them in their own work
- No claim records in the organizational knowledge base include step-by-step reasoning chains
Verification
At Level 1, the concept of verification — distinct from agreement, distinct from review, distinct from the original claimant checking their own work — is understood. The practitioner understands why independence matters: the verifier’s job is to look for failures, not to endorse conclusions.
The Verification Protocol is understood at a high level. The practitioner knows it involves five phases and that verification produces a formal record. They have not applied it to a real claim.
Critically, at Level 1 there is often still some conflation between “we reviewed this and agree with it” and “this has been verified.” Verification in VERA means something specific: evaluation against explicit criteria by someone other than the claimant, producing a formal record. Informal agreement is not verification, however carefully the agreement was reached.
Observable evidence that a domain is at Level 1:
- Staff can describe the Verification Protocol phases from memory or with brief reference
- No verification records exist in the organizational knowledge base
- When asked how a key organizational claim was verified, the answer describes agreement, review, or consensus — not a formal verification process
Governance
At Level 1, VERA is known to enough people that the conversation about adoption is possible. Someone — typically the person who encountered VERA and brought it to the organization — understands it well enough to advocate for it. Leadership is aware that VERA adoption has been proposed, even if they have not yet made a decision.
What does not exist is any formal mandate, budget, ownership, or policy. VERA adoption, to whatever degree it exists, is voluntary. No one is required to use VERA formats, no resources have been allocated to VERA implementation, and no one has formal responsibility for VERA practice.
The absence of governance at Level 1 is expected and appropriate. You do not need governance structures for a practice that hasn’t been demonstrated to work in your context. The role of Level 1 Governance is to create the conditions for Level 2 exploration: enough institutional awareness that individual champions can experiment without being obstructed.
Observable evidence that a domain is at Level 1:
- At least one person can articulate the VERA framework clearly to leadership
- No VERA policy, mandate, or formal ownership exists
- VERA has been discussed in at least one organizational forum (team meeting, planning session, working group)
Sovereignty
At Level 1, the concept of epistemic sovereignty — the capacity to access, inspect, challenge, and ultimately own the knowledge you act on — is understood. The practitioner can describe what it would mean for an organization to lack data sovereignty (evidence locked in a vendor system that could lapse) or reasoning sovereignty (decisions made on AI-generated conclusions with no visible reasoning chain).
What has not yet happened is assessment: no one has systematically evaluated which of the five Sovereignty Principles the organization currently meets, which it partially meets, and which it violates. The sovereignty landscape is understood conceptually; it has not been mapped for the current context.
This absence of assessment is the primary Level 1 Sovereignty characteristic. You cannot remediate gaps you haven’t identified, and you cannot identify gaps without assessment. The assessment itself — not the remediation — is what defines Level 2 Sovereignty progress.
Observable evidence that a domain is at Level 1:
- Staff can describe each of the five Sovereignty Principles accurately
- No formal sovereignty assessment has been conducted
- Key questions about the current state — “Can we export all our evidence? Can any stakeholder trace a claim’s reasoning chain? Can anyone formally challenge a verified claim?” — have not been formally answered
Integration
At Level 1, VERA is conceptually understood as something that would complement existing work — that claim documentation would improve knowledge management, that evidence rating would improve decision-making quality, that verification records would improve audit capability. The connection between VERA and existing workflows is apparent.
What does not exist is actual integration. VERA has no presence in the organization’s tools: not in the wiki, not in the project management system, not in the decision log, not in the knowledge base. When work happens, it happens in the organization’s native processes. VERA notation, templates, and formats are theoretical.
The absence of integration at Level 1 is not a problem — it is the expected state. Integration before practice is premature. The Level 1 Integration goal is simply to identify the workflows where VERA would add value, so that integration can be planned during Level 2 exploration.
Observable evidence that a domain is at Level 1:
- No VERA notation, templates, or formats appear in organizational tools
- Staff can identify the workflows where VERA would be most valuable
- No claim registry exists; no decision has been made about where it would live
Signals That Confirm Level 1
Across all six domains, the following are strong signals that an individual or organization is genuinely at Level 1 (and not lower or higher):
- Reading this documentation feels like recognition, not confusion — the concepts are intelligible and coherent
- Reviewing past claims or decisions reveals obvious gaps (absence of evidence documentation, implicit reasoning) that were not obvious before learning VERA
- The first attempt to document a claim using VERA formats feels awkward and effortful — because the habits are not yet formed
- Conversations about VERA adoption produce real engagement (not dismissal and not immediate full buy-in) — the framework is credible but not yet proven in context
What Does Not Qualify as Level 1
Below Level 1:
- VERA is completely unknown
- The person can describe what “evidence” means in everyday language but not the VERA-specific distinction between evidence tiers
- “Verification” means spell-checking or review for consistency, not epistemic evaluation against criteria
Mistaken self-assessment at Level 1:
- “We already do this” — most organizations that feel they already verify claims are performing agreement, review, or consensus, not VERA verification. The key question: do verification records exist? If not, you are at Level 1 or below in the Verification domain, regardless of how carefully decisions are made.
- “We just need to formalize what we’re doing” — this is sometimes true (Level 2 organizations often do have informal VERA-like practices that can be formalized) but is frequently an overestimate. Formalizing a practice reveals that it was less consistent than it felt.
Moving from Level 1 to Level 2
The transition from Level 1 to Level 2 does not require organizational approval, budget, or mandate. It requires one thing: applying VERA to a real claim.
Not a practice claim. Not a simplified example. A claim that actually matters — one that will be used to inform a real decision, support a real argument, or guide a real action. This is what converts understanding into practice and reveals the inevitable gap between the two.
Concrete steps:
1. Select a claim. Choose one claim from your current work that is significant enough to matter. Decompose it if it is compound. State it precisely (Verification Protocol Phase 1).
2. Assemble evidence. Write a prospective search plan first. Then conduct the search. Rate what you find. Document what you don’t find. (Phase 2)
3. Build the reasoning chain. Write it step by step. Label each inference type. Document the assumptions you’re making. (Phase 3)
4. Verify it. Ask someone else — with at least some independence from the claim — to evaluate it against the criteria in Phase 4 of the Verification Protocol.
5. Create a record. The claim record, with its evidence set, reasoning chain, and verification record, is your first documented VERA artifact. It is also your first evidence that Level 2 practice is underway.
The first full VERA claim will take significantly longer than expected. This is normal. The second will be faster. By the fifth, the format will have become familiar enough that the cognitive effort shifts from “remembering the format” to “doing the substantive work” — which is where it belongs.
Level 1 Self-Assessment Checklist
Use this checklist with specific evidence for each item — not general impressions.
Understanding (all must be Yes for Level 1):
- I can define Claim, Evidence Item, Reasoning Chain, and Verification State without looking them up
- I can correctly classify a given evidence item into the four-tier quality scale
- I can describe the five phases of the Verification Protocol at a high level
- I can name and describe all five Sovereignty Principles
- I understand the difference between verification and agreement/review/consensus
Practice (all must be No for Level 1 — if any are Yes, you may be at Level 2):
- I have documented at least one claim using the full VERA format including evidence set and reasoning chain
- I have created at least one Verification Record
- Evidence quality ratings appear in my organization’s working documents
- A claim registry exists in my organization, even informally
Position (confirms Level 1 vs. Level 0):
- I have read at least the Foundations section of this documentation
- I can explain VERA to a colleague who has not encountered it before
- I can identify at least two claims in my current work that would benefit from VERA documentation
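The checklist resolves mechanically: Level 1 is confirmed only when every Understanding and Position item is Yes and every Practice item is No. A minimal sketch, with hypothetical item lists:

```python
def assess_level_1(understanding: list[bool],
                   practice: list[bool],
                   position: list[bool]) -> str:
    """Classify a self-assessment against the Level 1 checklist."""
    if not all(understanding) or not all(position):
        return "Below Level 1: build understanding first"
    if any(practice):
        return "Possibly Level 2: practice artifacts already exist"
    return "Level 1 confirmed"

print(assess_level_1([True] * 5, [False] * 4, [True] * 3))  # -> Level 1 confirmed
```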
Proceed to Level 2 — Exploring to understand what the first steps of systematic practice look like and how to navigate common early challenges.
Level 2 — Exploring
At Level 2, VERA is real work — not uniform work. Some claims are documented. Some evidence is rated. Some reasoning is explicit. The practice is alive but not yet reliable.
What Level 2 Feels Like
Level 2 is the most variable level. The gap between the highest and lowest VERA quality within a Level 2 organization can be wider than the gap between Level 1 and Level 4. This is because Level 2 practice is driven by individual initiative rather than institutional process — and individual practitioners vary enormously in how deeply they’ve internalized the framework.
A practitioner in the early stages of Level 2 is applying VERA somewhat awkwardly, frequently referring back to the Lexicon and Protocol, uncertain about edge cases (Is this Primary or Secondary? Is this reasoning deductive or abductive?), and producing claims that are formally structured but not yet sharp. This is healthy. The awkwardness is the learning.
A practitioner in the late stages of Level 2 — ready to push the organization toward Level 3 — has developed genuine VERA fluency. They apply the framework confidently to significant claims, produce clean evidence sets and reasoning chains, and have identified the recurring challenges in their specific domain that eventually become the patterns their team will rely on.
Between these two ends of the Level 2 spectrum, the primary experience is discovery: discovering what “prospective search plan” actually means when you’re trying to research a competitive claim you’ve never analyzed before; discovering how hard it is to document the reasoning chain of a complex strategic judgment; discovering that what you thought was a single claim is actually three; discovering that the evidence you’ve relied on for years is Tertiary at best.
For organizations, Level 2 looks like VERA being practiced by a motivated few while the rest of the organization continues as before. Typically there is a champion or small cohort — sometimes a single person who encountered VERA and committed to applying it — producing real VERA artifacts: claim records, evidence sets, verification records. These artifacts exist alongside the organization’s normal knowledge outputs, often invisible to people who are not already VERA-aware.
The defining fragility of Level 2 is person-dependence: if the champion leaves or shifts focus, VERA practice typically stops. This is not a failure of the framework — it is the expected characteristic of a practice that has not yet been institutionalized. Managing this fragility is a key part of the Level 2-to-3 transition.
The Six Domains at Level 2
Evidence
At Level 2, evidence is being rated and documented for selected claims. The selection is typically made by the practitioner based on which claims feel most important, which decisions are most consequential, or simply which claims they have time to document properly. There is no systematic coverage criterion — significance is judged intuitively.
Prospective search plans exist informally at Level 2. The practitioner thinks ahead about what evidence they’re looking for before they look, but this plan may not be written down. As a result, it cannot be reviewed, cannot be audited, and cannot be used to detect absent evidence reliably. The gap between “thought about it” and “documented it” is small in effort but large in auditability.
Evidence independence assessment happens variably. Some practitioners apply it carefully; others conflate citation count with evidence independence. Source collapse remains a common error at Level 2 — especially in domains where a small number of authoritative sources dominate, making true independence hard to achieve.
Absent evidence notation is the most commonly skipped step at Level 2. Practitioners reliably document what they found; they less reliably document what they expected to find and didn’t. This asymmetry means that Level 2 evidence sets tend to look more complete than they are.
Level 2 Evidence indicators:
- Claim records exist that include rated evidence sets
- At least one practitioner consistently produces prospective search plans before evidence collection
- Evidence quality ratings use the four-tier scale consistently in documented claims
- Some but not all absent evidence is noted; notation is inconsistent
Reasoning
At Level 2, reasoning chains are written for some claims. The quality varies substantially: early-stage Level 2 reasoning chains often have the form of reasoning chains (numbered steps) without the substance (each step’s premises, inference type, and conclusion are vague or implicit). Late-stage Level 2 reasoning chains are genuinely explicit, with labeled inference types and documented assumptions.
The most common Level 2 reasoning failure is hidden inferential steps: the practitioner writes several observations and then a conclusion, without documenting the inferential steps connecting them. The conclusion follows — the reasoning is sound — but the chain, as written, has a gap where the key inferential move should be.
Assumptions are inconsistently documented. At Level 2, practitioners typically document the assumptions they are aware of making. They do not systematically identify assumptions they are making without realizing it — which is, of course, the harder category. The assumption-identification skill develops with practice; it is rarely strong at Level 2.
Level 2 Reasoning indicators:
- Some claims in the organizational knowledge base include step-by-step reasoning chains
- Inference types are labeled in at least some reasoning chains
- Practitioners can identify reasoning gaps in others’ work (peer critique is emerging)
- Most reasoning chains have at least one undocumented hidden assumption when reviewed carefully
Verification
At Level 2, verification is happening, but its independence and rigor vary. The most common Level 2 verification pattern is self-verification with skeptical intent: the claimant deliberately adopts an adversarial stance toward their own claim, evaluates it against the Verification Protocol criteria, and produces a verification record. This is better than no verification, but it is not the same as independent verification.
Formal verifier independence — having someone other than the claimant evaluate the claim — is aspired to at Level 2 but achieved inconsistently. It requires that a second person be available, willing, and sufficiently VERA-fluent to conduct a meaningful verification. At Level 2, these conditions are met sporadically.
Verification records exist at Level 2, but they may not be complete. The most commonly omitted elements are: the confidence rating with justification (practitioners record a number but not the reasoning behind it), the independence level assessment (the verifier’s relationship to the claim is not documented), and the specific criteria findings (the verification record says “Verified” without noting which criteria were assessed and how).
The contested claim process is understood at Level 2 but has rarely or never been invoked. The organization has not yet faced a situation where a verified VERA claim needed to be formally challenged — or has faced it and handled it informally rather than through the defined process.
Level 2 Verification indicators:
- Verification records exist for some claims
- At least one verification event involved a person other than the claimant
- Practitioners apply the Phase 4 criteria checklist, though not all criteria are consistently addressed
- No contested claim process has been formally invoked (the first contested claim is often the trigger for formalizing the process)
Governance
At Level 2, governance is informal and champion-driven. The champion(s) have both the understanding and the motivation to apply VERA, but they are operating without formal support. This has several practical consequences:
- Time for VERA practice must be carved out of a schedule that does not formally allocate it
- VERA artifacts (claim records, evidence sets) may be stored in personal tools rather than shared organizational systems
- VERA adoption conversations with leadership are advocacy rather than reporting — the champion is making the case, not reporting on a managed program
Despite its informality, Level 2 governance does useful work. The champion accumulates experience that will be essential for designing Level 3 governance. They identify which workflows most benefit from VERA, which practitioners are natural adopters, which organizational language maps to VERA terms, and where the existing culture’s assumptions conflict with VERA principles. This intelligence cannot be gathered any other way.
Level 2 Governance indicators:
- At least one person is explicitly identified (if only informally) as the VERA advocate or champion
- VERA has been discussed in at least one organized forum with participation from two or more people
- No formal VERA policy, mandate, or documented ownership exists
- VERA practice is voluntary; non-participation has no consequences
Sovereignty
At Level 2, the sovereignty picture is beginning to take shape but has not yet been formally assessed. Practitioners have an informal sense of which sovereignty principles they meet and which they don’t — evidence is accessible for the claims they’ve documented, reasoning chains are visible to the people who documented them, but it’s not clear whether other affected stakeholders can access them.
The most common Level 2 Sovereignty work is in Data Sovereignty (S1): because Level 2 practitioners are actively documenting evidence, they tend to record source references and access dates as a matter of good practice. This partial implementation of S1 is a natural byproduct of Level 2 Evidence work.
Reasoning Sovereignty (S2) and Conclusion Sovereignty (S3) are rarely assessed at Level 2. The claims being documented are typically the practitioner’s own, which means they naturally have reasoning transparency and conclusion authority. The sovereignty question becomes more pressing when the claims in question are produced by others — AI systems, consultants, external authorities — and those claims are then used without verifying their transparency and challengeability.
Level 2 Sovereignty indicators:
- Evidence source references and access dates are documented for Level 2 claims
- At least one practitioner is aware of sovereignty gaps in tools or processes they use
- No formal sovereignty assessment has been conducted
- At least one sovereignty gap has been identified informally and is being tracked
Integration
At Level 2, VERA is practiced alongside work, not yet embedded in it. The practitioner doing VERA work is doing it in addition to their normal outputs — writing a VERA claim record about a topic they’ve also covered in a normal memo or report. The two artifacts coexist without either informing the form of the other.
This parallel-track situation is appropriate at Level 2. Integration before practice maturity creates premature lock-in: if you build VERA templates into your wiki before you know what good VERA practice looks like in your context, you may end up with templates that shape practice in the wrong direction.
Some natural integration points emerge at Level 2 without deliberate design: a practitioner starts including evidence ratings in their normal research notes, or flags reasoning chains in meeting summaries, or links to a claim record from a project planning document. These informal integrations are precursors to the systematic integration of Level 3.
Level 2 Integration indicators:
- At least one VERA-native artifact (claim record with evidence set and reasoning chain) exists in the organizational knowledge base
- Some practitioners voluntarily apply VERA notation in their normal work outputs
- No VERA templates or formats are embedded in standard organizational tools
- The question of where a claim registry should live has been raised but not settled
Common Level 2 Traps
The Showcase Trap
Level 2 practitioners often have a small number of beautifully documented claims and a large number of undocumented ones. The documented claims are used to demonstrate VERA’s value; the undocumented ones are not discussed. This creates a misleading picture of maturity.
The showcase trap is addressed by shifting the question from “Can we produce VERA artifacts?” (Level 2 can) to “Do we produce VERA artifacts consistently for all significant claims?” (Level 3 requires this). Champions who fall into the showcase trap often discover it when they try to scale VERA adoption and find that new practitioners produce much lower quality than the showcases suggested was standard.
The Form-Without-Substance Trap
It is possible to produce claim records that have all the required fields and none of the required rigor. Evidence items are listed without quality ratings. Reasoning chains consist of bullets that restate the evidence items rather than logical steps connecting them. Verification records say “Verified” with minimal criteria documentation.
This trap is particularly dangerous because it is hard to detect from the outside. A claim record that looks like a VERA artifact but lacks epistemic substance is, in practice, an assertion with decorative formatting. The tell-tale signs: reasoning chains that could be read as conclusions rather than arguments, evidence sets where every item supports the claim (no absent evidence, no contrary evidence addressed), verification records completed by the claimant.
The Perfectionism Trap
The inverse of the form-without-substance trap: practitioners who understand VERA well refuse to document claims until they can do so perfectly, which means they never document them at all. VERA practice requires iteration. An imperfect claim record that captures real evidence and real reasoning is more valuable than a perfect claim record that is never written.
At Level 2, the target is genuine engagement with the protocol, not perfect execution. The verification process exists precisely to catch the gaps that the claimant missed.
The Champion Fatigue Trap
VERA adoption at Level 2 costs champions more than it costs anyone else. They are doing extra work — their own practice plus advocacy plus answering questions plus designing how VERA should work in the organizational context — without formal recognition or resource support. Champion fatigue is real, and it is the most common reason Level 2 organizations fail to advance.
Organizations that rely on a single champion for Level 2 VERA work should recognize this as a structural fragility and prioritize recruiting at least one additional practitioner, distributing the load, and beginning the Level 3 governance conversation before the champion burns out.
Moving from Level 2 to Level 3
The Level 2-to-3 transition requires resolving the person-dependence that defines Level 2. The specific transitions required:
Evidence: From informal prospective search plans to documented plans as a required step; from some evidence rated to all significant claims having rated evidence sets; from inconsistent absent-evidence notation to systematic recording.
Reasoning: From some reasoning chains to all significant claims having reasoning chains; from variable format to consistent format; from undocumented assumptions to systematic assumption documentation.
Verification: From ad hoc verification to consistent application of the Verification Protocol; from informal records to complete Verification Records for all significant claims; from tolerance of self-verification to requiring at minimum documented self-verification against criteria, with peer verification for high-stakes claims.
Governance: This is the most significant transition. From informal champion to formal mandate: VERA must become policy, not advocacy. This requires a decision — by someone with organizational authority — that significant claims will be documented and verified in VERA format. It requires documented ownership (who is responsible for VERA practice?), documented scope (which claims require VERA treatment?), and documented standards (what does VERA compliance look like in this context?).
Sovereignty: From informal awareness to completed formal assessment. The sovereignty assessment must be conducted, all five principles rated, gaps documented, and a remediation plan created with owners and timelines.
Integration: From VERA alongside work to VERA visible in work. Claim records must be accessible to people who didn’t create them. A claim registry must exist somewhere — even a simple shared document — where all documented claims can be found. VERA notation must appear in at least some standard organizational outputs.
Level 2 Self-Assessment Checklist
Evidence (Level 2 requires Yes to at least 2 of 4):
- At least five claims exist in our knowledge base with rated evidence sets
- At least one claim has a documented prospective search plan
- At least one claim’s evidence set includes a documented absent-evidence item
- Evidence quality ratings use the four-tier scale consistently (not improvised ratings)
Reasoning (Level 2 requires Yes to at least 2 of 3):
- At least three claims in our knowledge base include step-by-step reasoning chains (not conclusion summaries)
- At least one reasoning chain has labeled inference types for each step
- At least one reasoning chain has documented assumptions
Verification (Level 2 requires Yes to at least 2 of 3):
- At least three claims have formal Verification Records
- At least one verification event involved a person other than the claimant
- Verification records reference specific criteria from the Verification Protocol
Governance (Level 2 requires Yes to at least 2 of 3):
- At least one person is identified as the VERA champion or advocate
- VERA has been discussed in a meeting with two or more participants
- At least one other person in the organization besides the champion is actively applying VERA
Sovereignty (Level 2 requires Yes to at least 1 of 2):
- Source references and access dates are documented for all evidence items in Level 2 claims
- At least one specific sovereignty gap has been identified and is being tracked
Integration (Level 2 requires Yes to at least 1 of 2):
- At least one VERA artifact (complete claim record) is accessible to colleagues who did not create it
- VERA notation (claim IDs, evidence quality ratings, or verification state symbols) appears in at least one normal organizational output
Proceed to Level 3 — Practicing to understand what systematic, reproducible VERA practice looks like and what the path to institutionalization requires.
Level 3 — Practicing
At Level 3, VERA is no longer about whether to apply the framework. The question is only how well.
What Level 3 Feels Like
Level 3 is where VERA becomes organizational infrastructure rather than individual initiative. The defining characteristic is reproducibility: any significant claim produced by any practitioner in a Level 3 organization can be audited against the Verification Protocol. The quality may vary — some practitioners are more skilled than others — but the process is applied by everyone, all the time, for all significant claims.
This is a qualitative shift from Level 2. At Level 2, VERA work exists alongside normal work. At Level 3, VERA work is normal work — at least for significant claims. When someone asks “what’s the evidence for that?” the answer is a claim record, not a recollection. When someone asks “has this been verified?” the answer is a Verification Record identifier, not “yes, we reviewed it.”
The texture of daily work at Level 3 is different. Meetings where significant claims are introduced include references to verification state. Decision documents cite VERA claim IDs. When new evidence appears, it is assessed against existing claims to identify which ones require re-evaluation. When a claim is contested, there is a process for handling the contest — and it is used.
What Level 3 is not: it is not perfect. Reasoning chains at Level 3 still sometimes have gaps. Evidence sets sometimes have missing items. Verification is sometimes self-verification when independent verifiers are not available. What distinguishes Level 3 from Level 2 is not the absence of these imperfections — it is that imperfections are identified, documented, and addressed through the verification and review process, rather than being invisible or ignored.
The Level 3 organization has also resolved the person-dependence problem of Level 2. VERA practices survive personnel changes. When the original VERA champion moves on, others continue the practice — not because VERA is loved by everyone, but because it is mandated, resourced, and embedded enough in workflows that continuing it is easier than not.
The Six Domains at Level 3
Evidence
At Level 3, evidence documentation is a required part of producing any significant claim. “Significant” is defined in the organizational VERA policy — not assumed or interpreted ad hoc. The definition typically includes: claims that inform decisions above a defined stakes threshold, claims that will be communicated externally, claims that will be used as evidence in other claims, and claims that will be reviewed by oversight bodies or regulators.
Prospective search plans are written before evidence collection for all significant claims. This is the single hardest Level 3 Evidence habit to establish. It requires discipline at the moment when the practitioner is most eager to start searching — and it requires trust that the time invested in planning will be recovered in search efficiency and absent-evidence quality. Organizations that have successfully established this habit typically do so by making the search plan a deliverable: the plan must be submitted before search begins, creating an accountability point.
Evidence quality ratings are applied consistently across the team using a shared reference rather than each practitioner’s interpretation of the tiers. In practice, this means the organization has documented examples of Primary, Secondary, Tertiary, and Testimonial evidence in their specific domain — not just the abstract definitions — so that tier assignment is calibrated rather than variable.
Independence assessment is routine. Source collapse is caught in verification rather than slipping through. The evidence set for a claim reliably includes contrary evidence — not just supporting evidence — because the prospective search plan included anticipated contrary evidence sources, and the verification criteria check for their presence.
Level 3 Evidence indicators:
- Every significant claim produced in the last 90 days has a rated evidence set with a documented prospective search plan
- Absent evidence is systematically recorded; claims are not treated as having complete evidence sets when significant expected evidence types are missing
- Evidence independence assessment is a standard part of evidence set review; source collapse errors are caught before verification completes
- Domain-specific evidence quality examples exist and are used to calibrate ratings across practitioners
Reasoning
At Level 3, reasoning chains are standard, not exceptional. Every significant claim has one — not a summary of supporting considerations, but a step-by-step argument from evidence to conclusion with labeled inference types and documented assumptions.
The quality standard for Level 3 reasoning chains is: any practitioner in the organization should be able to read the chain and evaluate it. This is a calibration point. If a reasoning chain requires the domain expertise of its author to parse — if the logical steps only make sense to someone who already knows the field deeply — the chain is not explicit enough. The inference steps must be written out completely enough to be followed by a VERA-competent non-specialist.
Peer reasoning review is a routine practice at Level 3. Not every claim needs a formal peer review of its reasoning chain, but claims above a significance threshold — those informing high-stakes decisions, those being communicated externally, those serving as evidence for other important claims — get reasoning chain review as a standard step, separate from and preceding formal verification.
Assumption documentation has matured by Level 3. Practitioners document not only the assumptions they are conscious of making but also actively search for hidden assumptions — premises the reasoning chain requires but doesn’t state. The habit of “what would need to be true for this step to work?” applied systematically to each reasoning step is the mechanism. It is not natural; it is a practiced discipline.
Level 3 Reasoning indicators:
- Every significant claim has a step-by-step reasoning chain with labeled inference types
- Reasoning chains are readable and evaluable by VERA-competent practitioners without domain expertise
- Peer reasoning review is in place for high-significance claims prior to verification
- Assumption documentation is comprehensive; the verification process includes checks for hidden assumptions
Verification
At Level 3, the Verification Protocol is applied consistently to all significant claims. “Consistently” means: using the same criteria, producing the same format of verification record, and applying the same standards for what “Met” means for each criterion.
Independence requirements are assessed for every verification event, not assumed. The verification record documents the verifier’s independence level (Foundational, Peer, or Expert) and the basis for that assessment. Verification records are complete: they document the findings for each criterion, not just the final state.
The verification process produces failures. This is a Level 3 quality signal — not a negative one. At Level 2, verification often feels like a hurdle that claims pass through. At Level 3, a meaningful percentage of claims submitted for verification come back for revision: the evidence set is incomplete, the reasoning chain has a gap, a contrary evidence item has not been addressed. The failure rate is tracked (at Level 4 it becomes a formal metric) and is used informally at Level 3 to assess whether claims are being prepared carefully.
Contested claims have a defined process that has been invoked at least once. The first contested claim is typically the organizational moment that crystallizes what the process must accomplish: a claim has been verified, someone believes the verification was wrong, and there must be a way to handle that dispute that is both fair and epistemically rigorous. By Level 3, this process exists and is documented.
Level 3 Verification indicators:
- Verification Protocol is applied to all significant claims; no significant claims are self-described as verified without a Verification Record
- All verification records are complete, with per-criterion findings and confidence ratings with justification
- A non-trivial percentage of submitted claims are returned for revision rather than verified
- Contested claim process exists and has been invoked at least once
Governance
Level 3 Governance is the critical transition that makes everything else sustainable. It consists of three elements: mandate, ownership, and scope.
Mandate: There is a documented organizational decision — a policy, a standard operating procedure, a leadership declaration — that significant claims will be documented and verified using VERA. The mandate is not a recommendation; it has consequences for non-compliance (at minimum, significant claims without VERA documentation are not treated as verified in decision-making contexts).
Ownership: Someone — a person, a role, a committee — is formally responsible for VERA practice. This owner is accountable for: ensuring practitioners are trained, maintaining the claim registry, setting the significance threshold (which claims require VERA treatment), and handling escalations when verification disputes or governance questions arise. The owner is not responsible for doing all VERA work — they are responsible for the framework within which others do VERA work.
Scope: The organization has defined which claims require VERA treatment. This definition is precise enough to apply consistently — not “important claims” but something like “claims that will be presented to the board, communicated in public materials, used to support regulatory submissions, or inform capital allocation decisions above $X.” Practitioners should be able to determine, for any given claim, whether it falls within scope without asking for guidance.
Training is a component of Level 3 Governance. New practitioners who join the organization receive VERA training as part of onboarding — not as optional enrichment, but as a required competency for their role.
Level 3 Governance indicators:
- A documented VERA policy exists with named ownership
- The significance threshold (which claims require VERA treatment) is defined in writing
- New practitioners receive VERA training as part of onboarding
- At least one person has formal accountability for VERA practice outcomes
Sovereignty
At Level 3, the formal sovereignty assessment has been completed. All five Sovereignty Principles have been evaluated: each is rated as fully met, partially met, or not met. Gaps are documented with specific descriptions of what is missing, and a remediation plan exists with owners and target dates.
The sovereignty assessment is itself a VERA claim: it has an evidence set (the specific tools, processes, and policies assessed against each principle), a reasoning chain (why each rating was assigned), and a verification state (it has been reviewed by someone other than the person who conducted the assessment).
Data Sovereignty (S1) is typically the easiest to assess at Level 3, because evidence documentation practice has generated the evidence needed: if evidence items have source references, access dates, and chain-of-custody documentation, it is straightforward to evaluate whether those sources are accessible and exportable. The common finding at Level 3 S1 assessment is that some evidence is stored in systems where organizational access is not guaranteed beyond the current vendor relationship.
Reasoning Sovereignty (S2) assessment at Level 3 typically finds that reasoning chains are visible to their authors and immediate collaborators, but that the claim registry’s accessibility determines whether anyone affected by a claim can actually trace its reasoning. This drives the claim registry accessibility requirement.
Process Sovereignty (S4) assessment is the most organizationally consequential at Level 3. It asks: are verification criteria published before claims are submitted? Can anyone affected by a claim access the Verification Record? Can the claim be formally challenged? The honest answer at Level 3 is often “the criteria are documented in the Verification Protocol, but they haven’t been published in a stakeholder-accessible format.”
Level 3 Sovereignty indicators:
- Formal sovereignty assessment completed within the last 18 months
- All five principles rated with specific evidence for each rating
- Remediation plan exists for any principle rated “not met” or “partially met”
- Claim records and Verification Records are accessible to stakeholders who are affected by those claims (not just to the practitioners who created them)
Integration
At Level 3, VERA has moved from a parallel practice to a visible presence in the organization’s knowledge artifacts and workflows. The specific integrations vary by organization, but the common ones at Level 3 include:
Claim registry: A centralized, accessible place where all documented claims can be found. At Level 3 this may be as simple as a maintained table in a shared wiki, with columns for claim ID, statement, verification state, and date. What matters is that it exists, is actively maintained, and can be used by anyone in the organization to find a claim.
Decision documentation: When significant decisions are made, the claim records supporting those decisions are referenced by ID. The decision document does not need to reproduce the claim’s evidence set and reasoning chain — it links to them. This creates a searchable record of which claims informed which decisions, which is essential for later review and audit.
Meeting practice: In meetings where significant claims are discussed, verification state is a normal part of the conversation. “Is that verified?” is a question people ask and expect a VERA-formatted answer to, not a question that derails discussion.
Review triggers: When new evidence appears — a new study is published, a regulatory ruling is issued, a key assumption changes — there is a process for identifying which existing claims might be affected and triggering their review.
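The review-trigger process above can be sketched as a simple tag-overlap check. This assumes each claim record carries topic tags — a hypothetical tagging scheme for illustration; real organizations would define their own way of matching new evidence to existing claims:

```python
# Hypothetical claim records with topic tags; the tagging scheme
# is an illustration, not part of the VERA specification.
claims = {
    "CLM-031": {"tags": {"market-size", "emea"}, "state": "Verified"},
    "CLM-047": {"tags": {"regulatory", "emea"}, "state": "Verified"},
    "CLM-052": {"tags": {"hiring"}, "state": "Verified"},
}

def claims_to_review(new_evidence_tags: set[str]) -> list[str]:
    """Return IDs of claims whose tags overlap the new evidence's tags."""
    return sorted(cid for cid, c in claims.items()
                  if c["tags"] & new_evidence_tags)

# A new EMEA regulatory ruling flags both EMEA-tagged claims for review:
print(claims_to_review({"regulatory", "emea"}))  # ['CLM-031', 'CLM-047']
```

The design point is that the trigger only identifies candidates; deciding whether each flagged claim actually requires re-verification remains a human judgment under the Verification Protocol.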
Level 3 Integration indicators:
- A claim registry exists, is actively maintained, and is used by practitioners to look up existing claims before creating new ones
- Decision documents reference claim records by ID
- New evidence triggers a defined process for reviewing affected claims
- VERA notation (claim IDs, verification state) appears routinely in standard organizational outputs
Why Level 3 Is the Primary Target
The VERA Maturity Model is calibrated so that Level 3 delivers the core VERA value proposition without the organizational complexity that Levels 4 and 5 require. An organization operating consistently at Level 3 across all six domains has:
- Epistemic accountability: every significant claim is traceable to evidence and reasoning
- Audit capability: any claim can be examined, challenged, and re-evaluated
- Decision quality: decisions made on Level 3 claims have a documented basis that can be reviewed
- Error recovery: when a verified claim turns out to be wrong, the evidence and reasoning can be traced, the error located, and the impact on downstream claims assessed
Levels 4 and 5 add important capabilities — measurement, continuous improvement, institutional leadership, and full sovereignty — but these capabilities require organizational investment that is not proportionate for organizations where epistemic quality is a supporting rather than primary function.
Teams and individuals whose work primarily involves strategic research, policy development, regulated decision-making, or published claims should aspire to Level 4. For everyone else, sustained Level 3 is the goal.
Level 3 Anti-Patterns
The Compliance Surface
Organizations that reach Level 3 through governance pressure — where VERA documentation is required, so it gets done — sometimes produce claims that meet formal requirements without epistemic substance. The evidence set is populated with items but they weren’t gathered through a prospective search. The reasoning chain has the right format but the steps are vague. The verification record exists but the criteria were not genuinely applied.
The compliance surface is detected by going one level deeper in any part of the claim record and asking: “Can this be substantiated?” If the prospective search plan says “searched online databases” without specifying what was searched and what terms were used, it is a compliance artifact, not a real plan. The verification process is the primary mechanism for catching compliance surface — which is why verification independence matters so much at Level 3.
The Significance Threshold Creep
Organizations tend to find reasons to classify more and more claims as below the significance threshold — and therefore not requiring VERA treatment. This is natural. VERA work takes time. Not everything needs it. But if the threshold creeps to the point where very few claims are actually documented, the mandate becomes meaningless.
The significance threshold must be reviewed by the VERA owner at regular intervals. An organization whose threshold effectively covers 10% of significant decisions — when the policy intended it to cover 80% — has a governance failure, not a VERA failure.
The Registry Graveyard
A claim registry that is not maintained becomes worse than no registry: it gives the impression that claims have been vetted when some of them are stale, superseded, or no longer relevant. Review cadences (established in Verification Protocol Phase 5) are the mechanism. Level 3 governance must include enforcement of review cadences, not just their documentation.
Moving from Level 3 to Level 4
The Level 3-to-4 transition is fundamentally a shift from managing practice to managing quality. Level 3 ensures that VERA is applied. Level 4 ensures that VERA is applied well and improving.
Specific transitions required:
Evidence → L4: Evidence quality metrics are tracked as aggregate data, not just documented in individual claims. The organization knows: what percentage of its claims have Primary-tier evidence as part of their evidence set? What is the average evidence tier distribution? How has this changed over the past six months?
Reasoning → L4: Reasoning quality is assessed against written criteria at a program level. Common reasoning gaps — the errors that appear most frequently across claims — are tracked and addressed through targeted training or pattern development.
Verification → L4: Verification quality metrics exist: pass rate (what percentage of submitted claims are verified on first submission?), rework rate, time-to-verify, contested claim rate. These metrics are reviewed on a regular cadence by the VERA governance function.
Governance → L4: A governance committee or equivalent body exists with defined membership, meeting cadence, and decision authority. VERA is included in organizational reporting at a level of granularity that allows trend monitoring.
Sovereignty → L4: Sovereignty gaps identified in the Level 3 assessment are being actively remediated. Vendor dependencies are managed with documented exit plans. AI tool sovereignty is assessed.
Integration → L4: VERA is part of formal organizational processes — not just visible in artifacts, but required at decision gates, included in project management templates, and represented in governance reporting.
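The verification quality metrics named in the Verification → L4 transition can be computed as simple aggregates over verification records. The record fields below (`submissions`, `contested`) are hypothetical illustrations of what a record might carry, not a schema from the Verification Protocol:

```python
# Toy verification records: one claim verified first time, one reworked once,
# one reworked twice and contested, one returned and never verified.
records = [
    {"claim_id": "CLM-010", "submissions": 1, "verified": True,  "contested": False},
    {"claim_id": "CLM-011", "submissions": 2, "verified": True,  "contested": False},
    {"claim_id": "CLM-012", "submissions": 3, "verified": True,  "contested": True},
    {"claim_id": "CLM-013", "submissions": 1, "verified": False, "contested": False},
]

total = len(records)
first_pass = sum(1 for r in records if r["verified"] and r["submissions"] == 1)
reworked = sum(1 for r in records if r["submissions"] > 1)
contested = sum(1 for r in records if r["contested"])

pass_rate = first_pass / total       # verified on first submission
rework_rate = reworked / total       # returned for revision at least once
contested_rate = contested / total   # formally contested after verification

print(f"pass rate {pass_rate:.0%}, rework {rework_rate:.0%}, "
      f"contested {contested_rate:.0%}")
# pass rate 25%, rework 50%, contested 25%
```

Time-to-verify would be computed the same way from submission and completion timestamps; the value of these metrics comes from reviewing their trend on a cadence, not from any single snapshot.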
Level 3 Self-Assessment Checklist
Evidence (all must be Yes for Level 3):
- Every significant claim produced in the last 90 days has a rated evidence set
- Prospective search plans are documented before evidence collection for all significant claims
- Absent evidence is recorded for all significant claims
- Domain-specific evidence quality examples exist and are used to calibrate ratings across practitioners
Reasoning (all must be Yes for Level 3):
- Every significant claim has an explicit step-by-step reasoning chain with labeled inference types
- Reasoning chains can be evaluated by VERA-competent non-specialists
- Peer reasoning review is in place for high-significance claims before formal verification
Verification (all must be Yes for Level 3):
- Verification Protocol is applied to all significant claims; no exceptions
- All verification records are complete with per-criterion findings and confidence ratings
- A non-trivial percentage of submitted claims are returned for revision (verification is not a rubber stamp)
- Contested claim process exists in writing and has been invoked at least once
Governance (all must be Yes for Level 3):
- A documented VERA policy exists with named ownership
- Significance threshold is defined in writing and applied consistently
- VERA training is included in onboarding for relevant roles
- Non-compliant significant claims are not treated as verified in decision-making
Sovereignty (all must be Yes for Level 3):
- Formal sovereignty assessment completed within the last 18 months
- All five principles have specific ratings with evidence
- Remediation plan with owners and dates exists for any gaps
- Claim records and verification records are accessible to affected stakeholders
Integration (all must be Yes for Level 3):
- Claim registry exists, is maintained, and is actively used
- Decision documents reference claim records by ID
- New evidence triggers a defined review process for affected claims
Proceed to Level 4 — Governing to understand what measured, continuously improving VERA practice looks like.
Level 4 — Governing
At Level 4, VERA practices are no longer managed by the people doing them. They are managed by the organization that depends on them.
What Level 4 Feels Like
The transition from Level 3 to Level 4 is the transition from doing VERA to governing VERA. Level 3 ensures that the right things happen. Level 4 ensures that the organization can observe whether the right things are happening, measure how well they are happening, and systematically improve how well they happen.
In concrete terms: at Level 3, practitioners apply the Verification Protocol and produce Verification Records. At Level 4, the governance function reviews those records in aggregate, asks “what patterns do we see in the failures?”, and uses those patterns to improve training, update criteria, commission new patterns, or revise the significance threshold.
Level 4 feels different from inside the organization. Practitioners at Level 4 are still doing the same work — assembling evidence sets, writing reasoning chains, conducting verification — but they are doing it with the awareness that their work is part of a measured system. They receive feedback on their verification records that is calibrated against organizational standards, not just against the practitioner’s own judgment. They see metrics on VERA quality that tell them where the organization is improving and where it is not.
A Level 4 organization has answered a question that Level 3 organizations leave implicit: How good is good enough? Level 3 requires that VERA is applied; Level 4 defines quality standards and measures whether practice meets them. This shift from required-presence to quality-standard is the defining characteristic of Level 4.
For individual contributors, Level 4 can feel more constraining than Level 3 — there are now explicit quality metrics, standards, and feedback loops. For the organization, Level 4 produces capabilities that Level 3 cannot: the ability to certify the quality of its epistemic work to external parties, to compare its VERA quality across teams and periods, and to make evidence-based investments in VERA improvement.
The Six Domains at Level 4
Evidence
At Level 4, evidence quality is a managed metric, not just a per-claim practice. The governance function tracks the distribution of evidence quality tiers across the claim registry: what percentage of claims have Primary-tier evidence in their evidence sets? How has this percentage changed over time? Which teams or claim types consistently produce lower-tier evidence, and what are the structural reasons?
A library of trusted source classifications exists and is maintained. Rather than requiring each practitioner to independently assess whether a given source type qualifies as Primary or Secondary in their domain, the organization maintains a documented classification table: for claims in domain X, sources of type Y are classified as Tier Z, with the rationale. This library is reviewed and updated as the domain’s evidence landscape changes.
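A trusted-source classification library of this kind can be sketched as a lookup table keyed by domain and source type. The domains, source types, and rationales below are invented examples; only the four tier names (Primary, Secondary, Tertiary, Testimonial) come from the framework itself:

```python
# Hypothetical classification library: (domain, source type) -> (tier, rationale).
SOURCE_TIERS = {
    ("clinical", "peer-reviewed RCT"):        ("Primary", "direct experimental evidence"),
    ("clinical", "systematic review"):        ("Secondary", "synthesis of primary studies"),
    ("market", "audited financial filing"):   ("Primary", "regulator-verified disclosure"),
    ("market", "analyst report"):             ("Tertiary", "interpretation of secondary data"),
    ("any", "expert interview"):              ("Testimonial", "individual account, not documentary"),
}

def classify(domain: str, source_type: str) -> str:
    """Look up the calibrated tier, falling back to domain-agnostic entries."""
    for key in ((domain, source_type), ("any", source_type)):
        if key in SOURCE_TIERS:
            tier, _rationale = SOURCE_TIERS[key]
            return tier
    return "Unclassified"  # flag for the governance function to classify

print(classify("market", "analyst report"))   # Tertiary
print(classify("legal", "expert interview"))  # Testimonial
```

The "Unclassified" fallback is the calibration mechanism: every unclassified hit is a prompt for the governance function to extend the library, rather than for the practitioner to improvise a rating.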
Absent evidence is treated as a systematic signal, not just a per-claim footnote. When significant evidence types are absent from multiple claims in a domain, the governance function investigates: Is there a structural gap in the organization’s evidence access? Is there a prospective search plan problem (practitioners aren’t looking for certain evidence types)? Is there an availability problem (the expected evidence doesn’t exist yet)?
Evidence chain-of-custody documentation has matured by Level 4. It is not just documented in individual claim records — it is auditable. An external auditor should be able to retrieve any evidence item from any claim in the registry, trace it to its original source, and confirm that the transformation from source to evidence item was accurately described.
Level 4 Evidence indicators:
- Evidence quality distribution is tracked as an organizational metric, reviewed on a regular cadence
- A trusted source classification library exists, is maintained, and is actively used to calibrate evidence tier ratings
- Patterns in absent evidence across multiple claims are identified and investigated at the governance level
- Evidence chain-of-custody documentation is audit-ready for all claims in the registry
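The tier-distribution tracking described above can be sketched as a simple aggregation over the claim registry. The claim structure, field names, and tier labels below are illustrative assumptions, not part of the VERA specification.

```python
from collections import Counter

# Assumed tier labels, ordered strongest to weakest. Illustrative only.
TIERS = ("Primary", "Secondary", "Tertiary", "Testimonial")

def tier_distribution(claims):
    """Share of claims whose strongest evidence item falls in each tier."""
    best = Counter()
    for claim in claims:
        # Rank each claim by the strongest tier present in its evidence set.
        ranked = sorted(claim["tiers"], key=TIERS.index)
        best[ranked[0]] += 1
    total = sum(best.values())
    return {tier: best[tier] / total for tier in TIERS if best[tier]}

# Hypothetical registry entries for demonstration.
registry = [
    {"id": "C-001", "tiers": ["Primary", "Tertiary"]},
    {"id": "C-002", "tiers": ["Secondary"]},
    {"id": "C-003", "tiers": ["Testimonial", "Secondary"]},
    {"id": "C-004", "tiers": ["Primary"]},
]
print(tier_distribution(registry))  # → {'Primary': 0.5, 'Secondary': 0.5}
```

Tracked on a cadence, successive snapshots of this distribution give the governance function the trend data the text calls for.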
Reasoning
At Level 4, reasoning quality is assessed systematically — not just in individual verification events, but across the claim population. The governance function maintains a taxonomy of reasoning errors encountered in verification: which reasoning gap types appear most frequently? Which inference type errors are most common? Which assumption categories are most often undisclosed?
This error taxonomy feeds directly into training. Rather than training practitioners on VERA reasoning concepts in the abstract, Level 4 training focuses on the specific errors that the organization’s practitioners actually make. A team that consistently leaves inference steps implicit in reasoning chains about market projections receives training specifically designed to develop the habit of explicit step documentation for inductive inferences from market data.
New patterns are systematically developed at Level 4. When the error taxonomy reveals a recurring reasoning challenge that has been resolved in practice, the governance function ensures that resolution is documented as a pattern. The organization maintains an active internal pattern library, and contributes patterns to the VERA community library when they are sufficiently generalized.
Reasoning review is calibrated at Level 4. Different significance levels of claims receive different levels of reasoning review: simple, low-stakes claims may receive self-verification of reasoning; complex, high-stakes claims receive expert-level peer review. The calibration is documented and applied consistently.
Level 4 Reasoning indicators:
- A reasoning error taxonomy is maintained from verification data; common errors are tracked
- Training is updated based on the error taxonomy, targeting actual organizational error patterns
- Patterns are developed from recurring reasoning challenges and contributed to internal and community libraries
- Reasoning review calibration is documented: which claims receive which level of reasoning review
Verification
At Level 4, verification is a measured process. The governance function tracks:
- First-pass verification rate: the percentage of submitted claims that are verified on first submission. A rate above ~90% suggests standards are too low; below ~60% suggests practitioners are submitting too early.
- Rework rate and type: which criteria cause most failed verifications? Where are practitioners most frequently underprepared?
- Time-to-verify: from submission to completed Verification Record. Trends here reveal capacity and process problems.
- Contested claim rate: the percentage of verified claims that are formally contested. A very low rate may indicate that the challenge process is inaccessible in practice, not that claims are beyond challenge.
- Confidence rating distribution: the distribution of confidence ratings across verified claims. A clustering of ratings at the top of the scale suggests calibration problems.
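The five metrics above reduce to straightforward aggregation over verification records. The record shape and threshold values below follow the text where it gives them (~90% and ~60% first-pass bounds); the top-of-scale clustering cutoffs are illustrative assumptions.

```python
def verification_metrics(records):
    """Compute Level 4 verification metrics from verification records.

    Each record is assumed to look like:
      {"first_pass": bool, "contested": bool,
       "confidence": float, "days_to_verify": int}
    """
    n = len(records)
    first_pass_rate = sum(r["first_pass"] for r in records) / n
    contested_rate = sum(r["contested"] for r in records) / n
    mean_days = sum(r["days_to_verify"] for r in records) / n

    # Flag the calibration signals described in the text.
    flags = []
    if first_pass_rate > 0.90:
        flags.append("first-pass rate high: standards may be too low")
    elif first_pass_rate < 0.60:
        flags.append("first-pass rate low: claims submitted too early")
    # Assumed cutoffs for "clustering at the top of the scale".
    top_share = sum(r["confidence"] >= 0.85 for r in records) / n
    if top_share > 0.80:
        flags.append("confidence ratings cluster at top: check calibration")

    return {"first_pass_rate": first_pass_rate,
            "contested_rate": contested_rate,
            "mean_days_to_verify": mean_days,
            "flags": flags}

# Hypothetical records for demonstration.
records = [
    {"first_pass": True,  "contested": False, "confidence": 0.90, "days_to_verify": 4},
    {"first_pass": False, "contested": False, "confidence": 0.70, "days_to_verify": 9},
    {"first_pass": True,  "contested": True,  "confidence": 0.85, "days_to_verify": 6},
    {"first_pass": False, "contested": False, "confidence": 0.60, "days_to_verify": 12},
    {"first_pass": False, "contested": False, "confidence": 0.95, "days_to_verify": 8},
]
m = verification_metrics(records)
print(m["first_pass_rate"], m["flags"])  # → 0.4 ['first-pass rate low: claims submitted too early']
```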
The verifier pool is actively managed. The governance function tracks which practitioners are conducting verifications, assesses their calibration (do different verifiers applying the same criteria reach the same results?), and develops verifier capability through targeted review, training, and calibration exercises.
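One standard way to assess the verifier calibration question posed above (do different verifiers applying the same criteria reach the same results?) is chance-corrected agreement. The document does not prescribe a statistic; Cohen's kappa, sketched here for two verifiers rating the same claims, is one conventional choice.

```python
def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two verifiers on the same claims.

    1.0 means perfect agreement; 0.0 means agreement no better than chance.
    """
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    labels = set(ratings_a) | set(ratings_b)
    # Expected agreement if each verifier rated independently at their
    # own base rates.
    expected = sum(
        (ratings_a.count(label) / n) * (ratings_b.count(label) / n)
        for label in labels
    )
    if expected == 1.0:
        return 1.0  # both verifiers always issue the same single label
    return (observed - expected) / (1 - expected)

# Hypothetical verification outcomes on four shared claims.
a = ["pass", "pass", "fail", "pass"]
b = ["pass", "fail", "fail", "pass"]
print(cohens_kappa(a, b))  # → 0.5
```

A persistently low kappa across verifier pairs is exactly the kind of signal that should trigger the calibration exercises the text describes.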
Verification criteria are subject to periodic formal review. The criteria in the Verification Protocol represent Version 1.0’s best judgment about what constitutes adequate evidence and reasoning. By Level 4, the organization has encountered enough edge cases to know where the criteria need refinement. These refinements are documented as organizational amendments to the standard criteria, with rationale, and the changes are submitted to the VERA community as proposed protocol improvements.
Level 4 Verification indicators:
- Verification quality metrics (first-pass rate, rework rate, time-to-verify, contested rate, confidence distribution) are tracked and reviewed on a regular cadence
- Verifier pool is actively managed; verifier calibration is assessed
- Verification criteria have been reviewed at least once since Level 3 was achieved; any refinements are documented
- Contested claim process data is reviewed; process accessibility is evaluated
Governance
Level 4 Governance is the most structurally complex domain. It requires a functioning governance body with defined scope, mandate, resources, and reporting mechanisms.
The governance body may take many forms — a VERA steering committee, a Chief Epistemic Officer function, an epistemic quality team embedded in a research or risk function — but it must have four capabilities: the authority to mandate VERA standards, the resources to support VERA practice (training, tooling, practitioner time), the visibility to assess VERA quality across the organization, and the accountability to report VERA quality to organizational leadership.
Metrics governance: The governance body defines which metrics are tracked, establishes targets and thresholds, reviews metrics on a defined cadence, and takes action when metrics deviate from targets. This is not passive monitoring — it is active management. A first-pass verification rate of 45% triggers an investigation into why practitioners are submitting underprepared claims, not just a note in the quarterly report.
Standards governance: The governance body owns the VERA standards that apply in the organizational context. This includes the significance threshold (reviewed at least annually), the evidence quality classification library (reviewed when domain evidence landscape changes), and the verification criteria (reviewed annually and when persistent criterion-specific failures are detected).
VERA in organizational cadences: At Level 4, VERA quality appears in organizational governance reporting at a level of granularity that allows meaningful discussion. Not just “we are doing VERA” but: verified claims this period, first-pass rate, confidence rating distribution, sovereignty assessment status, and actions underway to address identified gaps.
VERA is in onboarding, training, and performance frameworks: Practitioners are assessed on VERA competency as part of their role. This does not mean VERA compliance is a performance management hammer — it means that VERA competency is treated as a professional capability, like domain knowledge, that is developed and assessed over time.
Level 4 Governance indicators:
- A formal governance body exists with defined mandate, membership, cadence, and reporting relationships
- VERA quality metrics are reviewed by the governance body at each meeting
- VERA quality is reported to organizational leadership (at the level appropriate for the organization’s size and structure) at defined intervals
- VERA competency is assessed as part of practitioner role expectations; VERA development is supported through explicit training investment
Sovereignty
At Level 4, the sovereignty gaps identified in the Level 3 assessment are being actively remediated according to the documented plan. The remediation is tracked at the governance level — not just as a practitioner responsibility but as an organizational commitment with owner accountability.
Data Sovereignty (S1) remediation at Level 4 typically involves: auditing all tools used to store VERA artifacts for export capability and vendor dependency risk; establishing documented exit plans for tools with significant lock-in risk; ensuring that evidence item chain-of-custody documentation is maintained in an organization-controlled format, not just in a vendor’s system.
Reasoning Sovereignty (S2) at Level 4 requires that the claim registry — and the reasoning chains within it — is accessible to all affected stakeholders. This is not limited to practitioners: anyone whose decisions are informed by a VERA-documented claim should be able to access that claim’s reasoning chain. Level 4 organizations have typically resolved the question of how to provide appropriate access to non-practitioner stakeholders without compromising the claim record’s integrity.
AI tool sovereignty at Level 4 is explicitly assessed. The organization uses AI tools in ways that expose, not conceal, the AI’s reasoning. Any AI system contributing to VERA work has its reasoning captured and documented according to the Verification Protocol’s AI-assisted claim requirements. The organization can enumerate which AI systems it uses, what role they play in VERA work, and how sovereignty is maintained over each.
Level 4 Sovereignty indicators:
- Sovereignty gaps from the Level 3 assessment are remediated on documented schedule; completion is tracked by the governance body
- Tool sovereignty has been assessed; tools with significant vendor dependency risk have documented exit plans or are being replaced
- AI tool sovereignty is explicitly assessed; AI reasoning contributions are documented and challengeable
- Claim records and verification records are accessible to non-practitioner stakeholders affected by those claims
Integration
At Level 4, VERA is part of the organization’s formal decision-making infrastructure. This goes beyond VERA artifacts being accessible — VERA verification status is a required input to defined decision processes.
Decision gates: High-stakes decisions above defined thresholds require that the claims supporting them be at a specified verification state before the decision is made. A capital allocation above $X, a regulatory submission, a public commitment — each has a defined VERA gate. Decision-making without meeting the gate triggers an explicit escalation, not an implicit exception.
Project and program management: VERA work is explicitly planned and resourced in project management. The time required to document significant claims is estimated and allocated, not treated as overhead that has to be absorbed. This is the Level 4 resolution of Champion Fatigue: VERA work is budgeted work, not extra work.
Reporting integration: VERA metrics appear in organizational governance reporting. The quarterly or annual review that covers financial performance, operational quality, and risk management also covers epistemic quality — because epistemic quality is now recognized as a managed organizational capability.
Tool integration: Level 4 integration means that VERA is embedded in the tools practitioners use, not maintained as a parallel system. The claim registry is integrated with the knowledge management system. Evidence items are linked to the organization’s reference management system. Verification workflows may be automated in part — reminder systems for review cadences, notification systems for downstream claim alerts when upstream claims change state.
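The downstream-alert mechanism mentioned above can be sketched as a dependency index over the claim registry. The class, method names, and state labels here are hypothetical; a real implementation would persist the registry and hook alerts into the organization's notification tooling.

```python
from collections import defaultdict

class ClaimRegistry:
    """Minimal sketch: notify dependent claims when an upstream claim
    changes verification state. Illustrative, not a VERA-specified API."""

    def __init__(self):
        self.state = {}                      # claim id -> verification state
        self.downstream = defaultdict(set)   # upstream id -> dependent ids
        self.alerts = []

    def add_claim(self, claim_id, state, depends_on=()):
        self.state[claim_id] = state
        for upstream in depends_on:
            self.downstream[upstream].add(claim_id)

    def set_state(self, claim_id, new_state):
        old = self.state[claim_id]
        self.state[claim_id] = new_state
        if old != new_state:
            # Every claim that relies on this one gets a review alert.
            for dep in sorted(self.downstream[claim_id]):
                self.alerts.append(
                    f"{dep}: upstream {claim_id} moved "
                    f"{old} -> {new_state}; review required")

reg = ClaimRegistry()
reg.add_claim("C-1", "verified")
reg.add_claim("C-2", "verified", depends_on=["C-1"])
reg.add_claim("C-3", "draft", depends_on=["C-1"])
reg.set_state("C-1", "contested")
print(reg.alerts[0])  # → C-2: upstream C-1 moved verified -> contested; review required
```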
Level 4 Integration indicators:
- Formal decision gates require specified VERA verification state for claims above defined stakes thresholds
- VERA work is budgeted and planned in project management, not treated as overhead
- VERA metrics appear in governance reporting
- Claim registry is integrated with organizational knowledge management tools; evidence management is connected to reference management systems
The Governance Trap to Avoid
The most significant Level 4 failure mode is governance without substance: a sophisticated governance structure that produces well-formatted metrics about poorly constructed claims. If the governance function is measuring VERA compliance (are forms being filled out?) rather than VERA quality (are evidence sets genuinely complete? are reasoning chains genuinely explicit?), it is creating an elaborate system for managing a compliance surface rather than managing epistemic quality.
The antidote is to ensure that the governance metrics capture quality signals, not activity signals. “Percentage of significant claims with Verification Records” is an activity metric. “First-pass verification rate” is a quality signal. “Average confidence rating” is a quality signal. “Percentage of contested claims whose contestation led to a state change” is a quality signal. The governance body’s agenda should be dominated by quality metrics, not activity metrics.
Moving from Level 4 to Level 5
The Level 4-to-5 transition is the most conceptually demanding in the model. Level 5 adds three things that Level 4 does not have: complete sovereignty across all five principles, self-referential VERA application (using VERA methods to evaluate VERA practice itself), and a standing obligation to contribute to the VERA community.
Specific transitions:
Evidence → L5: Evidence infrastructure is fully sovereign — audit-ready, exportable, with no significant vendor lock-in risk. The organization contributes evidence quality standards to the VERA community.
Reasoning → L5: Reasoning sovereignty is fully implemented; any affected stakeholder can trace any significant claim’s reasoning chain. The organization uses VERA methods to evaluate the quality of its reasoning practices — meta-level VERA application.
Verification → L5: Verification criteria are published externally, not just documented internally. The challenge process is accessible to external stakeholders where relevant. Verifier calibration is strong enough that the organization can serve as a verifier for other organizations’ claims.
Governance → L5: The governance function evaluates itself using VERA methods. Its own claims about VERA quality — “our first-pass rate is improving,” “our sovereignty gaps are being remediated” — are treated as VERA claims and verified accordingly. The organization participates in the VERA governance community.
Sovereignty → L5: All five principles are fully met. The sovereignty assessment is continuous rather than periodic — part of the ongoing governance process rather than an annual event.
Integration → L5: VERA is the epistemic layer of the organization. Non-VERA claims — assertions made without documentation — are explicitly marked as such in organizational outputs. The distinction between verified claims and unverified assertions is consistently made in all significant communications.
Level 4 Self-Assessment Checklist
Evidence (all must be Yes for Level 4):
- Evidence quality distribution is tracked as an organizational metric and reviewed on a defined cadence
- A trusted source classification library exists for the organization’s primary domains
- Patterns in absent evidence across claims are investigated at the governance level
- Evidence chain-of-custody documentation is audit-ready for all registry claims
Reasoning (all must be Yes for Level 4):
- A reasoning error taxonomy is maintained from verification data
- Training is updated based on the error taxonomy
- At least one pattern has been developed from a recurring organizational reasoning challenge
- Reasoning review calibration is documented and applied consistently
Verification (all must be Yes for Level 4):
- All five verification quality metrics are tracked and reviewed on cadence
- Verifier pool is actively managed; calibration is assessed
- Verification criteria have been formally reviewed at least once; any refinements are documented
Governance (all must be Yes for Level 4):
- A formal governance body exists with defined mandate, membership, cadence, and reporting relationships
- VERA quality metrics are reviewed at each governance meeting
- VERA quality is reported to organizational leadership at defined intervals
- VERA competency is explicitly assessed in relevant practitioner roles
Sovereignty (all must be Yes for Level 4):
- All Level 3 sovereignty gaps are remediated on documented schedule
- Tool sovereignty assessed; significant lock-in risks have documented exit plans
- AI tool sovereignty is explicitly assessed and documented
- Claim records are accessible to non-practitioner affected stakeholders
Integration (all must be Yes for Level 4):
- Formal decision gates requiring VERA verification status exist for high-stakes decisions
- VERA work is budgeted and resourced in project management
- VERA metrics appear in organizational governance reporting
- Claim registry is integrated with organizational knowledge management tools
Proceed to Level 5 — Sovereign to understand what full epistemic sovereignty and self-governing VERA practice look like.
Level 5 — Sovereign
At Level 5, epistemic quality is not a practice the organization maintains. It is a property the organization embodies.
What Level 5 Means
Level 5 is the level at which VERA’s foundational commitments are fully realized. It is not merely the level at which VERA is applied comprehensively, measured rigorously, and governed well — Level 4 achieves all of that. Level 5 adds three things that Level 4 does not have:
Complete sovereignty. All five Sovereignty Principles are fully met — not partially, not with documented gaps, not with remediation plans in progress. The organization has genuine authority over its evidence, its reasoning, its conclusions, and its verification processes. This is not a state that was briefly achieved and maintained. It is a continuous condition, actively sustained.
Self-referential application. VERA is applied to VERA. The claims the organization makes about its own epistemic quality — “our verification is rigorous,” “our evidence is complete,” “our reasoning chains are explicit” — are themselves VERA claims, with evidence sets, reasoning chains, and verification records. Level 5 organizations do not accept their own governance reports as truth; they verify them.
Community contribution. A Level 5 organization has moved from being a consumer of the VERA framework to being a contributor. It develops patterns that are published to the community. It proposes protocol improvements based on verified experience. It participates in the governance of VERA as a framework. Level 5 carries a contribution obligation that earlier levels do not.
What Level 5 Is Not
Before examining each domain, it is worth being precise about what Level 5 is not.
Level 5 is not perfect. Level 5 organizations still make reasoning errors, still encounter evidence gaps, still have claims that require revision after verification. What distinguishes Level 5 is not the absence of errors but the robustness of the system that catches and corrects them.
Level 5 is not static. The claim that an organization is at Level 5 is itself a VERA claim — one that requires ongoing re-verification. An organization that achieved Level 5 last year and has not actively maintained the conditions for it is not at Level 5 today. Sovereignty is continuously asserted, not permanently granted.
Level 5 is not universal. No organization applies VERA to every claim it makes. Level 5 means that all claims within the defined scope — the significant claims that were the target of Level 3 institutionalization — are handled at the highest level of VERA quality, and that the scope definition itself is honest. Level 5 is not achieved by narrowing scope until 100% compliance is trivial.
The Six Domains at Level 5
Evidence
At Level 5, the Evidence domain has achieved what the Sovereignty Principles require in full: every evidence item in every in-scope claim is accessible, exportable, and audited for chain-of-custody integrity. Not most evidence. All evidence.
The trusted source classification library is not just maintained — it has been verified. The claim “source type X qualifies as Secondary-tier evidence for claims about domain Y” is itself a VERA claim with an evidence set (why Secondary rather than Primary or Tertiary?), a reasoning chain (what characteristics of the source and domain justify this classification?), and a Verification Record. The organization is not simply asserting its classification framework — it has earned confidence in it through the same process it applies to other claims.
Evidence quality improvement is a continuous program at Level 5. The governance function not only tracks the distribution of evidence tiers but actively invests in moving the distribution toward higher tiers where that is possible. When Testimonial-tier evidence is the best available for important claims, the organization has a research program — or a collaboration program — to develop better evidence. The goal is not merely to accept the evidence landscape as given; it is to improve it.
Level 5 organizations contribute to the VERA community’s understanding of evidence quality. Domain-specific evidence classification libraries, validated by Level 5 evidence practices, are made available to the community. The worked examples and calibration tools that help Level 2 practitioners develop good evidence judgment are developed by organizations with Level 5 evidence capability.
Level 5 Evidence indicators:
- All in-scope claims have fully audit-ready evidence documentation; this is confirmed by external audit rather than self-assessment
- The trusted source classification library has been verified using VERA methods
- A continuous evidence quality improvement program exists and is funded
- Domain-specific evidence calibration tools have been contributed to the VERA community
Reasoning
At Level 5, the Reasoning domain is characterized by three qualities: depth, calibration, and reflexivity.
Depth means that reasoning chains are not merely structurally complete — they are substantively excellent. The inference steps are tight. The assumptions are minimal and clearly necessary. The engagement with contrary evidence is thorough and honest. This is the difference between a reasoning chain that passes verification criteria and one that would persuade a skeptical expert.
Calibration means that confidence ratings are accurate predictors of future outcomes. An organization with calibrated reasoning confidence assigns high confidence (0.85–0.95) to claims that turn out to be right, and low confidence (0.40–0.55) to claims that turn out to need significant revision. Calibration is measured at Level 5 by comparing historical confidence ratings to subsequent claim outcomes — a practice that requires a claim registry with enough history and enough re-verified claims to provide meaningful data.
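The calibration measurement described above can be sketched as a binned comparison of historical confidence ratings against re-verification outcomes, plus a Brier score as a single summary number. The bin edges and the record shape are illustrative assumptions; the text prescribes the comparison, not the statistic.

```python
def calibration_by_bin(history,
                       bins=((0.4, 0.55), (0.55, 0.7),
                             (0.7, 0.85), (0.85, 1.0))):
    """Observed hit rate per confidence bin.

    `history` is assumed to be (confidence, held_up) pairs, where held_up
    is True if the claim survived re-verification without major revision.
    A calibrated organization sees hit rates near each bin's range.
    """
    report = {}
    for lo, hi in bins:
        outcomes = [held for conf, held in history if lo <= conf < hi]
        if outcomes:
            report[(lo, hi)] = sum(outcomes) / len(outcomes)
    return report

def brier_score(history):
    """Mean squared error of confidence treated as a probability forecast.
    Lower is better; 0.25 is the score of always guessing 0.5."""
    return sum((conf - held) ** 2 for conf, held in history) / len(history)

# Hypothetical registry history.
history = [(0.9, True), (0.9, True), (0.9, False),
           (0.5, False), (0.5, True)]
print(calibration_by_bin(history))  # 0.9-rated claims held up only 2/3 of the time
```

Here the claims rated 0.9 held up only two times in three, which is the miscalibration signal a Level 5 organization would investigate.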
Reflexivity is the Level 5 characteristic. The organization uses VERA reasoning methods to reason about its own reasoning practices. When the governance function concludes that “our reasoning quality is improving,” that conclusion must itself have an evidence set (the quality metrics data), a reasoning chain (why do these metric improvements indicate genuine quality improvement rather than measurement gaming?), and a verification state. The governance function cannot exempt its own conclusions from the standards it applies to everyone else’s.
Level 5 Reasoning indicators:
- Reasoning chain quality assessments show substantive improvement over time on expert-review dimensions, not just formal compliance dimensions
- Confidence rating calibration is measured by comparing historical ratings to claim outcomes; miscalibration is investigated and addressed
- The governance function applies VERA reasoning standards to its own conclusions about VERA quality
- Reasoning patterns contributed to the community library have accumulated verified Known Uses from multiple organizations
Verification
At Level 5, the Verification domain has achieved external auditability. An external party — a regulator, a partner organization, a VERA community reviewer — can examine the organization’s verification process and conclude, based on the evidence available, that it is rigorous, independent, and capable of producing negative results.
This external auditability requires that verification criteria be published, not just documented internally. “We use the VERA Verification Protocol” is insufficient for Level 5. The specific criteria interpretations, the verifier independence standards, and the contested claim process must be accessible to external parties who want to understand how claims were verified.
The verifier pool at Level 5 is not just managed — it is capable of serving external verification needs. Level 5 organizations can serve as peer verifiers for other organizations’ claims in domains where they have recognized expertise. This is a meaningful test of verification capability: it requires that verification practices be sufficiently documented and calibrated to be applied outside the original organizational context.
The contested claim process at Level 5 has produced a body of decisions — claims that were contested, re-evaluated, and either confirmed or changed in state. This body of decisions is itself valuable evidence about the quality and fairness of the verification process. Level 5 organizations maintain and publish (with appropriate anonymization) their contested claim decision records as a demonstration of process integrity.
Level 5 Verification indicators:
- Verification criteria, independence standards, and contested claim process are published externally
- External audit of the verification process has been conducted and its findings addressed
- The organization has served as peer verifier for at least one external organization’s claims
- Contested claim decision records are maintained and available for review; the process has demonstrably produced changed outcomes
Governance
At Level 5, the governance function has achieved the most demanding test: it governs itself. The claims that the governance function makes about VERA quality — in metrics reports, in performance assessments, in external communications — are themselves VERA claims. The governance function’s conclusions are not exempt from the standards it applies to everyone else.
This self-referential application changes the nature of governance reporting. A Level 5 governance report does not assert that “verification quality has improved.” It presents the evidence for this claim (the metrics data), the reasoning chain connecting the evidence to the conclusion (why does this pattern in the data mean quality has improved rather than that practitioners have learned to game the metrics?), and the verification state (who reviewed this conclusion and how?).
The governance function at Level 5 also participates in the external VERA governance community. This means: attending community forums, sharing metrics data (aggregated and anonymized where appropriate), proposing protocol improvements, and accepting external review of its own VERA practices. The governance function’s sovereignty includes the right to participate in shaping the framework it depends on.
Leadership-level accountability at Level 5 is substantive, not ceremonial. Board-level or C-suite-level ownership of epistemic quality means that the organization’s senior leadership understands VERA metrics, can discuss them meaningfully, and treats persistent gaps in epistemic quality the way they would treat persistent gaps in financial quality or operational safety — as a leadership responsibility, not a practitioner problem.
Level 5 Governance indicators:
- Governance function applies VERA methods to its own conclusions about VERA quality
- Governance reports present claims, evidence, and reasoning — not just conclusions
- The organization participates actively in VERA community governance
- Senior leadership (board or C-suite level) demonstrates substantive, not ceremonial, accountability for epistemic quality
Sovereignty
At Level 5, the Sovereignty domain is fully realized. This is definitional: a Level 5 organization meets all five Sovereignty Principles completely. If any principle has a documented gap — even one under active remediation — the organization is at Level 4 in the Sovereignty domain.
What makes Level 5 Sovereignty distinctive from a high Level 4 Sovereignty is not the presence of full compliance but the continuity and self-maintenance of that compliance. The sovereignty assessment at Level 5 is not an annual event — it is a continuous monitoring process, integrated into the governance cadence, that detects sovereignty erosion before it becomes a gap.
Specific Level 5 Sovereignty conditions:
S1 (Data Sovereignty): No in-scope evidence item is stored in a system that the organization cannot audit, control, and exit on its own terms. No significant vendor dependency exists without a tested exit plan. The organization has successfully executed at least one evidence migration — demonstrating that its portability commitment is real, not theoretical.
S2 (Reasoning Sovereignty): Any person whose decisions are affected by an in-scope claim can access that claim’s complete reasoning chain without requiring special approval. The mechanism for this access is documented, tested, and maintained. No AI-generated reasoning contribution is used without full exposure of what the AI produced, how it was used, and what human review was applied.
S3 (Conclusion Sovereignty): The contested claim process is used, not merely available. The governance function actively monitors whether the challenge process is accessible and invites its use rather than discouraging it. At least one challenged claim has changed state as a result of the process in the past 24 months.
S4 (Process Sovereignty): External parties can audit the verification process. The organization has completed at least one external verification audit and has publicly documented its response to the audit’s findings.
S5 (Temporal Sovereignty): All in-scope claims are reviewed on documented schedules. No claim is stale — past its review date without a documented extension justification. The governance function tracks review cadence compliance as a standard metric.
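The S5 staleness check above is mechanical once review cadences are recorded. A minimal sketch, assuming a claim record shape with a last-review date, a cadence in days, and an optional documented extension:

```python
from datetime import date, timedelta

def stale_claims(claims, today):
    """Return ids of claims past their review date without a valid
    documented extension. Record shape is an illustrative assumption:
      {"id", "last_review": date, "cadence_days": int,
       "extension_until": date or None}
    """
    stale = []
    for c in claims:
        due = c["last_review"] + timedelta(days=c["cadence_days"])
        if c.get("extension_until"):
            # A documented extension pushes the due date out.
            due = max(due, c["extension_until"])
        if today > due:
            stale.append(c["id"])
    return stale

registry = [
    {"id": "C-10", "last_review": date(2025, 1, 1), "cadence_days": 90,
     "extension_until": None},
    {"id": "C-11", "last_review": date(2025, 1, 1), "cadence_days": 90,
     "extension_until": date(2025, 9, 1)},
]
print(stale_claims(registry, date(2025, 6, 1)))  # → ['C-10']
```

Run on the governance cadence, the percentage of claims this flags is the review-compliance metric the text calls for, with a Level 5 target of zero.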
Level 5 Sovereignty indicators:
- All five Sovereignty Principles are fully met with no documented gaps
- Sovereignty monitoring is continuous; erosion is detected and remediated within defined SLAs
- At least one evidence migration has been successfully completed (demonstrating real portability)
- S3 contest process is active; at least one claim has changed state via the challenge process in the past 24 months
- S4 external audit has been completed and findings addressed; results are public
Integration
At Level 5, the Integration domain completes its trajectory: VERA is the organization’s epistemic layer. This means that the distinction between VERA practice and organizational knowledge practice has effectively dissolved — significant knowledge work is VERA work, and VERA work is how significant knowledge is produced.
Non-VERA claims — assertions made without documentation — are explicitly marked as such in organizational outputs. This is a Level 5 Integration requirement that has no equivalent at earlier levels. At Level 3, claims and assertions coexist without the distinction being marked. At Level 5, the distinction is marked: when a document, presentation, or communication contains unverified assertions, they are labeled as such. The reader can distinguish between “this was verified” and “this is the author’s view, not yet documented.”
External communications — reports, publications, regulatory submissions, public statements — reflect VERA verification status where appropriate. Organizations at Level 5 in regulated industries or with public accountability may explicitly reference VERA verification in external documents, creating an auditable epistemic record of public claims.
The knowledge management system at Level 5 is VERA-native. The default format for significant knowledge artifacts is a VERA-formatted claim record. Practitioners produce VERA-formatted outputs as their primary deliverable, not as supplementary documentation. The claim registry is the organization’s primary knowledge asset, not a secondary documentation system maintained alongside the “real” outputs.
VERA is also integrated into the organization’s learning and development function at Level 5. When practitioners develop expertise, VERA is part of how that expertise is documented: expert practitioners produce verified claims in their domain of expertise, not just informal knowledge that is inaccessible to others.
Level 5 Integration indicators:
- Non-VERA assertions in organizational outputs are explicitly marked as unverified
- External communications reflect VERA verification status where appropriate and auditable
- The claim registry is a primary knowledge asset; VERA-formatted outputs are standard deliverables
- Expert practitioner knowledge development is documented through verified VERA claims
The Contribution Obligation
Level 5 carries obligations that earlier levels do not. An organization that has achieved Level 5 VERA capability has benefited from the work of earlier practitioners who developed the framework, documented patterns, and resolved the edge cases that the current framework handles. That debt is repaid through contribution.
Pattern contribution. Level 5 organizations maintain an active pattern development program. Recurring challenges that were resolved in practice are documented as patterns, verified against the pattern template standards, and contributed to the VERA community library. The expectation is not occasional contribution — it is a cadenced program that produces multiple contributed patterns per year.
Protocol improvement. Level 5 organizations propose improvements to VERA’s core protocols based on verified experience. When experience reveals that a verification criterion needs refinement, or that a phase of the protocol is consistently misapplied, the organization documents the problem and proposes a solution through the community governance process. These proposals are VERA claims: they must have evidence (the specific experience that motivates the improvement) and reasoning (why the proposed change improves the protocol).
Community verification. Level 5 organizations contribute their verification capacity to the VERA community. They serve as peer verifiers for other organizations’ claims in their domains of expertise. They provide calibration assistance — helping Level 2 and 3 organizations develop consistent evidence rating and reasoning quality practices.
Level 5 organizations do not hoard epistemic quality. The framework grows stronger when high-capability organizations share what they have learned. This is not altruism — it is the acknowledgment that VERA’s value depends on a community of practice that is continuously improving, and that organizations at Level 5 have both the capacity and the responsibility to drive that improvement.
Sustaining Level 5
Level 5 is not a destination. It is a condition that requires active maintenance. The most significant threats to sustained Level 5 are:
Sovereignty erosion. Tools change. Vendors change terms. New AI systems are adopted without sovereignty assessment. Over time, the conditions that supported full Sovereignty compliance erode through accumulation of small decisions that individually seem inconsequential. Continuous sovereignty monitoring — the Level 5 Sovereignty requirement — exists to catch this erosion.
Governance fatigue. The governance function at Level 5 has significant responsibilities. If those responsibilities are not adequately resourced, or if the people carrying them burn out and are not replaced with equal capability, the governance quality degrades. Level 5 organizations should treat VERA governance capacity as a critical resource, planned and protected as such.
Standards drift. Over time, the standards for what constitutes “complete” evidence, “explicit” reasoning, and “rigorous” verification can drift — usually downward, in the direction of what is easier to achieve. Regular external calibration — including serving as peer verifiers for other organizations and having external verifiers review your own claims — is the mechanism for detecting and correcting standards drift.
Framework evolution. VERA as a framework will evolve. New protocol versions will be released. Community standards will change. Level 5 organizations must stay engaged with framework development or risk finding that their practices, however excellent they were at the time they were developed, no longer represent current best practice. Participation in the VERA governance community is the mechanism for both staying current and shaping what current means.
Level 5 Self-Assessment Checklist
Because Level 5 self-assessment is itself a VERA claim, the checklist is structured around verifiable evidence, not self-report.
Evidence (all must be Yes, with evidence, for Level 5):
- External audit has confirmed that all in-scope claims have audit-ready evidence documentation
- The trusted source classification library has been verified using VERA methods (Verification Record exists)
- Evidence quality improvement program is funded and has documented outcomes
- Domain-specific evidence tools have been contributed to the VERA community library
Reasoning (all must be Yes, with evidence, for Level 5):
- Confidence rating calibration has been measured by comparing historical ratings to claim outcomes
- Governance function applies VERA reasoning methods to its own quality conclusions (Verification Records exist for governance claims)
- At least two patterns contributed to the community library have accumulated verified Known Uses from other organizations
Verification (all must be Yes, with evidence, for Level 5):
- Verification criteria, independence standards, and contested claim process are published externally
- External verification audit has been completed; findings and responses are documented
- Organization has served as peer verifier for at least one external organization’s claims
Governance (all must be Yes, with evidence, for Level 5):
- Governance reports present claims with evidence sets, reasoning chains, and verification states — not just conclusions
- Organization participates in VERA community governance (documented participation)
- Senior leadership accountability for epistemic quality is substantive: leadership can discuss VERA metrics and has taken action based on them
Sovereignty (all must be Yes, with evidence, for Level 5):
- All five Sovereignty Principles are rated as fully met in the most recent sovereignty assessment (no gaps)
- Sovereignty monitoring is continuous; the mechanism and cadence are documented
- At least one evidence migration has been successfully completed
- S3 contest process has produced at least one changed claim state in the past 24 months
- S4 external audit has been completed; results are published
Integration (all must be Yes, with evidence, for Level 5):
- Non-VERA assertions in significant organizational outputs are explicitly marked as unverified
- Claim registry is treated as a primary knowledge asset; evidence for this exists in how the registry is resourced and used
- External communications reference VERA verification status where appropriate
Level 5 is not the end of the VERA journey — it is the point at which the journey becomes part of what you contribute to others. Return to the Pattern Catalog to find patterns that support Level 5 practice, and to the Implementation section if you are beginning this journey from earlier levels.
Pattern Catalog
The Pattern Catalog is the master index of all VERA patterns. Patterns are reusable, documented solutions to recurring challenges in evidence management, reasoning construction, and verification practice. Every pattern in this catalog follows the canonical Pattern Template and is itself a verified claim.
Use this catalog to find the right pattern for a situation you are facing. Browse by domain, maturity level, or use case.
Complete Pattern Index
| ID | Name | Domain | Complexity | Min. Level | Status |
|---|---|---|---|---|---|
| VERA-P-0001 | Absence-of-Evidence Assessment | Evidence | Moderate | 2 | Verified |
| VERA-P-0002 | Conflicted Source Disclosure | Evidence | Moderate | 2 | Verified |
| VERA-P-0003 | AI-Generated Evidence Documentation | Evidence | Moderate | 2 | Verified |
| VERA-P-0004 | Source Collapse Detection and Remediation | Evidence | Simple | 2 | Verified |
| VERA-P-0005 | Time-Sensitive Evidence Management | Evidence | Moderate | 3 | Verified |
| VERA-P-0006 | Compound Claim Decomposition | Reasoning | Moderate | 2 | Verified |
| VERA-P-0007 | Hidden Assumption Excavation | Reasoning | Moderate | 2 | Verified |
| VERA-P-0008 | Contrary Evidence Integration | Reasoning | Moderate | 2 | Verified |
| VERA-P-0009 | Analogical Reasoning Validation | Reasoning | Complex | 3 | Verified |
| VERA-P-0010 | Self-Verification with Adversarial Stance | Verification | Moderate | 2 | Verified |
| VERA-P-0011 | Expert Verifier Onboarding | Verification | Complex | 3 | Verified |
| VERA-P-0012 | Cascading Claim Update | Verification | Complex | 3 | Verified |
| VERA-P-0013 | Claim Confidence Calibration | Verification | Complex | 4 | Verified |
Patterns by Domain
Evidence Domain
Patterns for evidence identification, assembly, rating, and maintenance.
- VERA-P-0001 — Absence-of-Evidence Assessment: How to assess the epistemic significance of evidence you expected to find but didn’t.
- VERA-P-0002 — Conflicted Source Disclosure: How to handle evidence from sources with a financial or institutional stake in the claim’s outcome.
- VERA-P-0003 — AI-Generated Evidence Documentation: How to document evidence that was retrieved, summarized, or synthesized by an AI system.
- VERA-P-0004 — Source Collapse Detection and Remediation: How to identify and correct evidence sets where multiple items trace to the same underlying source.
- VERA-P-0005 — Time-Sensitive Evidence Management: How to manage evidence in rapidly changing domains where evidence decays in reliability over time.
Reasoning Domain
Patterns for constructing, documenting, and evaluating reasoning chains.
- VERA-P-0006 — Compound Claim Decomposition: How to systematically break compound assertions into independently verifiable atomic claims.
- VERA-P-0007 — Hidden Assumption Excavation: A systematic protocol for surfacing the assumptions embedded in a reasoning chain that the author doesn’t realize they are making.
- VERA-P-0008 — Contrary Evidence Integration: A decision framework for evaluating contrary evidence and selecting the appropriate response: outweigh, distinguish, qualify, or concede.
- VERA-P-0009 — Analogical Reasoning Validation: How to document and evaluate analogical inferences rigorously, including similarity scoring and conclusion scope determination.
Verification Domain
Patterns for verification process design, execution, and governance.
- VERA-P-0010 — Self-Verification with Adversarial Stance: A structured protocol for self-verification that mitigates optimism bias when independent verifiers are unavailable.
- VERA-P-0011 — Expert Verifier Onboarding: How to engage a domain expert as a VERA verifier when the expert lacks VERA process training.
- VERA-P-0012 — Cascading Claim Update: How to identify and re-evaluate downstream claims when an upstream claim changes verification state.
- VERA-P-0013 — Claim Confidence Calibration: A program for ensuring that confidence ratings assigned by different verifiers are consistent and predictive.
Patterns by Maturity Level
The Maturity Level column in each pattern indicates the minimum VERA maturity level at which the pattern is typically applied. Practitioners at lower levels may encounter the pattern’s problem but will not yet have the infrastructure to apply the solution consistently.
Available at Level 2 (Exploring)
Practitioners beginning systematic VERA work will most often need these:
| Pattern | Why it’s needed early |
|---|---|
| VERA-P-0001 Absence-of-Evidence Assessment | The most common Phase 2 gap; first evidence set review almost always reveals absent expected evidence |
| VERA-P-0002 Conflicted Source Disclosure | Industry-produced, advocacy-produced, and vendor-produced evidence is ubiquitous; handling it correctly is foundational |
| VERA-P-0003 AI-Generated Evidence Documentation | Most practitioners now use AI tools; the evidence chain-of-custody problem appears immediately |
| VERA-P-0004 Source Collapse Detection and Remediation | Source collapse is the most common Level 2 evidence error; early detection prevents it from compounding |
| VERA-P-0006 Compound Claim Decomposition | Virtually every significant assertion contains multiple claims; decomposition is needed in Phase 1 |
| VERA-P-0007 Hidden Assumption Excavation | Undocumented assumptions are the most common Level 2 reasoning failure |
| VERA-P-0008 Contrary Evidence Integration | Contrary evidence always appears; practitioners need a framework for addressing it before their first verification |
| VERA-P-0010 Self-Verification with Adversarial Stance | Independent verification is rarely available at Level 2; this pattern makes self-verification rigorous |
Available at Level 3 (Practicing)
These patterns require systematic practice infrastructure before they are useful:
| Pattern | Why it’s needed at Level 3 |
|---|---|
| VERA-P-0005 Time-Sensitive Evidence Management | Review cadence infrastructure must exist before evidence expiry tracking is practical |
| VERA-P-0009 Analogical Reasoning Validation | Requires reasoning chain fluency; premature application produces mechanical similarity scoring without judgment |
| VERA-P-0011 Expert Verifier Onboarding | Requires a verification process mature enough to be explained to an expert; Level 2 verification is not |
| VERA-P-0012 Cascading Claim Update | Requires a populated claim registry with dependency tracking; not applicable without one |
Available at Level 4 (Governing)
This pattern requires governance infrastructure and a multi-verifier pool:
| Pattern | Why it’s needed at Level 4 |
|---|---|
| VERA-P-0013 Claim Confidence Calibration | Requires multiple verifiers, historical data, and a governance function to manage the calibration program |
Patterns by Use Case
Use these guides to find patterns for the situation you are facing right now.
“I’m in Phase 2 and my evidence search has a problem.”
| Situation | Pattern |
|---|---|
| Expected evidence type not found | VERA-P-0001 |
| Evidence comes from a conflicted source | VERA-P-0002 |
| AI tool was used to find or summarize evidence | VERA-P-0003 |
| Multiple evidence items seem to come from the same source | VERA-P-0004 |
| Evidence is current but may become stale | VERA-P-0005 |
| Evidence supports the claim but is contradicted by other evidence | VERA-P-0008 |
“I’m in Phase 3 and my reasoning chain has a problem.”
| Situation | Pattern |
|---|---|
| The claim is too complex to address as a single statement | VERA-P-0006 |
| I can’t identify all the assumptions my reasoning is making | VERA-P-0007 |
| Some evidence contradicts my claim; I’m not sure how to handle it | VERA-P-0008 |
| My reasoning chain uses “this is like that other case” logic | VERA-P-0009 |
“I’m in Phase 4 and my verification process has a problem.”
| Situation | Pattern |
|---|---|
| No independent verifier is available | VERA-P-0010 |
| The only available verifier has domain expertise but no VERA training | VERA-P-0011 |
| I’m verifying a claim that uses a recently changed upstream claim | VERA-P-0012 |
| Confidence ratings across the team seem inconsistent | VERA-P-0013 |
“I’ve discovered a problem after verification.”
| Situation | Pattern |
|---|---|
| A verified claim’s evidence is now stale | VERA-P-0005 |
| A verified claim’s upstream evidence source has changed state | VERA-P-0012 |
| Different verifiers are giving very different confidence ratings | VERA-P-0013 |
Pattern Entries (Quick Reference)
The following gives a one-sentence summary of each pattern. Click the ID to go to the full pattern.
VERA-P-0001 — Absence-of-Evidence Assessment (Evidence / Moderate / Level 2) Assessing the epistemic significance of evidence types expected but not found, and translating that significance into a calibrated materiality rating and confidence adjustment. Full pattern in Evidence Patterns and as worked example in Pattern Template.
VERA-P-0002 — Conflicted Source Disclosure (Evidence / Moderate / Level 2) A graded disclosure-and-corroboration framework for evidence from sources with a financial, institutional, or personal stake in the claim’s outcome. Full pattern in Evidence Patterns.
VERA-P-0003 — AI-Generated Evidence Documentation (Evidence / Moderate / Level 2) A three-tier handling protocol for evidence retrieved, summarized, or synthesized by AI systems, with chain-of-custody and quality-rating requirements for each tier. Full pattern in Evidence Patterns.
VERA-P-0004 — Source Collapse Detection and Remediation (Evidence / Simple / Level 2) A backward-trace audit that reveals shared roots among evidence items and consolidates dependent items into an accurate count of independent sources. Full pattern in Evidence Patterns.
VERA-P-0005 — Time-Sensitive Evidence Management (Evidence / Moderate / Level 3) An evidence-level expiry annotation system with three decay-rate categories and monitoring protocols linked to claim review triggers. Full pattern in Evidence Patterns.
VERA-P-0006 — Compound Claim Decomposition (Reasoning / Moderate / Level 2) A three-test decomposition method (independence test, evidence test, verification test) applied recursively until each component is atomic and independently verifiable. Full pattern in Reasoning Patterns.
VERA-P-0007 — Hidden Assumption Excavation (Reasoning / Moderate / Level 2) A systematic interrogation of each reasoning step using three question types designed to surface assumptions that feel like facts. Full pattern in Reasoning Patterns.
VERA-P-0008 — Contrary Evidence Integration (Reasoning / Moderate / Level 2) A structured evaluation framework for contrary evidence based on quality tier, relevance, and independence, with a decision matrix for selecting the appropriate response. Full pattern in Reasoning Patterns.
VERA-P-0009 — Analogical Reasoning Validation (Reasoning / Complex / Level 3) Structured similarity-disanalogy analysis with a similarity scoring rubric that determines the permissible scope of an analogical conclusion. Full pattern in Reasoning Patterns.
VERA-P-0010 — Self-Verification with Adversarial Stance (Verification / Moderate / Level 2) A role-switch protocol with a mandatory time gap, adversarial criteria checklist, and explicit confidence penalty that makes self-verification meaningfully rigorous. Full pattern in Verification Patterns.
VERA-P-0011 — Expert Verifier Onboarding (Verification / Complex / Level 3) A structured briefing and interview protocol that translates VERA verification criteria into domain-specific language for an expert who has not been trained in VERA. Full pattern in Verification Patterns.
VERA-P-0012 — Cascading Claim Update (Verification / Complex / Level 3) A dependency-registration and impact-triage protocol that identifies downstream claims affected by an upstream claim’s state change and prioritizes re-verification. Full pattern in Verification Patterns.
VERA-P-0013 — Claim Confidence Calibration (Verification / Complex / Level 4) An anchor-example calibration program with verifier consistency metrics and a confidence committee process for high-stakes claims. Full pattern in Verification Patterns.
How to Contribute a New Pattern
Patterns emerge from practice. If you have encountered a recurring challenge that is not addressed by an existing pattern, and you have resolved it in at least two distinct cases, you have the raw material for a new pattern.
See Pattern Template: How to Propose a New Pattern for the submission process.
Before proposing, search this catalog for existing patterns that might address your situation. A proposed pattern that duplicates an existing one — even partially — should be presented as a proposed revision or extension of the existing pattern, not as a new one.
Evidence Patterns
Evidence Patterns address recurring challenges in the Evidence domain: identifying what evidence to look for, retrieving and rating it, assessing the independence of evidence items, handling evidence from problematic sources, and managing evidence whose reliability changes over time.
All patterns in this chapter follow the canonical Pattern Template. Evidence quality tier ratings (Primary, Secondary, Tertiary, Testimonial) and evidence independence classifications (Independent, Correlated, Dependent) are defined in the Lexicon.
VERA-P-0001 — Absence-of-Evidence Assessment
Pattern ID: VERA-P-0001
| Field | Value |
|---|---|
| Domain | Evidence |
| Applicability | All |
| Complexity | Moderate |
| Maturity Level | 2 |
The full text of this pattern appears in Pattern Template: A Worked Example, where it serves as the canonical worked example of the pattern format. The summary below is provided for catalog completeness.
Problem in brief: When a category of evidence listed in the prospective search plan yields no results, its absence may be non-material, moderately significant, or highly significant to the claim’s confidence — but there is no standard method for assessing which.
Solution in brief: Assess each absent evidence item on three dimensions — whether it was expected versus unexpected, whether it is substitutable by a different evidence type, and what direction it would likely point if it existed — and assign a materiality rating (Non-material / Moderate / Significant) with corresponding documentation and confidence adjustment requirements.
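The three-dimension assessment can be sketched as a small decision function. This is a hypothetical illustration of the logic, not the VERA-specified method; the function name, parameters, and the mapping from dimensions to ratings are assumptions for clarity only.

```python
# Hypothetical sketch of the three-dimension absence assessment.
# Names and the mapping below are illustrative, not part of the VERA spec.

def rate_materiality(expected: bool, substitutable: bool,
                     likely_direction: str) -> str:
    """Map the three assessment dimensions to a materiality rating.

    likely_direction: 'supporting', 'contrary', or 'unknown' -- the
    direction the absent evidence would likely point if it existed.
    """
    if not expected or substitutable:
        # Unexpected or substitutable absences carry little epistemic weight.
        return "Non-material"
    if likely_direction == "unknown":
        return "Moderate"
    # Expected, non-substitutable evidence with a predictable direction
    # is the case that most affects confidence.
    return "Significant"

print(rate_materiality(expected=True, substitutable=False,
                       likely_direction="unknown"))  # Moderate
```

Each rating then carries its own documentation and confidence-adjustment requirements, as described in the full pattern.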
Verification Status:
| Field | Value |
|---|---|
| State | Verified |
| Verifier | VERA Founding Review |
| Verified on | 2026-02-21 |
| Record | VERA-V-0001 |
| Confidence | 0.88 |
VERA-P-0002 — Conflicted Source Disclosure
Pattern ID: VERA-P-0002
| Field | Value |
|---|---|
| Domain | Evidence |
| Applicability | All |
| Complexity | Moderate |
| Maturity Level | 2 |
Context
Practitioners regularly encounter evidence from sources that have a financial, institutional, or personal stake in the claim’s outcome. Industry-funded clinical studies, vendor-produced benchmark reports, advocacy group research, regulatory submissions by regulated entities, and expert testimony from retained consultants are all examples. This situation is not unusual — in many domains, the parties with the most relevant data are also parties with the most interest in how it is interpreted.
The standard VERA Evidence criteria require that evidence items be quality-rated (E2) and that their independence be assessed (E3). Neither criterion directly addresses conflict of interest as a distinct category. Conflict of interest reduces — but does not eliminate — an evidence item’s epistemic value. An industry-funded study is not automatically wrong; a regulatory submission is not automatically self-serving. But the conflict changes how the evidence should be weighted and documented.
Problem
When the most relevant available evidence for a claim component comes from a source with a stake in the claim’s outcome, practitioners face a genuine dilemma: treating the evidence as equivalent to unconflicted evidence understates the epistemic risk; discarding all conflicted evidence may leave a claim without support for an important component. Neither extreme is correct. The framework needs a middle path.
Forces
- Conflicted sources often produce the most thorough and current evidence in a domain, because they have the most investment in understanding it.
- Discarding all conflicted evidence may constitute selective citation — if the conflicted evidence points against the claim, discarding it is a violation of Evidence Primacy.
- The degree of conflict varies enormously: minor institutional affiliation is not equivalent to direct financial stake in the claim’s outcome.
- Disclosure without corroboration is insufficient for high-conflict sources; corroboration without disclosure obscures a legitimate epistemic concern.
- Requiring corroboration for all conflicted evidence creates a research burden disproportionate to the conflict’s significance in some cases.
Solution
Apply a conflict severity rating to each evidence item identified as coming from a conflicted source. Then apply the corresponding disclosure and corroboration requirements for that severity level.
Conflict Severity Scale:
| Severity | Description | Examples |
|---|---|---|
| Low | Institutional connection without financial stake in claim outcome | Author is affiliated with an organization that advocates related positions; publication is associated with a think tank with a general ideological orientation |
| Moderate | Indirect financial or reputational stake | Organization that funded the study would benefit from the claim being true; author has consulting relationship with an industry the claim concerns |
| High | Direct financial stake in the specific claim’s outcome | Study funded by company whose product is the subject of the claim; evidence produced by a party in a dispute about the claim’s subject matter |
Requirements by Severity Level:
| Severity | Disclosure | Corroboration | Confidence Impact |
|---|---|---|---|
| Low | Note affiliation in evidence item record | None required | ≤ 0.05 reduction |
| Moderate | Note conflict and nature of interest in evidence item record | Attempt to identify at least one unconflicted corroborating source; if none found, document the search | 0.05–0.10 reduction |
| High | Note conflict prominently in evidence item record and in reasoning chain | Corroboration from unconflicted source required for any evidence item at this severity level; if unavailable, claim confidence reduced and limitation disclosed in claim statement | 0.10–0.20 reduction |
For High severity evidence with no available unconflicted corroboration: the evidence may still be used, but the reasoning chain must explicitly argue why the conflicted source’s evidence is credible despite the conflict (e.g., the evidence is verifiable through independent means, the conflict is disclosed in the source’s own methodology, the claim is against the source’s interest).
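The two tables above can be expressed as a simple lookup. This is a minimal sketch, assuming the practitioner applies the upper bound of each reduction band; the structure names and that choice are illustrative assumptions, not VERA rules.

```python
# Illustrative encoding of the severity scale and its requirements.
# Applying the band's upper bound is an assumption for the example;
# in practice the reduction is chosen within the band.

SEVERITY_RULES = {
    "Low":      {"corroboration": "none",     "max_reduction": 0.05},
    "Moderate": {"corroboration": "attempt",  "max_reduction": 0.10},
    "High":     {"corroboration": "required", "max_reduction": 0.20},
}

def adjust_component_confidence(confidence: float, severity: str) -> float:
    """Reduce a claim component's confidence by the band's upper bound."""
    return round(confidence - SEVERITY_RULES[severity]["max_reduction"], 2)

print(adjust_component_confidence(0.85, "High"))  # 0.65
```

Note that the reduction applies to the claim component supported by the conflicted evidence, not to the whole claim.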
Implementation
1. Identify the source’s relationship to the claim. For each evidence item, research whether the source organization or author has a financial, institutional, or reputational stake in whether the claim is true. Document the relationship or its absence.
2. Rate conflict severity. Apply the three-point scale. When uncertain between two levels, apply the higher level.
3. Document in the evidence item record. Add a “Conflict” field to the evidence item record with: severity rating, description of the conflict, and the basis for the rating.
4. Apply corroboration requirements. For Moderate severity: conduct a targeted search for unconflicted corroboration. Document the search and its outcome. For High severity: corroboration is required; if not found, initiate the confidence reduction and limitation documentation requirements.
5. Adjust confidence. Apply the confidence reduction for the severity level. The reduction applies to the claim component supported by the conflicted evidence, not to the whole claim unless the conflict is pervasive.
6. Document in the reasoning chain. For Moderate and High conflicts, include a step in the reasoning chain that explicitly acknowledges the conflict and explains why the evidence was used despite it.
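The “Conflict” field added in Step 3 can be sketched as a small record. The layout below is an assumption for illustration; VERA does not prescribe a specific schema, and the example values are hypothetical.

```python
# Minimal sketch of the "Conflict" field documented in Step 3, including
# the outcome of the Step 4 corroboration search. The schema and values
# are illustrative assumptions, not a VERA-defined format.
from dataclasses import dataclass

@dataclass
class ConflictRecord:
    severity: str       # "Low" | "Moderate" | "High"
    description: str    # nature of the source's stake in the claim's outcome
    basis: str          # why this severity level was assigned
    corroboration: str  # outcome of the corroboration search, if required

conflict = ConflictRecord(
    severity="Moderate",
    description="Study funded by an organization that benefits if the claim is true",
    basis="Indirect financial stake; no direct product interest",
    corroboration="One unconflicted corroborating source found and documented",
)
print(conflict.severity)  # Moderate
```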
Evidence Requirements
- Documentation of the source’s conflict, including the nature and severity of the relationship
- For Moderate: documentation of corroboration search and its outcome
- For High: corroboration from an unconflicted source, or explicit documentation of why the conflicted evidence is credible despite the conflict
Verification Criteria
- Every evidence item from a source with any conflict has a documented conflict severity rating in the evidence item record
- For Moderate and High severity items, the corroboration requirement has been met or explicitly documented as unmet with explanation
- The confidence rating reflects adjustments for conflict severity as specified
- High-severity conflicts are disclosed in the claim statement, not only in the evidence item record
Consequences
Benefits:
- Evidence from conflicted sources is not discarded, preserving information that may be accurate and valuable.
- Conflicts are visible to downstream users and verifiers, enabling informed judgment about their significance.
- The graded approach prevents both dismissal and uncritical acceptance of conflicted evidence.
Liabilities:
- Conflict severity rating involves judgment; the same source may be rated differently by different practitioners.
- The corroboration requirement for High-severity evidence can be resource-intensive when unconflicted sources are scarce.
- Disclosing conflicts prominently may make a claim appear weaker than it is when the conflicted evidence is actually accurate.
Known Uses
- Policy research team, public health agency (2025): Applied to a claim about a pharmaceutical intervention where all available clinical trials were industry-funded. Three trials were rated High severity; one was rated Moderate. The team documented the conflict for all four, conducted a systematic search for unconflicted corroboration (none found), applied the confidence reduction, and disclosed the limitation in the claim statement. The disclosure was cited favorably in subsequent regulatory review as evidence of epistemic integrity.
- Competitive intelligence function, technology firm (2025): Applied to benchmark evidence produced by a competitor. Rated High severity; corroborated with independent testing. The independent test results agreed with the competitor’s benchmark on three of five dimensions, providing partial corroboration with documented scope.
Related Patterns
- VERA-P-0001 — Absence-of-Evidence Assessment: Use when corroboration search (Step 4) finds no unconflicted sources; the absence of corroboration has its own materiality that must be assessed.
- VERA-P-0004 — Source Collapse Detection and Remediation: Use alongside VERA-P-0002 when multiple conflicted sources are suspected to derive from the same underlying data.
Verification Status:
| Field | Value |
|---|---|
| State | Verified |
| Verifier | VERA Founding Review |
| Verified on | 2026-02-21 |
| Record | VERA-V-0002 |
| Confidence | 0.84 |
VERA-P-0003 — AI-Generated Evidence Documentation
Pattern ID: VERA-P-0003
| Field | Value |
|---|---|
| Domain | Evidence |
| Applicability | All |
| Complexity | Moderate |
| Maturity Level | 2 |
Context
Practitioners increasingly use AI systems — large language models, AI-powered search tools, document summarization systems, and analysis platforms — at every stage of evidence work: to identify relevant sources, to summarize large bodies of literature, to extract key claims from documents, and to synthesize evidence across multiple sources. AI tools can dramatically accelerate evidence work. They also introduce chain-of-custody and quality-rating challenges that the standard evidence framework does not directly address.
The core problem is that AI output is a transformation of sources, not a source itself. When an AI system produces a summary of ten studies, that summary is not a Secondary-tier evidence item — it is a machine-generated interpretation of Secondary sources. When an AI retrieves what it describes as a quotation from a document, that quotation may or may not accurately reflect the original. The chain between AI output and primary source is opaque by default.
This pattern applies whenever an AI system has meaningfully shaped the evidence in a VERA claim — whether by finding it, summarizing it, synthesizing it, or generating it.
Problem
Evidence retrieved, summarized, or synthesized by AI systems cannot be quality-rated using the standard four-tier scale without modification. The AI output is not Primary evidence (it is not firsthand observation or original data), not Secondary (it is not peer-reviewed expert interpretation), not Tertiary (it is not an editorially curated synthesis), and not precisely Testimonial (the AI is not a human expert). Yet practitioners frequently cite AI outputs as if they were one of these types, or — more problematically — cite them without noting the AI provenance at all.
Forces
- AI tools genuinely accelerate evidence discovery and synthesis; prohibiting their use would impose disproportionate costs.
- Citing “the AI said” as an evidence item violates chain-of-custody requirements; the AI is not a source.
- The underlying sources an AI references often exist and can be retrieved — but doing so for every AI-mediated evidence item is costly.
- AI systems can hallucinate sources, misattribute quotations, and misrepresent the findings of real studies. Human verification of AI claims is not optional.
- Different AI systems have different reliability characteristics; the system identity and version are relevant to the chain-of-custody assessment.
- The prompter’s framing influences what the AI finds and how it summarizes; the prompt is part of the chain of custody.
Solution
Apply a three-tier handling protocol based on whether the underlying primary source can be retrieved and independently verified.
Tier A: Source Retrieved and Verified
The AI identified the evidence; the practitioner retrieved and directly examined the original source.
- Create the evidence item record for the original source, not the AI output.
- Note in the chain-of-custody field: “Source identified by [AI system, version, date]. Original source retrieved and verified by [practitioner] on [date].”
- Apply the standard quality tier rating to the original source.
- The AI’s role is discovery assistance; it does not affect the quality tier of the original source.
Tier B: Source Identified but Not Retrievable
The AI cited a source that the practitioner attempted but failed to retrieve (paywalled, deleted, or possibly hallucinated).
- Create an evidence item record for the AI output with the following fields:
  - Type: AI-Mediated (Unverified) — not a standard quality tier
  - AI System: [Name, version, date]
  - Prompt: [Verbatim or summarized prompt that produced this output]
  - Output: [Verbatim AI output]
  - Retrieval attempt: [What was done to locate the original source, and why it failed]
- Apply a mandatory confidence penalty of 0.15 to any claim component whose support includes Tier B evidence.
- Tier B evidence cannot be used as the sole support for a claim component (verification criterion E2, Evidence Quality Adequacy).
Tier C: AI-Synthesized Analysis (No Specific Source)
The AI generated analytical content — synthesis, interpretation, pattern identification — that does not correspond to a specific retrievable source.
- Treat as Testimonial tier with the AI system as the “expert.”
- Document: AI system, version, date, prompt (verbatim), and the complete output used.
- Note in the evidence item record that the “expert” is an AI system, with its known limitations.
- Apply all Testimonial-tier evidence requirements.
- AI Testimonial evidence carries an inherent independence limitation: the same AI system queried multiple times about the same topic is not independent.
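The three-tier protocol and its confidence effects can be sketched as a small data model. This is an illustrative sketch, not part of the pattern itself: the class names, fields, and the `component_confidence` and `passes_e2` helpers are assumptions chosen for the example.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class AITier(Enum):
    A = "source retrieved and verified"
    B = "source identified but not retrievable"
    C = "AI synthesis, no specific source"

@dataclass
class EvidenceItem:
    description: str
    ai_tier: Optional[AITier] = None   # None: no AI involvement in this item
    quality_tier: str = ""             # Primary / Secondary / Tertiary / Testimonial (AI)

def component_confidence(base: float, items: list[EvidenceItem]) -> float:
    """Apply the mandatory 0.15 penalty when any supporting item is Tier B."""
    if any(i.ai_tier is AITier.B for i in items):
        base -= 0.15
    return max(base, 0.0)

def passes_e2(items: list[EvidenceItem]) -> bool:
    """Criterion E2: Tier B evidence must not be the sole support for a component."""
    return any(i.ai_tier is not AITier.B for i in items)
```

For example, a component supported only by a Tier B item fails `passes_e2`, while mixing in one Tier A item passes it but still incurs the 0.15 penalty.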
Implementation
- Inventory AI involvement. Before finalizing the evidence set, identify every evidence item where AI played a role in discovery, retrieval, summarization, or synthesis.
- Classify each AI-involved item. For each item, determine whether it is Tier A (source retrieved), Tier B (source identified but not retrievable), or Tier C (AI synthesis without specific source).
- Document AI provenance. For all three tiers, record: AI system name, version or model identifier, date of query, and the verbatim prompt (or, if the prompt is long, a summary that would allow reconstruction of the query).
- Retrieve original sources. For every candidate Tier B item, make at least one genuine attempt to retrieve the underlying source before accepting Tier B status; a successful retrieval promotes the item to Tier A. Document the attempt.
- Apply quality ratings. Tier A: use the original source’s quality tier. Tier B: document as AI-Mediated (Unverified). Tier C: document as Testimonial (AI).
- Apply confidence adjustments. Tier B items trigger the 0.15 confidence penalty per claim component. Tier C items are subject to standard Testimonial-tier confidence assessment.
- Flag in the reasoning chain. For Tier B and Tier C evidence items, include an explicit note in the reasoning chain step that uses them, identifying the AI provenance and the epistemic limitation it creates.
Evidence Requirements
- AI system identity, version, and query date for all AI-involved evidence items
- Verbatim prompts for all Tier B and Tier C items; verbatim prompts or reconstruction-sufficient summaries for Tier A items
- Documentation of retrieval attempts for all Tier B items
- Complete verbatim AI output for all Tier B and Tier C items used as evidence
Verification Criteria
- No evidence item in the claim’s evidence set is cited as Primary, Secondary, or Tertiary when the practitioner did not directly access the original source — AI-mediated discovery alone does not constitute direct access.
- All Tier B and Tier C evidence items are documented with AI provenance fields.
- The confidence rating reflects the 0.15 penalty for Tier B evidence and the Testimonial-tier calibration for Tier C evidence.
- No Tier B evidence item is the sole support for a claim component (which would violate criterion E2).
Consequences
Benefits:
- AI tools can be used without compromising chain-of-custody requirements.
- The three-tier system preserves the value of AI discovery work while requiring human verification of AI claims.
- Downstream users and verifiers can assess the epistemic weight of AI-mediated evidence.
Liabilities:
- Tier A retrieval for all AI-discovered evidence is time-intensive; the acceleration benefit of AI discovery is partly offset by verification costs.
- The prompt-capture requirement adds documentation overhead to AI-assisted research workflows.
- AI system versioning is inconsistent; identifying the model and version used may not be possible in all tools.
Known Uses
- Research team, management consulting firm (2025): Developed an internal prompt logging tool after applying this pattern; Tier B rates dropped from ~40% to ~12% as practitioners built habits of clicking through to original sources before closing AI sessions.
- Policy analyst, government research office (2025): Applied Tier C to AI-generated synthesis of legislative history; noted that three queries to the same AI produced partially inconsistent syntheses, confirming the independence limitation; used the inconsistency itself as a Tier B evidence item about the claim’s uncertainty.
Related Patterns
- VERA-P-0002 — Conflicted Source Disclosure: AI systems trained on biased corpora may exhibit systematic conflicts; applies when AI provenance suggests directional bias.
- VERA-P-0007 — Hidden Assumption Excavation: AI-generated reasoning chains typically contain hidden assumptions; apply P-0007 to any AI-generated reasoning used in a VERA claim.
Verification Status:
| Field | Value |
|---|---|
| State | Verified |
| Verifier | VERA Founding Review |
| Verified on | 2026-02-21 |
| Record | VERA-V-0003 |
| Confidence | 0.86 |
VERA-P-0004 — Source Collapse Detection and Remediation
Pattern ID: VERA-P-0004
| Field | Value |
|---|---|
| Domain | Evidence |
| Applicability | All |
| Complexity | Simple |
| Maturity Level | 2 |
Context
Evidence sets assembled from multiple sources frequently contain items that appear independent but ultimately derive from the same underlying source. This is especially common in domains dominated by a small number of authoritative primary sources — a single landmark study that spawned many secondary analyses, a single regulatory document that multiple organizations reference, a single dataset that multiple publications draw from.
The Lexicon defines this error as source collapse and defines the Independence assessment as a required part of evidence set assembly (Verification Protocol Step 2.4). This pattern specifies how to conduct that assessment systematically, and how to remediate collapse when it is found.
Problem
When multiple evidence items in a set trace to the same underlying source, the evidence set overstates the degree of independent support for the claim. A practitioner who assembles five evidence items all derived from the same original study believes they have built a robust multi-source case; they have built a single-source case with decorative citations. Verification criterion E3 (Independence Adequacy) will catch this — but only if the verifier knows to look for it and has a method for detecting it.
Forces
- Source collapse often occurs innocuously: different publications genuinely cite the same study as the best available evidence, making independent citation count unreliable as an independence signal.
- Detecting source collapse requires backward-tracing each evidence item through its citation chain, which is time-consuming.
- Consolidated evidence sets have fewer items, which may superficially appear weaker even when the consolidation more accurately represents the independent support.
- True independence is sometimes impossible to achieve in domains where a single primary source exists; this must be documented rather than hidden.
Solution
Construct an evidence source tree for any evidence set containing four or more items, or whenever a subset of items appears to address the same claim component from what may be related sources.
The source tree maps each evidence item backward to its ultimate origin: the original dataset, study, document, or observation. Items that share an ultimate origin are source-collapsed and must be consolidated.
Implementation
- Trigger assessment. Apply this pattern whenever: (a) the evidence set has four or more items, (b) two or more items reference the same study or dataset, or (c) verification criterion E3 is flagged during review.
- Build the source tree. For each evidence item, trace its citations backward:
  - What does this item cite as its primary source?
  - What does that source cite?
  - Continue until you reach an item with no further citations — a direct observation, original dataset, or primary document.
  - Record the ultimate source for each evidence item.
- Identify shared roots. Group evidence items that share the same ultimate source.
- Apply the independence classification. Within each group:
  - If items independently transform or interpret the shared source in meaningfully different ways, classify as Correlated (keep all items, note the shared root).
  - If items are essentially transmissions of the same content from the same source, classify as Dependent (consolidate into one item).
- Consolidate Dependent items. Replace the group of dependent items with a single evidence item representing the underlying source, with a note: “N items in the original evidence set were consolidated; all derived from [source].”
- Recount independent sources. After consolidation, record the true count of independent evidence items and the count of correlated items. This true count is used in the confidence assessment.
- Document in the evidence set. Record the source tree as an appendix to the evidence set documentation. Any verifier can then reproduce the independence assessment without repeating the full trace.
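The backward trace and root-grouping steps can be sketched in a few lines. This is a minimal sketch under simplifying assumptions: each item cites at most one source (real citation chains branch), and the function names are illustrative.

```python
from collections import defaultdict

def trace_ultimate_source(item: str, cites: dict[str, str]) -> str:
    """Follow the citation chain until an item with no further citation is reached.

    `cites` maps each item to the source it cites; absence from the map marks an
    ultimate source (direct observation, original dataset, or primary document).
    A `seen` set guards against cyclic citation data.
    """
    seen = set()
    while item in cites and item not in seen:
        seen.add(item)
        item = cites[item]
    return item

def independent_count(items: list[str], cites: dict[str, str]):
    """Group items by ultimate source; each shared root counts only once."""
    roots = defaultdict(list)
    for item in items:
        roots[trace_ultimate_source(item, cites)].append(item)
    collapsed = {root: group for root, group in roots.items() if len(group) > 1}
    return len(roots), collapsed
```

The Correlated versus Dependent judgment within each shared-root group remains a human call; the sketch only surfaces the shared roots and the true independent count.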
Evidence Requirements
- The citation chain for each evidence item, traced to its ultimate source
- Documentation of the independence classification for each item
- The consolidation record showing which items were merged and why
Verification Criteria
- A source tree exists for evidence sets of four or more items
- All evidence items have an ultimate source identified
- No evidence item is classified as Independent when its ultimate source is shared with another item in the set
- The confidence rating is calculated from the consolidated (true) independent source count, not the raw item count
Consequences
Benefits:
- Eliminates a systematic form of confidence inflation.
- Makes the true evidential basis of a claim visible and auditable.
- Reduces evidence sets to their genuine information content, making reasoning chains cleaner.
Liabilities:
- Source tracing is time-intensive for evidence items with long citation chains.
- Consolidation visibly reduces evidence set size, which can create internal resistance if practitioners believe “more sources = stronger claim.”
- In some domains, consolidation reveals that a widely-held claim rests on a single primary source — a finding that is uncomfortable but epistemically important.
Known Uses
- Internal audit team, financial institution (2025): Applied to a claim about regulatory compliance that cited eight secondary sources; source tree revealed all eight derived from two primary regulatory interpretations; consolidated to two items and revised confidence downward, prompting additional primary-source research.
- Academic researcher, social sciences (2024): Applied during systematic review construction; identified that a meta-analysis included fifteen papers drawing from the same longitudinal dataset, which the original review’s analysis had counted as fifteen independent data points.
Related Patterns
- VERA-P-0001 — Absence-of-Evidence Assessment: Use when source collapse remediation reveals that a claim component has fewer independent sources than expected; the reduced count may reach the threshold for Significant absence.
- VERA-P-0002 — Conflicted Source Disclosure: Source collapse and conflict of interest frequently co-occur when an industry produces or funds the dominant primary sources in a domain.
Verification Status:
| Field | Value |
|---|---|
| State | Verified |
| Verifier | VERA Founding Review |
| Verified on | 2026-02-21 |
| Record | VERA-V-0004 |
| Confidence | 0.91 |
VERA-P-0005 — Time-Sensitive Evidence Management
Pattern ID: VERA-P-0005
| Field | Value |
|---|---|
| Domain | Evidence |
| Applicability | All |
| Complexity | Moderate |
| Maturity Level | 3 |
Context
Claims in many domains are supported by evidence that changes in reliability over time. Regulatory approval status changes. Market data becomes stale. Living clinical guidelines are updated. Organizational policies are superseded. Technology benchmarks are invalidated by new releases. An evidence item that was Primary-tier and current at time of assembly may be Secondary-tier or effectively misleading six months later.
The Verification Protocol establishes review cadences at the claim level (Phase 5, Step 5.4), specifying how frequently the whole claim should be reviewed. This claim-level cadence is insufficient when different evidence items within the same claim decay at different rates: a claim about market conditions might have some evidence (a regulatory ruling) that is stable for years and other evidence (a pricing dataset) that becomes stale in weeks.
Problem
Standard claim-level review cadences do not account for evidence-level decay. A claim reviewed on schedule may still be supporting decisions with stale evidence if some items in its evidence set have become unreliable between reviews. Conversely, practitioners who flag all evidence as time-sensitive create review burdens disproportionate to actual epistemic risk.
Forces
- Evidence decay rates vary by source type and domain; one-size-fits-all cadences either over-burden or under-protect.
- Notifying downstream claims when upstream evidence becomes stale requires active monitoring infrastructure not available at all maturity levels.
- The boundary between “stale” and “superseded” is not always clear; stale evidence may still be directionally correct.
- Practitioners in rapidly changing domains face continuous evidence churn; alert fatigue is a real risk if the system triggers too frequently.
Solution
Annotate each evidence item with an evidence decay class and associated expiry date at the time of evidence set assembly. Link evidence expiry to automatic claim review triggers.
Evidence Decay Classes:
| Class | Description | Default expiry window | Examples |
|---|---|---|---|
| Stable | Evidence is not expected to change materially | 36 months | Historical records, mathematical proofs, long-established scientific consensus, statutory text |
| Drifting | Evidence may change gradually; periodic verification needed | 12 months | Policy documents, organizational standards, expert consensus guidelines, established market structures |
| Volatile | Evidence changes frequently; continuous monitoring required | 3 months | Market prices, regulatory approval status, software version specifics, living guidelines, survey data |
Practitioners may override the default expiry window with a documented rationale. The override must be conservative (shorter, not longer) unless there is specific justification for extending it.
Implementation
- Assign decay class at assembly. For each evidence item, assign a decay class (Stable / Drifting / Volatile) and calculate the expiry date (assembly date + default window, or override date with rationale).
- Record expiry in evidence item documentation. Add an “Expiry” field to each evidence item record: decay class, expiry date, and monitoring method.
- Set monitoring for Volatile items. For each Volatile evidence item, establish an active monitoring method: a saved search, a regulatory alert subscription, a calendar reminder for manual check-in. Document the method.
- Link expiry to claim review triggers. The claim’s effective review date is the earliest expiry date across all evidence items with a decay class of Volatile or Drifting. This may be earlier than the claim-level review cadence established in Verification Protocol Phase 5.
- Handle expiry events. When an evidence item reaches its expiry date:
  - Check whether the source has changed.
  - If unchanged: update the expiry date for the next window; note the reconfirmation in the evidence item record.
  - If changed: assess the impact on the claim. Update the evidence item or replace it. Reassess verification state. Notify downstream claims via VERA-P-0012.
- Mark stale items. An evidence item that has passed its expiry date without a reconfirmation check is marked Stale. Claims with Stale evidence items are marked Stale and should not be used in reasoning chains without re-verification.
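The expiry-event decision (reconfirm, reassess, or mark Stale) is a small state transition. This sketch is an assumed encoding; the state strings and parameter names are illustrative.

```python
from datetime import date, timedelta

def handle_expiry_event(expiry: date, today: date, checked: bool,
                        source_changed: bool, window_days: int):
    """Return (state, next_expiry) for an evidence item relative to its expiry date.

    Unchanged sources are reconfirmed and given a fresh window; changed sources
    force reassessment (and downstream notification via VERA-P-0012); items past
    expiry with no reconfirmation check are Stale.
    """
    if today < expiry:
        return "Current", expiry
    if not checked:
        return "Stale", expiry
    if source_changed:
        return "Reassess", expiry
    return "Reconfirmed", today + timedelta(days=window_days)
```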
Evidence Requirements
- Decay class assignment for each evidence item, with documented rationale for any override of the default window
- Monitoring method documented for all Volatile evidence items
- Reconfirmation records for evidence items that have been refreshed at least once
Verification Criteria
- Every evidence item in the set has an assigned decay class and expiry date
- No evidence item is Stale (past expiry without reconfirmation) at the time of verification
- The claim’s effective review trigger reflects the earliest expiry date in the evidence set, not only the claim-level cadence
- Monitoring methods for Volatile items are documented and plausibly executable
Consequences
Benefits:
- Evidence-level tracking prevents claims from becoming supported by stale evidence between claim-level reviews.
- The decay class system calibrates monitoring effort to actual risk, reducing alert fatigue.
- Expiry tracking provides an automatic early-warning system for downstream claims.
Liabilities:
- Adds annotation work at evidence assembly time.
- Volatile evidence monitoring requires active infrastructure (alerts, saved searches) that must be maintained.
- Determining the correct decay class for novel evidence types involves judgment; practitioners may systematically underestimate volatility in unfamiliar domains.
Known Uses
- Research function, global consulting firm (2025): Implemented evidence expiry as part of their claim registry tooling after discovering three active claims were being cited in client deliverables with market data that was 18 months old. The tooling flagged 23 evidence items as Stale in the first month of implementation.
- Regulatory affairs team, pharmaceutical company (2025): Applied to a claim about competitor approval status; Volatile classification with 3-month expiry triggered a reconfirmation check that caught a label modification, allowing a filing to be updated before submission.
Related Patterns
- VERA-P-0001 — Absence-of-Evidence Assessment: When evidence becomes stale and no current replacement is found, treat the absent current evidence using P-0001 materiality assessment.
- VERA-P-0012 — Cascading Claim Update: Use to notify and re-evaluate downstream claims when Volatile evidence changes force a claim to a new verification state.
Verification Status:
| Field | Value |
|---|---|
| State | Verified |
| Verifier | VERA Founding Review |
| Verified on | 2026-02-21 |
| Record | VERA-V-0005 |
| Confidence | 0.82 |
Reasoning Patterns
Reasoning Patterns address recurring challenges in constructing and evaluating the logical connection between evidence and conclusions. They cover claim decomposition, assumption identification, contrary evidence handling, and the documentation of analogical arguments — the most common places where reasoning chains break down in practice.
All patterns follow the canonical Pattern Template. Inference types (deductive, inductive, abductive, analogical), reasoning chain structure, and quality dimensions are defined in the Lexicon.
VERA-P-0006 — Compound Claim Decomposition
Pattern ID: VERA-P-0006
| Field | Value |
|---|---|
| Domain | Reasoning |
| Applicability | All |
| Complexity | Moderate |
| Maturity Level | 2 |
Context
Virtually every significant assertion made in organizational contexts is a compound claim: it asserts multiple things simultaneously, often with different levels of evidential support for each component. “Our AI system is accurate, efficient, and ready for production deployment” is not one claim — it is at least three. “This market strategy will capture significant share in the next two years” asserts both the strategic mechanism and the time horizon, which may have very different evidence bases.
Verification Protocol Phase 1, Step 1.2 requires decomposing compound claims before proceeding. This pattern specifies how to decompose effectively, addressing the most common failure mode: under-decomposition, where practitioners split the most obvious compound but stop before reaching independently verifiable atomic claims.
Problem
Practitioners under-decompose for two reasons. First, it is not obvious where decomposition is complete — there is no clear signal that an atomic claim has been reached. Second, the work of tracking multiple sub-claims and their dependencies is more complex than tracking one compound, so there is incentive to stop decomposing before the components are truly independent.
The consequence of under-decomposition is that claims are verified as units when their components have very different evidential strength. A compound claim with one well-supported and one poorly-supported component receives an averaged verification judgment that obscures the weak component.
Forces
- Over-decomposition produces too many micro-claims with complex dependency trees that are harder to navigate than the original compound.
- Under-decomposition hides differential evidential support across components.
- Some claims genuinely cannot be decomposed without losing their meaning (the conjunction is the claim).
- Sub-claim dependencies must be tracked and managed; if Sub-claim B is true only if Sub-claim A is true, this dependency must be explicit.
- The decomposition must be complete before evidence assembly begins; decomposing after evidence is gathered risks reverse-engineering the decomposition to fit the evidence found.
Solution
Apply three tests to each claim component during decomposition. A component has reached an appropriate level of atomicity only when it passes all three tests.
Test 1 — Independence Test: Can this component be true while the other components are false? If yes, it is independent enough to stand as a separate claim. If no, the components are logically entangled and must be treated as a conjunction claim.
Test 2 — Evidence Test: Can you identify a distinct type of evidence that would specifically address this component — evidence that would not be relevant to the other components? If yes, the component has a distinct evidential basis and should be separated. If no, the components share an evidential basis and may appropriately remain together.
Test 3 — Verification Test: If this component were false, would the parent claim’s overall verification state change? If yes, it is significant enough to track separately. If no, it may be a subordinate qualification rather than a component.
Apply these three tests iteratively. After each round of decomposition, apply the tests again to each new component. Stop when all components pass all three tests or when further decomposition would produce components too small to be meaningful.
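The iterative stopping rule can be sketched as a simple predicate over components; the dataclass fields here are illustrative names for the three test outcomes, which in practice are human judgments.

```python
from dataclasses import dataclass

@dataclass
class Component:
    text: str
    independent: bool            # Test 1: can be true while sibling components are false
    distinct_evidence: bool      # Test 2: has evidence that addresses only this component
    verification_material: bool  # Test 3: its falsity would change the parent's state

def is_atomic(c: Component) -> bool:
    """A component is appropriately atomic only when it passes all three tests."""
    return c.independent and c.distinct_evidence and c.verification_material

def decomposition_complete(components: list[Component]) -> bool:
    """Stop decomposing when every component passes all three tests."""
    return all(is_atomic(c) for c in components)
```

A component failing Test 1 signals an entangled conjunction to keep together; failing Test 3 signals a subordinate qualification rather than a separate claim.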
Implementation
- State the compound claim precisely. Write out the full assertion. Underline every conjunction (and, but, while, as well as), every implicit comparison, and every embedded causal or temporal claim.
- Draft candidate components. Separate the underlined elements into candidate atomic claims. Write each as a complete declarative sentence.
- Apply the three tests to each candidate. For each candidate component, work through the Independence, Evidence, and Verification tests. Document the result of each test.
- Separate independent components. Components that pass all three tests become distinct VERA claims, each receiving its own Claim ID and proceeding through the Protocol independently.
- Handle entangled components. Components that fail the Independence Test (cannot be true independently) should be analyzed: Is the entanglement conceptual (the claim is genuinely conjunctive) or definitional (one term is being used to mean both)? Conceptually conjunctive claims are kept together with a note. Definitional entanglements require rephrasing.
- Map sub-claim dependencies. Where Sub-claim B is true only if Sub-claim A is true, document this dependency explicitly. The verification of B depends on A being Verified; if A is Contested or Refuted, B must be re-evaluated.
- Reconstruct the parent claim. Document how the atomic sub-claims combine to produce the original compound assertion. This reconstruction step is verified at Phase 4 to ensure no meaning was lost or added during decomposition.
Evidence Requirements
- The original compound assertion documented verbatim
- Test results for each candidate component
- Dependency map showing which sub-claims are prerequisites for others
- Reconstruction record showing how sub-claims combine to the parent
Verification Criteria
- Every sub-claim passes all three tests at its current level of decomposition
- Sub-claim dependencies are documented and are consistent with the logical structure of the parent claim
- The reconstruction is complete: all elements of the original assertion are accounted for in the sub-claims
- No sub-claim asserts more than the evidence for it specifically supports
Consequences
Benefits:
- Differential evidential strength across claim components is made visible rather than averaged.
- Weak components are identified and can be researched independently or acknowledged as limitations.
- Sub-claim reuse becomes possible: a well-supported atomic claim can serve as evidence in multiple parent claims.
Liabilities:
- Decomposition adds Phase 1 work before evidence assembly begins.
- Dependency tracking adds complexity to the claim registry, especially for multi-level decompositions.
- Sub-claims with low evidential support may, when isolated, cause practitioners to abandon claims that would have survived as compounds — which is not always the wrong outcome, but can feel like a loss.
Known Uses
- Strategy team, global technology company (2024): Applied to a strategic recommendation that asserted three distinct market conditions plus a causal mechanism. Decomposition into five sub-claims revealed that one (the causal mechanism) had no supporting evidence and was an assumption. The assumption was documented, and the parent claim was revised to acknowledge it explicitly.
- Research analyst, public policy institute (2025): Applied during systematic review of a government program; identified that the program’s “success” claim contained four components with different evidence bases, two of which were not supported by available evidence. The decomposed version was more useful for policy recommendations than the aggregate claim.
Related Patterns
- VERA-P-0007 — Hidden Assumption Excavation: Apply after decomposition; each sub-claim has its own assumption set that must be excavated.
- VERA-P-0012 — Cascading Claim Update: When sub-claims are treated as upstream evidence for a parent claim, sub-claim state changes trigger the cascading update protocol.
Verification Status:
| Field | Value |
|---|---|
| State | Verified |
| Verifier | VERA Founding Review |
| Verified on | 2026-02-21 |
| Record | VERA-V-0006 |
| Confidence | 0.87 |
VERA-P-0007 — Hidden Assumption Excavation
Pattern ID: VERA-P-0007
| Field | Value |
|---|---|
| Domain | Reasoning |
| Applicability | All |
| Complexity | Moderate |
| Maturity Level | 2 |
Context
All reasoning chains contain assumptions — premises taken as true without direct evidence. The Verification Protocol (Phase 3, Step 3.4) requires practitioners to identify and document assumptions. The practical problem is that the most consequential assumptions are typically the ones the practitioner doesn’t notice they’re making, because they feel like facts rather than choices.
The assumptions that destroy reasoning chains under scrutiny are rarely the ones the author flagged. They are the ones that felt so obviously true — so much like background reality rather than premise — that they were never examined.
Problem
Asking “what am I assuming?” is insufficient for identifying hidden assumptions. The question is self-defeating: if you knew what you were assuming, it wouldn’t be hidden. Practitioners who apply Step 3.4 conscientiously document the assumptions they are aware of. The remaining hidden assumptions still determine whether the reasoning chain holds.
This is not a problem of dishonesty or carelessness. It is a structural problem in human reasoning: things that feel like facts are processed differently from things that feel like premises. The solution requires a systematic interrogation method that bypasses this processing difference.
Forces
- You cannot directly inspect your own invisible assumptions; indirect interrogation methods are required.
- Some assumptions are so widely shared in a context that documenting them adds noise without epistemic value.
- Other assumptions are technically visible (practitioners know them) but not documented because they are taken as given.
- Contested assumptions — premises that are genuinely disputed in the relevant community — are the highest-priority category; they must be documented and their contestedness acknowledged.
- The same reasoning chain applied in a different context may have entirely different assumption sets; context-sensitivity is a critical dimension.
Solution
Apply a three-question interrogation to every step in the reasoning chain. The three questions are designed to approach the same space of hidden assumptions from different angles, increasing the probability of surfacing any given assumption.
Question 1: Necessary Conditions “What would need to be true for this conclusion to follow from these premises?”
List everything that must hold — beyond what is stated in the premises — for the inference to be valid. This surfaces assumptions about continuity (“what was true yesterday is true today”), scope (“the pattern observed in this sample holds for the whole population”), mechanism (“A causes B through mechanism M”), and completeness (“the factors listed here are the relevant ones”).
Question 2: Variance “What am I treating as fixed that could vary?”
Identify every quantity, relationship, or category in the reasoning step that you are treating as constant. Consider: What if it varied? Would the conclusion change? If yes, the constancy is an assumption that must be documented. This question surfaces assumptions about measurement invariance, definitional stability, contextual consistency, and the absence of moderating variables.
Question 3: Values and Framing. “Where is this reasoning sensitive to value choices or framing decisions?”
Identify where the reasoning chain encodes choices that are presented as factual but are actually normative or framing-dependent: the choice of comparison baseline, the definition of “significant” or “successful,” the selection of which effects to count and which to ignore. This question surfaces normative assumptions that are particularly likely to be contested by people who share the evidence but not the values.
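The three questions, together with the assumption classification introduced under Implementation below, can be sketched as a simple record structure. This is a hypothetical illustration only: VERA does not prescribe a data model, and every class, field, and function name here is invented.

```python
from dataclasses import dataclass, field
from enum import Enum

# The three interrogation questions, keyed by the angle each probes.
QUESTIONS = {
    "Q1": "What would need to be true for this conclusion to follow from these premises?",
    "Q2": "What am I treating as fixed that could vary?",
    "Q3": "Where is this reasoning sensitive to value choices or framing decisions?",
}

class AssumptionClass(Enum):
    BACKGROUND = "background"  # widely shared, low contestedness: document briefly
    DOMAIN = "domain"          # shared by domain experts: cite authority
    CONTESTED = "contested"    # genuinely disputed: document prominently
    NOVEL = "novel"            # introduced by the claimant: needs explicit justification

@dataclass
class Assumption:
    statement: str                   # declarative form, e.g. "CAC stays constant as sales scales"
    surfaced_by: str                 # which question surfaced it: "Q1", "Q2", or "Q3"
    classification: AssumptionClass
    load_bearing: bool = False       # conclusion fails if this assumption fails

@dataclass
class ReasoningStep:
    text: str
    assumptions: list = field(default_factory=list)

def must_scope_claim(step: ReasoningStep) -> bool:
    """A load-bearing Contested assumption means the claim should be scoped
    to the conditions under which the assumption holds (Implementation, final step)."""
    return any(
        a.classification is AssumptionClass.CONTESTED and a.load_bearing
        for a in step.assumptions
    )
```

For instance, the known-use example below (customer acquisition cost assumed constant) would be recorded as a Contested, load-bearing assumption surfaced by Q2, and `must_scope_claim` would flag the claim for rescoping.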
Implementation
1. Write out the reasoning chain in full before beginning excavation. The interrogation cannot be applied to a chain that doesn’t yet exist. Complete Phase 3, Step 3.3 first.
2. Apply Question 1 to each step. For each step in the chain, ask: What would need to be true, beyond the stated premises, for this conclusion to follow? List every condition. Write them as declarative statements.
3. Apply Question 2 to each step. For each step, identify every variable that is treated as fixed. Consider: time, location, population, definition, measurement methodology, causal pathway. List all.
4. Apply Question 3 to each step. Identify any normative or framing-sensitive elements. List them explicitly.
5. Classify each identified assumption. For each item surfaced:
   - Background assumption: Widely shared, stable, low contestedness — document briefly.
   - Domain assumption: Specific to the domain, likely shared by domain experts — document with reference to authority.
   - Contested assumption: Genuinely disputed in the relevant community — document prominently, note the dispute, assess whether the claim should be scoped to conditions under which the assumption holds.
   - Novel assumption: Not standard in the domain, introduced by the claimant — document thoroughly, treat as requiring evidence or explicit justification.
6. Add assumptions to the reasoning chain. All Domain, Contested, and Novel assumptions are added as documented elements of the reasoning chain — either as explicit premises in the step where they operate, or as a separate “Assumptions” section linked to the relevant steps.
7. Reassess claim scope. If any Contested assumptions are load-bearing (the conclusion fails if the assumption fails), consider scoping the claim to the conditions under which the assumption holds, rather than presenting the claim as unconditionally true.
Evidence Requirements
- The complete reasoning chain before excavation begins
- The list of items surfaced by each of the three questions, for each step
- The classification of each surfaced item
- For Contested and Novel assumptions: documentation of the dispute or novelty
Verification Criteria
- All three questions have been applied to all steps in the reasoning chain
- Every Contested and Novel assumption is documented prominently in the claim record
- The confidence rating reflects any Contested assumptions that are load-bearing
- The claim’s scope statement accounts for conditions under which load-bearing Contested assumptions fail
Consequences
Benefits:
- Surfaces the most dangerous class of reasoning errors: invisible premises.
- Forces explicit engagement with contestedness, preventing the claim from being presented as more settled than it is.
- The documented assumption set becomes a valuable resource for downstream users who may be in contexts where the assumptions don’t hold.
Liabilities:
- The three-question interrogation takes time and requires sustained adversarial focus.
- Extensive assumption documentation can make reasoning chains appear more uncertain than practitioners are comfortable with.
- Distinguishing background from contested assumptions requires domain knowledge; practitioners without deep domain expertise may mis-classify.
Known Uses
- Product team, enterprise software firm (2024): Applied during a business case review; Question 2 (Variance) surfaced an assumption that customer acquisition cost would remain constant as the sales team scaled — an assumption that was false and that invalidated the unit economics claim. The business case was revised before board presentation.
- Academic researcher, political science (2025): Applied to a comparative politics claim using analogical inference; Question 3 (Values and Framing) revealed that the comparison baseline was implicitly encoding a normative preference that was contested in the literature. The framing assumption was disclosed and the paper’s scope was narrowed accordingly.
Related Patterns
- VERA-P-0006 — Compound Claim Decomposition: Apply P-0006 before P-0007; each sub-claim has its own assumption set.
- VERA-P-0009 — Analogical Reasoning Validation: Analogical reasoning is particularly prone to hidden similarity assumptions; apply both patterns when inference type is analogical.
Verification Status:
| Field | Value |
|---|---|
| State | Verified |
| Verifier | VERA Founding Review |
| Verified on | 2026-02-21 |
| Record | VERA-V-0007 |
| Confidence | 0.85 |
VERA-P-0008 — Contrary Evidence Integration
Pattern ID: VERA-P-0008
| Field | Value |
|---|---|
| Domain | Reasoning |
| Applicability | All |
| Complexity | Moderate |
| Maturity Level | 2 |
Context
Evidence assembly following a proper prospective search plan (Verification Protocol Phase 2) will routinely produce evidence items that complicate or contradict the claim. This is not a failure of the research process — it is the expected outcome of honest evidence gathering. The question is what to do with contrary evidence once found.
Verification Protocol Phase 3, Step 3.5 specifies four valid responses to contrary evidence: outweigh (supporting evidence is stronger), distinguish (contrary evidence applies to a different scope), qualify (revise the claim to acknowledge the contradiction), or concede (the contrary evidence is decisive). But it does not provide a framework for determining which response is appropriate. Left without guidance, practitioners default to “outweigh” — a choice that frequently rationalizes rather than responds to contrary evidence.
Problem
The four responses (outweigh, distinguish, qualify, concede) are not interchangeable. Each is appropriate in specific conditions, and applying the wrong one produces either an overconfident claim (outweigh when distinguish or qualify is correct) or an artificially weakened claim (qualify or concede when the contrary evidence is genuinely less weighty). The current Protocol provides vocabulary but not decision criteria.
Forces
- “Outweigh” is easy to assert and hard to disprove; it is structurally available as a rationalization for any contrary evidence.
- “Distinguish” can be used to scope away relevant contrary evidence by drawing a distinction that does not reflect the claim’s actual applicability.
- “Qualify” reduces the claim’s practical utility; practitioners may resist it even when it is the epistemically correct response.
- “Concede” is professionally uncomfortable; there is institutional pressure against abandoning claims that have been asserted.
- The quality and independence of contrary evidence must be assessed using the same standards as supporting evidence.
Solution
Before selecting a response, evaluate the contrary evidence on three dimensions using the same criteria applied to supporting evidence. The evaluation produces a contrary evidence profile that determines which responses are available.
Dimension 1: Quality Tier. Rate the contrary evidence using the standard four-tier scale. Higher-tier contrary evidence requires a stronger response.
Dimension 2: Relevance Scope. Assess whether the contrary evidence applies to exactly the same claim scope as the assertion, or to a related but distinct scope. Document the scope match precisely.
- Full overlap: Contrary evidence directly addresses the claim as stated.
- Partial overlap: Contrary evidence addresses a subset or superset of the claim’s scope.
- Adjacent: Contrary evidence addresses a closely related claim, but not this one.
Dimension 3: Independence. Assess whether the contrary evidence is independent of the supporting evidence — does it derive from a different primary source? Apply the same source tree analysis as VERA-P-0004.
Response Selection Matrix:
| Quality | Scope | Independence | Available responses |
|---|---|---|---|
| Primary/Secondary | Full overlap | Independent | Qualify or Concede only |
| Primary/Secondary | Partial overlap | Independent | Distinguish (if honest), Qualify |
| Primary/Secondary | Full overlap | Dependent | Outweigh possible, with documented argument |
| Tertiary/Testimonial | Full overlap | Independent | Outweigh possible, Qualify preferred |
| Tertiary/Testimonial | Any | Dependent | Outweigh with argument |
| Any | Adjacent | Any | Distinguish (document the distinction precisely) |
Note: “Outweigh possible” means the response is available but must be argued — the reasoning chain must explain specifically why the supporting evidence outweighs the contrary, not merely assert that it does.
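The matrix can be read as a lookup function over the three dimensions. The sketch below mirrors the table literally; the function name and string vocabulary are invented, and cells the table does not cover (for example high-tier, partial overlap, dependent) fall through to the most permissive row, which is my extrapolation rather than something the pattern specifies.

```python
def available_responses(quality: str, scope: str, independent: bool) -> list:
    """Return the responses the matrix makes available for one contrary
    evidence item. quality: 'primary'|'secondary'|'tertiary'|'testimonial';
    scope: 'full'|'partial'|'adjacent'."""
    high_tier = quality in ("primary", "secondary")

    if scope == "adjacent":
        # Any quality, any independence: distinguish, documenting the boundary.
        return ["distinguish"]
    if high_tier and independent:
        if scope == "full":
            return ["qualify", "concede"]   # outweigh is NOT available
        return ["distinguish", "qualify"]   # partial overlap; distinguish only if honest
    if high_tier and scope == "full":       # dependent on the supporting evidence
        return ["outweigh"]                 # only with a documented argument
    if not high_tier and independent and scope == "full":
        return ["outweigh", "qualify"]      # qualify preferred
    # Remaining cells (low-tier dependent, or combinations the table omits):
    # outweigh with argument — an assumption beyond the table's explicit rows.
    return ["outweigh"]
```

Note how the strongest case — high-quality, fully overlapping, independent contrary evidence — removes “outweigh” from the menu entirely, which is the matrix’s central constraint.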
Implementation
1. Compile contrary evidence items. From the Phase 2 evidence set, identify all items marked as complicating or contradicting the claim.
2. Evaluate each item on three dimensions. For each contrary item: assign quality tier, assess scope overlap, assess independence from supporting evidence.
3. Apply the response matrix. Identify which responses are available for each item. If no response other than Qualify or Concede is available, proceed directly to Step 6.
4. For Outweigh responses: build the argument. Write the explicit comparison: why does the supporting evidence (identified by item) outweigh the contrary evidence (identified by item)? The argument must reference quality tier, scope, and independence — not just assert that the balance favors the claim.
5. For Distinguish responses: state the distinction precisely. Write the exact boundary between what the contrary evidence addresses and what the claim asserts. The distinction must be genuine: it cannot be drawn post hoc purely to exclude the contrary evidence.
6. For Qualify responses: revise the claim. Return to Phase 1 and revise the claim statement to incorporate the qualification. The qualification must be substantive — not a hedge that preserves the original meaning while appearing to respond to the contrary evidence.
7. For Concede responses: document the concession. Record that the contrary evidence was decisive for the claim as stated. The claim may be revised to a narrower version that the evidence does support, or retired.
8. Document all responses in the reasoning chain. Each contrary evidence item and its response must appear as a step in the reasoning chain. Contrary evidence addressed only in footnotes or appendices is not transparent reasoning — it is disclosure theater.
Evidence Requirements
- Quality tier assessment for each contrary evidence item
- Scope overlap assessment with documentation of the boundary
- Independence assessment (source tree) for each contrary evidence item
- For Outweigh: the explicit comparative argument
- For Distinguish: the precisely stated scope distinction
Verification Criteria
- Every contrary evidence item identified in Phase 2 appears in the reasoning chain with a documented response
- For Outweigh responses: the argument references quality tier, scope, and independence; it does not merely assert the balance favors the claim
- For Distinguish responses: the stated distinction is verifiable and not drawn solely to exclude the contrary evidence
- Qualify and Concede responses result in documented claim revisions
- The confidence rating reflects the weight and quality of contrary evidence that was addressed by Outweigh or Distinguish responses (not assumed to be neutralized by the response)
Consequences
Benefits:
- Prevents “outweigh” from functioning as a rationalization by requiring it to be argued.
- Makes the actual treatment of contrary evidence visible and auditable.
- Forces genuine engagement with evidence that contradicts the claim rather than gestural acknowledgment.
Liabilities:
- The three-dimension evaluation adds analysis work for each contrary evidence item.
- The response matrix constrains available responses in ways that may feel limiting when the practitioner has strong conviction in the claim.
- Qualify and Concede responses require revising work already done, which creates resistance.
Known Uses
- Legal research team, corporate law firm (2025): Applied during preparation of a legal memorandum with conflicting precedents; the response matrix revealed that two “distinguishable” cases were in fact full-overlap contrary evidence, requiring a Qualify response that changed the memorandum’s conclusion.
- Environmental assessment team, infrastructure firm (2024): Applied to an environmental impact claim; identified that the strongest contrary evidence (a Primary-tier independent study) required a Concede response on one of three claim components, leading to a scope revision that survived regulatory scrutiny where the original would not have.
Related Patterns
- VERA-P-0001 — Absence-of-Evidence Assessment: When no contrary evidence is found despite an anticipated type, use P-0001 to assess the materiality of its absence.
- VERA-P-0002 — Conflicted Source Disclosure: When contrary evidence comes from a conflicted source, the conflict affects the quality-tier assessment but does not eliminate the contrary evidence.
Verification Status:
| Field | Value |
|---|---|
| State | Verified |
| Verifier | VERA Founding Review |
| Verified on | 2026-02-21 |
| Record | VERA-V-0008 |
| Confidence | 0.83 |
VERA-P-0009 — Analogical Reasoning Validation
Pattern ID: VERA-P-0009
| Field | Value |
|---|---|
| Domain | Reasoning |
| Applicability | All |
| Complexity | Complex |
| Maturity Level | 3 |
Context
Analogical reasoning — “this case is like that other case; what was true there is likely true here” — is ubiquitous in organizational and policy decision-making. Case studies, precedent reasoning, benchmarking, and “best practices” transfer all rely on analogy. VERA’s Lexicon designates analogical inference as the weakest of the four inference types, because its validity depends entirely on the strength of the similarity being claimed — a dependency that is almost never assessed rigorously.
Practitioners who use analogical reasoning typically document it as: “This is similar to [reference case], which achieved [outcome], therefore [conclusion].” What is missing is any assessment of how similar the cases are, on which dimensions the similarity holds, and what the disanalogies imply for the scope of the conclusion.
Problem
Analogical reasoning is accepted in most reasoning chains without examination of the similarity claim that underlies it. The similarity is asserted, not demonstrated. Yet the entire inference rests on it. An analogy where the reference case and the current case are similar on three dimensions but differ on the two dimensions that determine the outcome is not just weak — it is actively misleading.
Forces
- Requiring rigorous similarity analysis for all analogical reasoning would burden a widely used and often valuable inference form.
- The relevant similarity dimensions are domain-specific; no domain-agnostic similarity metric exists.
- Disanalogies are often harder to articulate than similarities, because the analyst is motivated by the analogical connection.
- Analogical conclusions should be scoped to what the similarity actually supports, which may be narrower than what the practitioner wants to conclude.
- High-stakes decisions frequently rely on analogical reasoning from historical cases; this is where the rigor investment is most justified.
Solution
Apply a structured similarity-disanalogy analysis that produces a similarity score and determines the permissible scope of the analogical conclusion.
Phase A: Dimension Identification
Identify the dimensions on which the reference case and current case could be similar or different. Dimensions should be:
- Causally relevant: they plausibly affect the outcome being analogized
- Independently assessable: each dimension can be evaluated without assuming the conclusion
- Specific: not “general context” but the specific contextual factors that might matter
Generate dimensions by asking: What features of the reference case might explain its outcome? What features of the current case might affect the outcome? Where do these feature sets overlap?
Phase B: Similarity Scoring
For each identified dimension, assess the degree of similarity between the reference case and the current case on a four-point scale:
| Score | Meaning |
|---|---|
| 3 | Substantially identical on this dimension |
| 2 | Similar with noted differences |
| 1 | Related but materially different |
| 0 | Disanalogous — cases differ significantly on this dimension |
Calculate the Similarity Score: sum of scores divided by (3 × number of dimensions). A score of 1.0 is perfect similarity; 0.0 is complete disanalogy.
Phase C: Disanalogy Assessment
For any dimension scored 0 or 1, conduct a specific assessment: does the disanalogy on this dimension affect the outcome being claimed? If a dimension is disanalogous but outcome-irrelevant, the disanalogy does not weaken the inference. If a dimension is disanalogous and outcome-relevant, the disanalogy directly limits the conclusion’s scope.
Conclusion Scope Determination:
| Similarity Score | Permissible conclusion scope |
|---|---|
| 0.85 – 1.0 | Strong analogical inference; conclusion scope matches reference case |
| 0.65 – 0.84 | Moderate analogical inference; conclusion must acknowledge material differences |
| 0.40 – 0.64 | Weak analogical inference; conclusion must be qualified to the dimensions of similarity; outcome-relevant disanalogies limit the conclusion explicitly |
| Below 0.40 | Analogy is not sufficient to support the inference; either find a closer reference case or change inference type |
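Phase B’s arithmetic and the scope table above reduce to a few lines. The formula (sum of dimension scores over 3 × number of dimensions) and the thresholds come from the pattern; the band labels, function names, and the ceiling mapping’s dictionary form are mine (the ceiling values themselves are the ones this pattern assigns to analogical inference).

```python
def similarity_score(dimension_scores: list) -> float:
    """Aggregate per-dimension scores (each 0-3) into the 0.0-1.0 Similarity Score."""
    if not dimension_scores or any(s not in (0, 1, 2, 3) for s in dimension_scores):
        raise ValueError("each dimension must be scored 0, 1, 2, or 3")
    return sum(dimension_scores) / (3 * len(dimension_scores))

def conclusion_scope(score: float) -> str:
    """Map the Similarity Score to the permissible conclusion scope band."""
    if score >= 0.85:
        return "strong"        # conclusion scope matches the reference case
    if score >= 0.65:
        return "moderate"      # must acknowledge material differences
    if score >= 0.40:
        return "weak"          # qualified to the dimensions of similarity
    return "insufficient"      # find a closer reference case or change inference type

# Confidence ceilings the pattern attaches to analogical inference, by band.
CONFIDENCE_CEILING = {"strong": 0.80, "moderate": 0.70, "weak": 0.55}
```

The market-entry known use below illustrates the weak band: a score of 0.61 lands under the 0.65 threshold, so the conclusion must be qualified to the dimensions of similarity and capped at 0.55 confidence.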
Implementation
1. State the reference case. Document the specific case being used as the analogical basis. Retrieve at least one evidence item (Tier 1 or 2 preferred) that documents the reference case’s relevant features and outcome.
2. Identify causally relevant dimensions. List the dimensions on which similarity matters for the claimed outcome. If domain expertise is needed to identify these dimensions, obtain it before proceeding.
3. Score each dimension. For each dimension, assess and document the similarity score with a brief justification.
4. Compute the Similarity Score. Calculate and record the aggregate score.
5. Conduct disanalogy assessment for low-scored dimensions. For each dimension scored 0 or 1, assess outcome-relevance. Document the assessment.
6. Determine permissible conclusion scope. Apply the scope table to the Similarity Score. Revise the claim statement if the permissible scope is narrower than what was originally asserted.
7. Document in the reasoning chain. The analogy step in the reasoning chain must reference: the reference case, the Similarity Score, the outcome-relevant disanalogies, and the scoped conclusion.
8. Assign confidence. Analogical inference starts at a lower confidence ceiling than deductive or inductive inference. Map the Similarity Score to a confidence ceiling: Score 0.85+ → ceiling 0.80; Score 0.65–0.84 → ceiling 0.70; Score 0.40–0.64 → ceiling 0.55.
Evidence Requirements
- At least one Tier 1 or Tier 2 evidence item documenting the reference case’s features and outcome
- Documented dimension identification process (not just the list; the reasoning for including each dimension)
- Per-dimension similarity scores with justifications
- Disanalogy assessments for all dimensions scored 0 or 1
Verification Criteria
- A reference case is documented with Tier 1 or Tier 2 evidence (not “the well-known case of…” without citation)
- Similarity dimensions are identified and are genuinely causally relevant — not selected to maximize the score
- The Similarity Score is calculated correctly from individual dimension scores
- The claim’s scope matches the permissible scope for the achieved Similarity Score
- The confidence ceiling for the Similarity Score is applied in the confidence rating
Consequences
Benefits:
- Makes the similarity claim — the actual basis of analogical inference — explicit and challengeable.
- Forces acknowledgment of disanalogies and their implications, producing more honest conclusions.
- The Similarity Score provides a communicable summary of analogical strength.
Liabilities:
- The dimension identification step requires domain knowledge; incorrect dimensions produce misleading scores.
- The four-point scoring scale involves judgment; different analysts may score the same dimension differently.
- The conclusion scope constraints may produce claims narrower than practitioners want, creating pressure to inflate dimension scores.
Known Uses
- Strategy team, consumer goods company (2025): Applied to a market entry strategy that analogized from a successful entry in a different geography. Similarity Score of 0.61 — below the threshold for strong inference — revealed significant disanalogies in regulatory environment and distribution infrastructure. The strategy was modified to address the disanalogous dimensions before proceeding.
- Policy research institute (2024): Applied to a policy proposal drawing on an international precedent. Identified that three of eight dimensions were outcome-relevant disanalogies; the claim was revised from “Policy X will achieve Outcome Y” to “Policy X may achieve Outcome Y under conditions A, B, and C, which differ from the reference case in the following ways.”
Related Patterns
- VERA-P-0007 — Hidden Assumption Excavation: Analogical reasoning is particularly prone to hidden similarity assumptions; the two patterns are complementary and should be applied together.
- VERA-P-0008 — Contrary Evidence Integration: Cases where the analogy has been tested and found wanting (a prior attempt to apply the same analogy that failed) are contrary evidence; integrate them using P-0008.
Verification Status:
| Field | Value |
|---|---|
| State | Verified |
| Verifier | VERA Founding Review |
| Verified on | 2026-02-21 |
| Record | VERA-V-0009 |
| Confidence | 0.80 |
Verification Patterns
Verification Patterns address recurring challenges in the verification process itself: executing rigorous verification when ideal conditions don’t hold, engaging domain experts who lack VERA training, managing the propagation of state changes through a claim registry, and calibrating confidence ratings across a team of verifiers.
All patterns follow the canonical Pattern Template. Verification states, independence levels, and verification criteria are defined in the Lexicon and the Verification Protocol.
VERA-P-0010 — Self-Verification with Adversarial Stance
Pattern ID: VERA-P-0010
| Field | Value |
|---|---|
| Domain | Verification |
| Applicability | All |
| Complexity | Moderate |
| Maturity Level | 2 |
Context
The Verification Protocol specifies three independence levels for verifiers: Foundational (different person from the claimant), Peer (relevant domain competency, no stake in outcome), and Expert (recognized expertise, institutionally independent). Independent verification is always preferable. In practice — particularly at Maturity Levels 2 and early 3 — independent verifiers are frequently unavailable: the domain is too specialized, the team is too small, the timeline is too compressed, or the claim involves confidential information that limits who can review it.
When independent verification is unavailable, the Protocol permits self-verification. It does not specify how to conduct self-verification in a way that produces genuinely rigorous results rather than rationalizing the claim’s conclusion.
Problem
Self-verification has an inherent optimism bias. Research on human cognition consistently shows that people evaluating their own work apply less scrutiny than they apply to others’. This bias operates even when practitioners are genuinely trying to be adversarial. The problem is not intent — it is that the claimant and the verifier share the same reasoning history, the same implicit assumptions, and the same emotional investment in the claim. They cannot fully simulate the perspective of someone who comes to the claim without that history.
A self-verification that mimics the appearance of rigor without its substance — producing a Verification Record without genuine adversarial evaluation — is worse than no verification, because it creates a false confidence signal.
Forces
- Independent verification is not always achievable, and blocking claims until it is available may be disproportionate for low-stakes claims.
- Self-verification that is honest about its limitations is more valuable than no verification, provided the limitations are documented.
- The optimism bias in self-verification is real but not uniform; structured adversarial protocols reduce but do not eliminate it.
- Self-verification at Foundational independence produces a lower confidence ceiling than peer or expert verification; this must be reflected in the confidence rating.
- The temptation to treat self-verification as equivalent to independent verification — to omit the independence level from the Verification Record — is significant and must be structurally prevented.
Solution
Apply a five-element self-verification protocol that structurally mimics independence through role separation, time gap, and an adversarial checklist calibrated to catch the errors most likely to be missed by the claimant.
Element 1 — Role Declaration. Before beginning self-verification, write a brief Role Declaration: “I am now acting as a verifier, not as the claimant. My goal is to find reasons why this claim’s verification should fail, not reasons why it should pass.” This is not ceremonial — it is a documented commitment that creates accountability for the verification role.
Element 2 — Mandatory Time Gap. A minimum of 24 hours must elapse between completing Phase 3 (Reasoning Construction) and beginning Phase 4 (Verification Assessment) for self-verification. The gap allows working memory of the reasoning process to fade, reducing the tendency to evaluate the chain as familiar rather than as new. For high-stakes claims: 48 hours minimum.
Element 3 — Adversarial Checklist. Apply the standard Phase 4 verification criteria AND the following adversarial checklist — items specifically calibrated to catch errors that claimants systematically miss in self-review:
| # | Adversarial check |
|---|---|
| A1 | Read the claim statement aloud. Does it assert exactly what the evidence supports — not more, not less? |
| A2 | For each evidence item rated Primary or Secondary: Have you actually read the original source, or are you recalling your notes about it? |
| A3 | For each reasoning step: If a skeptic handed you this step and asked you to argue against it, what would you say? Document that argument and assess it. |
| A4 | Is there contrary evidence in the set that you have not addressed in the reasoning chain? (Re-scan the evidence set independently of the reasoning chain.) |
| A5 | Which assumption in your reasoning chain would you least want an adversary to notice? Have you documented it? |
| A6 | What is the weakest link in this chain? Have you applied more scrutiny to it, or less, than to the stronger links? |
| A7 | If this claim turns out to be wrong, what would the most likely cause be? Is that cause addressed anywhere in the documentation? |
Element 4 — Confidence Ceiling. Self-verification produces a Foundational independence level, which caps the achievable confidence rating at 0.72. No self-verified claim may be assigned a confidence rating above 0.72, regardless of how strong the evidence appears to the claimant-verifier. This ceiling is documented in the Verification Record.
Element 5 — Independence Documentation. The Verification Record for a self-verified claim must clearly state: “Verification conducted by claimant. Independence level: Foundational. Confidence ceiling applied: 0.72. Independent verification recommended for claims above [stakes threshold].”
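The five elements reduce to a few mechanical checks: a recorded role declaration, a timestamp gap, a complete A1–A7 checklist, and a capped confidence value. A minimal sketch follows; all class and field names are invented, while the 24/48-hour gaps and the 0.72 ceiling are the pattern’s own values.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

FOUNDATIONAL_CEILING = 0.72                        # Element 4
MIN_GAP = timedelta(hours=24)                      # Element 2
HIGH_STAKES_GAP = timedelta(hours=48)
CHECKLIST_ITEMS = ["A%d" % i for i in range(1, 8)] # A1-A7, Element 3

@dataclass
class SelfVerification:
    role_declaration: str              # Element 1: dated, attributed commitment
    phase3_completed: datetime
    phase4_started: datetime
    checklist: dict = field(default_factory=dict)  # item id -> documented response
    raw_confidence: float = 0.0        # what the evidence alone would support
    high_stakes: bool = False

    def gap_observed(self) -> bool:
        required = HIGH_STAKES_GAP if self.high_stakes else MIN_GAP
        return self.phase4_started - self.phase3_completed >= required

    def checklist_complete(self) -> bool:
        # Not complete until all seven items have a documented response.
        return all(self.checklist.get(item) for item in CHECKLIST_ITEMS)

    def confidence(self) -> float:
        # Element 4: Foundational independence caps confidence at 0.72,
        # however strong the evidence appears to the claimant-verifier.
        return min(self.raw_confidence, FOUNDATIONAL_CEILING)
```

The point of the sketch is that the ceiling is structural: even a record whose evidence would support 0.9 reports 0.72, so self-verification can never silently impersonate independent verification.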
Implementation
1. Complete the claim through Phase 3. The full reasoning chain must be written before self-verification begins.
2. Write the Role Declaration. Date and sign it. File it with the Verification Record.
3. Observe the time gap. Do not begin Phase 4 work until the minimum gap has elapsed. For claims assembled under time pressure, document the reason if the gap cannot be met and flag the Verification Record accordingly.
4. Apply Phase 4 criteria. Work through all criteria in Phase 4, Step 4.2. Document findings for each criterion.
5. Apply the adversarial checklist. Work through A1–A7 in sequence. For each item, document: what you found, what argument you constructed against the claim, and how you assessed it. The checklist is not complete until all seven items are documented.
6. Assign verification state. Apply the same criteria as standard verification. The state is determined by criteria findings, not by the confidence ceiling.
7. Assign confidence rating. Apply the Foundational confidence ceiling (0.72). The confidence is capped at this value regardless of the evidence quality assessment.
8. Complete the Verification Record. Include all standard fields plus: independence level, confidence ceiling, Role Declaration reference, and a recommendation for independent verification if the claim will be used for high-stakes decisions.
Evidence Requirements
- The Role Declaration (dated, signed or attributed)
- Completed adversarial checklist with documented responses to each item
- Documentation of the time gap between Phase 3 completion and Phase 4 commencement
Verification Criteria
- The Verification Record explicitly states self-verification and Foundational independence level
- The adversarial checklist is complete with documented responses to all seven items
- The confidence rating does not exceed 0.72
- The Verification Record recommends independent verification for high-stakes applications
Consequences
Benefits:
- Provides a rigorous self-verification process that is meaningfully better than informal self-review.
- The confidence ceiling and independence documentation ensure that self-verification is never mistaken for independent verification.
- The adversarial checklist surfaces the specific failure modes most likely in self-review.
Liabilities:
- The time gap requirement may conflict with fast-moving work contexts.
- The confidence ceiling (0.72) means that some high-quality self-verified claims carry lower formal confidence than their evidence warrants.
- The adversarial stance requires sustained effort to maintain; practitioners under time or social pressure tend to revert to advocacy mode.
Known Uses
- Solo researcher, independent think tank (2024): Applied as a standard practice for all claims, accepting the 0.72 ceiling as a documentation of epistemic position and flagging high-stakes claims for eventual peer review. The adversarial checklist in item A5 (“assumption you least want an adversary to notice”) surfaced a load-bearing assumption on three separate occasions that was not otherwise documented.
- Small team, early-stage technology company (2025): Implemented as team policy after an important product claim was challenged during due diligence; the challenge identified a reasoning gap that the team’s informal self-review had missed. The time gap requirement was implemented as a 48-hour calendar block.
Related Patterns
- VERA-P-0011 — Expert Verifier Onboarding: Use when an expert is available to conduct independent verification; P-0010 is the fallback when P-0011 is not feasible.
- VERA-P-0013 — Claim Confidence Calibration: In organizations with multiple practitioners conducting self-verification, calibration is needed to ensure the 0.72 ceiling is applied consistently.
Verification Status:
| Field | Value |
|---|---|
| State | Verified |
| Verifier | VERA Founding Review |
| Verified on | 2026-02-21 |
| Record | VERA-V-0010 |
| Confidence | 0.86 |
VERA-P-0011 — Expert Verifier Onboarding
Pattern ID: VERA-P-0011
| Field | Value |
|---|---|
| Domain | Verification |
| Applicability | All |
| Complexity | Complex |
| Maturity Level | 3 |
Context
VERA verification requires two distinct competencies: domain knowledge (understanding what the claim asserts and whether the evidence and reasoning are sound within the domain) and VERA process knowledge (understanding what the verification criteria require and how to apply them). The ideal verifier has both. In practice, these competencies are rarely found together in the same person.
Domain experts who have not been trained in VERA can assess whether a claim is factually sound in their domain. They cannot reliably apply the VERA criteria (E1–E5, R1–R5, F1–F3) as stated, because those criteria use VERA-specific vocabulary and concepts that require background knowledge to apply correctly. Conversely, VERA-trained practitioners may lack the domain knowledge needed to evaluate whether the evidence truly supports the claim.
Problem
Engaging an expert as a VERA verifier without preparing them produces one of two outcomes: the expert reviews the claim for substantive accuracy (valuable but not VERA verification) or the expert attempts to apply VERA criteria without understanding them (producing a Verification Record that has the right format but reflects domain review rather than VERA verification). Both outcomes are misleading if the Verification Record represents the event as VERA verification.
Forces
- Expert time is expensive; consuming it with process overhead that could have been prepared in advance is wasteful.
- Brief VERA training for domain experts risks producing over-confident mis-application of incompletely understood criteria.
- The practitioner who produced the claim has the most VERA knowledge but the least independence; the expert has independence but not VERA knowledge.
- Splitting verification responsibilities (expert assesses domain substance; practitioner assesses formal VERA criteria) introduces coordination complexity and may miss the interaction between domain substance and formal criteria.
- The expert’s domain authority is valuable precisely because it is independent; involving them too heavily in the documentation process may compromise that independence.
Solution
Apply a structured briefing and joint record protocol in which the VERA practitioner performs three roles: (1) translating VERA criteria into domain-specific language the expert can apply, (2) managing the verification process and documentation, and (3) ensuring the expert’s domain judgment is accurately captured in the Verification Record.
The expert performs one role: providing independent domain judgment on the substance of the claim.
Phase A: Criteria Translation
Before the expert engagement, the practitioner prepares a Criteria Translation Worksheet: for each of the thirteen verification criteria (E1–E5, R1–R5, F1–F3), the practitioner writes a domain-specific version of the criterion that the expert can apply without VERA vocabulary. Example:
| Standard criterion | Domain translation (clinical research context) |
|---|---|
| E1 — Evidence Set Completeness | “Are there major types of studies — randomized trials, observational data, meta-analyses — that you would expect to see here that are missing?” |
| R1 — Logical Validity | “Does each inferential step follow from the studies it cites? Are there logical leaps?” |
| R3 — Completeness | “What is this analysis assuming that it doesn’t state? Are those assumptions reasonable in this domain?” |
Not all criteria require domain knowledge to assess. F1 (Claim Precision), F2 (Identifier Present), and F3 (Scope Defined) are procedural and can be assessed by the practitioner without the expert. Mark these on the worksheet as “Practitioner-assessed.”
Phase B: Pre-Review Briefing
The practitioner conducts a 20–30 minute briefing with the expert covering:
- The purpose and boundaries of the review (“You are being asked to verify, not endorse”)
- The independence requirement (“Your role is to find problems, not to confirm the claim”)
- The translated criteria (walk through the worksheet)
- The review format (structured interview, not free-form reading)
Phase C: Structured Review Interview
The expert reviews the claim record in advance. The practitioner then conducts a structured interview:
- For each translated criterion: “What did you find for criterion [X]?”
- The practitioner records the expert’s responses verbatim in the draft Verification Record.
- For any criterion the expert finds unclear, the practitioner clarifies using the standard VERA definition — but does not suggest how the criterion should be assessed.
Phase D: Joint Record Production
The practitioner drafts the Verification Record from the interview notes. The expert reviews and approves the record before it is finalized. The record must accurately reflect the expert’s judgments, not the practitioner’s interpretation of them.
Implementation
- Assess expert independence. Confirm the expert meets VERA independence requirements for Peer or Expert level. Document the basis for the independence assessment.
- Prepare the Criteria Translation Worksheet. For each criterion: write the domain translation, mark practitioner-assessed criteria.
- Assess practitioner-assessed criteria independently. Complete F1, F2, F3 and any other criteria that do not require domain knowledge before the expert engagement.
- Brief the expert. Conduct the pre-review briefing. Ensure the expert understands the adversarial nature of verification before they begin reviewing the claim.
- Conduct the structured review interview. Work through the translated criteria. Record responses verbatim or in close paraphrase. Do not editorialize.
- Draft and share the Verification Record. Send the draft to the expert for review. Invite corrections to how their judgment is represented.
- Finalize the record. Incorporate the expert’s corrections. Both the practitioner and the expert sign or attribute the completed record.
- Document the process. The Verification Record should note: that an Expert Verifier Onboarding protocol was used, the expert’s identity and relevant qualifications, the independence assessment, and whether the expert reviewed and approved the record.
Evidence Requirements
- Criteria Translation Worksheet (completed)
- Documentation of the expert’s independence assessment
- Interview notes or transcript supporting the Verification Record entries
- Expert’s review and approval of the final Verification Record
Verification Criteria
- The Verification Record accurately reflects the expert’s domain judgments (confirmed by expert approval)
- The independence assessment for the expert is documented at Peer or Expert level
- Practitioner-assessed criteria are distinguished from expert-assessed criteria in the record
- The process documentation is sufficient for a third party to understand how the verification was conducted
Consequences
Benefits:
- Expert domain knowledge is captured in a VERA-compliant verification, rather than in an informal review that cannot be cited as VERA verification.
- The division of labor prevents practitioner VERA knowledge from substituting for expert domain judgment.
- The translated criteria enable experts to apply VERA standards without learning the full framework.
Liabilities:
- Criteria translation requires significant preparation effort from the practitioner.
- The structured interview format may feel unfamiliar or constraining to experts accustomed to free-form consultation.
- The joint record production step lengthens the timeline; it cannot be completed until the expert has reviewed the draft.
Known Uses
- Medical affairs team, biotechnology firm (2025): Applied when verifying a clinical claim for a regulatory submission; the medical director served as expert verifier using a translated criteria worksheet. The structured interview format surfaced a concern about evidence set completeness (a key trial type was missing) that the medical director had not mentioned in initial discussions.
- Risk management function, insurance firm (2024): Applied to verify an actuarial claim where the VERA-trained analyst lacked actuarial credentials; the credentialed actuary served as expert verifier. The joint record production required two revision rounds before the actuary was satisfied with the representation of their judgment.
Related Patterns
- VERA-P-0010 — Self-Verification with Adversarial Stance: Use when expert verifier engagement is not feasible; P-0010 is the fallback.
- VERA-P-0013 — Claim Confidence Calibration: Expert verifiers who are new to VERA may calibrate confidence differently from experienced VERA verifiers; calibration exercises help.
Verification Status:
| Field | Value |
|---|---|
| State | Verified |
| Verifier | VERA Founding Review |
| Verified on | 2026-02-21 |
| Record | VERA-V-0011 |
| Confidence | 0.81 |
VERA-P-0012 — Cascading Claim Update
Pattern ID: VERA-P-0012
| Field | Value |
|---|---|
| Domain | Verification |
| Applicability | Team, Organization |
| Complexity | Complex |
| Maturity Level | 3 |
Context
In a functioning VERA claim registry, verified claims are cited as evidence in other claims, creating a dependency graph. Claim A is used as evidence for Claim B; Claim B is used as evidence for Claim C. When Claim A changes verification state — new evidence refutes it, it is successfully contested, a reasoning gap is discovered — the epistemic status of Claims B and C is affected.
The Verification Protocol does not specify how to manage these downstream effects. Without a protocol, the most common outcome is that downstream claims remain Verified while their evidential basis has changed, creating a registry where the documented verification states no longer reflect the actual epistemic situation.
Problem
Claim state changes do not automatically propagate to downstream claims. A claim that was Verified when it used Claim A as evidence remains Verified after Claim A is Refuted — unless someone identifies the dependency and triggers re-evaluation. In large registries, this problem compounds: Claim A’s refutation may have been registered months ago, while Claim C (which depends on B which depends on A) continues to be cited in active decisions.
Forces
- Not all upstream claim changes have the same downstream impact; mass re-verification is expensive and often unnecessary.
- The dependency graph may not be documented at lower maturity levels; this pattern requires dependency tracking infrastructure that must be established.
- Downstream claim maintainers may not be aware of upstream changes, especially in large organizations with distributed claim ownership.
- Re-verification of downstream claims requires the same resources as original verification; triggering it prematurely (for minor upstream changes) wastes those resources.
- Leaving downstream claims un-updated after significant upstream changes creates an epistemic integrity problem that compounds over time.
Solution
Apply a three-phase cascade protocol: (1) register dependencies at claim creation, (2) triage impact when an upstream claim changes, (3) selectively re-verify based on triage results.
Phase A: Dependency Registration
At claim creation time (Verification Protocol Phase 1, Step 1.4), register every upstream claim used as evidence in the evidence set:
- Record the upstream claim’s ID and current verification state
- Record which component of the new claim the upstream claim supports
- Register the new claim as a downstream dependent of the upstream claim in the registry
This registration creates a bidirectional link: the upstream claim record knows its downstream dependents; the downstream claim record knows its upstream dependencies.
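In an automated registry, the bidirectional link described above can be sketched in a few lines. This is a minimal illustration, not part of the VERA specification; the names (`register_dependency`, `upstream_of`, `downstream_of`) and record fields are assumptions chosen for clarity.

```python
from collections import defaultdict

# Hypothetical in-memory registry; a real implementation would persist these.
upstream_of = defaultdict(list)    # claim ID -> upstream dependency records
downstream_of = defaultdict(list)  # claim ID -> downstream dependent IDs

def register_dependency(new_claim, upstream_claim, upstream_state, component):
    """Phase A: record that new_claim uses upstream_claim as evidence."""
    upstream_of[new_claim].append(
        {"id": upstream_claim,
         "state_at_citation": upstream_state,
         "supports": component})
    # Bidirectional link: the upstream record knows its downstream dependents.
    downstream_of[upstream_claim].append(new_claim)

register_dependency("VERA-C-0042", "VERA-C-0017", "Verified", "market size premise")
print(downstream_of["VERA-C-0017"])  # → ['VERA-C-0042']
```

When the upstream claim later changes state, `downstream_of` gives the exact set of claims to feed into Phase B triage.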
Phase B: Impact Triage
When an upstream claim changes state, the registry system (or, in manual registries, the claim owner) identifies all registered downstream dependents and conducts an impact triage for each:
For each downstream claim:
- Identify the component supported by the changed upstream claim. Which part of the downstream claim’s evidence set does this affect?
- Assess the impact of the state change. Use this matrix:
| Upstream change | Component support | Impact level |
|---|---|---|
| Verified → Partial | Single evidence item for this component | Moderate |
| Verified → Partial | One of multiple independent items for this component | Low |
| Verified → Contested | Any role | Moderate (pending resolution) |
| Verified → Refuted | Single evidence item for component | High |
| Verified → Refuted | One of multiple independent items | Moderate |
| Confidence drops ≥ 0.15 | Any role | Moderate |
| Confidence drops < 0.15 | Any role | Low |
- Assign a downstream action:
- High impact: Immediately update downstream claim state to Pending; initiate re-verification.
- Moderate impact: Flag downstream claim for expedited review; notify claim owner; schedule re-verification within 30 days.
- Low impact: Note the upstream change in the downstream claim’s record; include in next scheduled review.
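The triage matrix and downstream actions lend themselves to direct encoding. The sketch below assumes a state change is represented as an (old, new) pair and that sole versus shared evidential support is known from the dependency record; the function and parameter names are illustrative, not prescribed by VERA.

```python
def triage_impact(change, sole_support=True, confidence_drop=None):
    """Return the impact level per the Phase B matrix (illustrative sketch)."""
    # Confidence-only changes are triaged by the size of the drop.
    if confidence_drop is not None:
        return "Moderate" if confidence_drop >= 0.15 else "Low"
    if change == ("Verified", "Refuted"):
        return "High" if sole_support else "Moderate"
    if change == ("Verified", "Contested"):
        return "Moderate"  # pending resolution, any role
    if change == ("Verified", "Partial"):
        return "Moderate" if sole_support else "Low"
    return "Low"

# Downstream actions keyed by impact level, per the list above.
ACTIONS = {
    "High": "set Pending; initiate re-verification immediately",
    "Moderate": "notify owner; schedule re-verification within 30 days",
    "Low": "annotate record; include in next scheduled review",
}

print(triage_impact(("Verified", "Refuted"), sole_support=True))  # → High
```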
Phase C: Selective Re-verification
Re-verification of downstream claims proceeds according to the Verification Protocol, with two modifications:
- The Phase 2 evidence update step explicitly addresses the changed upstream claim (replace, supplement, or maintain with documented rationale).
- The Verification Record for the re-verification must reference the upstream change that triggered it.
Implementation
- Establish dependency registration. As part of the claim registry design (Integration domain, Level 3), create a mechanism for recording upstream dependencies at claim creation. In manual registries, a “Dependencies” column in the registry table is sufficient.
- Register dependencies for all new claims. From the point of implementation, every new claim registers its upstream dependencies at creation time.
- Backfill existing claims. For claims already in the registry, identify and register upstream dependencies retroactively. Prioritize high-stakes and frequently-cited claims.
- Establish a change notification process. When a claim changes state, the change triggers identification of registered downstream dependents. In manual registries, this is a search of the dependency records; in automated registries, it is a query or alert.
- Conduct impact triage. For each downstream dependent, apply the triage matrix and assign a downstream action (High / Moderate / Low impact).
- Execute downstream actions. For High impact: immediately update claim state and initiate re-verification. For Moderate: notify owner and schedule. For Low: annotate the claim record.
- Complete re-verification. For claims requiring re-verification, proceed through Phase 4 with the two modifications noted above.
Evidence Requirements
- Dependency registration records for all claims whose evidence set includes upstream VERA claims
- Impact triage records for each downstream dependent of the changed upstream claim
- Re-verification records for all claims whose downstream action was High impact
Verification Criteria
- All claims in the registry that use upstream VERA claims as evidence have registered dependencies
- Impact triage is conducted within 5 business days of an upstream claim state change
- High-impact downstream claims are updated to Pending state within 24 hours of triage
- Re-verification records reference the upstream change that triggered them
Consequences
Benefits:
- Maintains the epistemic integrity of the claim registry over time.
- Prevents the compounding of undetected errors through the dependency graph.
- The triage protocol prevents unnecessary mass re-verification while ensuring significant changes are addressed.
Liabilities:
- Dependency registration adds Phase 1 work for every claim that cites upstream claims.
- Backfilling existing registries is time-intensive.
- Impact triage requires judgment; practitioners may consistently under- or over-estimate impact, requiring calibration.
Known Uses
- Knowledge management function, professional services firm (2025): Implemented following discovery that a refuted market size claim had remained as evidence in seven active claims for four months without triggering review. Dependency registration was added to the claim template; the first cascade update identified two High-impact and three Moderate-impact downstream claims requiring action.
- Research team, public policy organization (2025): Applied after a key empirical claim about program effectiveness was successfully contested. Impact triage identified four downstream policy recommendation claims; two required immediate re-verification and revision, changing the organization’s policy positions on two issues.
Related Patterns
- VERA-P-0005 — Time-Sensitive Evidence Management: Stale upstream evidence items trigger P-0012 when the evidence’s state change affects a claim’s verification; the two patterns work together in registries with volatile evidence.
- VERA-P-0010 — Self-Verification with Adversarial Stance: When a downstream claim requires re-verification and no independent verifier is available, P-0010 applies.
Verification Status:
| Field | Value |
|---|---|
| State | Verified |
| Verifier | VERA Founding Review |
| Verified on | 2026-02-21 |
| Record | VERA-V-0012 |
| Confidence | 0.83 |
VERA-P-0013 — Claim Confidence Calibration
Pattern ID: VERA-P-0013
| Field | Value |
|---|---|
| Domain | Verification |
| Applicability | Team, Organization |
| Complexity | Complex |
| Maturity Level | 4 |
Context
Confidence ratings assigned by verifiers are only meaningful if they are consistent across verifiers and predictive of actual epistemic reliability. A confidence rating of 0.75 should mean the same thing regardless of which verifier assigned it, and the population of claims rated at 0.75 should be right approximately 75% of the time they are tested against outcomes.
In practice, different verifiers develop different implicit confidence scales. One verifier’s 0.80 is another’s 0.65. Some verifiers are systematically overconfident (assigning high confidence to claims that are subsequently contested or revised); others are systematically underconfident (assigning low confidence to claims that are never successfully challenged). Without calibration, the confidence distribution in a registry reflects verifier personality as much as epistemic quality.
Problem
Inconsistent confidence calibration across verifiers undermines the confidence rating’s value as a signal. Claims at similar evidential quality receive different ratings depending on who verified them. Downstream users who weight claims by confidence receive a noisy signal that mixes epistemic information with verifier variation. Organizations that track confidence distributions as a quality metric cannot distinguish genuine quality changes from verifier composition changes.
Forces
- Calibration requires historical data (claims with known outcomes against which to check ratings) that is not available at lower maturity levels.
- Over-calibration — requiring verifiers to apply anchor examples mechanically — produces ratings that are consistent but insensitive to genuine variation in the current claim.
- Confidence calibration is a probabilistic concept; many practitioners find it unintuitive, and training in it requires investment.
- High-stakes claims need especially reliable confidence ratings; a confidence committee process for these claims adds overhead but reduces the risk of single-verifier mis-calibration.
- Public acknowledgment of calibration variability across verifiers requires organizational trust and psychological safety.
Solution
Apply a three-component calibration program: anchor examples that establish a shared scale, consistency tracking that identifies verifier-level calibration gaps, and a confidence committee process for high-stakes claims.
Component 1: Anchor Example Library
Develop a library of claims with known outcomes, rated against the confidence scale by experienced verifiers who have reviewed the evidence and reasoning in detail. The library should include:
- At least three claims per confidence band (High 0.85+, Moderate 0.65–0.84, Low 0.40–0.64, Speculative below 0.40)
- Claims drawn from the organization’s actual domains of work
- A documented rationale for each anchor rating, explaining which features of the evidence and reasoning justify the band
- Known outcomes for each claim (what subsequently happened; was the claim revised, contested, confirmed?)
Anchor examples are used in calibration exercises: verifiers rate the anchor claims before seeing the established ratings, then compare their ratings to the anchors and review the rationale for any gaps above 0.10.
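The band boundaries and the 0.10 review threshold above can be expressed as a small classifier and gap check. This is an illustrative sketch; the function names are hypothetical and the thresholds are taken directly from the band definitions and exercise description.

```python
def confidence_band(confidence):
    """Map a confidence rating to its band (boundaries from the anchor spec)."""
    if confidence >= 0.85:
        return "High"
    if confidence >= 0.65:
        return "Moderate"
    if confidence >= 0.40:
        return "Low"
    return "Speculative"

def calibration_gaps(verifier_ratings, anchor_ratings, threshold=0.10):
    """Return anchors where the verifier's rating diverges by more than the
    review threshold; these are reviewed against the documented rationale."""
    return {cid: (verifier_ratings[cid], anchor_ratings[cid])
            for cid in anchor_ratings
            if abs(verifier_ratings[cid] - anchor_ratings[cid]) > threshold}

print(calibration_gaps({"C1": 0.80, "C2": 0.55}, {"C1": 0.78, "C2": 0.70}))
```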
Component 2: Consistency Tracking
The governance function tracks, for each active verifier:
- Mean confidence assigned over the trailing 12 months
- Standard deviation of confidence assigned
- Frequency of confidence ratings that are subsequently revised (indicator of mis-calibration)
- Frequency of claims verified that are subsequently contested (indicator of overconfidence)
Verifiers whose mean confidence is more than 0.10 above or below the registry mean, or whose subsequent revision or contest rates are substantially higher than peers, are flagged for calibration review. The review is conducted using anchor exercises, not as a disciplinary process.
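The mean-deviation flag is mechanical enough to automate. The sketch below implements only that one signal, under the assumption that each verifier's trailing-12-month ratings are available as a list; revision and contest rates would need their own tracking.

```python
import statistics

def flag_verifiers(ratings_by_verifier, threshold=0.10):
    """Flag verifiers whose mean confidence deviates from the registry mean
    by more than the threshold (illustrative sketch; names hypothetical)."""
    all_ratings = [r for rs in ratings_by_verifier.values() for r in rs]
    registry_mean = statistics.mean(all_ratings)
    return [verifier for verifier, ratings in ratings_by_verifier.items()
            if abs(statistics.mean(ratings) - registry_mean) > threshold]

# Verifier A rates systematically high relative to the registry mean (~0.79).
print(flag_verifiers({"A": [0.90, 0.92], "B": [0.70, 0.72], "C": [0.75, 0.77]}))
# → ['A']
```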
Component 3: Confidence Committee for High-Stakes Claims
Claims above a defined stakes threshold — those informing board-level decisions, regulatory submissions, significant capital allocations, or public communications — are assigned confidence ratings by a Confidence Committee of three verifiers rather than a single verifier. The process:
- Each committee member rates the claim independently, without discussion.
- Ratings are shared simultaneously.
- If all three ratings are within a 0.10 band: the final rating is the mean.
- If any rating is outside the 0.10 band: a structured discussion identifies the source of disagreement. Each member reconsiders and re-rates independently. If consensus within 0.10 is still not achieved, the final rating is the lowest individual rating (conservative), with a note explaining the disagreement.
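The aggregation rules above reduce to a short routine. This is a sketch of the arithmetic only; the surrounding process (independent rating, simultaneous reveal, structured discussion) is organizational, and the function names are illustrative.

```python
def committee_rating(ratings):
    """Aggregate one round of three independent ratings.
    Returns (final_rating, consensus); (None, False) means discussion and
    re-rating are required before a final rating can be assigned."""
    assert len(ratings) == 3
    if max(ratings) - min(ratings) <= 0.10:
        return round(sum(ratings) / 3, 2), True
    return None, False

def conservative_fallback(ratings):
    """If consensus within 0.10 is not reached after re-rating, take the
    lowest individual rating (a disagreement note accompanies the record)."""
    return min(ratings)

print(committee_rating([0.78, 0.82, 0.75]))  # → (0.78, True)
```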
Implementation
- Establish the anchor library. Select 12–20 historical claims from the registry with known outcomes. Have at least two experienced verifiers rate each anchor independently. Where ratings agree within 0.10, adopt the mean as the anchor rating. Document the rationale.
- Conduct initial calibration exercises. Have all active verifiers rate the anchor claims before seeing established ratings. Compare to anchors. Debrief individually on gaps larger than 0.10.
- Establish consistency tracking. Implement tracking of the three verifier-level metrics in the governance function’s regular reporting.
- Set the high-stakes threshold. Define the criteria for Confidence Committee review. Document the threshold as part of the governance policy.
- Implement the Confidence Committee process. Designate a committee roster with at least five eligible verifiers (to allow committees of three who haven’t been involved in the specific claim).
- Conduct regular calibration exercises. Schedule biannual calibration exercises for all active verifiers, using updated anchor examples. Calibration exercises are part of verifier professional development, not performance evaluation.
- Review and update anchors. The anchor library ages; outcomes accumulate; domains shift. Review anchors annually and add new examples from recent high-quality verifications.
Evidence Requirements
- The anchor example library with documented ratings and rationales
- Calibration exercise results for each active verifier
- Consistency tracking metrics reviewed by the governance function
- For Confidence Committee claims: the individual ratings before discussion, the discussion record, and the final rating with rationale
Verification Criteria
- The anchor library contains at least three claims per confidence band
- All active verifiers have completed at least one calibration exercise in the trailing 12 months
- Consistency tracking is reviewed by the governance function at each meeting
- All claims above the high-stakes threshold have Confidence Committee records
Consequences
Benefits:
- Confidence ratings become meaningful signals rather than idiosyncratic verifier judgments.
- Systematic overconfidence or underconfidence is detected and addressed before it compounds.
- High-stakes claims receive more reliable confidence assessments through the committee process.
Liabilities:
- The anchor library requires significant investment to develop and maintain.
- Calibration exercises require verifier time as a scheduled activity.
- The Confidence Committee process lengthens the timeline for high-stakes claims.
- Surfacing calibration variability across verifiers requires psychological safety that may not exist in all organizations.
Known Uses
- Research function, financial services firm (2025): Implemented after discovering that confidence ratings from two senior analysts differed systematically by ~0.15 on equivalent claims. Anchor exercises identified that one analyst applied confidence to evidence quality (strong evidence = high confidence) while the other applied it to claim outcome predictability (high-uncertainty domains = low confidence regardless of evidence). The conceptual alignment achieved through calibration exercises resolved the discrepancy.
- Risk management function, infrastructure company (2024): Implemented the Confidence Committee for all claims informing capital allocation decisions above $50M. In the first year, committee discussions identified material confidence gaps in four high-stakes claims that single-verifier review had not surfaced.
Related Patterns
- VERA-P-0010 — Self-Verification with Adversarial Stance: The 0.72 confidence ceiling for self-verification is itself a calibration rule; P-0013 ensures that ceiling is applied consistently across all self-verifications.
- VERA-P-0011 — Expert Verifier Onboarding: Expert verifiers new to VERA will not be calibrated; include them in calibration exercises before they rate high-stakes claims.
Verification Status:
| Field | Value |
|---|---|
| State | Verified |
| Verifier | VERA Founding Review |
| Verified on | 2026-02-21 |
| Record | VERA-V-0013 |
| Confidence | 0.79 |
Getting Started
This chapter is for practitioners who have read the Foundations section and are ready to apply VERA to a real claim for the first time. It walks through the complete Verification Protocol on a concrete example, explains what to expect at each step, and describes what to do after the first claim is complete.
Read the Lexicon and Verification Protocol before proceeding. You do not need to have memorized them — but you need to have read them once, so that the terminology in this chapter is familiar.
Before You Begin
What You Need
VERA requires no specialized software. To complete your first claim, you need:
- A place to write that is searchable and persistent (a text editor, a document, a notes application)
- Access to the evidence you expect to use
- Two to four hours for the first claim (subsequent claims take less time as the format becomes familiar)
- If possible: one other person willing to serve as your verifier — someone who can read your claim record and evaluate it with some distance from the argument
If no independent verifier is available, you will use VERA-P-0010 (Self-Verification with Adversarial Stance). That is acceptable for a first claim.
Choosing Your First Claim
Do not choose a trivial claim. The investment required by VERA is only justified by claims with real epistemic stakes — claims that actually inform decisions, guide actions, or appear in communications that matter.
Do not choose the most complex claim you face. A compound, multi-domain claim with conflicting evidence is not the right starting point. The learning happens fastest on a claim that is significant but tractable.
Criteria for a good first claim:
- It is a single assertion (not multiple assertions bundled together)
- It is something you believe to be true but have not formally documented
- Evidence for it exists and is retrievable in a few hours
- The claim matters — it informs a real decision, supports a real position, or will appear in real communications
- You have some genuine uncertainty about it — you are not starting from certainty
Common good first claims:
- A claim about the effectiveness of a process, tool, or intervention your team uses
- A claim about conditions in a market, domain, or environment you work in
- A claim that you regularly make in presentations or documents that you have never formally documented
Claims to avoid for your first attempt:
- Historical claims about events that cannot be re-investigated
- Normative claims (“X is the right approach”) — these require special handling for value assumptions
- Claims where you are personally very invested in a specific conclusion — verification bias will be hardest to manage
Setting Expectations
Your first VERA claim will take longer than you expect and reveal more problems than you expect. This is not a failure — it is the framework doing what it is supposed to do. Every gap you find in your evidence, every assumption you excavate, every piece of contrary evidence you have to address represents an epistemic problem that existed before VERA made it visible.
The goal of the first claim is not to produce a perfect VERA artifact. The goal is to complete the process — all five phases, all the way through to a Verification Record — so that you understand what the framework requires in practice rather than in theory.
A Worked Walkthrough
The following example traces a complete first claim through all five phases. The claim is fictional but realistic. Use it as a reference as you work through your own claim alongside it.
The assertion to be documented: “Adopting automated code review reduced our team’s production defect rate by 35% in the six months following implementation.”
Phase 1: Claim Formulation
Step 1.1 — State the assertion precisely
Read the assertion carefully. Is it one claim or several?
This assertion contains:
- A causal claim (adoption caused a reduction)
- A quantitative claim (35% reduction)
- A scope claim (our team, production defects only)
- A temporal claim (six months following implementation)
That is four claim components. Test each against the Step 1.2 decomposition tests: Can each component be true while the others are false? Yes — the defect rate could have dropped 35% for reasons other than the code review adoption; the causal claim could be false while the quantitative claim is true.
Decomposition decision: The quantitative scope-and-time claim (“defect rate dropped 35% in six months”) can be documented and verified as one claim. The causal claim (“adoption caused the reduction”) is a separate claim that depends on the first being true plus additional evidence. For a first claim, address the quantitative component and treat causation as a stated assumption with documented uncertainty.
Revised claim statement: “Our team’s production defect rate, as measured in our incident tracking system, was 35% lower in the six-month period following automated code review adoption than in the equivalent six-month period preceding it.”
This is more precise, testable, and modest. It does not assert causation.
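The quantitative component of the revised claim reduces to simple arithmetic. The defect counts below are hypothetical, chosen only to show how the 35% figure would be computed from the two six-month exports.

```python
# Hypothetical counts from the incident tracking system exports (E1, E2).
pre_defects = 120   # production defects, six months before adoption
post_defects = 78   # production defects, six months after adoption

reduction = (pre_defects - post_defects) / pre_defects
print(f"Relative reduction: {reduction:.0%}")  # → Relative reduction: 35%
```

Note that this arithmetic supports only the revised, non-causal claim; whether the adoption caused the drop remains the separate claim set aside in the decomposition decision.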
Step 1.3 — Assign a claim identifier
If you are an individual practitioner: VERA-C-DJF-2026-0001 (initials + year + sequential number).
If you are part of a team with a registry: use your organization’s format and register the claim.
Step 1.4 — Record claim context
Document:
- Claimant: [Your name / team name]
- Date initiated: [Today’s date]
- Purpose: Supporting the case for continued investment in automated code review tooling
- Scope: Production defects only; excludes test-environment defects; our team (not the wider engineering organization)
- Prior art: No prior VERA claims on this topic in the registry
Phase 1 output: A claim stub with ID, precise statement, and context metadata. Verification state: Unverified (○).
Phase 2: Evidence Assembly
Step 2.1 — Write the prospective search plan
Before retrieving a single piece of evidence, write down what evidence you expect to exist and where you expect to find it.
For this claim:
- Defect rate data before adoption: incident tracking system, six months preceding the adoption date
- Defect rate data after adoption: incident tracking system, six months following the adoption date
- Adoption date documentation: project management system or release notes confirming when automated code review was activated
- Defect count methodology: how are “production defects” defined in the tracking system? Is the definition stable across both periods?
- Potential confounds (evidence of other changes): were there other significant process, team composition, or codebase changes in the same period that might have affected defect rates?
Write this list before you open the incident tracking system.
Step 2.2 — Retrieve and document evidence
Retrieve each item on the list. For items you cannot find, record the absence explicitly rather than silently dropping them from the plan.
In this example:
- E1: Defect data export, pre-adoption period. Incident tracking system, exported [date]. Primary tier.
- E2: Defect data export, post-adoption period. Incident tracking system, exported [date]. Primary tier.
- E3: Deployment log confirming adoption date. Release management system, accessed [date]. Primary tier.
- E4: Defect classification documentation. Engineering wiki, version history checked. Secondary tier (editorial documentation of Primary methodology).
- E5: No documentation found of other significant process changes in the period. Prospective plan item: not found. → Proceed to Step 2.5.
Step 2.3 — Rate evidence quality
All data exports from your own systems are Primary tier — they are direct measurements with no interpretive intermediary. The methodology documentation is Secondary. Rate each item.
Step 2.4 — Assess evidence independence
E1 and E2 both come from the same incident tracking system. They are not independent in origin — they share a system — but they measure different time periods and are not derived from each other. Classify as Correlated (same system, different measurements). Note this in the evidence set.
Step 2.5 — Document absent evidence
The prospective plan included “evidence of other changes” and none was found. Apply VERA-P-0001 (Absence-of-Evidence Assessment):
- Expected? Yes — it would be normal for other process changes to exist and be documented.
- Substitutable? Partially — you can ask team members, but that is Testimonial tier.
- Directionality? If other improvement initiatives existed, their absence from documentation is ambiguous (they may exist but be undocumented, or may not exist). Rate as Moderate materiality. Document in the claim: “No documentation of concurrent process changes was found. The causal interpretation of the defect rate drop therefore rests on an assumption of minimal concurrent change, which is noted as an unverified assumption.”
Phase 2 output: Evidence set with five items (four found, one not found with documented absence). Verification state remains Unverified (○).
Phase 3: Reasoning Construction
Step 3.1 — Map evidence to claim components
The claim asserts: defect rate was 35% lower in period B than period A.
- E1 (pre-adoption data) supports: the baseline defect count
- E2 (post-adoption data) supports: the post-adoption defect count
- E3 (adoption date) supports: the period boundary
- E4 (methodology documentation) supports: that “production defect” means the same thing in both periods
Step 3.2 — Identify the logical structure
The overall inference type is inductive: we move from two specific six-month measurements to a general comparative claim about the two periods. This is the appropriate inference type for comparative measurement claims.
Step 3.3 — Write the reasoning chain
Step 1:
Premises: E1 (pre-adoption defect data), E4 (methodology documentation)
Inference: Deductive
Reasoning: The incident tracking system recorded N1 production defects
in the six months before the adoption date (per E3). The
definition of "production defect" was stable across both
periods (per E4). Therefore, N1 represents the pre-adoption
production defect count under a consistent methodology.
Conclusion: Pre-adoption production defect count = N1.
Confidence: 0.91
Step 2:
Premises: E2 (post-adoption defect data), E4 (methodology documentation)
Inference: Deductive
Reasoning: The same system recorded N2 production defects in the six
months following the adoption date. Methodology consistent
per E4.
Conclusion: Post-adoption production defect count = N2.
Confidence: 0.91
Step 3:
Premises: Step 1 conclusion (N1), Step 2 conclusion (N2)
Inference: Deductive
Reasoning: Percentage reduction = (N1 - N2) / N1 × 100.
[Insert calculated value.]
Conclusion: Production defect rate decreased by [X]% from period A to
period B.
Confidence: 0.91 (inherits from premises; arithmetic is certain)
Step 4:
Premises: Step 3 conclusion
Inference: Inductive
Reasoning: The measured decrease of [X]% is the basis for the claim
that the defect rate was approximately 35% lower in period B.
The claim uses 35% as the rounded reported value; the precise
calculated value is documented in Step 3.
Conclusion: Production defect rate was approximately 35% lower in the
six-month post-adoption period than in the equivalent
pre-adoption period.
Confidence: 0.88 (small reduction for rounding and measurement uncertainty)
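The Step 3 arithmetic can be made concrete. A short sketch with hypothetical defect counts chosen so the result rounds to the example's 35%; `percent_reduction` is an illustrative name, not a VERA artifact:

```python
def percent_reduction(n_before: int, n_after: int) -> float:
    """Percentage reduction = (N1 - N2) / N1 x 100, as in Step 3."""
    if n_before <= 0:
        raise ValueError("pre-adoption count must be positive")
    return (n_before - n_after) / n_before * 100

# Hypothetical counts: 120 production defects pre-adoption, 78 post-adoption.
reduction = percent_reduction(120, 78)  # approximately 35.0
```

Recording the precise calculated value in Step 3 and the rounded value in Step 4 keeps the rounding visible rather than buried in the claim statement.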
Step 3.4 — Identify and document assumptions
Using VERA-P-0007 (Hidden Assumption Excavation):
Apply Question 1 (Necessary Conditions) to Step 3:
- The incident tracking system was used consistently across both periods — a Background assumption; document briefly.
- Production defects are the relevant metric for the claim’s purpose — a Domain assumption; note that this assumes defects not captured in the tracking system are not material.
Apply Question 1 to Step 4:
- No other significant changes occurred concurrently that explain the drop — a Contested assumption. This is the causal gap identified in Phase 2. Document prominently.
Apply Question 2 (Variance) across all steps:
- Team composition, codebase complexity, and deployment frequency could all vary across periods and affect defect rates. These are noted as uncontrolled variables.
All assumptions are documented in the reasoning chain as an “Assumptions” section following the steps.
Step 3.5 — Address contrary evidence
The absent evidence (no documentation of concurrent changes) was assessed at Moderate materiality in Phase 2. In the reasoning chain, this is addressed with a Qualify response: the claim does not assert causation; the absence of documented concurrent changes is noted but not treated as conclusive evidence of their absence.
Phase 3 output: Complete reasoning chain with four steps, documented assumptions, and contrary evidence addressed. Verification state updated to Pending (◐).
Phase 4: Verification Assessment
If an independent verifier is available, share the claim record and ask them to work through the Phase 4 criteria (Verification Protocol, Steps 4.1–4.5). Provide them the criteria table from the Protocol — do not summarize it for them.
If you are self-verifying, apply VERA-P-0010 (Self-Verification with Adversarial Stance):
- Write the Role Declaration. Date it.
- Wait 24 hours.
- Work through Phase 4 criteria and the adversarial checklist (A1–A7).
Working through the criteria for this example:
- E1 (Completeness): The prospective search plan listed five evidence types; four were found; one absence was documented with materiality assessment. ✓ Met.
- E2 (Quality Adequacy): Two Primary-tier data exports and one Primary methodology anchor. For a measurement claim, Primary evidence is appropriate. ✓ Met.
- E3 (Independence): E1 and E2 are Correlated (same system, different periods). Independence limitation is documented. ✓ Met with notation.
- E4 (Contrary Evidence): The concurrent-change assumption is Contested and addressed with a Qualify response in the reasoning chain. ✓ Met.
- E5 (Chain of Custody): All evidence items include source, access date, and export method. ✓ Met.
- R1 (Validity): Steps 1–3 are deductive calculations; Step 4 is appropriately inductive. ✓ Met.
- R2 (Relevance): All evidence items address the claim components they are cited for. ✓ Met.
- R3 (Completeness): Assumptions documented including the Contested concurrent-change assumption. ✓ Met.
- R4 (Proportionality): The claim is scoped to measurement only, not causation. Modest and appropriate. ✓ Met.
- R5 (Assumption Disclosure): All assumptions documented, Contested assumption flagged. ✓ Met.
- F1 (Precision): Claim statement is precise: specific metric, specific system, specific time period. ✓ Met.
- F2 (Identifier): Claim ID assigned. ✓ Met.
- F3 (Scope Defined): Production defects only; this team; six-month comparison periods. ✓ Met.
All criteria met. Assign verification state: Verified (●).
Confidence rating: Evidence is Primary-tier, reasoning is sound, assumption documentation is complete. Independence limitation noted. Confidence: 0.79 (strong evidence but Correlated independence and a load-bearing Contested assumption reduce ceiling; self-verification cap of 0.72 applies if self-verified).
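The self-verification cap mentioned above can be expressed directly. A sketch; the function name is hypothetical, and the 0.72 default is taken from this example rather than being a fixed constant of the framework:

```python
def capped_confidence(raw: float, self_verified: bool, cap: float = 0.72) -> float:
    """A self-verified claim's confidence cannot exceed the self-verification cap."""
    return min(raw, cap) if self_verified else raw
```

For this claim, an independently verified rating of 0.79 stands as assessed, while the same evidence self-verified is reduced to 0.72.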
Phase 4 output: Verification Record VERA-V-[NNNN] with all criteria findings and confidence rating.
Phase 5: Documentation and Publication
Register the claim in whatever serves as your claim registry (a table in a shared document is fine for a first claim). The entry should be findable by anyone who wants to know whether this topic has been investigated.
Set a review cadence. This claim uses Primary-tier data from internal systems. Evidence decay class: Drifting (metrics may shift as team or codebase changes). Review in 12 months.
Phase 5 output: The claim is registered, linked to its Verification Record, and scheduled for review.
After the First Claim
What You Have Learned
By completing a first claim, you have learned things that cannot be learned by reading:
- How long each phase actually takes in your context
- Where your existing work process produces evidence and where it does not
- Which assumptions in your ordinary reasoning are load-bearing and undocumented
- What “explicit reasoning chain” means when you have to write one out rather than imagine one
What to Do Next
In the next week: Identify two or three more claims that matter in your current work. Do not document them yet. Just identify them and note which ones would be good candidates.
In the next month: Document one more claim — this time, a claim where the evidence situation is more complex (missing evidence, a conflicted source, or a genuine contrary evidence challenge). The second claim is where the patterns in the Patterns Library become essential.
When you have five claims documented: Conduct your first maturity assessment. Read the Maturity Model Overview and the Level 2 chapter. Assess yourself honestly against the checklist.
The instinct to suppress: At some point in your first few claims, you will encounter evidence that is weak, an assumption that is contested, or a contrary evidence item that resists clean resolution. The instinct is to quietly omit these problems from the claim record and proceed. Resist it. The value of VERA is precisely that it makes these problems visible. An imperfect, honest claim record is far more valuable than a clean, dishonest one.
Common Early Mistakes
Starting with the evidence, not the search plan. Practitioners who skip the prospective search plan reliably miss absent evidence and unconsciously practice selective citation. The plan takes ten minutes. Do not skip it.
Writing conclusions instead of reasoning chains. A reasoning chain that reads “Given the evidence above, we conclude that X” is not a reasoning chain — it is a conclusion with evidence attached. Each step must state its premises, its inference type, and its intermediate conclusion explicitly.
Treating Phase 4 as a formality. The most common Level 2 failure is treating verification as a step that confirms rather than evaluates. If you complete Phase 4 and find no problems, you are either verifying an exceptionally clean claim or not looking hard enough. Expect to find something.
Over-scoping the first claim. A claim that asserts too much requires a proportionally large evidence set and complex reasoning chain. Narrow the claim to something you can complete in two to four hours. You can always build on it later.
Under-documenting assumptions. Practitioners document the assumptions they consciously make. They miss the assumptions they make without noticing. Apply VERA-P-0007 systematically to every reasoning step — it will find things a conscious scan will not.
Minimum Viable VERA
For practitioners under significant time constraints, the following is the minimum set of documentation that constitutes genuine VERA practice (rather than VERA-shaped compliance):
- Claim statement: One precise declarative sentence with explicit scope.
- Claim ID: Assigned and logged.
- Prospective search plan: Written before evidence retrieval begins. Even a bulleted list of three to five expected evidence types counts.
- Evidence set: Source, access date, quality tier rating, and relevance statement for each item. Absent evidence noted.
- Reasoning chain: At minimum, two to three explicit steps connecting evidence to conclusion. Labeled inference type. One documented assumption.
- Verification record: Per-criterion findings (even brief), confidence rating with justification, independence level.
This minimum set is not as good as the full Protocol. But it is genuinely VERA, and it is dramatically better than no documentation. As practice develops, the minimum set expands toward the full Protocol naturally — not because of external pressure, but because practitioners discover what the missing elements reveal.
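For practitioners who keep records in structured files or code, the minimum set maps naturally onto a small schema. A sketch using Python dataclasses; every field name here is an illustration of the bullets above, not a mandated format:

```python
from dataclasses import dataclass

@dataclass
class EvidenceItem:
    source: str
    access_date: str
    tier: str            # e.g. "Primary", "Secondary", "Testimonial"
    relevance: str
    found: bool = True   # absent evidence is recorded, not dropped

@dataclass
class MinimalClaimRecord:
    claim_id: str
    statement: str              # one precise declarative sentence with scope
    search_plan: list[str]      # written before evidence retrieval begins
    evidence: list[EvidenceItem]
    reasoning_steps: list[str]  # each labeled with its inference type
    assumptions: list[str]
    confidence: float
    independence: str           # e.g. "Self", "Peer"
```

Even this skeletal structure enforces ordering: a record cannot be instantiated without a search plan and at least placeholder assumptions, which is the point of the minimum set.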
Proceed to Adoption Roadmap for a sequenced plan for building VERA practice beyond the first claim.
Adoption Roadmap
This roadmap describes how to move from Level 1 (Aware) to Level 3 (Practicing) as an individual, and from Level 1 to Level 3 as a team or organization. It is organized as phases rather than calendar months because adoption speed varies significantly by context, resource availability, and the complexity of the claims being documented. Estimated timelines are provided as reference points, not commitments.
Read Getting Started before this chapter. This roadmap assumes you have completed at least one full VERA claim.
How to Use This Roadmap
The roadmap has six phases. Most practitioners pass through all six, though not always in linear sequence. Phase 0 (Assessment) should be completed before any other phase. Phases 1 through 3 address individual practice. Phases 4 through 6 address team and organizational adoption.
If you are an individual practitioner who is not currently trying to scale VERA to a team, Phases 1 through 3 are your primary path. Return to Phases 4 through 6 when organizational adoption becomes relevant.
If you are leading team adoption, work through Phases 1 through 3 yourself first — you cannot effectively advocate for what you have not practiced — then run a compressed version of Phases 1 through 3 with your team before moving to Phase 4.
If you are an organizational leader commissioning VERA adoption, read all six phases, identify which phase your organization is currently in, and focus on the governance and obstacle-handling content in Phases 4 through 6.
Phase 0: Assessment
Duration: 1–2 days
Before building anything, establish an honest baseline. Phase 0 is a maturity assessment — not of aspirations, but of current state.
Conducting the Assessment
Work through the Maturity Model Overview and the six domain definitions. For each domain, answer the observable indicator questions with specific evidence — not general impressions.
Use the self-assessment checklists in the Level chapters as your primary assessment tool. Be conservative: if you are uncertain whether an indicator is met, score it as not met. The cost of underestimating your current level is low (you do easier work than necessary). The cost of overestimating is high (you skip foundational development and find gaps when they are harder to fix).
Assessment Outputs
Produce a simple profile document with:
- Your current level in each of the six domains (1–2 for most people starting here)
- The specific evidence for each rating (a sentence or two per domain)
- The single domain where you are strongest (your leverage point)
- The single domain where you are weakest relative to your goals
The assessment document is itself a VERA-adjacent exercise. It is not a full VERA claim — it does not go through the Verification Protocol — but producing it with evidence and explicit reasoning is good practice.
Common Starting Profiles
Solo practitioner, no organizational context: Typically Level 1 across all domains. Begin at Phase 1.
Small team with a motivated champion: Evidence and Reasoning typically at Level 1–2 (champion has been applying VERA informally); Governance, Sovereignty, and Integration at Level 1. Begin at Phase 1 individually; plan Phase 4 for Month 3 or 4.
Organization with a prior quality initiative (ISO, CMMI): Governance often at Level 2–3 (processes and mandates exist); Evidence, Reasoning, and Verification often at Level 1 (the quality framework doesn’t specifically address epistemic quality). Begin at Phase 2 for the governance-adjacent domains; Phase 1 for the epistemic domains.
AI-heavy organization: Evidence often at Level 1 (AI-mediated evidence is not documented systematically); Sovereignty often below Level 1 (sovereignty implications of AI tool use have not been considered). Begin at Phase 1 with a strong emphasis on VERA-P-0003.
Phase 1: First Practice
Estimated duration: 2–4 weeks | Target: Level 2 in Evidence and Reasoning
Goal
Complete three full VERA claims — all five phases, all the way through to a Verification Record — and understand what the protocol actually requires in your specific work context.
Week 1: The First Claim
Follow the Getting Started walkthrough. Choose a claim that matters. Complete all five phases. Expect this to take longer than you think. Do not shorten Phase 4.
Milestone: One complete claim record, one Verification Record, one claim registered (even if “the registry” is a row in a spreadsheet or a file in a folder).
Weeks 2–3: The Challenging Claims
The second claim should involve a problem that your first claim didn’t: missing evidence, a conflicted source, contrary evidence that resists easy resolution, or a compound claim that requires decomposition.
The third claim should be one where you have genuine uncertainty about the conclusion — one where the verification process might change your mind. This is the test of whether you are doing VERA or doing VERA-shaped compliance.
Before each claim, apply the relevant patterns from the Patterns Library. If the second claim involves missing evidence, use VERA-P-0001. If it involves an AI-assisted evidence search, use VERA-P-0003. The patterns are designed to be applied to real claims as you work; this is their first real use for most practitioners.
Milestone: Three complete claim records, three Verification Records. At least one claim where the verification process revealed a problem that required revising the claim statement or evidence set.
Week 4: Reflection and Self-Assessment
After three claims, conduct a brief retrospective:
- What took the most time? If Phase 1 is consistently long, you are over-decomposing or writing imprecise claim statements. If Phase 3 is consistently long, your evidence search is happening in Phase 3 rather than Phase 2 (you are building reasoning chains before the evidence is complete). If Phase 4 is consistently short, you may not be verifying rigorously.
- What did you skip or abbreviate? Be honest. The prospective search plan? The adversarial checklist in self-verification? Independent verification? Identify the step you find hardest to do and treat it as the step most worth doing.
- What surprised you? The gap between what you thought you knew and what verification revealed is the measure of VERA’s value so far.
- Re-assess your maturity profile. Compare to your Phase 0 assessment. Update the domains where your practice has genuinely developed.
Milestone: Updated maturity assessment with specific evidence. Evidence domain and Reasoning domain at Level 2 (observable indicators met per Level 2 checklist).
Phase 2: Building the Habit
Estimated duration: 6–10 weeks | Target: Level 2–3 in Evidence, Reasoning, Verification
Goal
VERA becomes a regular part of how you work with significant claims, not a separate exercise you do occasionally. By the end of Phase 2, you produce VERA documentation as naturally as you produce any other professional output.
The Habit Infrastructure
Habit formation requires friction reduction and trigger identification. For VERA:
Trigger identification: Identify the specific moments in your workflow where VERA should begin. These are the moments when a significant claim first appears:
- When you are asked to recommend something
- When you are beginning a research document
- When you are preparing a presentation that makes factual assertions
- When a team decision is documented
At each of these trigger moments, the first action should become: “Open a new claim record and write the claim statement.”
Template setup: Create a claim record template in whatever tool you use for writing. The template should have the fields from the Verification Protocol pre-populated with prompts. Remove the friction of remembering the structure so that the cognitive effort goes to the substance.
Registry setup: By Phase 2, your registry needs to be more than a mental note. Create a simple shared document — even a plain table — where claims are logged with their ID, statement, verification state, and review date. This registry serves two functions: it prevents you from re-documenting claims you have already documented, and it makes your VERA work visible to others.
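A registry at this stage can be as simple as an append-only CSV file. A minimal sketch; the file layout and column names are assumptions matching the fields listed above:

```python
import csv
from pathlib import Path

FIELDS = ["id", "statement", "state", "review_date"]  # assumed column set

def register_claim(path: str, claim: dict) -> None:
    """Append one claim row to the registry, writing a header if the file is new."""
    registry = Path(path)
    is_new = not registry.exists()
    with registry.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow(claim)
```

A CSV kept in a shared folder or version control already serves both registry functions: search it before documenting to prevent duplicates, and anyone can open it to see what has been claimed.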
Expanding to More Claim Types
Phase 1 claims were chosen for tractability. Phase 2 expands to harder types:
Claims with sparse evidence. Some important claims lack the Primary-tier data that makes Phase 1 claims clean. Apply VERA-P-0001 (absent evidence) and VERA-P-0002 (conflicted sources) as needed. Practice writing reasoning chains that honestly reflect the evidence’s limitations without abandoning the claim if it still has valid support.
Claims with compound structure. Apply VERA-P-0006 (Compound Claim Decomposition) to at least one claim. Experience the dependency tracking required when sub-claims must be verified before parent claims can be.
Claims involving AI-generated content. If you use AI tools in your work, apply VERA-P-0003 (AI-Generated Evidence Documentation) to at least one claim where AI contributed to the evidence. Experience the difference between Tier A (source retrieved) and Tier B (AI output without verified source) handling.
Expanding the Verifier Pool
Phase 1 likely relied heavily on self-verification. Phase 2 should involve at least two peer verification events — situations where someone other than you reviews a claim record and evaluates it against Phase 4 criteria.
The first time you share a claim record for peer verification, prepare your peer by walking them through the criteria checklist. The goal is genuine evaluation, not a rubber stamp. If the peer finds nothing in Phase 4, either the claim is exceptionally clean or the peer needs guidance on what adversarial verification looks like.
Milestone: 8–12 claims documented. At least two peer-verified. Claims span at least three different types of evidence situations. Registry is maintained and findable. A review cadence exists for the oldest claims.
Phase 2 Obstacles
“I don’t have time for this.” VERA takes more time upfront and less time later — when the claim is challenged, when evidence needs updating, when someone needs to understand where a conclusion came from. Track your total time on claims, including revision and challenge handling, not just documentation time. For most practitioners, VERA time pays back within three to five significant claims.
“My claims keep changing during Phase 3.” This is correct behavior — Phase 3 is supposed to reveal whether the claim as stated is actually what the evidence supports. Revising the claim statement after building the reasoning chain is not a failure; it is verification working. Build the expectation into your process: claim statements are provisional until Phase 4 is complete.
“I can’t find independent verifiers.” At Phase 2, Peer independence (someone with relevant competency and no stake) is the target. This does not require a VERA expert — it requires someone who will read your claim record carefully and try to find problems. The peer does not need to know the VERA criteria by name; they need to understand that their job is skeptical review, not endorsement. Walk them through the criteria informally.
Phase 3: Reaching Level 3
Estimated duration: 4–8 weeks | Target: Level 3 in Evidence, Reasoning, Verification; Level 2 in Governance and Sovereignty
Goal
VERA practice reaches the Level 3 definition: any significant claim you produce can be audited against the Verification Protocol. The practice is reproducible — it does not depend on which phase of your month it is or how pressured you feel.
What Level 3 Actually Requires
Level 3 is defined by all significant claims, not most. This requires a precise definition of “significant” — one you apply consistently, not case-by-case. Write that definition down. It should be specific enough that you can apply it to any given claim without deliberation.
Example definition for an individual practitioner: “A claim is significant if it (a) will appear in an external communication, (b) informs a decision with consequences I cannot easily reverse, or (c) is cited to justify resource allocation.”
The Level 3 test: look at everything you have produced in the last four weeks that meets your significance definition. Is all of it VERA-documented? If yes, you have Level 3 coverage. If not — if any significant claim is undocumented — you are still at Level 2, regardless of how good your documented claims are.
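The coverage test reduces to a set difference: the significant outputs of the last four weeks minus the claims in the registry. A sketch, assuming outputs and claims can be keyed by some shared identifier:

```python
def level3_coverage(significant: set[str], documented: set[str]) -> tuple[bool, set[str]]:
    """Level 3 requires ALL significant claims documented; return (met, undocumented gap)."""
    gap = significant - documented
    return (not gap, gap)
```

A non-empty gap means Level 2, however polished the documented claims are; extra documented claims beyond the significant set do not hurt.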
Sovereignty Assessment
Phase 3 includes completing the formal sovereignty assessment described in the Sovereignty Principles chapter. Work through the five sovereignty assessment questions for each principle. Be specific: do not answer “generally yes” — answer with a specific, named tool, process, or policy.
Common Level 3 sovereignty findings:
- S1 (Data): Evidence stored in proprietary tools with uncertain export capability. Document the gap and its remediation plan.
- S2 (Reasoning): Reasoning chains are visible to the practitioner but not to other stakeholders who might be affected by the claims.
- S4 (Process): Verification criteria are documented in this framework but have not been published in a stakeholder-accessible form in your specific organizational context.
Document each finding with a remediation plan that is specific enough to execute: named owner, specific action, target date.
Establishing a Review Cadence
By Phase 3, your oldest claims from Phase 1 are approaching their review dates. Conduct your first scheduled review:
- Check whether the evidence is still current (apply VERA-P-0005 decay classes)
- Check whether any upstream claims have changed state (apply VERA-P-0012 cascade check)
- Re-read the reasoning chain: does it still hold?
- Update the Verification Record if anything has changed
The first scheduled review is instructive. It reveals how much claims drift from their verification baseline over three to six months and whether your review cadence was calibrated correctly.
Phase 3 milestone: Full Level 3 maturity assessment meeting all Level 3 checklist criteria. Sovereignty assessment completed with gaps documented. At least one scheduled review completed. Claim registry is active and current.
Phase 4: Team Adoption
Estimated duration: 2–4 months | Target: Level 3 across team; Level 2 Governance
Goal
VERA practice is shared across a team. Multiple practitioners produce VERA-compliant claim records consistently. The team has a shared registry, shared evidence quality calibration, and a functioning peer verification process.
Pre-Conditions for Team Adoption
Team adoption should not begin until the team leader (or at least one practitioner) has reached personal Level 3. Leading a team through VERA adoption without having personally experienced Phases 1 through 3 produces the worst outcome: governance before substance, compliance surface before epistemic depth.
The Team Kickoff
Begin team adoption with a working session, not a training presentation. The format:
- 30 minutes: One of the experienced practitioners presents a real claim record from their own work — full evidence set, reasoning chain, Verification Record. Not a simplified example: the actual artifact from Phase 1 or 2, with its imperfections.
- 60 minutes: The team selects one current claim — something they are actually working on — and works through Phases 1 and 2 together, with the experienced practitioner facilitating. Produce a real claim stub and evidence set, not an exercise.
- 30 minutes: Debrief on what was harder than expected, what was clarifying, and which claims in current work most need VERA treatment.
Do not spend the kickoff explaining VERA. Demonstrate it. The documentation exists for explanation; the working session is for experiencing.
Team Calibration
Different practitioners rating the same evidence will produce different quality tier ratings and different confidence ratings. The first task after the kickoff is calibration.
Evidence quality calibration: Produce a set of five evidence items from your domain. Have each team member rate each item independently. Compare ratings. Where there is disagreement greater than one tier, discuss the evidence and agree on the correct rating and why. Document the agreed ratings as your team’s calibration examples — your mini version of the Patterns Library anchor examples.
Confidence calibration: Share two or three completed claim records with confidence ratings. Have team members assign their own ratings independently before seeing the assigned rating. Compare and discuss gaps. This is an early version of VERA-P-0013 (Claim Confidence Calibration).
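Both calibration exercises reduce to the same check: do independent ratings of the same artifact agree within tolerance? A sketch for the evidence-tier case; the tier ordering here is an assumption to be replaced with your team's actual tier system:

```python
# Assumed ordering from strongest to weakest evidence tier.
TIERS = ["Primary", "Secondary", "Testimonial"]

def needs_discussion(ratings: list[str]) -> bool:
    """Flag an item whose independent ratings disagree by more than one tier."""
    positions = [TIERS.index(r) for r in ratings]
    return max(positions) - min(positions) > 1
```

Items flagged by a rule like this become the discussion agenda; items where everyone agrees within one tier become your calibration anchor examples.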
Shared Registry Design
The team registry is more complex than the individual registry. It needs:
- A standard claim ID format that avoids collisions across practitioners
- A clear ownership model (who is responsible for maintaining each claim?)
- A significance threshold definition that all team members apply consistently
- A process for registering upstream dependencies at claim creation time (for VERA-P-0012)
Choose the simplest registry tool that meets these requirements. See Tooling & Integration for specific tool options.
The First Team Peer Verification
The first team peer verification event — where one practitioner verifies another’s claim using the full Verification Protocol — is a high-leverage moment. It often reveals:
- Calibration gaps between the claimant and verifier
- Patterns in what the claimant consistently omits
- Evidence quality standards that differ between practitioners
Make this a learning event, not an evaluation. The verifier’s goal is to produce a Verification Record that helps the claimant improve future claims, not to produce a pass/fail judgment.
Governance Foundations
At Phase 4, the team needs its first formal governance element: a documented significance threshold. This is a one-page document that defines:
- What claims require VERA treatment (the significance threshold)
- Who has responsibility for the claim registry
- What the standard for peer verification is (who can verify whom)
- What happens when a claim is contested
This document does not need to be complex. It should be specific enough that any team member can determine, for any given claim, whether VERA treatment is required.
Phase 4 milestone: Team Level 3 maturity assessment across all team members. Shared registry live and used. Significance threshold documented. At least three cross-practitioner peer verifications completed.
Phase 5: Institutionalization
Estimated duration: 4–8 months | Target: Level 3 Organization; Level 3 Governance
Goal
VERA is organizational policy, not team practice. The mandate exists. Ownership is formal. Training is part of onboarding. The claim registry is organizational infrastructure.
The Governance Mandate
Institutionalization requires an explicit decision by someone with organizational authority that significant claims will be documented and verified in VERA format. This decision produces a policy document with:
- Scope: Which claims require VERA treatment (the significance threshold, defined at organizational scale)
- Ownership: Who is responsible for the VERA framework within the organization (the VERA owner role)
- Standards: The Verification Protocol version in use and any organizational amendments
- Enforcement: How non-compliance is handled (at minimum: undocumented significant claims are not treated as verified in organizational decision-making)
- Training: How practitioners are trained and what the baseline competency requirement is
Getting this mandate requires making the business case for VERA. The business case is not abstract. It should draw on concrete examples from Phases 1 through 4:
- A specific claim where verification revealed a problem that would have been expensive to discover later
- A specific decision that was better because the claim supporting it was traceable
- A specific instance where VERA’s traceability allowed an error to be corrected cleanly
Abstract arguments for epistemic quality rarely secure mandates. Evidence of value from real work does.
Training Infrastructure
Organizational Level 3 requires that new practitioners receive VERA training as part of onboarding. This training should:
- Be mandatory, not optional
- Include a hands-on component (completing a practice claim, not just reading documentation)
- Include access to the team’s calibration examples and patterns library
- Establish contact with a VERA mentor (an experienced practitioner available for questions during the first month)
Training should be owned by the VERA owner role, not by individual champions.
Registry as Infrastructure
At organizational scale, the registry can no longer be a shared spreadsheet. It needs search capability, dependency tracking, ownership attribution, state management, and review cadence alerts. This does not require purpose-built VERA software — many wiki or knowledge management platforms can be configured to meet these requirements. See Tooling & Integration.
Phase 5 milestone: Documented VERA policy with named ownership. VERA training included in onboarding. Claim registry on organizational infrastructure with search and dependency tracking. First formal VERA quality review conducted by the VERA owner.
Phase 6: Governance Maturity
Target: Level 4 (beginning of)
Goal
VERA quality is measured and improving. The governance function reviews metrics, identifies patterns, and makes evidence-based investments in VERA quality.
Phase 6 is beyond the scope of this roadmap’s detailed guidance — it requires the organizational infrastructure established in Phase 5 and typically begins in Year 2 of adoption. The Level 4 chapter describes what Level 4 looks like across all six domains. The Level 4 self-assessment checklist is the practical starting point.
The transition from Phase 5 to Phase 6 is marked by one change: the VERA owner stops asking “are we doing VERA?” and starts asking “how well are we doing VERA?”
Handling Common Obstacles
“Leadership isn’t interested.”
Leadership interest in VERA follows from demonstrated value, not from explanations of the framework. The sequence that works:
- Apply VERA personally until you have three to five high-quality documented claims (Phases 1–2).
- Use one of those claims — specifically one where verification revealed a problem — in a leadership conversation. Do not say “VERA found this problem.” Say “When we documented the evidence for this claim carefully, we found that [specific problem].”
- Offer to apply VERA to one claim that leadership cares about. Make the offer specific: “Can I document the evidence and reasoning for the [specific assertion] we’re going to present to the board?”
- The experience of having a claim they care about go through verification — especially if verification reveals something — creates leadership interest more effectively than any explanation.
“VERA is slowing us down.”
VERA does slow down claim production. It speeds up claim use, claim updating, and error correction. The net time balance depends on how often your claims are challenged, revised, or used downstream.
If VERA feels purely like overhead, the most likely causes are:
- The significance threshold is set too low (you are applying VERA to claims that don’t warrant it)
- The claim registry is not integrated with your workflow (VERA is parallel work rather than primary work)
- The claims you are documenting are not being used downstream (VERA’s value is in the downstream)
Address the root cause rather than relaxing the standards.
“We can’t find independent verifiers.”
At Phase 2, self-verification with adversarial stance (VERA-P-0010) is the interim solution. At Phase 4, cross-practitioner peer verification within the team. At Phase 5, the organizational verifier pool. Not everyone needs to verify everything; a small group of practitioners trained to the Expert level can verify high-stakes claims for the whole organization.
If the domain is genuinely too specialized for internal peer verification, VERA-P-0011 (Expert Verifier Onboarding) provides the path to engaging external domain experts without requiring them to be VERA-trained.
“People are filling out the forms but not doing the thinking.”
This is the compliance surface problem described in the Level 3 chapter. It means verification is not rigorous enough — claims are being passed that should not be.
The fix is not more documentation requirements — it is better verification. Specifically: the first-pass verification rate needs to be meaningful. If 95% of submitted claims are verified on the first pass, the criteria are not being genuinely applied. A first-pass rate of 60–80% indicates real scrutiny.
Examine the Verification Records being produced: do they have per-criterion findings, or just a final state? Do they have confidence ratings with written justifications, or just numbers? Records that are substantively thin are a signal that verification is in compliance mode.
“Our most important claims can’t be verified without revealing confidential information.”
This is a real constraint, particularly in competitive intelligence, legal, and regulatory contexts. The solutions:
- Compartmented verification: The verifier has the same clearance/access level as the claimant; the independence requirement is met without disclosing beyond the authorization boundary.
- Verification of structure, not content: A verifier reviews whether the reasoning chain’s structure is valid and whether the evidence quality ratings are appropriate, without necessarily seeing the evidence content directly. This is lower independence (the verifier cannot check E5 chain-of-custody directly) but better than no verification.
- Third-party verification with NDA: An external verifier with appropriate agreements can satisfy independence requirements in cases where internal verifiers cannot.
Document whatever constraint applies and its impact on the verification’s independence level. A Verification Record that honestly states “Peer independence achieved within security clearance boundary; verifier did not directly review evidence content” is more honest and epistemically useful than one that conceals the constraint.
Roadmap Summary
| Phase | Duration | Individual target | Org target | Key milestone |
|---|---|---|---|---|
| 0. Assessment | 1–2 days | Baseline profile | — | Honest maturity profile documented |
| 1. First Practice | 2–4 weeks | L2 Evidence, Reasoning | — | 3 complete claims, 1 registry |
| 2. Habit Building | 6–10 weeks | L2–3 Evidence, Reasoning, Verification | — | 8–12 claims, peer verification begun |
| 3. Level 3 | 4–8 weeks | L3 Evidence, Reasoning, Verification; L2 Governance | — | Level 3 maturity confirmed; sovereignty assessed |
| 4. Team Adoption | 2–4 months | Maintained | L2–3 | Team calibration, shared registry, significance threshold |
| 5. Institutionalization | 4–8 months | Maintained | L3 all domains | VERA policy, formal ownership, onboarding training |
| 6. Governance | Ongoing | Maintained | L4 (beginning) | Quality metrics reviewed; improvement program active |
Proceed to Tooling & Integration for guidance on the tools and workflows that support VERA practice at each phase of this roadmap.
Tooling & Integration
VERA does not require specialized software. It requires that whatever tools you use can satisfy a small set of structural requirements derived from the framework’s sovereignty principles. This chapter describes those requirements, maps them to common tool categories, and gives concrete guidance for building a VERA workspace in the tools most practitioners already use.
What VERA Requires of Tools
Before evaluating any specific tool, establish the requirements. These are derived directly from the Sovereignty Principles and the practical needs of the Verification Protocol.
Non-Negotiable Requirements (All Maturity Levels)
R1 — Writeable and searchable. You must be able to create structured documents, add fields, and search across your claim records. A tool that only allows reading or that provides no search is inadequate.
R2 — Exportable. You must be able to export all your claim records, evidence documentation, and verification records in a non-proprietary format (plain text, Markdown, CSV, or similar). If you cannot export your data, you do not own it. This requirement flows directly from S1 (Data Sovereignty).
R3 — Linkable. Claims reference evidence items, evidence items reference sources, verification records reference claims. Your tooling must support this linking — at minimum by allowing you to include a reference ID or URL in a field.
R4 — Durable. Your claim records must persist. A tool that deletes data after inactivity, that regularly loses history, or whose longevity is uncertain is not appropriate for VERA’s registry function.
R5 — Accessible to stakeholders. Claim records and verification records must be accessible to the people affected by the claims they document. A tool that only the claimant can access violates S2 (Reasoning Sovereignty).
Requirements at Level 3 and Above
R6 — Registry capable. At Level 3, the claim registry must support: filtering by verification state, sorting by date, searching by claim content, and indicating ownership and review dates. A flat file cannot do this; a structured table or database can.
R7 — Dependency trackable. At Level 3, upstream claim dependencies must be registered at claim creation. Your tooling must be able to represent a “this claim uses Claim X as evidence” relationship.
R8 — Auditable history. At Level 4 and above, verification records and claim state changes must have an auditable history — who changed what, when. Version history in a wiki or document system meets this requirement.
What to Avoid
Proprietary-only formats. If your claim records can only be read in one vendor’s application, you have a Data Sovereignty problem. Accept proprietary tooling only when export to a non-proprietary format is available and tested.
Single-point-of-failure storage. Claims stored only in a local application on one device, or only in an account with no recovery path, are at risk. Mirror important claims in at least two locations.
Reasoning-opaque tools. Tools that produce conclusions — summaries, recommendations, classifications — without exposing the reasoning that produced them violate S2 (Reasoning Sovereignty) when that reasoning is incorporated into VERA claims without documentation. This includes AI tools used in a black-box mode.
The Minimum Viable Toolkit
You need exactly three capabilities to begin VERA practice:
- A writing environment where you can create and edit structured documents
- A claim registry where you can list all documented claims with their key fields
- A file or folder system where you can store evidence item references and verification records
These three capabilities can be satisfied by a single folder of plain text files, a spreadsheet, or any modern note-taking application. The choice of tool is less important than the discipline of using it consistently.
The minimum viable toolkit for a solo practitioner:
- Writing: Any text editor or word processor (including Markdown editors, Google Docs, Microsoft Word)
- Registry: A single table with columns: Claim ID, Statement, Verification State, Confidence, Owner, Review Date
- Evidence storage: A folder (local or cloud) named by claim ID, containing evidence documents and the verification record
This toolkit meets R1 through R5. It does not fully meet R6 through R8, which is appropriate — those requirements are for Level 3 and above.
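As a sketch of how little this requires, the registry table can be produced as plain CSV with the minimum columns listed above — the example row is illustrative:

```python
import csv
import io

# Minimum registry fields from the toolkit description.
FIELDS = ["Claim ID", "Statement", "Verification State",
          "Confidence", "Owner", "Review Date"]

def new_registry_csv(rows):
    """Render a minimal claim registry as CSV text.

    Plain CSV satisfies R1 (searchable) and R2 (exportable)
    by construction: it is already a non-proprietary format.
    """
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

registry = new_registry_csv([{
    "Claim ID": "VERA-C-2026-0001",
    "Statement": "Production defect rate was 35% lower post-adoption.",
    "Verification State": "verified",
    "Confidence": "0.79",
    "Owner": "DJF",
    "Review Date": "2027-03-01",
}])
```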
The Claim Registry in Depth
The claim registry is the most important piece of VERA infrastructure. Get it right early; it is difficult to restructure a large registry later.
Registry Fields
The minimum fields for a functional registry:
| Field | Format | Notes |
|---|---|---|
| Claim ID | VERA-C-[YYYY]-[NNNN] or org format | Permanent; never changes |
| Statement | One precise declarative sentence | The claim itself, not a title |
| Domain / Topic | Free text or controlled vocabulary | For filtering |
| Verification State | ○ ◐ ◑ ● ◈ ✗ | Current state |
| Confidence | 0.0–1.0 | Current rating |
| Owner | Name or role | Who maintains this claim |
| Created | Date | Phase 1 initiation date |
| Verified | Date | When verification completed |
| Review Due | Date | From Phase 5 cadence setting |
| Verification Record | ID or link | VERA-V-[NNNN] or URL |
| Notes | Free text | Stale flags, contestation notes, etc. |
Optional fields for Level 3+ registries:
| Field | Purpose |
|---|---|
| Upstream Dependencies | Comma-separated Claim IDs this claim uses as evidence |
| Downstream Dependents | Claim IDs that use this claim as evidence (auto-populated if tooling supports) |
| Evidence Set Link | URL or folder path |
| Protocol Version | The Verification Protocol version used |
| Tags | For filtering by topic, domain, project |
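Where tooling supports it, the Downstream Dependents field can be auto-populated by inverting Upstream Dependencies. A minimal Python sketch, assuming upstream dependencies are stored as comma-separated Claim IDs as in the table above:

```python
def downstream_dependents(registry):
    """Invert 'Upstream Dependencies' (comma-separated Claim IDs)
    into a 'Downstream Dependents' mapping.

    This is the lookup needed when an upstream claim changes state
    and its dependents require impact triage (the cascading-update
    situation VERA-P-0012 addresses).
    """
    dependents = {claim_id: [] for claim_id in registry}
    for claim_id, upstream_field in registry.items():
        for dep in filter(None, (d.strip() for d in upstream_field.split(","))):
            dependents.setdefault(dep, []).append(claim_id)
    return dependents

deps = downstream_dependents({
    "VERA-C-2026-0001": "",
    "VERA-C-2026-0002": "VERA-C-2026-0001",
    "VERA-C-2026-0003": "VERA-C-2026-0001, VERA-C-2026-0002",
})
```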
Registry Implementation Options
Spreadsheet (Google Sheets, Excel, LibreOffice Calc)
Suitable for: Individual practitioners and small teams at Levels 1–3.
Setup: One sheet per registry. Columns as above. Use data validation for Verification State column (restrict to the six valid states). Freeze the header row. Add conditional formatting to highlight Stale claims (Review Due date passed).
Limitations: No dependency graph, no relational links, and only file-level version history (Google Sheets keeps document revisions, not per-cell history). Adequate to Level 3; starts showing constraints at Level 4.
Notion
Suitable for: Individual practitioners and teams at Levels 2–4.
Setup: Create a database for claims with properties corresponding to registry fields. Use Select property for Verification State. Use Relation property for Upstream Dependencies (relates to the same database). Use Date property for Review Due with reminders. Create views: one for All Claims, one filtered to In Progress (state = Pending or Partial), one sorted by Review Due date (Stale claims first).
Evidence sets: Create a second database for Evidence Items, related to Claims.
Strengths: Relation property natively supports dependency tracking (R7). Filtering and sorting meet R6. Export to Markdown or CSV meets R2.
Limitations: Notion stores data on Notion’s infrastructure; verify export works regularly (R2 test). The vendor dependency is a monitored Data Sovereignty risk.
Obsidian
Suitable for: Individual practitioners and small teams at Levels 1–4, especially those who prefer local-first, Markdown-native tools.
Setup: Create a Vault with a claims/ folder. Each claim is a Markdown file named by Claim ID. Use YAML frontmatter for structured fields:
---
claim_id: VERA-C-2026-0001
statement: "Production defect rate was 35% lower in the six-month post-adoption period than in the equivalent pre-adoption period."
state: verified
confidence: 0.79
owner: DJF
created: 2026-02-21
verified: 2026-03-01
review_due: 2027-03-01
verification_record: VERA-V-0001
upstream_deps: []
---
Use the Dataview plugin to generate registry views from frontmatter. Example Dataview query:
```dataview
TABLE statement, state, confidence, review_due
FROM "claims"
WHERE state != "refuted"
SORT review_due ASC
```
Strengths: All data is local plain text files — maximum Data Sovereignty. No vendor dependency. Export is trivially a file copy. Markdown links support R3.
Limitations: No built-in access control for stakeholder sharing (R5 requires a separate sharing solution). Team use requires a shared sync solution (Git, Syncthing, or a cloud folder).
Confluence (Atlassian)
Suitable for: Teams and organizations at Levels 3–5.
Setup: Create a VERA space. Use a Confluence database (or a structured page template with properties) for the claim registry. Use page templates for claim records and verification records.
Strengths: Native to organizational knowledge management infrastructure; stakeholder access (R5) is managed through existing Confluence permissions. Audit history (R8) is built in.
Limitations: Export (R2) requires deliberate configuration; verify that page exports include structured data in a portable format, not just HTML. Confluence licensing costs may be a governance consideration.
Plain Markdown Files with Git
Suitable for: Technical teams at Levels 2–5 who are comfortable with version control.
Setup: A Git repository with a /claims/ directory. Each claim is a Markdown file with YAML frontmatter (same structure as Obsidian above). Verification records live in /verification-records/. Evidence sets referenced by folder path or URL.
Strengths: Version history at the line level (R8) via Git log. Export is a clone. Stakeholder access via repository permissions. Dependency tracking via frontmatter fields. The entire VERA knowledge base can be reviewed, diffed, and audited using standard Git tooling.
Limitations: Requires Git literacy. No GUI registry view without additional tooling (a static site generator like mdBook can build a browsable version from the Markdown — suitable for teams already using mdBook for other documentation).
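A registry view over such a repository is a short script. The sketch below scans claim frontmatter for overdue reviews; the hand-rolled frontmatter reader is deliberately minimal (a real setup would use a YAML library), and the field names follow the frontmatter example shown earlier:

```python
import tempfile
from datetime import date
from pathlib import Path

def parse_frontmatter(text):
    """Minimal reader for 'key: value' pairs between the opening
    and closing '---' lines. A sketch only -- full YAML needs a
    real parser."""
    fields, in_block = {}, False
    for line in text.splitlines():
        if line.strip() == "---":
            if in_block:
                break
            in_block = True
        elif in_block and ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip().strip('"')
    return fields

def stale_claims(claims_dir, today):
    """Return claim IDs whose review_due date has passed."""
    stale = []
    for path in Path(claims_dir).glob("*.md"):
        fm = parse_frontmatter(path.read_text())
        if "review_due" in fm and date.fromisoformat(fm["review_due"]) < today:
            stale.append(fm.get("claim_id", path.stem))
    return sorted(stale)

# Demonstration on a throwaway claims directory.
with tempfile.TemporaryDirectory() as d:
    Path(d, "VERA-C-2026-0001.md").write_text(
        "---\nclaim_id: VERA-C-2026-0001\nreview_due: 2027-03-01\n---\n"
    )
    overdue = stale_claims(d, date(2027, 6, 1))
```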
Evidence Management
Evidence items need to be stored, organized, and retrievable. The core requirement is chain-of-custody: for each evidence item, you need to be able to locate the original source, trace the path from source to your documentation, and confirm that the item was what you say it was when you accessed it.
Evidence Folder Structure
Organize evidence by claim, not by topic. Each claim has its own evidence folder:
evidence/
VERA-C-2026-0001/
E1-defect-data-pre-adoption.csv
E2-defect-data-post-adoption.csv
E3-deployment-log.pdf
E4-methodology-doc.md
evidence-set.md ← Evidence item records with ratings and chain of custody
The evidence-set.md file is the master document for that claim’s evidence, containing one Evidence Item Record per item.
Evidence Item Record Format
## E1: Pre-Adoption Defect Data
- **Source:** Internal incident tracking system (Jira)
- **Type:** Export
- **Quality Tier:** Primary
- **Access Date:** 2026-02-21
- **Accessed By:** DJF
- **Chain of Custody:** Exported directly from Jira filter "project = PROD AND type = Bug AND created >= 2025-05-01 AND created <= 2025-10-31" on 2026-02-21. File saved to /evidence/VERA-C-2026-0001/E1-defect-data-pre-adoption.csv.
- **Relevance:** Provides pre-adoption production defect count for Step 1 of reasoning chain.
- **Independence Classification:** Correlated with E2 (same system, different period)
- **Decay Class:** Drifting (12-month expiry: 2027-02-21)
- **Conflict:** None
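The record format can be generated from structured data so that every item carries the same fields in the same order. A sketch — the dict keys and the "None" default are illustrative choices, not a VERA-mandated schema:

```python
def evidence_item_record(item):
    """Render one Evidence Item Record in the Markdown format
    shown above. Missing fields default to 'None' so that gaps
    are visible rather than silently omitted."""
    lines = [f"## {item['id']}: {item['title']}"]
    for label in ("Source", "Type", "Quality Tier", "Access Date",
                  "Accessed By", "Chain of Custody", "Relevance",
                  "Independence Classification", "Decay Class", "Conflict"):
        lines.append(f"- **{label}:** {item.get(label, 'None')}")
    return "\n".join(lines)

record = evidence_item_record({
    "id": "E1", "title": "Pre-Adoption Defect Data",
    "Source": "Internal incident tracking system (Jira)",
    "Type": "Export", "Quality Tier": "Primary",
})
```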
Handling Non-Retrievable Sources
Some evidence sources cannot be stored as local files: paywalled academic papers, licensed datasets, confidential internal documents, proprietary systems. For these, the chain-of-custody record replaces the stored file:
- Document how access was obtained (subscription, license, internal access rights)
- Describe the access method in enough detail that the same access could be repeated
- Note any risk to future access (subscription renewal, license expiration, internal system changes)
If the source is a website that may change or disappear, capture a dated archive (WebCite, archive.org, or a local PDF of the page). This is especially important for Volatile evidence items.
AI Tool Integration
AI tools create specific tooling requirements beyond the general VERA requirements. These are derived from VERA-P-0003 (AI-Generated Evidence Documentation) and the Sovereignty Principles.
Prompt Logging
Any AI interaction that produces evidence used in a VERA claim requires:
- The name and version of the AI model used
- The exact prompt (or a reconstruction-sufficient summary)
- The complete verbatim output used as evidence
- The date and time of the interaction
Most AI tools do not provide automatic logging that meets these requirements. Solutions:
Manual logging: Copy prompts and outputs into your evidence item records immediately after each AI session. This is the minimum viable approach.
Session export: Some AI platforms (including Claude’s web interface) provide conversation export. Export conversations immediately after any session that produced evidence; include the export in the evidence folder.
Prompt management tools: Tools such as PromptLayer or a custom prompt registry can log prompts and responses automatically for API-based AI use. If your AI use is API-based, prompt logging infrastructure is strongly recommended.
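Manual logging is easier to sustain with a small helper. The sketch below appends each interaction to a JSON Lines file; the file format and field names are illustrative, and the model name shown is a placeholder:

```python
import json
import os
import tempfile
from datetime import datetime, timezone

def log_ai_interaction(log_path, model, prompt, output):
    """Append one AI interaction to a JSON Lines log, capturing the
    four fields required for AI-generated evidence: model identity,
    exact prompt, verbatim output, and timestamp."""
    entry = {
        "model": model,        # model name AND version
        "prompt": prompt,      # exact text sent
        "output": output,      # complete verbatim output used as evidence
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Demonstration with a throwaway log file and placeholder model name.
fd, path = tempfile.mkstemp(suffix=".jsonl")
os.close(fd)
entry = log_ai_interaction(path, "example-model-1.0 (2026-01 snapshot)",
                           "Summarize E1", "The data shows ...")
logged = [json.loads(line) for line in open(path, encoding="utf-8")]
```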
AI Sovereignty Assessment
For each AI tool used in VERA work, assess:
- Reasoning transparency: When the AI produces reasoning (not just retrieval), can you capture that reasoning in full? If the tool only shows conclusions, it is not suitable for VERA reasoning chain contribution without the Tier C (AI synthesis) handling from VERA-P-0003.
- Data sovereignty: Do your prompts and the AI’s responses leave your control? If they are stored on the AI provider’s servers, that is a data sovereignty consideration. Evaluate whether the data contains confidential claim content that should not leave organizational control.
- Model identity: Can you identify the specific model version used? Confidence in an AI’s output is tied to knowing what model produced it. Tools that use unversioned or frequently updated models without notification create chain-of-custody problems.
- Exit path: If you stop using the tool, can you retrieve all prompts and outputs? Test this before the tool becomes a significant part of your evidence workflow.
AI Tools in the VERA Workflow
AI tools are most valuable — and most compatible with VERA sovereignty requirements — in these specific roles:
Evidence discovery (Tier A work): AI can identify sources, suggest search terms, and surface relevant literature. VERA-P-0003 Tier A applies: AI identifies; you retrieve and verify the original source.
Claim formulation assistance: AI can help identify compound claim structures, suggest precision improvements to claim statements, and flag potentially hidden assumptions. These outputs are practitioner inputs, not evidence — they improve Phase 1 quality without creating evidence chain-of-custody issues.
Reasoning chain review: AI can serve as a preliminary adversarial reviewer — attempting to identify gaps and weaknesses in a reasoning chain before formal verification. This is not VERA verification; it is a preparation tool. The practitioner evaluates the AI’s critique independently.
Registry queries: AI with access to structured claim registry data can help practitioners find related claims, identify upstream dependencies, and surface review-due items.
AI tools are least appropriate — and require the most careful sovereignty management — when:
- They are generating conclusions that are incorporated into reasoning chains without capturing their reasoning
- They are summarizing evidence without traceability to the underlying sources
- Their outputs are cited as Secondary evidence without verification of the underlying sources
Integrating VERA with Existing Workflows
Decision Documents
The highest-value VERA integration for most organizations is making VERA verification status visible in decision documents. A decision document that supports each key factual assertion with a VERA Claim ID — rather than a verbal summary — allows decision reviewers to assess the epistemic basis of the decision without requiring them to be present for all the upstream research.
Implementation: Add an “Evidence and Verification” section to decision document templates. For each significant factual assertion in the document, include the Claim ID and current verification state. Decision reviewers who want to examine the reasoning can retrieve the full claim record from the registry.
Meeting Notes and Action Items
Decisions made in meetings often rest on claims asserted verbally — the most epistemically fragile form of claim. The VERA integration that addresses this is simple: when a significant claim is asserted verbally in a meeting and accepted as a basis for a decision or action, someone records it as a VERA claim stub (Claim ID, statement, context) as part of the meeting notes. The stub moves into the Verification Protocol pipeline, and the action or decision is not finalized until the claim reaches at least Partial verification.
This integration is high-friction and requires team discipline. It is appropriate for high-stakes decisions; it would be disproportionate for every meeting. Define the threshold (what types of meeting decisions warrant this treatment) explicitly.
Research and Analysis Outputs
Reports, analyses, and research documents typically contain multiple significant claims in their conclusions and recommendations sections. The VERA integration here: claim records for all significant claims in a report are produced before the report is finalized. The report document links to Claim IDs. Readers who want to evaluate the claims can access the full records; readers who trust the verification can proceed on the basis of the verification state and confidence rating.
For organizations at Level 4, this integration is standard. For organizations at Level 2, a simplified version is sufficient: list the most important three to five claims from a report in a registry entry, even if full verification records are not yet complete.
Onboarding and Training
VERA is most effectively integrated into onboarding not as a standalone training module but as the framework through which new practitioners learn to work with claims in their specific domain. When a new practitioner joins a Level 3 organization, their first substantive work should involve completing a VERA claim (with mentoring) in their actual work area, using the organization’s calibration examples and patterns library. This is more effective than completing a practice claim and then separately learning how the organization’s actual work is structured.
Tooling at Each Maturity Level
| Maturity Level | Claim Registry | Evidence Storage | Verification Records | Integration Points |
|---|---|---|---|---|
| L1: Aware | None required | None required | None required | None |
| L2: Exploring | Spreadsheet or flat file table | Named folder per claim | Document per record, stored with claim | None required |
| L3: Practicing | Searchable database (Notion, Confluence, Obsidian + Dataview, or equivalent) | Structured evidence folders with Evidence Item Records | Complete records with per-criterion findings; linked to claim registry | Decision documents link to Claim IDs |
| L4: Governing | Registry with dependency tracking, state history, and review cadence alerts | Evidence items linked to reference management system; chain-of-custody auditable | Versioned records; verifier identity tracked | Decision gates, project management, governance reporting |
| L5: Sovereign | Registry is fully exported and sovereign; no undocumented vendor dependencies | All evidence accessible and exportable independently of vendor tools | Records are externally auditable; criteria publicly documented | All significant knowledge work is VERA-native |
Testing Your Tooling for Sovereignty
Before committing to a tool stack, run this five-minute export test:
- Create a complete claim record in your chosen tool, including evidence item records and a verification record.
- Export everything to plain text or CSV.
- Delete the application or log out.
- Verify that you can reconstruct the complete claim record from the exported data alone — without the application.
- Verify that someone else could read and understand the exported data without any context from the application.
If you cannot complete steps 4 and 5, you have a Data Sovereignty gap. Either fix the export process or choose a different tool before you accumulate a registry that depends on it.
Run this test annually as part of the sovereignty assessment required by S1 (Data Sovereignty). Tools change. Exports that worked last year may not work this year.
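Step 4 of this test can be partially automated. A sketch, assuming the export is the CSV registry format described earlier (the required-field set here is illustrative):

```python
import csv
import io

# Fields a reconstructed record must contain to count as complete.
REQUIRED = {"Claim ID", "Statement", "Verification State", "Confidence"}

def export_test(exported_csv):
    """Confirm claim records can be reconstructed from the exported
    data alone. Returns the reconstructed records, or raises if
    required fields are missing -- a Data Sovereignty gap."""
    rows = list(csv.DictReader(io.StringIO(exported_csv)))
    missing = REQUIRED - set(rows[0].keys()) if rows else REQUIRED
    if missing:
        raise ValueError(f"Data Sovereignty gap: missing fields {sorted(missing)}")
    return rows

records = export_test(
    "Claim ID,Statement,Verification State,Confidence\n"
    "VERA-C-2026-0001,Defect rate fell 35% post-adoption,verified,0.79\n"
)
```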
The Implementation section is complete. Return to the Pattern Catalog if you encounter recurring challenges that patterns might address, or proceed to the Reference section for the glossary, standards alignment, and changelog.
Appendix A: Glossary
This glossary provides quick-reference definitions for all terms used across the VERA documentation. Definitions here are concise; the canonical definition for foundational terms lives in the Lexicon, which should be consulted when precision matters. Cross-references use → to indicate related terms.
Terms are listed alphabetically. For the formal error taxonomy, see Lexicon § Error Taxonomy.
A
Abductive inference An inference type in which the conclusion is presented as the best available explanation of the evidence, rather than as a logical necessity or statistical generalization. One of VERA’s four labeled inference types. Moderate strength; requires ruling out alternative explanations. See also: → Inference type.
Absent evidence An evidence type that was expected given a claim’s scope and prospective search plan but was not found during the evidence search. Distinct from evidence that was not searched for. Must be documented and assessed for materiality using → VERA-P-0001 (Absence-of-Evidence Assessment). Three materiality ratings: Non-material, Moderate, Significant.
Adversarial checklist In → VERA-P-0010, a seven-item checklist applied during self-verification, calibrated to surface the errors most likely to be missed by a claimant reviewing their own work. Includes prompts such as “What is the weakest link in this chain?” and “Which assumption would you least want an adversary to notice?”
AI-Generated Evidence Documentation → VERA-P-0003. A three-tier handling protocol for evidence retrieved, summarized, or synthesized by AI systems, with distinct chain-of-custody and quality-rating requirements for each tier: Tier A (source retrieved and verified), Tier B (source identified but not retrievable), and Tier C (AI-synthesized analysis without specific source).
Analogical inference An inference type in which a conclusion is drawn by reasoning from similarity to a reference case: “this is like that; what was true there is likely true here.” The weakest of VERA’s four inference types because its validity depends entirely on the strength of the similarity claim. Requires structured similarity scoring per → VERA-P-0009.
Analogical Reasoning Validation → VERA-P-0009. A structured similarity-disanalogy analysis that produces a Similarity Score (0.0–1.0) and determines the permissible scope of an analogical conclusion. See also: → Similarity Score.
Anchor example In → VERA-P-0013, a historical claim with a known outcome and a documented confidence rating and rationale, used in calibration exercises to align verifiers’ confidence scales. Anchor libraries should contain at least three examples per confidence band.
Assertion A statement made by an agent without formal evidence attachment, reasoning chain, or verification status. Assertions are the raw material of → Claims; they are not inherently invalid, but they are epistemically unprocessed. VERA tracks assertions separately from claims and flags their use as evidence in reasoning chains. Canonical definition: Lexicon § Assertion.
B
Backing In the → Toulmin Argumentation Model, the support for the warrant — evidence for why the warrant holds. Maps to VERA’s → Evidence Quality tier ratings and → Chain of custody documentation. See: Appendix B § Toulmin.
C
Cascading Claim Update → VERA-P-0012. A three-phase protocol for managing downstream effects when an upstream claim changes verification state: (1) dependency registration at claim creation, (2) impact triage using a standardized matrix, (3) selective re-verification based on triage results.
Chain laundering An error in which a weakly supported or unverified claim is used as if it were well-supported evidence in a subsequent reasoning chain, obscuring the weak evidentiary basis by embedding it in an apparently solid chain. Canonical definition: Lexicon § Chain Laundering.
Chain of custody The documented provenance of an evidence item from its original source to its use in a claim record. Required fields: how access was obtained, what transformation (if any) was applied, and by whom and when. See: → Data Sovereignty (S1).
Claim A structured epistemic object — the primary unit of VERA — consisting of: a precise statement, a → Claim identifier, a provenance record, an → Evidence set, a → Reasoning chain, a → Verification state, and an → Epistemic confidence rating. A claim without an evidence set, reasoning chain, or verification state is an → Assertion. Canonical definition: Lexicon § Claim.
Claim Confidence Calibration → VERA-P-0013. A calibration program ensuring that confidence ratings assigned by different verifiers are consistent and predictive. Components: anchor example library, consistency tracking metrics, and a → Confidence Committee process for high-stakes claims. Available at Level 4.
Claim identifier A unique, permanent reference assigned to every VERA claim before evidence assembly begins. Format: VERA-C-[YYYY]-[NNNN]. Individual practitioners without an organizational registry use initials as a prefix: VERA-C-DJF-2026-0001. Identifiers never change; revised claims carry the same ID with a version suffix (e.g., v2).
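For illustration, this identifier format lends itself to mechanical validation. The sketch below is not part of the framework; the 2-to-4-letter initials prefix and the `-v2`-style revision suffix are assumptions generalized from the examples above:

```python
import re

# Illustrative only: matches VERA-C-2026-0001 and VERA-C-DJF-2026-0001.
# The "-v2" suffix form is an assumption based on the "v2" example above.
CLAIM_ID = re.compile(
    r"^VERA-C-"
    r"(?:[A-Z]{2,4}-)?"          # optional practitioner initials prefix (assumed 2-4 letters)
    r"(?P<year>\d{4})-"          # four-digit year of claim initiation
    r"(?P<seq>\d{4})"            # sequential number within that year
    r"(?:-v(?P<version>\d+))?$"  # optional version suffix for revised claims
)

def is_valid_claim_id(claim_id: str) -> bool:
    """Check whether a string conforms to the sketched VERA-C identifier format."""
    return CLAIM_ID.fullmatch(claim_id) is not None
```

An organizational registry would likely also enforce sequence uniqueness per year, which a regex alone cannot do.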
Claim record The complete document containing all elements of a VERA claim: statement, identifier, context metadata, → Evidence set, → Reasoning chain, and links to → Verification records.
Claim registry A centralized, searchable catalog of all documented VERA claims, maintained by an individual, team, or organization. Minimum required fields at Level 3: Claim ID, statement, verification state, confidence, owner, created date, verified date, review due date, and verification record link. See: Tooling & Integration.
Compliance surface A failure mode in which VERA documentation has the correct formal structure — all required fields are present — but lacks epistemic substance. Evidence items are listed without genuine quality assessment; reasoning chains restate the evidence rather than argue from it; verification records note a state without documenting per-criterion findings. See: Level 3 § Anti-Patterns.
Compound Claim Decomposition → VERA-P-0006. A three-test decomposition method (Independence Test, Evidence Test, Verification Test) applied recursively until each component is atomic and independently verifiable. Includes dependency mapping for cases where sub-claims are logically prerequisite to each other.
Conclusion Sovereignty S3; one of VERA’s five → Sovereignty Principles. The right of the person or organization whose decisions are affected by a claim to reach their own conclusion — including one that differs from a verified claim. Prohibits institutional structures that make disagreement impossible, though it does not eliminate consequences for minority positions in legitimate governance processes. See: Sovereignty Principles § S3.
Confidence band A range on the 0.0–1.0 → Epistemic confidence scale associated with a qualitative assessment. Four bands: High (0.85–1.0), Moderate (0.65–0.84), Low (0.40–0.64), Speculative (0.00–0.39). Used in → Anchor examples for calibration exercises.
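The four band boundaries listed above can be encoded as a simple mapping; this sketch adds nothing beyond the ranges in the definition:

```python
def confidence_band(rating: float) -> str:
    """Map a 0.0-1.0 epistemic confidence rating to its qualitative band."""
    if not 0.0 <= rating <= 1.0:
        raise ValueError("epistemic confidence must be in [0.0, 1.0]")
    if rating >= 0.85:
        return "High"         # 0.85-1.0
    if rating >= 0.65:
        return "Moderate"     # 0.65-0.84
    if rating >= 0.40:
        return "Low"          # 0.40-0.64
    return "Speculative"      # 0.00-0.39
```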
Confidence ceiling An upper bound on the → Epistemic confidence rating achievable under specific verification conditions. The self-verification ceiling (→ VERA-P-0010) is 0.72. The ceiling for analogical inference (→ VERA-P-0009) varies by → Similarity Score.
Confidence Committee In → VERA-P-0013, a group of three verifiers who independently assign confidence ratings to high-stakes claims and converge through structured discussion. Used when a single verifier’s calibration is insufficient for the stakes of the claim.
Contrary evidence Evidence items assembled during Phase 2 that complicate or contradict the claim. Must be addressed in the → Reasoning chain using one of four documented responses. Contrary evidence that is not addressed violates verification criterion E4.
Contrary Evidence Integration → VERA-P-0008. A structured framework for evaluating contrary evidence on three dimensions (quality tier, relevance scope, independence) and selecting an appropriate response from four options: outweigh, distinguish, qualify, or concede.
Correlated (independence classification) One of three → Evidence independence classifications. Items that share a common upstream source but add independent interpretation or transformation. Intermediate between Independent and Dependent. Must be noted in the evidence set and reflected in the confidence rating.
D
Data Sovereignty S1; the first of VERA’s five → Sovereignty Principles. The requirement that every evidence item underlying a claim is accessible, auditable, and exportable independently of vendor systems, licenses, or third-party controls. See: Sovereignty Principles § S1.
Decay class A classification applied to each → Evidence item indicating how rapidly its reliability decreases over time. Three classes: Stable (36-month default review window), Drifting (12-month), Volatile (3-month). Applied using → VERA-P-0005.
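As a minimal sketch of how decay classes could drive review dates, the following assumes the review window is anchored to the evidence item's access date; VERA-P-0005 may anchor it differently:

```python
from datetime import date

# Default review windows in months per decay class (from the definition above).
DECAY_WINDOWS = {"Stable": 36, "Drifting": 12, "Volatile": 3}

def review_due(access_date: date, decay_class: str) -> date:
    """Return the date by which an evidence item should be re-reviewed.

    Anchoring the window to the access date is an illustrative assumption.
    """
    months = DECAY_WINDOWS[decay_class]
    year = access_date.year + (access_date.month - 1 + months) // 12
    month = (access_date.month - 1 + months) % 12 + 1
    day = min(access_date.day, 28)  # clamp to avoid month-length edge cases
    return date(year, month, day)
```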
Deductive inference An inference type in which the conclusion follows necessarily from the premises — if the premises are true, the conclusion cannot be false. The strongest of VERA’s four inference types, but requires premises to be certain rather than probable. See also: → Inference type.
Dependency registration The act of recording, at claim creation time (Phase 1, Step 1.4), which upstream verified claims are used as evidence in the new claim’s evidence set. Creates the bidirectional links required for → VERA-P-0012 (Cascading Claim Update).
Dependent (independence classification) One of three → Evidence independence classifications. Items that are proxies for the same underlying source, providing no additional independent support. Dependent items must be consolidated using → VERA-P-0004.
Domain One of six capability areas measured by the → VERA Maturity Model: Evidence, Reasoning, Verification, Governance, Sovereignty, and Integration. An organization’s maturity level may differ across domains; a single composite score obscures meaningful variation. Canonical definition: Lexicon § Domain.
E
Epistemic confidence A numerical rating from 0.0 to 1.0 representing a verifier’s assessment of how strongly available evidence supports a claim. Not a probability; an explicit declaration of epistemic position. Travels with the claim and is used to weight claims when they appear as evidence in downstream reasoning chains. Canonical definition: Lexicon § Epistemic Confidence.
Epistemic sovereignty The condition of having genuine authority over the knowledge one acts on: the ability to access evidence, inspect reasoning, challenge conclusions, and reach independent judgments. VERA operationalizes epistemic sovereignty through the five → Sovereignty Principles. See: Sovereignty Principles.
Evidence decay The reduction in the reliability or currency of an → Evidence item over time, particularly in domains where conditions change frequently. Managed through → Decay classes and expiry dates per → VERA-P-0005.
Evidence domain One of the six → Domain areas in the VERA Maturity Model. Measures how well an organization identifies, retrieves, rates, documents, and maintains evidence items across the claim lifecycle.
Evidence independence The degree to which items in an evidence set derive from genuinely separate sources. Three classifications: Independent, Correlated, Dependent. Source collapse — treating Dependent items as Independent — inflates apparent confidence. Canonical definition: Lexicon § Evidence Independence.
Evidence Item A discrete piece of source material cited in support of a claim. Each item requires: source reference, evidence type (quality tier), relevance statement, chain of custody, and access date. Canonical definition: Lexicon § Evidence Item.
Evidence Item Record The structured document containing all required fields for a single evidence item. Stored in the claim’s evidence folder and aggregated into the → Evidence set documentation.
Evidence quality A four-tier rating system applied to each evidence item: Primary (Tier 1 — original source), Secondary (Tier 2 — peer-reviewed interpretation), Tertiary (Tier 3 — textbook or encyclopedic synthesis), Testimonial (Tier 4 — expert assertion without direct evidence access). Rates structural proximity to original source, not accuracy. Canonical definition: Lexicon § Evidence Quality.
Evidence set The complete collection of → Evidence items cited in a claim, including supporting evidence, contrary evidence, and documentation of absent expected evidence types. Canonical definition: Lexicon § Evidence Set.
Expert independence The highest of VERA’s three verifier → Independence levels. The verifier has recognized expertise in the claim’s domain and is institutionally independent from the claimant.
Expert Verifier Onboarding → VERA-P-0011. A structured briefing and joint record protocol for engaging a domain expert as a VERA verifier when the expert has not been trained in VERA. Components: Criteria Translation Worksheet, pre-review briefing, structured interview, and joint Verification Record production.
F
First-pass verification rate The percentage of claims submitted for verification that are verified on the first submission without being returned for revision. Tracked as a quality metric at Level 4. A rate above ~90% suggests criteria are too permissive; below ~60% suggests claims are submitted before preparation is complete.
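The metric and its two heuristic thresholds can be sketched directly from the definition:

```python
def first_pass_rate(verified_first_try: int, submitted: int) -> float:
    """Percentage of submitted claims verified on first submission."""
    if submitted == 0:
        raise ValueError("no claims submitted")
    return 100.0 * verified_first_try / submitted

def rate_flag(rate: float) -> str:
    """Interpret the rate per the ~90% / ~60% heuristics above."""
    if rate > 90.0:
        return "criteria may be too permissive"
    if rate < 60.0:
        return "claims may be submitted before preparation is complete"
    return "within expected range"
```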
Foundational independence The lowest of VERA’s three verifier → Independence levels. The verifier is a different person from the claimant but has no specific domain competency or institutional independence requirement. Caps achievable → Epistemic confidence at 0.72.
G
Governance domain One of the six → Domain areas in the VERA Maturity Model. Measures whether and how VERA practice is mandated, resourced, audited, and improved at an organizational level. Typically lags other domains during early adoption because governance infrastructure requires demonstrated value before investment.
Ground truth The most reliable available evidence for a given claim — the closest available approximation to direct observation of the phenomenon the claim describes. Not an absolute standard; ground truth in VERA is always domain-relative and rated by → Evidence quality tier.
H
Hidden Assumption Excavation → VERA-P-0007. A systematic protocol for surfacing assumptions embedded in reasoning steps that the claimant does not realize they are making. Applies three interrogation questions to each step: Necessary Conditions (“What would need to be true for this to follow?”), Variance (“What am I treating as fixed that could vary?”), and Values and Framing (“Where is this reasoning sensitive to normative choices?”).
I
Impact triage In → VERA-P-0012, the structured assessment of how much each downstream claim is affected by an upstream claim’s state change. Produces three downstream action categories: High (immediate re-verification), Moderate (expedited review within 30 days), Low (annotation at next scheduled review).
Independent (independence classification) One of three → Evidence independence classifications. Items that demonstrably trace to separate primary sources. Full independent support is the strongest evidence basis for a claim.
Inductive inference An inference type in which the conclusion generalizes from observed instances to a broader pattern. Strong when instances are numerous, representative, and well-documented. One of VERA’s four labeled inference types. See also: → Inference type.
Inference type One of four labeled categories for the logical connection between premises and conclusion in a → Reasoning chain step: Deductive (conclusion necessarily follows), Inductive (conclusion generalizes from instances), Abductive (conclusion is the best explanation), Analogical (conclusion inferred from similarity to another case). Each step in a reasoning chain must label its inference type.
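A reasoning chain step could be modeled as a small typed record. The field names below are illustrative assumptions, not a VERA schema; they mirror the required elements (premises, labeled inference type, intermediate conclusion, confidence):

```python
from dataclasses import dataclass
from enum import Enum

class InferenceType(Enum):
    DEDUCTIVE = "Deductive"    # conclusion necessarily follows
    INDUCTIVE = "Inductive"    # conclusion generalizes from instances
    ABDUCTIVE = "Abductive"    # conclusion is the best explanation
    ANALOGICAL = "Analogical"  # conclusion inferred from similarity

@dataclass
class ReasoningStep:
    premises: list[str]            # evidence item IDs or prior step IDs
    inference_type: InferenceType  # each step must label its inference type
    conclusion: str                # the intermediate conclusion
    confidence: float              # 0.0-1.0
```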
Integration domain One of the six → Domain areas in the VERA Maturity Model. Measures how fully VERA practices are embedded in existing workflows, tools, and processes rather than existing as a parallel overhead system.
L
Lexicon The VERA chapter providing canonical definitions for all core framework terms. When a term appears in VERA documentation, it carries the meaning given in the Lexicon — not a colloquial meaning or a meaning from outside the framework. See: Foundations § Lexicon.
M
Maturity Level One of five stages in the → VERA Maturity Model: 1 — Aware, 2 — Exploring, 3 — Practicing, 4 — Governing, 5 — Sovereign. Levels are assessed per → Domain; a single composite level obscures meaningful variation. Canonical definition: Lexicon § Maturity Level.
Maturity Model VERA’s five-level, six-domain framework for assessing and developing VERA capability. Levels describe increasing systematization from initial awareness to full epistemic sovereignty. Domains cover the full VERA practice lifecycle. See: Maturity Model Overview.
Minimum Viable VERA The six-element floor of VERA documentation that constitutes genuine VERA practice rather than → Compliance surface: (1) a precise claim statement, (2) an assigned claim identifier, (3) a prospective search plan written before evidence collection, (4) a rated evidence set with absent evidence noted, (5) an explicit reasoning chain with at least one documented assumption, and (6) a verification record with per-criterion findings and a confidence rating. See: Getting Started § Minimum Viable VERA.
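A registry tool might screen claim records against this six-element floor. The dictionary keys below are hypothetical names for the six elements, chosen for this sketch only:

```python
# Hypothetical field names for the six Minimum Viable VERA elements.
MINIMUM_VIABLE_ELEMENTS = (
    "claim_statement",
    "claim_identifier",
    "prospective_search_plan",
    "rated_evidence_set",
    "reasoning_chain",
    "verification_record",
)

def missing_elements(record: dict) -> list[str]:
    """Return which of the six floor elements are absent or empty."""
    return [k for k in MINIMUM_VIABLE_ELEMENTS if not record.get(k)]
```

A record with any missing element would be flagged as compliance surface risk rather than treated as genuine VERA practice.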
P
Pattern A reusable, documented solution to a recurring challenge in evidence management, reasoning construction, or verification practice. Follows the → Pattern Template and is itself a verified VERA claim. Canonical definition: Lexicon § Pattern.
Pattern ID A unique, permanent identifier for a VERA pattern in the format VERA-P-[NNNN]. Assigned by the pattern registry upon submission. IDs never change; deprecated patterns retain their IDs marked Deprecated.
Pattern Template The canonical format for all VERA patterns, specifying thirteen required fields: Pattern ID, Classification, Context, Problem, Forces, Solution, Implementation, Evidence Requirements, Verification Criteria, Consequences, Known Uses, Related Patterns, and Verification Status. See: Foundations § Pattern Template.
Peer independence The middle of VERA’s three verifier → Independence levels. The verifier has relevant domain competency and no stake in the claim’s outcome. Provides stronger epistemic grounding than → Foundational independence.
Primary evidence → Evidence quality Tier 1. Original source material: direct observation, raw datasets, original documents, firsthand testimony from the event described. The closest available evidence to ground truth.
Process Sovereignty S4; one of VERA’s five → Sovereignty Principles. The requirement that the verification process is auditable and is structurally capable of producing negative results. A verification process controlled entirely by those who want a claim verified, or one that never produces a failed verification, violates this principle. See: Sovereignty Principles § S4.
Prospective search plan A documented list of evidence types expected to exist for a claim, written before evidence collection begins (Phase 2, Step 2.1). The prospective plan — not just the evidence found — is what allows → Absent evidence to be detected and documented. A search conducted without a prior plan cannot reliably identify what is missing.
Protocol version The version of the Verification Protocol used when verifying a claim. Recorded in the → Verification Record. Claims record the protocol version so that as the protocol evolves, the standards under which existing claims were verified remain auditable.
Q
Qualifier In the → Toulmin Argumentation Model, the modal strength of the claim (certainly, probably, presumably, etc.). Maps to VERA’s → Epistemic confidence rating and → Verification state, which together express how strongly the claim is held and at what stage of evaluation.
R
Reasoning Chain The explicit, step-by-step argument connecting an → Evidence set to a claim’s statement. Each step states its premises (evidence items or prior step conclusions), its → Inference type, intermediate conclusion, and confidence. Canonical definition: Lexicon § Reasoning Chain.
Reasoning domain One of the six → Domain areas in the VERA Maturity Model. Measures how explicitly and rigorously reasoning chains are constructed, documented, and evaluated. The most cognitively demanding domain to develop; strong intuition about “good reasoning” rarely translates immediately to explicit reasoning chain documentation.
Reasoning Gap An undocumented logical step in a → Reasoning chain — a premise that appears without sourcing. The most common location of consequential reasoning errors. VERA requires all gaps to be either filled with a documented step or flagged as an acknowledged assumption. Canonical definition: Lexicon § Reasoning Gap.
Reasoning Sovereignty S2; one of VERA’s five → Sovereignty Principles. The requirement that every step in a reasoning chain that informs someone’s decisions is made visible and challengeable to that person. Violated when AI-generated conclusions are used without exposing the AI’s reasoning, or when complex reasoning is summarized rather than documented step by step. See: Sovereignty Principles § S2.
Rebuttal In the → Toulmin Argumentation Model, exceptions or counter-arguments to the claim. Maps to VERA’s → Contrary evidence documentation and the four response types (outweigh, distinguish, qualify, concede) in the → Reasoning chain.
Registry Graveyard An anti-pattern at Level 3 in which a → Claim registry is maintained but not actively reviewed, accumulating stale, superseded, or out-of-scope claims that give a false impression of epistemic coverage. See: Level 3 § Anti-Patterns.
Review cadence A documented schedule for re-evaluating claims and their evidence, established in Phase 5 of the Verification Protocol (Step 5.4). Recommended intervals: 6 months (high-sensitivity domains such as medical, financial, regulatory), 12–18 months (organizational policy, technical standards), 36 months (historical, conceptual). Linked to → Decay class expiry dates.
S
Secondary evidence → Evidence quality Tier 2. Derived from primary sources through peer-reviewed expert interpretation, synthesis, or analysis. Reliable for most VERA claims; preferred when Primary evidence is not directly accessible.
Selective citation The practice of including only evidence that supports a claim while omitting evidence that complicates or contradicts it. A violation of → Evidence Primacy (VERA’s first principle). Includes cherry-picking, citing a document’s conclusions while omitting its caveats, and omitting → Absent evidence documentation. Canonical definition: Lexicon § Selective Citation.
Self-Verification with Adversarial Stance → VERA-P-0010. A structured self-verification protocol using role declaration, a mandatory 24-hour time gap, the → Adversarial checklist, and a → Confidence ceiling of 0.72.
Significance threshold The organizational definition of which claims require VERA treatment. Must be specific enough that any practitioner can apply it to any given claim without deliberation. Example: “Claims that will appear in external communications, inform decisions with irreversible consequences, or support capital allocations above $X.” See: Adoption Roadmap § Phase 3.
Similarity Score In → VERA-P-0009, the aggregate score from per-dimension similarity ratings, calculated as the sum of individual scores divided by (3 × number of dimensions). Score ranges map to permissible conclusion scope: 0.85–1.0 (strong analogical inference), 0.65–0.84 (moderate, with acknowledged differences), 0.40–0.64 (weak, conclusion scoped to areas of similarity), below 0.40 (analogy insufficient).
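The divisor of 3 × dimensions implies per-dimension ratings on a 0–3 scale; that scale is an inference from the formula, not stated here directly. A sketch under that assumption:

```python
def similarity_score(dimension_ratings: list[int]) -> float:
    """Aggregate per-dimension similarity ratings into a 0.0-1.0 score.

    A 0-3 rating scale per dimension is assumed, implied by the
    (3 x number of dimensions) divisor in the definition.
    """
    if not dimension_ratings:
        raise ValueError("at least one dimension is required")
    if any(r < 0 or r > 3 for r in dimension_ratings):
        raise ValueError("each rating must be between 0 and 3")
    return sum(dimension_ratings) / (3 * len(dimension_ratings))

def analogical_scope(score: float) -> str:
    """Map a Similarity Score to its permissible conclusion scope."""
    if score >= 0.85:
        return "strong analogical inference"
    if score >= 0.65:
        return "moderate, with acknowledged differences"
    if score >= 0.40:
        return "weak, conclusion scoped to areas of similarity"
    return "analogy insufficient"
```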
Source collapse The reasoning error of treating multiple → Evidence items as independent when they derive from the same underlying source. Inflates apparent evidential support. Canonical definition: Lexicon § Source Collapse.
Source Collapse Detection and Remediation → VERA-P-0004. An evidence source tree audit that reveals shared roots among evidence items and consolidates Dependent items into an accurate count of independent sources.
Sovereignty In VERA, the state in which an individual or organization retains ultimate authority over their evidence, reasoning, conclusions, and verification processes. Operationalized through the five → Sovereignty Principles (S1–S5). Canonical definition: Lexicon § Sovereignty.
Sovereignty domain One of the six → Domain areas in the VERA Maturity Model. Measures how fully the five → Sovereignty Principles are implemented in practice.
Sovereignty Principles VERA’s five binding design constraints that prevent epistemic sovereignty erosion: S1 Data Sovereignty, S2 Reasoning Sovereignty, S3 Conclusion Sovereignty, S4 Process Sovereignty, S5 Temporal Sovereignty. Any VERA implementation that violates them is not VERA-compliant regardless of how faithfully it follows the rest of the framework. See: Foundations § Sovereignty Principles.
Stale claim A claim whose evidence items have passed their → Review cadence date or → Decay class expiry date without a reconfirmation check. Must be flagged and not used in new reasoning chains without re-verification.
Statement The precise declarative sentence that constitutes a → Claim’s core assertion. Must be testable (two people could agree on what evidence would support or contradict it), scoped (conditions under which it holds are specified), and free of vague language (hedges are quantified or eliminated).
T
Temporal Sovereignty S5; one of VERA’s five → Sovereignty Principles. Authority over when claims are verified, reviewed, and retired, without non-epistemic pressure to accelerate or delay these processes. See: Sovereignty Principles § S5.
Tertiary evidence → Evidence quality Tier 3. Derived from secondary sources through textbook, encyclopedic, or journalistic synthesis. Acceptable for background context; insufficient as primary support for significant claims.
Testimonial evidence → Evidence quality Tier 4. An expert’s assertion or firsthand account where the underlying evidence is not directly accessible. Valid in VERA but carries the lowest quality rating; requires explicit documentation of the expert’s identity and basis for their testimony.
Time-Sensitive Evidence Management → VERA-P-0005. An evidence-level expiry annotation system with three → Decay class categories and monitoring protocols linked to claim review triggers.
Toulmin Argumentation Model A classical model of argument structure (claim, data, warrant, backing, qualifier, rebuttal) developed by philosopher Stephen Toulmin. VERA’s reasoning constructs are compatible with the Toulmin model; the Lexicon and Reasoning Chain are more operationally specific and add elements Toulmin does not include (inference type labeling, independence assessment, verification state). See: Appendix B § Toulmin.
U
Upstream dependency A verified → Claim used as an → Evidence item in another claim’s evidence set. Must be registered at claim creation time. State changes in upstream claims trigger → VERA-P-0012.
Urgent verification A modified verification path permitted when time constraints prevent the standard five-phase protocol. Applies a minimum viable criteria set (E1, R1, R2, F1) and publishes the claim in the Partial-Urgent state. Full verification must complete within 72 hours. See: Verification Protocol § Special Situations.
V
VERA Verified Evidence and Reasoning Architecture. A structured framework for ensuring that claims are traceable to evidence, conclusions are the product of explicit reasoning, and individuals and organizations retain sovereignty over the knowledge they generate and depend upon. Not a software system, certification program, or AI product; an architecture for organizing epistemic practice.
VERA-C-[YYYY]-[NNNN] The canonical format for → Claim identifiers. YYYY is the four-digit year of claim initiation; NNNN is a sequential number within that year.
VERA-P-[NNNN] The canonical format for → Pattern IDs. Assigned by the pattern registry upon submission. Permanent and never reused.
VERA-V-[NNNN] The canonical format for → Verification Record identifiers.
Verification The process of evaluating whether a claim’s → Evidence set and → Reasoning chain adequately support its statement. Conducted by a designated → Verifier against explicit criteria (the Verification Protocol Phase 4 criteria), producing a → Verification Record. Not proof; epistemic accountability. Canonical definition: Lexicon § Verification.
Verification Capture The failure mode in which the verification process is structurally unable to produce a negative result. A captured process endorses rather than evaluates. Canonical definition: Lexicon § Verification Capture.
Verification domain One of the six → Domain areas in the VERA Maturity Model. Measures how consistently and rigorously claims are submitted, evaluated, and resolved through the Verification Protocol.
Verification Protocol VERA’s five-phase procedural framework for producing verified claims: Phase 1 (Claim Formulation), Phase 2 (Evidence Assembly), Phase 3 (Reasoning Construction), Phase 4 (Verification Assessment), Phase 5 (Documentation and Publication). A versioned document; the current version is 1.0. See: Foundations § Verification Protocol.
Verification Record The immutable documentation produced by a verification event. Required contents: claim identifier, verifier identity and qualifications, verification date, protocol version, per-criterion findings, resulting verification state, confidence rating with justification, and notes. Canonical definition: Lexicon § Verification Record.
Verification State The current status of a claim in the VERA verification lifecycle. Six states: Unverified (○), Pending (◐), Partial (◑), Verified (●), Contested (◈), Refuted (✗). State transitions require explicit triggers and documentation. Canonical definition: Lexicon § Verification State.
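The six states and their display symbols can be encoded directly; this enum is an illustrative sketch, not an official schema, and deliberately omits a transition graph since the triggers are defined in the Lexicon rather than here:

```python
from enum import Enum

class VerificationState(Enum):
    """The six VERA verification states and their display symbols."""
    UNVERIFIED = "○"
    PENDING = "◐"
    PARTIAL = "◑"
    VERIFIED = "●"
    CONTESTED = "◈"
    REFUTED = "✗"

def state_for_symbol(symbol: str) -> VerificationState:
    """Look up a state by its display symbol."""
    return VerificationState(symbol)
```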
Verifier The individual or team responsible for Phase 4 of the Verification Protocol. Ideally independent from the claimant. Three independence levels recognized: → Foundational, → Peer, → Expert.
Verifier pool The set of practitioners qualified to conduct VERA verifications within a team or organization. Actively managed at Level 4: tracked for calibration consistency, assessed for independence, and developed through training and calibration exercises.
W
Warrant In the → Toulmin Argumentation Model, the reasoning connecting data to the claim — the bridge between evidence and conclusion. Maps directly to VERA’s → Reasoning chain, which makes the warrant explicit and step-by-step rather than leaving it implicit.
Appendix B: Standards Alignment
This appendix documents how VERA concepts and practices align with five major external frameworks. The mappings serve two purposes: they help practitioners in regulated or standards-oriented environments connect VERA work to existing compliance obligations, and they help organizations that already operate within these frameworks understand how VERA extends and operationalizes what those frameworks require.
None of the frameworks below requires VERA specifically; rather, VERA is designed to satisfy, and in many cases exceed, their epistemic quality requirements. Where VERA goes further than a framework, this is noted. Where a framework addresses something VERA does not, that gap is also noted.
NIST AI Risk Management Framework (AI RMF)
About the AI RMF
The NIST AI Risk Management Framework (NIST AI 100-1, released 2023) provides voluntary guidance for managing risks associated with AI systems across their development and deployment lifecycle. It organizes AI risk management around four functions: GOVERN, MAP, MEASURE, and MANAGE. Each function contains categories and subcategories addressed by specific actions and outcomes.
VERA is relevant to organizations implementing the AI RMF because AI-generated claims — conclusions, recommendations, predictions, and analyses produced by AI systems — are the epistemic objects at risk in AI-assisted decision making. VERA’s verification and sovereignty requirements directly address the traceability, transparency, and human oversight gaps that the AI RMF aims to close.
Mapping Table
| AI RMF Function & Category | AI RMF Requirement (summary) | VERA Mechanism |
|---|---|---|
| GOVERN 1.1 | Policies, processes, and procedures for AI risk management established | VERA Governance domain (Level 3+): documented VERA policy, named ownership, significance threshold, training requirement |
| GOVERN 1.2 | Accountability for AI risk established | VERA Governance domain: VERA owner role with formal accountability; Level 4: board-level epistemic quality reporting |
| GOVERN 1.4 | Organizational culture supports responsible AI | VERA’s sovereignty principles institutionalized at Level 3+; challenge process (S3) ensures culture permits disagreement |
| GOVERN 2.2 | Risk management processes take scientific findings into account | VERA’s Evidence Primacy principle; evidence quality tier system; Secondary and Primary evidence requirements |
| GOVERN 4.1 | AI RMF integrated into enterprise risk management | VERA Integration domain (Level 4): VERA metrics in governance reporting; decision gates requiring verified claims |
| MAP 1.5 | Organizational risk tolerance established | Significance threshold (which claims require VERA treatment) operationalizes risk tolerance for epistemic decisions |
| MAP 2.1 | Scientific and research knowledge reviewed | VERA evidence assembly (Phase 2): prospective search plan, quality rating, independence assessment |
| MAP 3.5 | AI system risks documented | VERA Claim records document the evidence and reasoning underlying AI-system risk assessments |
| MAP 5.1 | Likelihood and impact of AI risks assessed | VERA confidence ratings (0.0–1.0) and verification state provide quantified epistemic risk assessment |
| MEASURE 1.1 | AI risk measurement approaches identified | VERA Verification Protocol: explicit, versioned measurement criteria (E1–E5, R1–R5, F1–F3) |
| MEASURE 2.2 | AI system trustworthiness characteristics evaluated | VERA reasoning chain transparency (Reasoning Sovereignty, S2): AI reasoning captured and independently evaluated |
| MEASURE 2.5 | AI system performance evaluated | VERA verification of AI-generated claims (VERA-P-0003): Tier A/B/C handling with documented chain of custody |
| MEASURE 2.8 | Risks associated with AI system use monitored | VERA review cadence, decay class system (VERA-P-0005), and cascading claim update (VERA-P-0012) |
| MEASURE 4.1 | Measurement results documented | VERA Verification Records: immutable, per-criterion documentation of every verification event |
| MANAGE 1.3 | Responses to AI risks prioritized | VERA impact triage (VERA-P-0012): High/Moderate/Low classification determines response urgency |
| MANAGE 2.4 | AI risk treatment plans developed | VERA sovereignty assessment: documented gaps with owner accountability and remediation timelines |
| MANAGE 4.2 | Residual AI risks tracked | VERA claim registry with verification state and confidence tracking; stale claim monitoring |
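Several rows above reference the same underlying object: a claim record carrying a statement, its evidence links, its reasoning chain, a confidence rating in [0.0, 1.0], and a verification state. A minimal sketch of such a record follows; the field names and state labels are illustrative assumptions, not VERA's canonical schema.

```python
from dataclasses import dataclass

# Hypothetical state labels. VERA defines a formal verification state
# machine; the names used here are illustrative assumptions only.
STATES = ("unverified", "in_review", "verified", "contested", "refuted")

@dataclass
class ClaimRecord:
    claim_id: str              # stable registry identifier
    statement: str             # precise, scoped assertion (Phase 1)
    evidence_ids: list[str]    # links into the evidence set
    reasoning_steps: list[str] # explicit reasoning chain (S2)
    confidence: float          # calibrated rating in [0.0, 1.0]
    state: str = "unverified"  # current verification state

    def __post_init__(self):
        # Reject malformed records at creation time.
        if not 0.0 <= self.confidence <= 1.0:
            raise ValueError("confidence must be in [0.0, 1.0]")
        if self.state not in STATES:
            raise ValueError(f"unknown state: {self.state}")

claim = ClaimRecord(
    claim_id="CLM-0042",
    statement="Model X's false-positive rate is below 2% on dataset Y.",
    evidence_ids=["EV-0107", "EV-0113"],
    reasoning_steps=[
        "EV-0107 reports a 1.6% false-positive rate on dataset Y.",
        "EV-0113 independently replicates the measurement.",
    ],
    confidence=0.8,
)
```

The point of the sketch is traceability: every field the NIST mapping relies on (MAP 3.5 documentation, MAP 5.1 quantified risk, MEASURE 4.1 records) is a concrete attribute rather than an implicit convention.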
Where VERA Exceeds AI RMF Requirements
- Reasoning chain explicitness: The AI RMF requires transparency; VERA specifies a step-by-step structure for reasoning chains that goes beyond general transparency requirements.
- Evidence independence assessment: VERA’s source collapse detection (VERA-P-0004) has no direct AI RMF equivalent; it addresses an evidence quality failure mode the AI RMF does not name.
- Sovereignty as a binding constraint: VERA’s five sovereignty principles are stronger than the AI RMF’s accountability and transparency requirements, which do not explicitly address the ability of affected parties to challenge conclusions.
Where AI RMF Addresses More Than VERA
- Technical AI risk categories (bias, robustness, security, privacy): The AI RMF addresses the full range of AI-specific technical risks. VERA addresses the epistemic quality of claims about those risks, not the risks themselves.
- AI system lifecycle governance: The AI RMF addresses procurement, deployment, monitoring, and decommissioning of AI systems. VERA addresses the claims made about and by those systems.
EU AI Act
About the EU AI Act
The EU AI Act (Regulation (EU) 2024/1689, in force since August 2024, with most provisions applying from August 2026) establishes a tiered regulatory regime for AI systems based on risk level. High-risk AI systems — those with significant impacts on health, safety, fundamental rights, or critical infrastructure — face mandatory requirements including technical documentation, transparency, human oversight, and accuracy standards. General-purpose AI model providers also face specific obligations.
VERA is particularly relevant to organizations developing or deploying high-risk AI systems, for whom claims about AI system performance, risk, and fitness for purpose must be documented to a standard that supports regulatory review.
Mapping Table: High-Risk AI System Requirements
| EU AI Act Article | Requirement | VERA Mechanism |
|---|---|---|
| Art. 9 — Risk Management | Risk management system identifying, analyzing, estimating, and evaluating risks throughout the lifecycle | VERA claim registry: documented, verified claims about risk levels; VERA-P-0012 for lifecycle monitoring |
| Art. 10 — Data Governance | Training, validation, and testing data practices; data quality criteria | VERA evidence quality framework applied to data quality claims: Primary-tier evidence required for data governance assertions |
| Art. 11 — Technical Documentation | Documentation enabling assessment of system compliance before market placement | VERA claim records serve as the epistemic component of technical documentation; each claim traceable to evidence and reasoning |
| Art. 12 — Record-Keeping | Automatic logging of system operation; records retained for review | VERA Verification Records: immutable documentation of verification events; claim registry audit trail |
| Art. 13 — Transparency | AI system information disclosed to deployers; operation understandable to users | VERA Reasoning Sovereignty (S2): reasoning chains made visible to affected parties; VERA-P-0003 for AI-generated claim documentation |
| Art. 14 — Human Oversight | Measures enabling human monitoring, interpretation, and override of AI outputs | VERA Conclusion Sovereignty (S3): challenge process for verified claims; human verifier requirement in Phase 4 |
| Art. 15 — Accuracy, Robustness, Cybersecurity | Performance levels documented; system behavior predictable and resistant to manipulation | VERA confidence ratings document epistemic uncertainty in performance claims; VERA-P-0013 calibration ensures confidence ratings are meaningful |
| Art. 50 — Transparency for Certain AI | AI-generated content disclosed; interaction with AI disclosed | VERA-P-0003: mandatory AI provenance documentation for all AI-contributed evidence |
| Art. 53 — General-Purpose AI Model Providers | Technical documentation, training data summary, copyright policy | VERA evidence documentation practices applicable to training data claims and capability claims |
Where VERA Exceeds EU AI Act Requirements
- Evidence chain-of-custody: VERA requires documented chain of custody for every evidence item. The EU AI Act requires technical documentation but does not specify evidence provenance standards at the item level.
- Independent verification: VERA requires that verification be conducted by someone other than the claimant (at Peer or Expert independence level for significant claims). The EU AI Act does not specify who must conduct conformity assessment for internal quality systems.
- Reasoning chain explicitness: Art. 13 requires transparency at the system output level; VERA requires step-by-step reasoning chain documentation for any claim about the system.
Where the EU AI Act Addresses More Than VERA
- Legal obligations and enforcement: The EU AI Act is binding law with penalties. VERA is a voluntary framework. VERA evidence and reasoning documentation can support compliance demonstrations but does not itself constitute legal compliance.
- Conformity assessment procedures: Specific third-party assessment requirements for some high-risk AI categories have no VERA equivalent.
- AI system registration and market surveillance: Regulatory infrastructure requirements are outside VERA’s scope.
ISO/IEC 42001 — AI Management Systems
About ISO/IEC 42001
ISO/IEC 42001:2023 specifies requirements for an Artificial Intelligence Management System (AIMS) — a framework for responsibly developing, providing, or using AI systems. It follows the standard ISO high-level structure used by ISO 9001 (quality management) and ISO 27001 (information security), making it familiar to organizations already certified to those standards.
VERA is most relevant to ISO 42001’s operational (Clause 8) and performance evaluation (Clause 9) requirements, where claims about AI system quality, risk, and impact must be documented and assessed.
Mapping Table
| ISO 42001 Clause | Requirement | VERA Mechanism |
|---|---|---|
| 4.1 — Context | Understanding internal and external factors affecting AIMS | VERA sovereignty assessment (S1–S5): identifies epistemic dependencies on external tools and systems |
| 4.2 — Interested Parties | Needs and expectations of interested parties | VERA Conclusion Sovereignty (S3) and Reasoning Sovereignty (S2): interested parties retain ability to access and challenge claims |
| 5.2 — AI Policy | Top management establishes AI policy including objectives | VERA Governance domain (Level 3): documented VERA policy with named ownership; Level 4: leadership accountability for epistemic quality |
| 5.3 — Roles and Responsibilities | Responsibilities for AIMS defined and communicated | VERA owner role, claim ownership, verifier pool management |
| 6.1 — Risk Assessment | AI risks identified, analyzed, and evaluated | VERA maturity assessment as structured risk identification; verification criteria (E1–E5, R1–R5) as risk evaluation criteria |
| 6.2 — Objectives | AI management objectives established and measurable | Level 4 VERA metrics: first-pass verification rate, confidence distribution, sovereignty assessment scores |
| 7.2 — Competence | Competence of persons affecting AI system performance | VERA training requirements (onboarding), calibration exercises (VERA-P-0013), verifier qualification |
| 7.5 — Documented Information | Required documentation maintained | VERA claim registry, evidence sets, verification records, sovereignty assessment: comprehensive documented information |
| 8.2 — AI Risk Treatment | AI risks treated per risk treatment plan | VERA impact triage (VERA-P-0012), sovereignty remediation plans with owner accountability |
| 8.3 — Operational Controls | Controls for AI system operation documented | VERA Verification Protocol: explicit, versioned operational procedure for claims about AI systems |
| 9.1 — Monitoring and Measurement | Performance monitored against objectives | Level 4: verification quality metrics reviewed on governance cadence; confidence calibration (VERA-P-0013) |
| 9.2 — Internal Audit | Internal audits of AIMS | Level 5: governance function applies VERA methods to its own conclusions; verification records are audit-ready |
| 9.3 — Management Review | Top management reviews AIMS performance | Level 4 governance reporting: VERA quality metrics in leadership reporting |
| 10.1 — Continual Improvement | AIMS continually improved | VERA pattern development: recurring challenges documented as patterns; protocol improvements proposed through community governance |
Where VERA Complements ISO 42001
ISO 42001 specifies what must be documented and governed but leaves significant discretion on how. VERA provides the how: the specific evidence quality standards, reasoning chain structure, verification criteria, and sovereignty requirements that give ISO 42001’s requirements operational content in the epistemic quality domain.
An organization implementing ISO 42001 that adopts VERA for its claims about AI systems will have more rigorous documentation than ISO 42001 minimally requires, and that documentation will be structured to survive audit.
Where ISO 42001 Addresses More Than VERA
- AI system design and testing: ISO 42001 addresses technical requirements for AI systems themselves (Annex A). VERA addresses claims about those systems, not system design.
- Supply chain and third-party requirements: ISO 42001 includes requirements for AI supply chain management. VERA addresses the evidence and reasoning documentation practices that support supply chain assessments.
CMMI v2.0
About CMMI v2.0
The Capability Maturity Model Integration (CMMI) v2.0 is a process improvement framework that describes five levels of capability maturity and a set of practice areas covering the full software and product development lifecycle. CMMI is widely used in defense, government, and commercial software development. Its five-level maturity progression is the direct inspiration for VERA’s own five-level model.
Level Correspondence
| CMMI Level | CMMI Name | Defining characteristic | VERA Level | VERA Name | Correspondence |
|---|---|---|---|---|---|
| 1 | Initial | Unpredictable, reactive, ad hoc | 1 | Aware | Understanding exists; no systematic practice |
| 2 | Managed | Basic project management applied | 2 | Exploring | VERA applied to selected claims; not yet institutionalized |
| 3 | Defined | Organization-wide standard processes | 3 | Practicing | VERA is organizational policy; all significant claims covered |
| 4 | Quantitatively Managed | Measurement and statistical control | 4 | Governing | VERA quality measured; improvement program active |
| 5 | Optimizing | Continuous improvement and innovation | 5 | Sovereign | Self-referential VERA application; community contribution |
The correspondence is strong enough that organizations already at CMMI Level 3 will find the governance concepts in VERA Level 3 familiar, and organizations targeting CMMI Level 4 will find VERA Level 4 metrics directly analogous to CMMI’s quantitative management requirements.
Practice Area Mapping
| CMMI Practice Area | CMMI Requirement (summary) | VERA Mechanism |
|---|---|---|
| DAR — Decision Analysis and Resolution | Alternatives analyzed against criteria before significant decisions | VERA Verification Protocol: explicit criteria (Phase 4) applied before claims are accepted as verified |
| REQM — Requirements Management | Requirements managed; changes tracked and communicated | VERA claim formulation (Phase 1): precise scoped statements; claim versioning; dependency notification (VERA-P-0012) |
| PPQA — Process and Product Quality Assurance | Processes and work products objectively evaluated | VERA verification independence requirement: evaluator distinct from creator; per-criterion findings documented |
| CAR — Causal Analysis and Resolution | Causes of defects analyzed; corrective actions taken | VERA contested claim process: challenges trigger structured re-evaluation; refuted claims traced to error source |
| MA — Measurement and Analysis | Measurement needs identified; measurement data analyzed | VERA Level 4 metrics: first-pass rate, rework rate, confidence distribution, contested claim rate; reviewed on governance cadence |
| OPF — Organizational Process Focus | Process strengths and weaknesses identified; improvement plans | VERA reasoning error taxonomy (Level 4): common failures tracked; training updated based on taxonomy |
| OT — Organizational Training | Training needs identified and met | VERA competency in onboarding; calibration exercises; VERA mentor assignment for new practitioners |
| RDM — Requirements Development and Management | Requirements elicited, developed, and verified | VERA claim decomposition (VERA-P-0006): compound assertions broken into independently verifiable components |
Where VERA Differs from CMMI
CMMI addresses the full software and product development lifecycle across practice areas including technical solution, supplier management, and service delivery. VERA addresses only the epistemic quality of claims — a narrower but more precise scope. An organization implementing both applies VERA as the epistemic quality layer within CMMI’s broader process framework.
CMMI’s maturity levels apply to the organization’s capability across all practice areas. VERA’s maturity levels apply to epistemic practice specifically. An organization can be at CMMI Level 3 overall while being at VERA Level 1 in the Evidence domain — the two assessments measure different things.
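The Level 4 metrics named in the MA row (first-pass verification rate, contested claim rate, confidence distribution) are simple aggregates over the claim registry. A sketch under an assumed registry shape; the field names are illustrative, not VERA's canonical schema.

```python
from statistics import mean

# Each registry entry is a dict; the keys here are assumptions
# chosen for the example, not a prescribed format.
registry = [
    {"id": "CLM-01", "confidence": 0.9, "passed_first_review": True,  "contested": False},
    {"id": "CLM-02", "confidence": 0.6, "passed_first_review": False, "contested": True},
    {"id": "CLM-03", "confidence": 0.8, "passed_first_review": True,  "contested": False},
]

def level4_metrics(claims):
    """Aggregate registry entries into Level 4 quality metrics."""
    n = len(claims)
    return {
        "first_pass_rate": sum(c["passed_first_review"] for c in claims) / n,
        "contested_rate": sum(c["contested"] for c in claims) / n,
        "mean_confidence": mean(c["confidence"] for c in claims),
    }

m = level4_metrics(registry)
```

Reviewed on a governance cadence, these aggregates play the same role as CMMI's quantitative management data: trends in the first-pass rate or contested rate signal where the epistemic process needs attention.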
Toulmin Argumentation Model
About the Toulmin Model
The Toulmin Model of Argumentation, developed by philosopher Stephen Toulmin in The Uses of Argument (1958), describes the structure of practical arguments using six elements: Claim, Data, Warrant, Backing, Qualifier, and Rebuttal. It is widely taught in rhetoric, philosophy, and writing instruction, and has influenced formal argumentation frameworks in AI and law.
VERA’s reasoning constructs are compatible with the Toulmin Model. The primary relationship is that VERA operationalizes the Toulmin Model for organizational practice — making the elements explicit, versioned, and subject to systematic verification.
Element Mapping
| Toulmin Element | Definition | VERA Equivalent | VERA Enhancement |
|---|---|---|---|
| Claim | The assertion being argued for | → Statement (within a VERA Claim) | VERA adds: identifier, provenance, scope definition, verification state |
| Data | The facts, evidence, or grounds cited in support | → Evidence Set | VERA adds: quality tier rating, independence assessment, chain of custody, absent evidence documentation, decay class |
| Warrant | The principle connecting data to claim — the reasoning bridge | → Reasoning Chain | VERA makes the warrant fully explicit: each step documented with premises, inference type, intermediate conclusion, and confidence |
| Backing | Support for the warrant — evidence that the warrant holds | → Evidence quality tier documentation and source credibility assessment | VERA formalizes backing into the four-tier quality system and chain-of-custody requirements |
| Qualifier | The modal strength of the claim (certainly, presumably, probably) | → Epistemic confidence (0.0–1.0) + → Verification State (○/◐/◑/●/◈/✗) | VERA replaces natural-language qualifiers with a calibrated numerical scale and a formal state machine |
| Rebuttal | Exceptions, counter-arguments, or conditions under which the claim does not hold | → Contrary evidence documentation + four response types (outweigh, distinguish, qualify, concede) | VERA requires documented assessment of every rebuttal and a formal response selection; rebuttal cannot be omitted |
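The table above can be read as a translation recipe: each Toulmin element becomes a field in a VERA-style record. A minimal sketch follows; the field names and the qualifier-to-confidence values are illustrative assumptions, not VERA's canonical mapping.

```python
def toulmin_to_vera(claim, data, warrant, backing, qualifier, rebuttal):
    """Translate the six Toulmin elements into VERA-style record fields.

    The natural-language qualifier is mapped to a numeric confidence;
    the scale values below are assumed for the example only.
    """
    qualifier_scale = {"certainly": 0.95, "presumably": 0.75, "probably": 0.6}
    return {
        "statement": claim,                        # Claim -> Statement
        "evidence_set": list(data),                # Data -> Evidence Set
        "reasoning_chain": list(warrant),          # Warrant -> explicit steps
        "evidence_quality_notes": backing,         # Backing -> quality tier docs
        "confidence": qualifier_scale[qualifier],  # Qualifier -> 0.0-1.0 rating
        "contrary_evidence": list(rebuttal),       # Rebuttal -> documented, with response
    }

record = toulmin_to_vera(
    claim="The release is ready to ship.",
    data=["All 412 regression tests pass", "No open P1 defects"],
    warrant=["Passing regressions plus zero P1s meets the documented release bar"],
    backing="Test suite coverage report (Secondary tier)",
    qualifier="presumably",
    rebuttal=["Load testing has not yet been run"],
)
```

Note what the translation forces: the warrant must be written out, the qualifier must commit to a number, and the rebuttal cannot be omitted, which is exactly the operationalization the section describes.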
Key Differences
Explicitness of the warrant. In the Toulmin Model, warrants are often implicit — the connection between data and claim is unstated and assumed to be understood. VERA requires that the reasoning chain be written out step by step. This is the most significant practical difference: VERA’s reasoning chain is the Toulmin warrant made fully explicit and independently evaluable.
Verification as a separate phase. The Toulmin Model describes argument structure; it does not specify a process for evaluating arguments. VERA adds Phase 4 (Verification Assessment) as a formal, criteria-driven evaluation by an independent party. This is absent from Toulmin.
Versioning and state. The Toulmin Model describes arguments as static structures. VERA adds verification state (which changes over time), confidence ratings (which can be revised), and version management (which allows claims to be updated when evidence changes). VERA claims are living epistemic objects; Toulmin arguments are static descriptions.
Independence assessment. VERA’s evidence independence assessment (source collapse detection, VERA-P-0004) has no Toulmin equivalent. Toulmin’s “data” encompasses multiple evidence items without assessing their independence.
Sovereignty. The Toulmin Model has no epistemic sovereignty concept. VERA’s five sovereignty principles are VERA-specific commitments about who retains authority over the epistemic process — not present in any classical argumentation theory.
When to Use the Toulmin Model Alongside VERA
The Toulmin Model is most useful as a teaching tool for practitioners who are learning to identify the elements of an argument before they are ready to document them in full VERA format. Walking through “what is the claim? what is the data? what is the warrant?” in Toulmin terms is a good preparation for writing a VERA reasoning chain.
In organizational contexts where VERA is not yet implemented, Toulmin-style analysis of key arguments (before they are formally documented as VERA claims) can reveal warrant and rebuttal gaps that otherwise remain invisible. This is a useful bridge practice at Level 1 (Aware) as practitioners develop the habit of explicit reasoning before they implement the full protocol.
Cross-Framework Summary
The following table shows at a glance which aspects of VERA address which frameworks’ requirements.
| VERA Mechanism | NIST AI RMF | EU AI Act | ISO 42001 | CMMI | Toulmin |
|---|---|---|---|---|---|
| Evidence quality tiers | MAP 2.1 | Art. 10 | 7.5 | RDM | Data |
| Reasoning chain documentation | MEASURE 2.2 | Art. 13 | 8.3 | DAR | Warrant |
| Verification Protocol (Phase 4) | MEASURE 1.1 | Art. 15 | 9.1 | PPQA | — |
| Claim registry + records | MEASURE 4.1 | Art. 12 | 7.5 | REQM | — |
| Challenge process | MANAGE 1.3 | Art. 14 | 10.1 | CAR | Rebuttal |
| Sovereignty principles (S1–S5) | GOVERN 4.1 | Art. 14 | 4.2 | — | — |
| Maturity model | GOVERN 1.1 | — | 9.3 | All levels | — |
| Quality metrics (Level 4) | MANAGE 4.2 | Art. 9 | 9.1 | MA | — |
| AI evidence documentation (P-0003) | MEASURE 2.5 | Art. 50 | 8.2 | — | — |
| Pattern library | MANAGE 2.4 | — | 10.1 | OPF | — |
| Training and calibration | GOVERN 1.4 | — | 7.2 | OT | — |
Appendix C: Changelog
v1.0.0 — 2026-02-21
First complete release — all sections complete.
- Glossary: comprehensive alphabetical glossary of ~80 terms; each entry includes a substantive definition, cross-references to related terms, and a chapter citation for the canonical definition; coverage spans all five book sections
- Standards Alignment: full mapping tables for five external frameworks — NIST AI RMF (16-row GOVERN/MAP/MEASURE/MANAGE mapping), EU AI Act (9-row article-level mapping for high-risk AI provisions), ISO/IEC 42001 (13-row clause mapping), CMMI v2.0 (level correspondence table + 8-row practice area mapping), Toulmin Argumentation Model (6-element correspondence + key differences + usage guidance); 11×5 cross-framework summary table
Verification Protocol version: 1.0
Pattern library: VERA-P-0001 through VERA-P-0013 — all Verified
v0.4.0 — 2026-02-21
Implementation section complete.
- Getting Started: prerequisites, first-claim selection criteria, full five-phase walkthrough on a concrete example (“defect rate reduced 35% post code-review adoption”), post-first-claim guidance, seven common early mistakes, minimum viable VERA specification
- Adoption Roadmap: Phase 0 assessment through Phase 6 governance; individual and team tracks; six common obstacle scenarios with specific remediation guidance; roadmap summary table
- Tooling & Integration: five non-negotiable tool requirements + three Level 3+ requirements; minimum viable toolkit; detailed registry implementation for five tool environments (spreadsheet, Notion, Obsidian, Confluence, plain Markdown + Git); evidence folder structure and Evidence Item Record format; AI tool integration and sovereignty assessment protocol; workflow integration patterns (decision documents, meeting notes, research outputs, onboarding); tooling-by-maturity-level table; five-minute sovereignty export test
v0.3.0 — 2026-02-21
Patterns Library section complete.
- Pattern Catalog: full index of 13 patterns with domain/complexity/level table; navigation by domain, maturity level, and use case; quick-reference entries with anchor links
- Evidence Patterns: VERA-P-0001 (reference + summary), VERA-P-0002 Conflicted Source Disclosure, VERA-P-0003 AI-Generated Evidence Documentation, VERA-P-0004 Source Collapse Detection and Remediation, VERA-P-0005 Time-Sensitive Evidence Management
- Reasoning Patterns: VERA-P-0006 Compound Claim Decomposition, VERA-P-0007 Hidden Assumption Excavation, VERA-P-0008 Contrary Evidence Integration, VERA-P-0009 Analogical Reasoning Validation
- Verification Patterns: VERA-P-0010 Self-Verification with Adversarial Stance, VERA-P-0011 Expert Verifier Onboarding, VERA-P-0012 Cascading Claim Update, VERA-P-0013 Claim Confidence Calibration
All 13 patterns include: Context, Problem, Forces, Solution, Implementation steps, Evidence Requirements, Verification Criteria, Consequences, Known Uses, Related Patterns, and Verification Status block.
v0.2.0 — 2026-02-21
Maturity Model section complete.
- Maturity Model Overview: full 5×6 grid, domain descriptions, level characterizations, self-assessment methodology, common trajectories, domain interdependencies
- Level 1 — Aware: full domain detail, observable indicators, common mis-assessments, Level 1→2 transition steps, self-assessment checklist
- Level 2 — Exploring: full domain detail, Level 2 variability analysis, common traps (Showcase, Form-Without-Substance, Perfectionism, Champion Fatigue), Level 2→3 transition requirements, self-assessment checklist
- Level 3 — Practicing: full domain detail, why Level 3 is the primary target, Level 3 anti-patterns (Compliance Surface, Significance Threshold Creep, Registry Graveyard), Level 3→4 transition requirements, self-assessment checklist
- Level 4 — Governing: full domain detail, governance trap analysis, Level 4→5 transition requirements, self-assessment checklist
- Level 5 — Sovereign: full domain detail, Level 5 conditions and obligations, sustainability threats, self-assessment checklist (evidence-based)
v0.1.0 — 2026-02-21
Initial release — Foundations complete.
- Added `book.toml`, `SUMMARY.md`, and `introduction.md`
- Foundations section: Philosophy & Principles, Lexicon, Verification Protocol, Pattern Template, Sovereignty Principles
- Maturity Model: overview and level stubs (Levels 1–5)
- Patterns Library: catalog stub and domain stubs
- Implementation: chapter stubs
- Reference: Glossary index, Standards Alignment stub, this Changelog
Verification Protocol version: 1.0
Pattern library: VERA-P-0001 (Absence-of-Evidence Assessment) — Verified