Verify your audit log

Every Wiredepth audit-log row carries a sha256 hash derived from the previous row's hash, forming a tamper-evident hash chain. We publish daily snapshots of the chain head at a public endpoint that requires no Wiredepth account; anyone with an exported audit log can re-compute every hash, compare the result to the published anchor, and prove no row was modified, deleted, or reordered after the fact.

This specification is frozen.

Changing the canonicalisation rules below would invalidate every existing chain segment. New columns or fields would need to be addressed via a documented chain restart, not by modifying the encoding. Treat this page as a contract.

What the chain proves

The audit log is the record of every consequential action on a customer's Wiredepth account: logins, scan runs, alert dispatches, workpaper downloads, plan changes. Without a chain, an attacker (or a careless admin) could rewrite history in the database: delete a login row, change a timestamp, swap one user's action onto another. The customer's only recourse would be to take Wiredepth's word that we didn't.

With the chain, every row's hash is derived from the previous row's hash, so modifying any row invalidates every subsequent hash. The current head and historical daily heads are published at a public URL Wiredepth cannot retroactively rewrite (well-known anchors, third-party caches, archive.org snapshots). To convince a verifier the log is authentic, the chain you produce must hash to the same value that was published.

Row fields included in the hash

Each chained row hashes these eight fields. Any column not listed (notably the bigserial id) is not hashed - the chain is intentionally insertion-order-dependent but id-value-independent, so a sequence reset doesn't break verification.

Field       Type           Notes
action      string         e.g. 'login', 'pdf.download'
created_at  string         ISO-8601 UTC; client-set at queue time
ip          string | null  caller IP or null
metadata    object | null  free-form; canonicalised recursively
prev_hash   string | null  row_hash of preceding row; null on genesis
resource    string | null  e.g. domain or scan id
user_agent  string | null  caller User-Agent
user_id     string | null  uuid or null
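
As a reading aid, here is one plausible TypeScript shape for these eight fields; the interface name AuditRow is ours, not part of the spec, and later sketches on this page assume it:

interface AuditRow {
  action: string;                            // e.g. 'login', 'pdf.download'
  created_at: string;                        // ISO-8601 UTC string
  ip: string | null;
  metadata: Record<string, unknown> | null;  // free-form JSONB
  prev_hash: string | null;                  // null on genesis
  resource: string | null;
  user_agent: string | null;
  user_id: string | null;
}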

Canonical JSON encoding

  1. The payload is a single JSON object containing exactly the eight keys above (no extras, no omissions).
  2. Object keys are emitted in lexicographic (alphabetical) order.
  3. Strings use standard JSON escapes (the same encoding JSON.stringify emits in JavaScript or encoding/json emits in Go for a plain string).
  4. null and missing values both encode as JSON null.
  5. metadata (a JSONB column in Postgres) is recursively canonicalised by the same rules - keys sorted, values normalised, no whitespace.
  6. created_at is the ISO-8601 string the application captured when the row was queued; the verifier MUST use the value from the export, not re-derive it from a timestamp.
  7. NO whitespace anywhere in the canonical encoding.
  8. The encoded string is encoded as UTF-8 bytes and hashed with sha256.

Reference implementation in TypeScript:

function canonicalJson(value: unknown): string {
  // null and undefined both encode as JSON null (rule 4).
  if (value === null || value === undefined) return 'null';
  const t = typeof value;
  // Primitives use standard JSON encoding (rule 3).
  if (t === 'string' || t === 'number' || t === 'boolean') {
    return JSON.stringify(value);
  }
  if (Array.isArray(value)) {
    return '[' + value.map(canonicalJson).join(',') + ']';
  }
  if (t === 'object') {
    // Keys sorted lexicographically, no whitespace (rules 2 and 7).
    const obj = value as Record<string, unknown>;
    const keys = Object.keys(obj).sort();
    return '{' + keys.map(k =>
      JSON.stringify(k) + ':' + canonicalJson(obj[k])
    ).join(',') + '}';
  }
  return JSON.stringify(value);
}
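
For a quick sanity check, a hypothetical input shows the key ordering, recursive canonicalisation, and null handling in one call:

// Keys come out sorted at every depth; undefined collapses to null.
canonicalJson({ b: 1, a: { z: null, y: 'x' }, c: undefined });
// => '{"a":{"y":"x","z":null},"b":1,"c":null}'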

row_hash algorithm

Given a row R and the previous row's hash prev_hash (or null on the chain genesis):

payload = canonicalJson({
  action:      R.action,
  created_at:  R.created_at,
  ip:          R.ip,
  metadata:    R.metadata,
  prev_hash:   prev_hash,
  resource:    R.resource,
  user_agent:  R.user_agent,
  user_id:     R.user_id,
})

row_hash = sha256(utf8_bytes(payload)).hex().lowercase()
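
A minimal Node.js sketch of this step, assuming the canonicalJson function and the AuditRow shape above:

import { createHash } from 'node:crypto';

function rowHash(row: AuditRow, prevHash: string | null): string {
  // Exactly the eight fields, fed through the canonical encoder.
  const payload = canonicalJson({
    action: row.action,
    created_at: row.created_at,
    ip: row.ip,
    metadata: row.metadata,
    prev_hash: prevHash,
    resource: row.resource,
    user_agent: row.user_agent,
    user_id: row.user_id,
  });
  // sha256 over the UTF-8 bytes; .digest('hex') is already lowercase.
  return createHash('sha256').update(payload, 'utf8').digest('hex');
}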

Public anchor endpoint

GET /api/v1/audit/anchors returns daily head-hash snapshots for the last 90 days plus the live chain head. No authentication required. Cache-friendly. CORS-open so a third-party verifier can query directly from a browser.

{
  "format": "v1",
  "algorithm": "sha256",
  "spec": "/docs/verify",
  "anchors": [
    {
      "date": "2026-05-12",
      "headHash": "abc123...",
      "headRowId": 12345,
      "rowCount": 12345,
      "createdAt": "2026-05-13T00:05:01.234Z"
    },
    ...
  ],
  "current": {
    "headHash": "...",
    "headRowId": 99999,
    "rowCount": 99999,
    "asOf": "2026-05-13T10:30:00.000Z"
  }
}

Once an anchor row is written it's immutable - the publishing job uses ON CONFLICT DO NOTHING on the date primary key. Third parties who pin an anchor today can verify it months later against the same head.
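
A sketch of the comparison a verifier performs against this endpoint, assuming the response shape shown above (the host name is a placeholder):

// Compare a locally re-computed head against the published anchors.
async function headIsAnchored(localHead: string): Promise<boolean> {
  const res = await fetch('https://example.wiredepth.test/api/v1/audit/anchors');
  const body = await res.json() as {
    current: { headHash: string };
    anchors: { date: string; headHash: string }[];
  };
  return body.current.headHash === localHead ||
    body.anchors.some(a => a.headHash === localHead);
}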

RFC 3161 third-party timestamps. Each anchor head is submitted to a public Time Stamping Authority (TSA) and the resulting TSR token is stored alongside the anchor as tsrTokenB64. A verifier base64-decodes the token and runs openssl ts -verify against the TSA's published cert chain to confirm the head hash existed at a specific time, independent of Wiredepth's clock. Anchors published while the TSA was unreachable have tsrTokenB64: null; the publishing job re-attempts the timestamp on subsequent runs.
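
One way to run that check from Node.js, sketched under the assumptions that the anchor entry carries tsrTokenB64 as described and that you have downloaded the TSA's cert chain locally (both file names here are placeholders):

import { execFile } from 'node:child_process';
import { writeFile } from 'node:fs/promises';
import { promisify } from 'node:util';

const execFileP = promisify(execFile);

async function verifyTimestamp(tsrTokenB64: string, headHash: string) {
  // Decode the stored RFC 3161 token to a file openssl can read.
  await writeFile('anchor.tsr', Buffer.from(tsrTokenB64, 'base64'));
  // openssl prints "Verification: OK" and exits 0 on success;
  // a non-zero exit rejects the promise.
  const { stdout } = await execFileP('openssl',
    ['ts', '-verify', '-digest', headHash, '-in', 'anchor.tsr',
     '-CAfile', 'tsa-chain.pem']);
  console.log(stdout.trim());
}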

Chain segments

Audit rows that existed before the chain migration carry NULL chain columns by default. A separate cron-triggered backfill (/api/cron/backfill-audit-chain) processes those rows in id order and computes a chain over them, producing a pre-migration segment that's internally consistent but does NOT link to the active chain segment (mutating active rows' prev_hash would invalidate every previously-published anchor head).

The published anchor covers the active segment only. Verifiers that download an export spanning both segments should reset the chain check at the segment boundary (a row whose prev_hash is null in the middle of an otherwise-linked export marks a new segment). The reference verifiers at /verify and the CLI subcommand do this automatically.
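
A sketch of that reset rule, reusing rowHash from above; row_hash here is the stored value being checked:

function verifyExport(rows: (AuditRow & { row_hash: string })[]): boolean {
  let prev: string | null = null;
  for (const row of rows) {
    // A null prev_hash mid-export marks a segment boundary: reset.
    if (row.prev_hash === null) prev = null;
    // Linkage: the stored prev_hash must equal the previous row's hash.
    if (row.prev_hash !== prev) return false;
    // Integrity: re-compute the hash over the canonical payload.
    if (rowHash(row, prev) !== row.row_hash) return false;
    prev = row.row_hash;
  }
  return true;
}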

Running the verifier

Three ways to verify, ordered from least to most setup:

  • Browser verifier at /verify. Paste your exported audit chain (JSON Lines) and the page re-computes every hash client-side using the Web Crypto API, returning a pass/fail verdict plus the first hash that did not match. Nothing leaves your browser.
  • Open-source CLI verifier: postvale audit verify chain.jsonl in the public WiredepthHQ/postvale-cli repo. No Wiredepth account required to run it. Verifies the chain against either a pasted anchor or the live /api/v1/audit/anchors response.
  • Implement your own. The spec above is sufficient; the canonicalisation function is ~15 lines in any language with a JSON encoder and a sha256 primitive. The reference TypeScript implementation lives at src/lib/audit-chain.ts in the webapp repository.

v2: per-user chain proofs (shipped)

The v1 chain is global: rows from every customer interleave by insertion order. A customer verifying their own scoped export under v1 alone can re-compute every row's hash but can't prove the chain wasn't modified at rows they don't own. v2 adds a per-user chain alongside the global chain so a single tenant verifies their own segment end-to-end against their personal daily anchor.

Every chained row now carries TWO pairs of hash columns:

  • prev_hash + row_hash - v1 global chain (rows linked by global insertion order)
  • user_prev_hash + user_row_hash - v2 per-user chain (rows linked to the previous row from the same user). NULL on anonymous events; rows without a user_id stay outside the per-user chain by design.

Both hashes use the same canonicalisation rules and sha256 function documented above - the only difference is which prev_hash field feeds the canonical payload. The v2 daily anchor publishes per-user heads at /api/v1/audit/anchors under the user key (authenticated callers only). The browser verifier at /verify and the CLI postvale audit verify subcommand check both chains automatically and report per-user results alongside the global verdict.
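
In sketch form, under the reading that the payload's prev_hash slot is simply fed the per-user pointer:

// Same payload, same sha256; only the prev pointer changes.
function userRowHash(row: AuditRow, userPrevHash: string | null): string | null {
  // Anonymous events carry no user_id and stay outside the per-user chain.
  if (row.user_id === null) return null;
  return rowHash(row, userPrevHash);
}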

v3: Merkle inclusion proofs (shipped)

v3 commits all per-user daily heads into a binary Merkle tree and publishes the root as the day's third trust artifact, alongside the global anchor and the per-user anchors. Each authenticated caller's anchor entry at /api/v1/audit/anchors now ships with an inclusionProof for their head and the matching merkleRoot; verifiers walk the proof and confirm their head is in the root without re-fetching other tenants' data.

Tree rules (frozen):

  1. Leaf = sha256(<user_id>:<head_hash>) hex. Embedded colon disambiguates concatenation collisions.
  2. Leaves sorted ascending (case-sensitive lexicographic hex) before tree construction. Determinism without recording order.
  3. Internal nodes = sha256(left || right) over the raw 32-byte child hashes, NOT hex. When the level has an odd count, the last node is duplicated to pair with itself.
  4. Empty leaf list -> root = sha256(empty string).
  5. Inclusion proof = ordered list of (sibling_hash, side) pairs from leaf level up to (but not including) the root. The side label is L if the sibling sits on the left of the accumulator, R on the right.
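
A hedged sketch of proof verification under rules 1, 3, and 5 (the sibling/side field names are ours; the anchors response may spell them differently):

import { createHash } from 'node:crypto';

const sha256 = (data: Buffer | string): Buffer =>
  createHash('sha256').update(data).digest();

function verifyInclusion(
  userId: string,
  headHash: string,
  proof: { sibling: string; side: 'L' | 'R' }[], // sibling hashes as hex
  merkleRoot: string,
): boolean {
  // Rule 1: leaf = sha256("<user_id>:<head_hash>").
  let acc = sha256(`${userId}:${headHash}`);
  for (const { sibling, side } of proof) {
    const sib = Buffer.from(sibling, 'hex');
    // Rule 3: internal nodes hash the raw 32-byte children, left || right.
    acc = side === 'L'
      ? sha256(Buffer.concat([sib, acc]))
      : sha256(Buffer.concat([acc, sib]));
  }
  return acc.toString('hex') === merkleRoot;
}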

Daily Merkle roots are publicly listed under merkle: […] in the anchors response. Customers verify their head against the root without needing other tenants' rows; auditors verify Wiredepth can't silently rewrite a day's anchor without breaking the published root.

FAQ

What stops Wiredepth from publishing a fake anchor that matches a rewritten chain?

Nothing on day one - the trust property strengthens with time. As soon as a third party pins an anchor (a search-engine cache, archive.org, an auditor's saved copy, an off-Wiredepth RSS subscriber), Wiredepth cannot retroactively rewrite a day's anchor without that third party noticing the mismatch. The RFC 3161 third-party timestamps on each daily anchor head, described above, remove the dependency on any specific cache.

What about rows from before this feature shipped?

Pre-migration rows carry NULL chain columns and live before the chain begins. The verifier accepts them as out-of-scope: the chain only proves integrity from the first row with a populated row_hash onwards. The migration date is logged as an audit event of its own.

Can the customer export the entire chain or only their own rows?

Logged-in customers export only their own audit rows; the chain still verifies against the global anchor because the prev_hash values link to rows belonging to other tenants whose hashes the customer can fetch (just the hash, not the row content) from the same export endpoint. Auditors with a scoped invite see only the inviting customer's rows.

Why not Sigstore / certificate transparency?

Long-term we may integrate with the Sigstore Rekor log so each daily anchor head is itself recorded in a globally public append-only ledger. The current design is one step simpler and ships now; Rekor integration is additive and doesn't change the canonicalisation spec.