Every prompt, every redaction, every decision — logged, signed, and SIEM-exportable.
PortEden's AI audit trail is the chain-of-custody record for AI activity: tamper-evident, cryptographically chained, and streamed in real time to Splunk, Datadog, Elastic, or S3. One vendor-neutral timeline across Claude, ChatGPT, Copilot, and MCP servers — producing the per-request evidence that SOC 2 CC7.2, HIPAA §164.312(b), and GDPR Art. 30 auditors typically request.
Free tier · No credit card · Works with any AI client
AI activity is invisible to your SIEM, your DLP, and your auditor.
Browser tabs bypass network egress. AI vendor consoles surface summary data, not per-request evidence. Each tool — Claude, ChatGPT, Copilot, MCP — keeps its own log silo, if it logs at all. When the auditor asks "what client data has touched OpenAI in 2026?", you have nothing to point at. The audit trail layer is missing.
An auditor asks what client data has touched OpenAI in 2026
You're reconstructing from screenshots, Slack threads, and OAuth grant lists. There is no single timeline, no per-request evidence, and no way to prove what was redacted before it left.
A user pastes a customer thread into ChatGPT
There's no record of it. It never reaches your DLP because the browser tab bypasses your network egress controls. Your SIEM has nothing on it. The leak is invisible to incident response.
Compliance review can't reconstruct who saw what, when
Every AI tool keeps its own siloed log, if it keeps one at all — Claude, ChatGPT, Copilot, MCP servers. None of them speaks SIEM. Stitching them together for an audit takes weeks and is never complete.
One feed, every AI client, every event.
A representative slice of the live audit feed. Every authorization, redaction, and filter is a record — actor, AI client, integration, decision, and detail — cryptographically chained and streamed to your SIEM.
Six event categories, one timeline.
Every event PortEden captures fits one of six categories. Each carries the same shared fields — actor, AI client, integration, policy version, evidence hash — so cross-category investigation is one query, not six.
Authentication & session
Who logged in from where, with what factor, on which AI client.
- User sign-in / sign-out with IdP outcome
- MFA challenge issued / passed / failed
- AI client OAuth grant / revoke / re-consent
- Session start / refresh / expiry
- Impossible-travel and anomaly flags
- Service-account and machine-identity logins
- Break-glass admin elevations
Authorization decisions
Per-layer policy outcome for every request that crossed the boundary.
- Visibility layer outcome (free/busy, filename-only, full)
- Contact-rules layer outcome (allowed / excluded / overridden)
- Action-limits outcome (read / write / send / delete)
- Time-window outcome (in-window / out-of-window scoped)
- Account-scope outcome (which workspaces / mailboxes / drives)
- Data-reduction outcome (which fields masked)
- Final allowed-payload size and shape
Redaction events
Which rules fired, with category counts and reversible placeholders.
- PHI rule fires (count by sub-category — names, MRN, DOB)
- PCI rule fires (PAN, CVV, expiry detection)
- Secrets rule fires (API key, token, certificate, password)
- Custom regex / dictionary rule fires
- ML classifier confidence and category
- Reversible placeholder issued / consumed
- Original-vs-redacted hash pair for evidence
Data access
What resources were read, by whom, on behalf of which AI client.
- Resource type and identifier (message, file, event, ticket)
- AI client identity (Claude, ChatGPT, Copilot, MCP)
- Integration (Gmail, Drive, Calendar, Slack, Jira)
- Payload size in / out (bytes, token-equivalent)
- Source IP and geo for the AI client
- Custom tags (matter ID, patient panel, project code)
- Cache hit / miss for repeated requests
Admin & policy change
Who changed what, with a diff, an approver, and a replayable version.
- Policy create / edit / delete with full diff
- Role assignment / removal per user or group
- Integration connect / disconnect / re-auth
- Retention setting changes
- SIEM destination configuration changes
- Approver decisions on change-control workflows
- Tenant settings (region, isolation flags, retention)
System & integration
Sync runs, integration health, errors — the operational ground truth.
- Integration sync start / success / failure
- Rate-limit hit / backoff event from upstream
- Token refresh outcomes (success, refused, revoked)
- Schema change detection on a connected source
- Background-job durations and queue depth
- Internal error with stack-fingerprint hash
- Health-check transitions (green / yellow / red)
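Because every category carries the same shared fields, a cross-category pivot really is one query. A minimal sketch of what that shared record shape could look like — field names here are illustrative assumptions, not PortEden's actual schema:

```python
from dataclasses import dataclass, asdict

# Illustrative sketch of the shared fields every event category carries.
# Field names are assumptions for illustration, not PortEden's real schema.
@dataclass
class AuditEvent:
    category: str        # one of the six categories above
    actor: str           # human or service identity
    ai_client: str       # e.g. "claude", "chatgpt", "copilot", "mcp"
    integration: str     # e.g. "gmail", "drive", "slack"
    policy_version: str  # policy active when the event was written
    evidence_hash: str   # chained hash linking to the previous event
    detail: dict         # category-specific payload

# Cross-category investigation is one filter over the shared pivot fields:
events = [
    AuditEvent("authorization", "jamie", "claude", "calendar",
               "policy_2026_q2_v17", "ab12...", {"outcome": "allowed"}),
    AuditEvent("redaction", "jamie", "claude", "gmail",
               "policy_2026_q2_v17", "cd34...", {"rule": "phi_names"}),
]
by_actor = [asdict(e) for e in events if e.actor == "jamie"]
```

One filter on `actor` returns records from every category — no per-category queries to merge.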
Capture. Sign. Stream. Query.
1. Capture
Every event is captured at the integration boundary — auth, authorization, redaction, data access, admin, system. The same enforcement point that filters the data also writes the evidence, so there is no gap between what happened and what's logged.
2. Sign
Each record is cryptographically signed and chained — the hash of every event includes the previous hash. Daily anchors land in append-only storage. Any insertion, deletion, or edit breaks the chain and is detectable on verification.
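The chaining scheme above can be sketched in a few lines — assuming SHA-256 over JSON-serialized records; PortEden's actual signing and daily-anchor format is not shown here:

```python
import hashlib
import json

def chain_hash(prev_hash: str, record: dict) -> str:
    # Each event's hash covers the previous hash plus the record body,
    # so any insertion, deletion, or edit breaks every later hash.
    body = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(prev_hash.encode() + body).hexdigest()

def verify_chain(events: list[dict]) -> bool:
    prev = "genesis"
    for e in events:
        if e["hash"] != chain_hash(prev, e["record"]):
            return False  # chain broken: tampering is detectable here
        prev = e["hash"]
    return True

# Build a two-event chain, then tamper with the first record.
log, prev = [], "genesis"
for record in [{"action": "read"}, {"action": "delete_blocked"}]:
    prev = chain_hash(prev, record)
    log.append({"record": record, "hash": prev})

assert verify_chain(log)
log[0]["record"]["action"] = "write"   # edit an old event
assert not verify_chain(log)           # detected on verification
```

Editing any historical record invalidates its hash and every hash after it, which is what makes gaps and edits detectable.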
3. Stream
Events ship to your SIEM in real time — Splunk, Datadog, Elastic, Sentinel, Chronicle, S3. Typical end-to-end latency is 2–4 seconds. SIEM is the source of truth for long-term retention; PortEden holds the hot tier for investigation.
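For the Splunk destination, shipping an event reduces to a POST against Splunk's HTTP Event Collector. The sketch below builds that request with the standard library; the endpoint URL, token, and `sourcetype` are placeholders, and Datadog, Elastic, and S3 use their own APIs:

```python
import json
import urllib.request

# Illustrative: build a Splunk HTTP Event Collector (HEC) request for one
# audit event. URL, token, and sourcetype below are placeholder values.
def build_hec_request(event: dict, hec_url: str, token: str) -> urllib.request.Request:
    payload = json.dumps({"event": event, "sourcetype": "porteden:audit"})
    return urllib.request.Request(
        hec_url,
        data=payload.encode(),
        headers={
            "Authorization": f"Splunk {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_hec_request(
    {"category": "authorization", "actor": "jamie", "decision": "allowed"},
    "https://splunk.example.com:8088/services/collector",  # placeholder
    "00000000-0000-0000-0000-000000000000",                # placeholder
)
# urllib.request.urlopen(req) would send it; omitted in this sketch.
```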
4. Query
An ad-hoc investigation UI lets compliance and DFIR teams pivot across actor, AI client, integration, and tag without touching the SIEM. Any filtered view exports as a signed CSV bundle that an auditor can verify independently.
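Independent verification of an exported bundle works roughly like this sketch. PortEden publishes a public key for asymmetric verification; HMAC is used here only as a self-contained stand-in for the real signature step, and the CSV content is invented for illustration:

```python
import hashlib
import hmac

# Sketch: sign a CSV export over its SHA-256 digest, then verify it.
# HMAC stands in for the real public-key signature in this illustration.
def sign_bundle(csv_bytes: bytes, key: bytes) -> str:
    digest = hashlib.sha256(csv_bytes).digest()
    return hmac.new(key, digest, hashlib.sha256).hexdigest()

def verify_bundle(csv_bytes: bytes, signature: str, key: bytes) -> bool:
    return hmac.compare_digest(sign_bundle(csv_bytes, key), signature)

key = b"demo-key"  # placeholder
csv = b"timestamp,actor,ai_client,decision\n2026-04-22T14:22:08Z,jamie,claude,allowed\n"
sig = sign_bundle(csv, key)

assert verify_bundle(csv, sig, key)                               # intact export
assert not verify_bundle(csv.replace(b"allowed", b"denied"), sig, key)  # any edit fails
```

Any byte changed in the exported CSV invalidates the signature, which is what lets an auditor check the bundle without trusting the exporter.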
One audit entry, everything an auditor needs.
Click any row in the live feed and you see the full breakdown. Actor, AI client, integration, policy version, per-layer outcome, redactions applied, payload sizes, and a chained evidence hash. Operational detail without engineer-speak.
Calendar.events.list — allowed (filtered)
Jamie asked Claude Desktop to summarize meetings from the last six months. PortEden allowed the request, narrowed the time window, blocked the delete action, and recorded each layer outcome.
On 22 April 2026 at 14:22:08 UTC, Jamie asked Claude to read calendar events. Policy policy_2026_q2_v17 was live. Visibility was narrowed to free/busy. The personal calendar was excluded. The 6-month window was scoped to 30 days. The delete action was blocked. No PHI, PCI, or secrets were present. Every claim above is independently verifiable from the chained evidence hash.
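Serialized, that audit entry could look like the record below. The values come from the example itself; the field names and structure are illustrative assumptions, not PortEden's actual export format:

```python
# Illustrative serialization of the audit entry described above.
# Field names are assumptions; values come from the worked example.
entry = {
    "timestamp": "2026-04-22T14:22:08Z",
    "actor": "jamie",
    "ai_client": "claude-desktop",
    "integration": "google-calendar",
    "action": "Calendar.events.list",
    "policy_version": "policy_2026_q2_v17",
    "layers": {
        "visibility": "free_busy",
        "contact_rules": {"personal_calendar": "excluded"},
        "time_window": {"requested_days": 180, "scoped_days": 30},
        "action_limits": {"read": "allowed", "delete": "blocked"},
    },
    "redactions": {"phi": 0, "pci": 0, "secrets": 0},
    "decision": "allowed_filtered",
}
```

Every claim in the narrative maps to a field an auditor can check against the chained evidence hash.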
The same incident, two very different outcomes.
Citations, not vague reassurances.
The audit trail maps directly to the clauses your auditor is reading. Evidence is exportable as a signed CSV bundle or a PDF evidence pack — independently verifiable with the published PortEden public key.
Every source the AI tries to reach into.
One audit trail, six regulated workflows.
Evidence at the boundary, not stitched after the fact.
Vendor consoles surface summary data. PortEden writes the evidence at the same enforcement point that filters the data — so the log is the ground truth, not a reconstruction. One timeline, every AI client, cryptographically chained.
Tamper-evident chain
Each event hash includes the previous hash. Daily anchors land in append-only storage. Gaps and edits are detectable on verification — even if the tenant itself is fully compromised.
Streamed in real time
Events leave PortEden in seconds, not nightly batches. SIEM is the source of truth for long-term retention; PortEden's UI is the hot tier for ad-hoc investigation.
One timeline, every AI client
Claude, ChatGPT, Copilot, Gemini, MCP servers — all surface in the same audit view. No per-vendor consoles to stitch together. Vendor-neutral by design.
Pairs well with
Audit trail questions
What is an AI audit trail and why do I need one?
What gets logged exactly?
Is the audit log tamper-evident?
Does it integrate with Splunk, Datadog, Elastic, and S3?
Can I export signed CSVs for auditors?
How long are events retained?
Can I redact PII inside the audit log itself?
Does it cover MCP servers and autonomous agents?
What evidence does this produce for HIPAA §164.312(b) auditors?
What's the latency to SIEM?
Can I tag events with custom metadata?
What pricing tier includes audit trail?
When your auditor asks "who let ChatGPT see this?" — have the answer in one query.
Five minutes to install. Every prompt, every redaction, every authorization decision is signed and SIEM-ready from the first request — so the timeline already exists when the question lands. Free tier for solo users; Enterprise adds tamper-evident chaining, signed CSV exports, S3 archival, and 7-year retention.