
Restore the data compartmentalization AI connectors erase

The moment you connect Claude or ChatGPT to Drive, Gmail, or any internal API, every seat in your AI tenant can potentially pull every other user's data through the shared connector — and the agent can take actions the user could never authorize directly. PortEden re-imposes the boundaries: per-user data scoping, per-request action enforcement, and a signed audit trail of every decision.

Compartmentalization collapses the moment you connect

Every native AI connector ships with the same gaps. Together they let any seat in your AI tenant reach data they were never meant to see — and perform actions no policy ever authorized.

User A pulls User B's data through the same connector

Most AI connectors authenticate once at the workspace level and serve the entire AI tenant. Anyone with a Claude or ChatGPT seat can prompt the connector for any record it can technically reach — across teams, matters, projects, and clearance levels. The per-user boundaries your IdP and your data platform enforce do not survive the connector.

OAuth scopes describe capabilities, not boundaries

"gmail.readonly" grants the whole mailbox; "drive.readonly" grants every shared drive. No connector spec lets you pin access to one matter folder, one project, one date range, or one purpose. There is no scope for "only what this user, on this team, for this purpose, should see."

Agents perform unenforced operations

Once a connector can send mail, share files, create tickets, or call billing endpoints, an LLM under prompt injection or a confused workflow can chain those actions in ways no human approval ever sanctioned. The agent's effective authority is the union of every verb in every scope — not the narrower authority the user actually has.

Three pillars of enterprise AI governance

Identity sync that survives the connector

SCIM 2.0 from Okta, Microsoft Entra ID, and Google Workspace pins every Claude, ChatGPT, Copilot, and MCP request to a current, named identity. Joiner-mover-leaver events propagate in seconds. A deprovisioned user cannot keep an agent session alive on a stale token.

Per-user data scoping and per-action enforcement

Every tool call is bound to the prompting user's identity and decided against subject, resource, action, AI-client, environment, and context attributes. User A's request only returns what User A is entitled to see. Send, share, delete, and external-write verbs are individually gated — an agent cannot chain destructive operations that the user has no policy to perform.
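As a rough sketch of that per-request model, the decision can be thought of as a default-deny match over subject, resource, action, and AI-client attributes. The attribute names, policy store, and `decide` function below are illustrative only, not PortEden's actual API:

```python
# Hypothetical sketch of per-request ABAC evaluation with an in-memory
# policy store. Everything here (POLICIES, decide, the attribute set) is
# an assumption for illustration, not PortEden's real interface.
from dataclasses import dataclass

@dataclass(frozen=True)
class Request:
    subject: str      # SCIM-synced user identity
    resource: str     # e.g. "drive:/matters/2419"
    action: str       # e.g. "read", "send_email", "share_external"
    ai_client: str    # e.g. "claude", "chatgpt"

# Each entry grants one (subject, resource-prefix, action, clients) tuple.
POLICIES = [
    ("alice@corp.com", "drive:/matters/2419", "read", {"claude"}),
    ("bob@corp.com",   "drive:/matters/7001", "read", {"claude", "chatgpt"}),
]

def decide(req: Request) -> str:
    """Default deny: allow only if some entry matches every attribute."""
    for subject, prefix, action, clients in POLICIES:
        if (req.subject == subject
                and req.resource.startswith(prefix)
                and req.action == action
                and req.ai_client in clients):
            return "allow"
    return "deny"

# Alice reading her own matter through Claude is allowed...
print(decide(Request("alice@corp.com", "drive:/matters/2419/brief.pdf", "read", "claude")))      # allow
# ...but the same shared connector refuses Bob's matter, and any ungated verb.
print(decide(Request("alice@corp.com", "drive:/matters/7001/brief.pdf", "read", "claude")))      # deny
print(decide(Request("alice@corp.com", "drive:/matters/2419/brief.pdf", "send_email", "claude")))  # deny
```

The point of the sketch: the same connector grant yields different answers per user, per resource, and per verb, because the decision happens at the request, not at the OAuth grant.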

Auditable evidence per agent and per request

Tamper-evident log of every authorization decision with the policy version, attribute snapshot, and outcome. SIEM stream to Splunk, Datadog, Elastic, or S3 — signed CSV exports for SOC 2 CC6.1, HIPAA §164.312(b), and ISO 27001 A.5.15 access reviews.
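One common way to make a decision log tamper-evident is to hash-chain the records, so editing any past entry breaks every later hash. The field names and chaining scheme below follow the description above but are a sketch, not PortEden's actual record format:

```python
# Minimal sketch of a hash-chained audit log. Field names
# (policy_version, outcome, ...) are illustrative assumptions.
import hashlib
import json

def append_record(log, record):
    prev = log[-1]["hash"] if log else "0" * 64
    body = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    log.append({"record": record, "prev": prev, "hash": digest})

def verify(log):
    prev = "0" * 64
    for entry in log:
        body = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log = []
append_record(log, {"subject": "alice@corp.com", "action": "read",
                    "policy_version": "v42", "outcome": "allow"})
append_record(log, {"subject": "alice@corp.com", "action": "send_email",
                    "policy_version": "v42", "outcome": "deny"})
print(verify(log))   # True
log[0]["record"]["outcome"] = "deny"   # tamper with history
print(verify(log))   # False: the chain no longer verifies
```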

Compliance map

How AI data compartmentalization helps you satisfy the controls your auditors read

SOC 2 CC6.1 / CC6.3 — Logical access controls & user access
What PortEden does: Per-request access decisions across every AI client and connector. Continuous evidence collection via SIEM stream supports CC6.1 and CC6.3 populations without manual sampling.
Evidence: Per-request decision log · SIEM-stamped evidence

HIPAA §164.312(a)(1) — Access control (technical safeguard)
What PortEden does: Unique user identification on every tool call, short-lived scoped tokens, encryption (TLS 1.3 + AES-256), and emergency access via audited break-glass — applied uniformly across Claude, ChatGPT, Copilot, Gemini, and MCP servers.
Evidence: Short-lived scoped JWTs · audited break-glass

HIPAA §164.502(b) — Minimum necessary
What PortEden does: Compartmentalization at the request boundary: each AI tool call is narrowed to the smallest scope a workflow needs (one patient panel, one folder, one verb). Over-broad OAuth scopes are masked by policy-side narrowing.
Evidence: Per-request scope narrowing · default-deny on missing purpose

GDPR Art. 5(1)(b) & 5(1)(c) — Purpose limitation & data minimization
What PortEden does: Purpose is a required policy attribute. Requests without a registered purpose are denied. Data is filtered to the minimum necessary for the stated purpose before it reaches the model.
Evidence: Purpose-attribute gating · per-request minimization log

ISO 27001 A.5.15 — Access control
What PortEden does: Documented access-control policy expressed as code. Roles and attributes inherit from the IdP via SCIM; access reviews exportable as signed evidence per AI vendor and per integration.
Evidence: Policy-as-code · signed access-review CSV

NIST 800-53 AC-6 — Least privilege
What PortEden does: Six-layer enforcement narrows every agent request from the broad OAuth grant to the least-privilege scope a task needs. Privilege escalation requires explicit approval recorded in the audit trail.
Evidence: Six-layer per-request narrowing · approval-trail audit

Built for procurement

DPA available
Subprocessor list
SIG / CAIQ pre-filled
Pen-test report on request
Book a demo

Talk to our enterprise team

30-minute discovery call. Bring your security questionnaire.

Frequently Asked Questions

Why don't OAuth scopes already solve compartmentalization for AI agents?
OAuth scopes describe whole capabilities ("read mail," "edit drive") — not data boundaries. A connector authorized with "gmail.readonly" sees the entire mailbox; there is no scope for "only label:matter-2419" or "only messages tagged confidential=false." Once that grant exists, every prompt from every seat in your AI tenant can pull from it. PortEden adds a request-time policy layer between the AI client and the OAuth scope, so the compartments you express in policy (folder, label, project, purpose, AI client, time window) are the compartments the agent actually sees.
How does PortEden compartmentalize a single Claude or ChatGPT connection across teams and matters?
A single PortEden tenant can host many policy groups, each with its own scope on the shared connector. Team A's Claude requests evaluate against Team A's policy bundle (their folders, their labels, their purposes); Team B's evaluate against theirs. The OAuth grant to the data source is shared at the platform layer, but every individual tool call is narrowed at the request layer before it reaches the model. Per-team and per-matter audit trails fall out of this automatically.
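To make "shared grant at the platform layer, narrowed at the request layer" concrete, here is a toy sketch in which each team's bundle maps the same connector onto its own compartments. The bundle shape, labels, and `narrow` function are illustrative assumptions:

```python
# Hypothetical per-team policy bundles over one shared connector.
# Bundle contents (labels, purposes) are invented for illustration.
TEAM_BUNDLES = {
    "team-a": {"labels": {"matter-2419"}, "purposes": {"litigation"}},
    "team-b": {"labels": {"matter-7001"}, "purposes": {"diligence"}},
}

def narrow(team, requested_label, purpose):
    """Return the narrowed query the connector may run, or None (deny)."""
    bundle = TEAM_BUNDLES.get(team)
    if bundle is None:
        return None                       # unknown group: default deny
    if requested_label not in bundle["labels"]:
        return None                       # outside the team's compartment
    if purpose not in bundle["purposes"]:
        return None                       # unregistered purpose
    return f"label:{requested_label}"     # narrowed scope, not the full grant

print(narrow("team-a", "matter-2419", "litigation"))  # label:matter-2419
print(narrow("team-a", "matter-7001", "litigation"))  # None (Team B's matter)
```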
A user prompts Claude about another user's matter — what stops the agent from returning it?
PortEden binds every tool call to the prompting user's identity (from SCIM-synced Okta, Entra ID, or Google Workspace) and evaluates that identity against the policy bundle on every request. If User A's policy doesn't grant access to User B's matter folder, the underlying tool call to Drive or Gmail is denied at the policy layer — the data never enters the model's context. The denial emits a per-request audit record naming the identity, the requested resource, and the policy version that decided it. The shared connector cannot be used as a back-door past per-user entitlements.
How do you stop an agent from performing operations the user couldn't authorize directly?
Actions (send_email, share_external, delete, create_ticket, post_to_channel, hit_billing_endpoint, and so on) are first-class policy attributes evaluated alongside the subject and resource. Read access does not imply write access; write access does not imply external-share access; external-share access does not imply send-on-behalf access. Each verb is gated independently with default deny. High-impact actions can require human-in-the-loop approval (Slack, Teams, webhook to ticketing) that pauses the agent's tool call until the approver responds — even if the underlying OAuth scope technically permits the operation.
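The verb-gating logic described in this answer can be sketched as independent grants plus a hold state for high-impact actions. The verb names come from the list above; the approval mechanism and return values are illustrative assumptions:

```python
# Sketch of independent verb gating with default deny and a
# human-in-the-loop hold; not PortEden's actual policy language.
ALLOWED_VERBS = {"read", "send_email"}            # each verb granted separately
NEEDS_APPROVAL = {"send_email", "share_external", "delete"}

def gate(verb, approved=False):
    if verb not in ALLOWED_VERBS:
        return "deny"                     # default deny: no implied verbs
    if verb in NEEDS_APPROVAL and not approved:
        return "pending_approval"         # tool call pauses for a human
    return "allow"

print(gate("read"))                        # allow
print(gate("share_external"))              # deny (read does not imply share)
print(gate("send_email"))                  # pending_approval
print(gate("send_email", approved=True))   # allow
```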
We use Okta / Entra ID — how fast does deprovisioning reach a running agent?
SCIM 2.0 from Okta, Microsoft Entra ID, and Google Workspace propagates joiner-mover-leaver events in seconds end to end. The next tool call after deprovisioning is denied at the policy layer with a per-request audit record naming the identity event that triggered the denial. There is no propagation delay because revocation is checked on the request path, not via a downstream cache that has to invalidate.
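The reason there is no invalidation lag is that status is read on the request path rather than cached per grant, which a few lines can illustrate. The directory shape and `authorize` helper are assumptions for illustration:

```python
# Sketch: revocation checked on the request path. Every tool call consults
# the current SCIM-synced directory, so a deprovisioned user is denied on
# the very next call even if an old token is still in the agent's hands.
DIRECTORY = {"alice@corp.com": "active", "bob@corp.com": "active"}

def authorize(identity):
    # No cached grant: status is read from the live directory per request.
    return DIRECTORY.get(identity) == "active"

print(authorize("bob@corp.com"))              # True
DIRECTORY["bob@corp.com"] = "deprovisioned"   # SCIM leaver event lands
print(authorize("bob@corp.com"))              # False: next call is denied
```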
Does this work for MCP servers and direct REST API agents alike?
Yes. PortEden hosts MCP servers for Claude (Desktop and Web), ChatGPT (via Connectors), Cursor, Gemini, and Grok, and exposes a REST API for direct integrations. Every tool call — MCP or REST — traverses the same six-layer policy engine and emits the same audit record. Policies, compartments, redaction, and identity binding are uniform across surfaces; there is no MCP-only or API-only gap that a connector could slip through.
Can the same AI client be allowed for one workload and denied for another?
Yes. AI client identity (vendor, model, region, MCP server identity) is a first-class attribute in the policy engine. You can require Claude for clinical workloads, deny ChatGPT for confidential resources, and route Copilot only through M365-tenanted data — all on the same shared connector. Per-vendor policies compose cleanly with per-team and per-resource policies.
What audit evidence do we get for an access review?
Every authorization decision emits an audit record: who (identity from your IdP), what (resource and action), which AI client, which policy version, the attribute snapshot, and the outcome. The audit boundary is the tool call — PortEden does not see the user's prompt or the model's reply. Records stream to Splunk, Datadog, Elastic, or S3 in real time, and signed CSV exports satisfy SOC 2, HIPAA, and ISO access-review evidence requirements.

Ready to govern AI across your organization?

Book a discovery call. Bring your security questionnaire — DPA, subprocessor list, and pen-test summary available on request.