Express the rule your auditor reads — and let it run on every AI request.
PortEden's PBAC engine evaluates attribute-rich policies — subject, resource, action, AI-client, environment, context — on every request from Claude, ChatGPT, Copilot, Gemini, and MCP servers. Roles cover the 80%; PBAC expresses the edge cases your auditor cares about. Live evaluator, Git-versioned, mapped to NIST AC-3(4) and SOC 2 CC6.3.
Free tier · No credit card · Works with any AI client
Your real access rules don't fit in a role bundle.
Roles are great for the 80% case — Engineering reads repos, Sales reads CRM. The 20% that auditors fail you on lives in conditions: time of day, contractor status, document sensitivity, ethical-wall side, geographic region, prior approval. PBAC is the layer where those conditions become enforceable expressions instead of tribal knowledge.
Roles aren't expressive enough
"Engineers can read repos" doesn't capture "engineers can read repos, but not the auth-service repo after 8pm without an approver." Roles describe groups, not the conditions that should actually gate access.
Policy lives in screenshots and Slack
Your real access rules exist in tribal knowledge — onboarding docs, channel pins, the answer one director gives in the standup — not in any system that can enforce them. Auditors find the gap; users find the workaround.
Edge cases drive most audit findings
Auditors don't fail you on the obvious 80% — those are well-handled by roles. They fail you on the "we forgot to handle the contractor case" 20%: time windows, geo restrictions, ethical walls, approval-required actions.
One expression, enforced on every AI request.
This is what a PBAC rule looks like in PortEden — a labeled expression, not source code. The live evaluator runs it against real traffic before deployment and records every match for the audit trail.
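For a rough textual sketch of the same idea, here is how the "auth-service repo after 8pm without an approver" rule from earlier might read in the YAML surface described in the steps below. The layout, operator names, and attribute keys are illustrative assumptions, not PortEden's published schema.

```yaml
# Illustrative only: layout and operator syntax are assumptions,
# not PortEden's published policy schema.
policy: auth-service-repo-after-hours
when:
  subject.team: engineering
  resource.project: auth-service
  action: read
  env.time_local: { after: "20:00" }
  context.approval: { not: granted }
then: deny
```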
Six attribute categories, resolved at request time.
PBAC for AI is only as expressive as the attributes it can match. PortEden pulls from your IdP, your integrations, the request itself, and the surrounding context — so policies can describe the rules your team actually follows.
Subject attributes
Who is asking?
subject.role = contractor · subject.team = legal · subject.clearance >= secret
- Role and role bundles
- Team and department
- Clearance level and security tier
- Employment status (FTE, contractor, intern, vendor)
- Manager and reporting chain
- IdP group membership
- Custom subject attributes (e.g., matter team, cost center)
Resource attributes
What is being accessed?
resource.label = confidential · resource.project = matter-X · resource.retention = legal-hold
- Sensitivity label (public, internal, confidential, restricted)
- Owner and shared-with set
- Project, matter, or workspace tag
- Retention class (legal hold, archive, ephemeral)
- Confidentiality and PII / PHI flags
- Source integration and account
- Custom resource attributes (e.g., client ID, region)
Action attributes
What does the AI want to do?
action = share_external · action = send_on_behalf · action = summarize
- Read, write, delete, modify
- Share external, share internal
- Send on behalf, draft only
- Summarize, analyze, extract
- Approve, sign, route
- Bulk vs single-record action
- Per-integration action namespaces
AI-client attributes
Which AI is asking?
client.vendor = anthropic · client.model = claude-3.5 · client.region = us-east
- Vendor (Anthropic, OpenAI, Microsoft, Google)
- Model and version
- MCP server identity and signature
- Deployment region and tenant
- Client trust tier (managed, BYO, third-party)
- Session origin (desktop, browser, API, agent)
- Custom client attributes (e.g., approved-vendor flag)
Environmental attributes
Where, when, and from what network?
env.time_local NOT BETWEEN 09:00 AND 18:00 · env.country = US · env.network = corporate
- Time of day, day of week, holiday calendar
- Geolocation and country
- Network (corporate, VPN, public)
- Device posture and managed-device status
- Request rate and burst patterns
- Source IP and ASN
- Tenant region and data-residency zone
Context attributes
What surrounds the request?
context.approval = granted · context.break_glass = active · context.parent = matter-review
- Parent request and request chain
- Prior consent state and consent scope
- Approval state (pending, granted, expired)
- Break-glass token presence and remaining TTL
- Step-up authentication state
- Linked ticket or incident ID
- Custom context attributes (e.g., workflow stage)
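To show how the six categories combine, here is a hedged sketch of a single rule that touches each of them once. The attribute names follow the card examples above; the YAML layout and operators are assumptions.

```yaml
# Illustrative rule touching all six attribute categories; layout assumed.
policy: contractor-external-share-guardrail
when:
  subject.employment_status: contractor                 # subject
  resource.label: confidential                          # resource
  action: share_external                                # action
  client.trust_tier: { in: [byo, third-party] }         # AI client
  env.time_local: { not_between: ["09:00", "18:00"] }   # environment
  context.approval: { not: granted }                    # context
then: deny
```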
Define. Express. Test. Deploy.
1. Define
Declare the attributes a policy can match. Subject attributes pull from the IdP; resource attributes from integration metadata; environment attributes from the request itself. Custom attributes (matter ID, clearance code, project tag) plug in alongside the built-ins.
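As a hedged sketch of what declaring attributes could look like under these assumptions, the snippet below registers one IdP-sourced, one integration-sourced, and one custom attribute. The field names (type, source, values) are illustrative, not a documented schema.

```yaml
# Illustrative attribute declarations; the field names are assumptions,
# not a documented PortEden schema.
attributes:
  subject.clearance:
    type: enum
    values: [public, internal, secret, top-secret]
    source: idp                  # resolved from the identity provider
  resource.client_id:
    type: string
    source: integration          # pulled from integration metadata
  context.workflow_stage:
    type: string
    source: custom               # supplied by a plugged-in custom source
```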
2. Express
Write the rule. The UI builder offers guided clauses with type-checked operators; YAML is available for power users and Git deployments. Every expression reads as a sentence: WHEN subject AND resource AND action AND environment THEN allow / deny / require approval.
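A hypothetical file form of one such sentence, as it might be kept in Git. The metadata fields and layout are assumptions; only the WHEN / THEN reading mirrors the sentence structure described above.

```yaml
# Hypothetical Git file form; metadata fields and layout are assumptions.
name: legal-hold-send-requires-approval
description: >
  WHEN a sales user asks to send_on_behalf against a legal-hold resource
  from outside the corporate network THEN require approval.
when:
  subject.team: sales
  resource.retention: legal-hold
  action: send_on_behalf
  env.network: { not: corporate }
then: require_approval
```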
3. Test
The live evaluator runs the rule against synthetic and historical requests, with a diff against the current production policy. See the clauses that fire, the requests that change, and any unintended consequences before the policy ever goes live.
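The exact shape of that diff isn't shown here; as a rough illustration of the idea, a summary might read something like the sketch below. Every field name and number is invented.

```yaml
# Hypothetical evaluator diff summary; every field name and number is
# invented for illustration.
replayed_requests: 12480            # historical + synthetic traffic
decisions_changed: 37
by_transition:
  "allow -> require_approval": 29
  "allow -> deny": 8
top_firing_clause: "subject.employment_status = contractor AND env.network != corporate"
```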
4. Deploy
Promote to production through Git or the management API. Change control with approver workflows is available on the Enterprise tier. Every deployment is signed, versioned, and recorded; rollback is one click, and any prior policy version can be replayed on demand.
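As an illustration of what a signed, versioned deployment record could capture, here is a hedged sketch; the manifest format and field names are assumptions, not PortEden's actual output.

```yaml
# Hypothetical deployment record; field names and values are assumptions,
# not PortEden's actual manifest format.
deployment:
  policy: legal-hold-send-requires-approval
  version: 14                              # Git-tagged, monotonically increasing
  promoted_from: staging
  approved_by: security-lead@example.com   # Enterprise approver workflow
  signature: "sha256:placeholder"          # every deployment is signed
  rollback_target: 13                      # one-click rollback
```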
One request, five attributes, one decision.
A real evaluation walked through layer by layer: the incoming request, the rule that matched, every attribute the engine resolved, and the resulting decision — all captured as a single audit-trail entry.
The same entry streams to your SIEM and is replayable against any prior policy version — incident response can ask "what would the policy from last Tuesday have done?" and get a deterministic answer.
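As a hedged illustration of what a single audit-trail entry might contain (five resolved attributes, the matched policy and version, and the decision), consider the sketch below; all field and attribute names are invented for illustration.

```yaml
# Hypothetical audit-trail entry; field and attribute names are illustrative.
request_id: req_8c31
timestamp: 2025-06-12T21:42:07Z
client: { vendor: anthropic, model: claude-3.5 }
resolved_attributes:
  subject.team: engineering            # from the IdP
  resource.project: auth-service       # from integration metadata
  action: read                         # from the request
  env.time_local: "21:42"              # from the request
  context.approval: not_granted        # from the approval workflow
matched_policy: auth-service-repo-after-hours
policy_version: 14
decision: deny
```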
The 20% your auditor reads — finally enforceable.
Real citations, per-attribute evidence.
Every PBAC evaluation records the attribute values, the matching expression, and the policy version that was live. The same data answers SOC 2, NIST, HIPAA, GDPR, and ISO assessor questions without a separate evidence-collection project.
Every source the AI tries to reach into.
One expression model, six regulated workflows.
Attributes from anywhere, decisions you can replay.
PBAC is only as good as the attributes it can match and the evidence it can produce. PortEden resolves attributes from every source at evaluation time, runs every change through a live evaluator, and signs every decision so any prior authorization can be replayed exactly as it was made.
Attributes from anywhere
Subject attributes from your IdP. Resource attributes from integration metadata. Environment attributes from the request itself. Custom attributes from any source you plug in. Every value is available at evaluation time, with provenance recorded.
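A small hedged sketch of what a resolved attribute with recorded provenance could look like; the record shape and source labels are illustrative only.

```yaml
# Hypothetical resolved-attribute records with provenance; illustrative only.
subject.team:
  value: engineering
  source: idp                    # identity provider
resource.label:
  value: confidential
  source: integration            # the source system's own metadata
env.network:
  value: public
  source: request                # derived from the request itself
```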
Live evaluator + diff
Every policy edit is testable against historical and synthetic requests before going live. The diff shows exactly which decisions would change. Risky policies are caught in review, not in the audit-trail post-mortem.
Versioned, signed, replayable
Every authorization decision records the policy version that was live. Replay any decision exactly as it was made — critical for incident response and assessor evidence. Rollback is one click, history is immutable.
Pairs well with
Policy-based access control questions
What is PBAC and how does it differ from RBAC?
Do I have to write code to use PBAC?
Which attributes are available out of the box?
Can I add custom attributes for our domain?
How are PBAC and RBAC combined — which wins?
How do I test a policy before deploying it?
How do I roll back a bad policy?
Are policy edits themselves audited?
How does PBAC support break-glass workflows?
What evidence does this produce for NIST AC-3(4) ABAC auditors?
How fast is policy evaluation?
What pricing tier includes PBAC?
Ready to put your real access rules into a system that can enforce them?
Set up your first PBAC policy in under 10 minutes — the live evaluator runs your draft against historical traffic before it ever goes live. Enterprise adds approver workflows, Git-versioned deployments, and SIEM streaming.