
Connect PortEden to Microsoft Copilot

Microsoft has three distinct Copilots, and each integrates differently. This guide covers the supported PortEden integration for each:

  • M365 Copilot — via the PortEden Graph proxy and a custom Copilot connector
  • Copilot Studio agents — via Power Automate flows that call the PortEden REST API
  • GitHub Copilot agents — via the PortEden CLI in the agent's tool list

Note

M365 Copilot inherits Microsoft Graph permissions wholesale, which is the source of most overshare incidents. PortEden's Graph proxy sits between Copilot and Graph, applying field-level redaction and per-contact rules before Copilot grounds on your tenant.

Prerequisites

  • A PortEden account at my.porteden.com with Outlook, Teams, OneDrive, or SharePoint connected
  • A PortEden API key — see token permissions
  • For M365 Copilot: tenant admin with Copilot Studio or Microsoft 365 Admin access
  • For GitHub Copilot agents: the porteden CLI installed

Path 1 — M365 Copilot via the Graph proxy

The PortEden Graph proxy fronts Microsoft Graph for Copilot. You add it as a custom Copilot connector; M365 Copilot calls the proxy instead of calling Graph directly. PortEden enforces redaction and per-contact rules before any tenant data is grounded.

1. Generate a PortEden Graph-proxy token

At my.porteden.com, create a token with Outlook + Teams + SharePoint scope. Set access rules for the contacts and SharePoint sites that should be opaque to Copilot.

2. Add a Copilot connector

In Copilot Studio, create a custom connector pointing at:

https://graph-proxy.porteden.com/v1

Set authentication to Bearer token and paste the token from step 1.

3. Publish to your tenant

Once Copilot Studio publishes the connector, M365 Copilot routes tenant data calls through the proxy. The audit log at my.porteden.com captures every call — prompt, redacted payload, rule evaluations.

Path 2 — Copilot Studio agents via Power Automate

Copilot Studio agents that need access to tenant data should call PortEden through a Power Automate flow rather than touching Graph directly. A minimal HTTP action in the flow looks like this:

{
  "type": "Http",
  "inputs": {
    "method": "GET",
    "uri": "https://api.porteden.com/v1/email/messages?today=1",
    "headers": {
      "Authorization": "Bearer @{parameters('PE_API_KEY')}"
    }
  }
}

Store the API key as a Power Automate environment variable. Restrict the key to the operations the agent needs — never use a tenant admin's personal key.
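Before wiring up the flow, it can help to smoke-test the same request locally. Here is a minimal sketch in Python's standard library, assuming your key is exported as PE_API_KEY; the URL and header mirror the HTTP action above, and the actual network call is left commented out:

```python
import os
import urllib.request

API_URL = "https://api.porteden.com/v1/email/messages?today=1"

def build_request(api_key: str) -> urllib.request.Request:
    """Mirror the flow's HTTP action: GET with a Bearer header."""
    req = urllib.request.Request(API_URL, method="GET")
    req.add_header("Authorization", f"Bearer {api_key}")
    return req

req = build_request(os.environ.get("PE_API_KEY", "pe_test_key"))
# urllib.request.urlopen(req) would execute the call; omitted in this sketch.
```

If the local request returns data but the flow does not, the problem is in the flow's parameter binding rather than the key itself.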

Path 3 — GitHub Copilot agents via the CLI

GitHub Copilot agents and chat extensions can shell out to the PortEden CLI for any Workspace or M365 data. The agent never touches Graph or Google APIs directly — it goes through PortEden.

1. Install the CLI

brew install porteden/tap/porteden

2. Authenticate

porteden auth login

3. Use it from a Copilot agent

# Read today's unread emails as redacted JSON
porteden email messages --today --unread -jc
# List today's calendar events
porteden calendar events --today -jc
# Search Drive for a file
porteden drive search -q "Q2 budget" -jc

The -jc flag returns compact JSON optimized for LLM consumption — the same format used by OpenClaw skills.
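For agents that wrap the CLI programmatically, the shell-out pattern can be sketched as below. The helper names are hypothetical; only the commands and flags shown above are assumed to exist:

```python
import json
import subprocess

def porteden_argv(resource: str, *flags: str) -> list[str]:
    """Build the CLI invocation; -jc requests compact, LLM-friendly JSON."""
    return ["porteden", *resource.split(), *flags, "-jc"]

def run_porteden(resource: str, *flags: str):
    """Run the CLI and parse its compact-JSON stdout."""
    out = subprocess.run(porteden_argv(resource, *flags),
                         capture_output=True, text=True, check=True).stdout
    return json.loads(out)

# e.g. run_porteden("email messages", "--today", "--unread")
```

With check=True, a non-zero CLI exit raises CalledProcessError, which is a reasonable failure mode to surface back to the agent.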

Recommended permissions

  • One token per Copilot surface — never reuse keys across M365 and GitHub
  • For M365 Copilot: lock the Graph proxy token to the SharePoint sites and mailboxes Copilot is allowed to ground on
  • Block sensitive contacts (e.g. legal, HR, executive) at the access-rules layer so Copilot can't see them even with a permissive token
  • Set timeframePastDays on agentic tokens to limit historical exposure
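Taken together, a scoped agentic-token configuration might look like the sketch below. Only timeframePastDays appears in this guide; every other field name is an illustrative assumption, so check the token permissions reference for the real schema:

```json
{
  "scopes": ["outlook", "sharepoint"],
  "timeframePastDays": 30,
  "accessRules": {
    "blockContacts": ["legal-team@example.com", "hr@example.com"]
  }
}
```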

Troubleshooting

Copilot connector deploys but returns no data

Check the Bearer token at my.porteden.com — most often, the token doesn't have scope for the requested resource. The audit log will show the access-rule decision that fired.

Power Automate flow times out

Increase the HTTP action's timeout to at least 30 s for Drive and SharePoint operations. Large file listings can take 2–3 s after redaction.

GitHub Copilot agent can't find the CLI

Verify with porteden --version. In Codespaces, install the CLI via the install script in your dotfiles startup so each new container has the binary.