
April 21, 2026

Detection Engineering for OpenAI Enterprise Audit Logs

Darwin Salazar

Head of Growth

OpenAI’s enterprise tier emits a structured audit log with 51 event types: identity changes, configuration changes, role assignments, network control modifications, and more. The schema is well-documented, and the events are immutable once logging is enabled.

This post walks through five detections worth shipping first, plus a bonus. Each carries a noise profile, though every environment is different; whether a rule is noisy in yours, and whether you deploy it at all, depends on factors like your OpenAI use cases, number of users, and threat profile.

Our goal here was to focus on detections that are immediately high-signal: things that indicate anomalous behavior without requiring much baselining or tuning. With 51 event types available, there’s a lot more you could build on top of this, so think of it as a starter pack.

The Telemetry

The Audit Log API (GET /organization/audit_logs) returns admin and user actions across your OpenAI org. Every entry includes an id, an effective_at Unix timestamp, a type field, an actor object, and an optional project scope.

For session-based actions, the actor object includes IP address, geolocation (city, region, country, lat/lon, ASN), user agent, and JA3/JA4 TLS fingerprints. For API key actions, you get the key’s tracking ID and whether it belongs to a user or service account.

Event types across these categories:

  • Identity & Access: login.succeeded, login.failed, logout.succeeded, logout.failed, invite.sent/.accepted/.deleted, user.added/.updated/.deleted
  • Service Accounts: service_account.created/.updated/.deleted
  • API Keys: api_key.created/.updated/.deleted
  • Projects: project.created/.updated/.archived/.deleted
  • Roles & Permissions: role.created/.updated/.deleted, role.assignment.created/.deleted
  • Org Config: organization.updated, rate_limit.updated/.deleted
  • Network Controls: ip_allowlist.created/.updated/.deleted/.config.activated/.config.deactivated
  • Certificates & Keys: certificate.created/.updated/.deleted, certificates.activated/.deactivated, external_key.registered/.removed
  • SCIM & Groups: scim.enabled/.disabled, group.created/.updated/.deleted
  • Infrastructure: tunnel.created/.updated/.deleted, checkpoint.permission.created/.deleted, resource.deleted

The Detections

1. SCIM Disabled

Event: scim.disabled

Noise: Near-zero.

If your org uses SCIM to sync users from Okta, Azure AD, or another IdP, disabling it means terminated employees keep OpenAI access until someone manually removes them. In a compromise scenario, disabling SCIM also blocks automated deprovisioning: if the compromised account later gets flagged in the IdP, its OpenAI access won’t sync out automatically.

Rule: alert on every scim.disabled. No threshold, no grouping. Pair with scim.enabled to surface toggle patterns. Ship this one first.

2. IP Allowlist Disabled or Broadened

Events: ip_allowlist.config.deactivated, ip_allowlist.deleted, ip_allowlist.updated

Key fields: allowed_ips (on updated and deleted), configs[] (on config.deactivated), top-level actor

Noise: Low. Updates need a list of known corporate CIDR ranges to suppress routine changes.

IP allowlists are your network perimeter for OpenAI access. Deactivating them opens your org to access from any IP on the internet.

Rules:

  • Page on any ip_allowlist.config.deactivated or ip_allowlist.deleted.
  • On ip_allowlist.updated, alert when allowed_ips adds 0.0.0.0/0, ::/0, or any prefix broader than your typical corporate ranges (e.g., shorter than /24).
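The second rule needs a prefix-width check plus a corporate-range comparison. A minimal sketch using the standard library's ipaddress module, with hypothetical corporate CIDRs standing in for your real ranges:

```python
import ipaddress

# Hypothetical corporate ranges; replace with your org's real CIDRs.
TRUSTED_CIDRS = [ipaddress.ip_network(c)
                 for c in ("198.51.100.0/24", "203.0.113.0/24")]
MIN_PREFIXLEN = 24  # anything broader than /24 is suspicious

def allowlist_update_alerts(allowed_ips: list[str]) -> list[str]:
    """Flag catch-all or overly broad prefixes in an
    ip_allowlist.updated payload's allowed_ips list."""
    findings = []
    for cidr in allowed_ips:
        net = ipaddress.ip_network(cidr, strict=False)
        if net.prefixlen == 0:  # 0.0.0.0/0 or ::/0
            findings.append(f"{cidr}: catch-all prefix")
        elif net.version == 4 and net.prefixlen < MIN_PREFIXLEN:
            findings.append(f"{cidr}: broader than /{MIN_PREFIXLEN}")
        elif not any(net.subnet_of(t) for t in TRUSTED_CIDRS
                     if t.version == net.version):
            findings.append(f"{cidr}: outside known corporate ranges")
    return findings
```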

3. Privilege Escalation (Role Changes)

Events: role.updated, role.assignment.created

Key fields: role.updated.changes_requested.permissions_added, role.assignment.created.principal_id, role.assignment.created.principal_type, role.assignment.created.resource_id, role.assignment.created.resource_type

Noise: Medium (needs context). role.updated with permissions_added is clean. role.assignment.created is noisier and needs a production-project tag to be useful.

Attackers who avoid creating new accounts often grant themselves, or an existing low-privilege principal, additional permissions via a role update or assignment instead.

The role.updated event makes this easy: the permissions_added array tells you exactly which permissions were granted. role.assignment.created is less precise, since the event doesn’t expose which role was assigned, only the principal and resource.

Rules:

  • Alert on any role.updated where permissions_added is non-empty. Elevate severity when the added permissions touch service accounts, API keys, IP allowlists, rate limits, or org config.
  • Alert on role.assignment.created where resource_id matches a project you’ve tagged as production in your SIEM or enrichment layer. Enrich with the principal’s current roles to make triage fast.

4. API Call Logging Disabled or Reduced

Event: organization.updated

Key field: changes_requested.api_call_logging

Noise: Low. Legitimate reductions happen for privacy or compliance reasons; pair with change-management.

OpenAI’s audit log (the stream feeding these detections) is separate from its API call logging, which controls whether the contents of API calls (prompts, completions, usage metadata) are retained. The api_call_logging setting takes four values: disabled, enabled_per_call, enabled_for_selected_projects, or enabled_for_all_projects. Reducing this setting is a classic defense evasion move.

Rule: alert on any organization.updated where changes_requested.api_call_logging is set to disabled, enabled_per_call, or enabled_for_selected_projects. If the new value is enabled_for_selected_projects, include changes_requested.api_call_logging_project_ids in the alert so responders know which projects were carved out. Pair with a change-management check.

5. Service Account Created (Especially Owner)

Event: service_account.created

Key field: data.role

Noise: Low. Owner service accounts should be exceptional in most orgs.

Service accounts can act as long-lived, programmatic access paths to your OpenAI org: they can create API keys, modify projects, change rate limits, and invite users. An owner-level service account is persistent, programmatic access that survives password resets, and it’s easy to overlook in day-to-day monitoring.

Rule: alert on any service_account.created where data.role = "owner". Elevate severity when:

  • the creating actor is itself a service account (rather than a human)
  • the creation falls outside your normal provisioning patterns (e.g., off-hours, unusual actor, unexpected project)

For non-owner service account creation, alert at a lower severity and use enrichment to surface the creating actor’s team or role.

Bonus: External Key (BYOK) Tampering

Events: external_key.registered, external_key.removed

Key fields: id (the external key configuration ID), top-level actor, top-level project scope

Applies to: Orgs using customer-managed encryption keys (a subset of enterprise deployments). Very high signal when it fires.

If your org uses customer-managed keys to encrypt OpenAI-held data at rest, the external key is the root of your control over that data. An attacker who registers a key they control may be able to influence how newly encrypted data is protected, depending on how key usage is configured. Removing a legitimate key is destructive.

Rule: maintain an allowlist of expected external key IDs and the humans authorized to manage them. Alert on any external_key.registered or external_key.removed where the actor isn’t on that list. Suppress alerts that match a change ticket.

Enrichment Opportunities

OpenAI’s audit log is rich, but as with any log set, you need enrichment and correlation to drive actionability. This log set gives you the who (actor), the what (event type and payload), and the where (project scope, session IP). What you don’t get is the context that turns a raw event into a confident detection. Was the actor a current employee or someone who left last week? Did this config change come from the platform team, or from someone who shouldn’t be touching it?

That context lives outside OpenAI: in Okta, your HRIS, your CMDB, your threat intelligence feeds. Pipeline enrichment joins it onto each event before the event lands in your SIEM. That cuts false positives on noisy events and gives responders the context to triage fast.

Enrich service_account.created events (#5) with the actor’s team from your HRIS; an owner service account created by someone outside the platform team is a different severity than one created by a platform admin. The same pattern applies across every control-plane detection in this post: knowing who the actor is (team, role, employment status) turns config changes from anonymous events into contextualized decisions.

Real-time matters. Query-time joins repeat on every search and still pay ingest cost on events you could have suppressed upstream. Monad’s enrichments run inline on the stream, so each OpenAI event arrives in your SIEM already decorated with the context you need to alert or to suppress.

Beyond the Top Five

These five cover identity, access, network perimeter, persistence, and defense evasion without requiring baselining or heavy tuning. Once they're running, there's more worth writing: API key behavioral anomalies, TLS fingerprint (JA3/JA4) anomalies, rate limit manipulation, API key scope changes, project archive and delete as a defense evasion signal, bulk invite patterns, tunnel creation, checkpoint permission changes, certificate lifecycle, and authentication failure bursts.

Getting the Events In

You have a few ways to get these events into your SIEM:

  • Check your SIEM's integration catalog. Coverage for OpenAI enterprise audit logs varies by vendor; some ship a native or marketplace integration. If yours does, this is the lowest-effort route.
  • Build it yourself. The API is a single endpoint (GET /organization/audit_logs) with cursor pagination. You’ll need to handle cursor tracking and polling, but the implementation should be straightforward.
  • Use a pipeline. Monad's OpenAI Enterprise Audit Log input handles ingestion, normalization, enrichment, and routing to your SIEM or data lake. If you want to skip the collector entirely, this is the path.

Enable your audit log, pick the option that fits, and start with SCIM disabled and IP allowlist deactivation. Near-zero noise, no tuning, real coverage by end of day.
