Access authentication logs and Gateway activity logs (DNS, Network, and HTTP) now feature a refreshed user interface that gives you more flexibility when viewing and analyzing your logs. The updated UI includes:
- **Filter by field**: Select any field value to add it as a filter and narrow down your results.
- **Customizable fields**: Choose which fields to display in the log table. Querying for fewer fields improves log loading performance.
- **View details**: Select a timestamp to view the full details of a log entry.
- **Switch to classic view**: Return to the previous log viewer interface if needed.
For more information, refer to Access authentication logs and Gateway activity logs.
Radar now features an expanded Routing section with dedicated sub-pages, providing a more organized and in-depth view of the global routing ecosystem. This restructuring lays the groundwork for additional routing features and widgets coming in the near future.

**Dedicated sub-pages**

The single Routing page has been split into three focused sub-pages:
- **Overview** — Routing statistics, IP address space trends, BGP announcements, and the new Top 100 ASes ranking.
- **RPKI** — RPKI validation status, ASPA deployment trends, and per-ASN ASPA provider details.
- **Anomalies** — BGP route leaks, origin hijacks, and Multi-Origin AS (MOAS) conflicts.
**New widgets**

The routing overview now includes a Top 100 ASes table ranking autonomous systems by customer cone size, IPv4 address space, or IPv6 address space. Users can switch between rankings using a segmented control. The RPKI sub-page introduces an RPKI validation view for per-ASN pages, showing prefixes grouped by RPKI validation status (Valid, Invalid, Unknown) with visibility scores.

**Improved IP address space chart**

The IP address space chart now displays both IPv4 and IPv6 trends stacked vertically and is available on global, country, and AS views.

Check out the Radar routing section to explore the data, and stay tuned for more routing insights coming soon.
Internal DNS is now in open beta.

**Who can use it?**

Internal DNS is bundled as a part of Cloudflare Gateway and is now available to every Enterprise customer with one of the following subscriptions:
- Cloudflare Zero Trust Enterprise
- Cloudflare Gateway Enterprise
To learn more and get started, refer to the Internal DNS documentation.
This week's release introduces new detections for a critical authentication bypass vulnerability in Fortinet products (CVE-2025-59718), alongside three new generic detection rules designed to identify and block HTTP Parameter Pollution attempts. Additionally, this release includes targeted protection for a high-impact unrestricted file upload vulnerability in Magento and Adobe Commerce.

**Key Findings**
- **CVE-2025-59718**: An improper cryptographic signature verification vulnerability in Fortinet FortiOS, FortiProxy, and FortiSwitchManager. This may allow an unauthenticated attacker to bypass the FortiCloud SSO login authentication using a maliciously crafted SAML message, if that feature is enabled on the device.
- **Magento 2 - Unrestricted File Upload**: A critical flaw in Magento and Adobe Commerce allows unauthenticated attackers to bypass security checks and upload malicious files to the server, potentially leading to Remote Code Execution (RCE).
**Impact**

Successful exploitation of the Fortinet and Magento vulnerabilities could allow unauthenticated attackers to gain administrative control or deploy webshells, leading to complete server compromise and data theft.

| Ruleset | Rule ID | Legacy Rule ID | Description | Previous Action | New Action | Comments |
| --- | --- | --- | --- | --- | --- | --- |
| Cloudflare Managed Ruleset | 4f7d513cea424c2a853881982f7f95e9 | N/A | Generic Rules - Parameter Pollution - Body | Log | Disabled | This is a new detection. |
| Cloudflare Managed Ruleset | 60d023f3be414d379428add3319731a4 | N/A | Generic Rules - Parameter Pollution - Header - Form | Log | Disabled | This is a new detection. |
| Cloudflare Managed Ruleset | 2dde02d792ad41ec8fd65c2bdef262dd | N/A | Generic Rules - Parameter Pollution - URI | Log | Disabled | This is a new detection. |
| Cloudflare Managed Ruleset | ab8a96ed13034d56a81a79e570a36147 | N/A | Magento 2 - Unrestricted file upload | Log | Block | This is a new detection. |
| Cloudflare Managed Ruleset | 0a13a38dd81c44688950444e2ffcca9f | N/A | Fortinet FortiCloud SSO - Authentication Bypass - CVE:CVE-2025-59718 | Log | Block | This is a new detection. |
| Announcement Date | Release Date | Release Behavior | Legacy Rule ID | Rule ID | Description | Comments |
| --- | --- | --- | --- | --- | --- | --- |
| 2026-03-30 | 2026-04-06 | Log | N/A | 73ae1cf103da4bacaa2e1a610aa410af | Generic Rules - Command Execution - 5 - Body | This is a new detection. |
| 2026-03-30 | 2026-04-06 | Log | N/A | a88a85b0cc5a4bc2abead6289131ec2f | Generic Rules - Command Execution - 5 - Header | This is a new detection. |
| 2026-03-30 | 2026-04-06 | Log | N/A | 28518cdc40544979bbd86720551eb9e5 | Generic Rules - Command Execution - 5 - URI | This is a new detection. |
| 2026-03-30 | 2026-04-06 | Log | N/A | 1177993d53a1467997002b44d46229eb | MCP Server - Remote Code Execution - CVE:CVE-2026-23744 | This is a new detection. |
| 2026-03-30 | 2026-04-06 | Log | N/A | 3d43cdfbc3c14584942f8bc4a864b9c2 | XSS - OnEvents - Cookies | This is a new detection. |
| 2026-03-30 | 2026-04-06 | Log | N/A | c9dbce2c1da94b24916e37559712a863 | SQLi - Evasion - Body | This is a new detection. |
| 2026-03-30 | 2026-04-06 | Log | N/A | 64d812e6d5844d7c9d7a44a440732d48 | SQLi - Evasion - Headers | This is a new detection. |
| 2026-03-30 | 2026-04-06 | Log | N/A | 50de9369ef7c45928a5dfb34e68a99b5 | SQLi - Evasion - URI | This is a new detection. |
| 2026-03-30 | 2026-04-06 | Log | N/A | 765ffb5c67b94c9589106c843e8143d2 | SQLi - LIKE 3 - Body | This is a new detection. |
| 2026-03-30 | 2026-04-06 | Log | N/A | 5c3dbd4f115e47c781491fcd70e7fb97 | SQLi - LIKE 3 - URI | This is a new detection. |
| 2026-03-30 | 2026-04-06 | Log | N/A | 89fa6027a0334949b1cb2e654c538bd9 | SQLi - UNION - 2 - Body | This is a new detection. |
| 2026-03-30 | 2026-04-06 | Log | N/A | 05946b3458364f1b9d4819d561c439c9 | SQLi - UNION - 2 - URI | This is a new detection. |
| 2026-03-30 | 2026-04-06 | Log | N/A | b2fe5c2a39df4609b6d39908cf33ea10 | SolarWinds - Auth Bypass - CVE:CVE-2025-40552 | This is a new detection. |
Four new fields are now available on request.cf.tlsClientAuth in Workers for requests that include a mutual TLS (mTLS) client certificate. These fields encode the client certificate and its intermediate chain in RFC 9440 format — the same standard format used by the Client-Cert and Client-Cert-Chain HTTP headers — so your Worker can forward them directly to your origin without any custom parsing or encoding logic.

**New fields**
| Field | Type | Description |
| --- | --- | --- |
| certRFC9440 | String | The client leaf certificate in RFC 9440 format (:base64-DER:). Empty if no client certificate was presented. |
| certRFC9440TooLarge | Boolean | true if the leaf certificate exceeded 10 KB and was omitted from certRFC9440. |
| certChainRFC9440 | String | The intermediate certificate chain in RFC 9440 format as a comma-separated list. Empty if no intermediates were sent or if the chain exceeded 16 KB. |
| certChainRFC9440TooLarge | Boolean | true if the intermediate chain exceeded 16 KB and was omitted from certChainRFC9440. |

**Example: forwarding client certificate headers to your origin**

```javascript
export default {
  async fetch(request) {
    const tls = request.cf.tlsClientAuth;

    // Only forward if cert was verified and chain is complete
    if (!tls || !tls.certVerified || tls.certRevoked || tls.certChainRFC9440TooLarge) {
      return new Response("Unauthorized", { status: 401 });
    }

    const headers = new Headers(request.headers);
    headers.set("Client-Cert", tls.certRFC9440);
    headers.set("Client-Cert-Chain", tls.certChainRFC9440);
    return fetch(new Request(request, { headers }));
  },
};
```

For more information, refer to Client certificate variables and Mutual TLS authentication.
MCP server portals support two context optimization options that reduce how many tokens tool definitions consume in the model's context window. Both options are activated by appending the optimize_context query parameter to the portal URL.
**minimize_tools**

Strips tool descriptions and input schemas from all upstream tools, leaving only their names. The portal exposes a special query tool that agents use to retrieve full definitions on demand. This provides up to 5x savings in token usage.
```txt
https://./mcp?optimize_context=minimize_tools
```
**search_and_execute**

Hides all upstream tools and exposes only two tools: query and execute. The query tool searches and retrieves tool definitions. The execute tool runs the upstream tools in an isolated Dynamic Worker environment. This reduces the initial token cost to a small constant, regardless of how many tools are available through the portal.
```txt
https://./mcp?optimize_context=search_and_execute
```
For more information, refer to Optimize context.
DLP now processes ZIP files using a streaming handler that scans archive contents element-by-element as data arrives. This removes previous file size limitations and improves memory efficiency when scanning large archives.
Microsoft Office documents (DOCX, XLSX, PPTX) also benefit from this improvement, as they use ZIP as a container format.
This improvement is automatic — no configuration changes are required.
MCP server portals support code mode, a technique that reduces context window usage by replacing individual tool definitions with a single code execution tool. Code mode is turned on by default on all portals.
To turn it off, edit the portal in Access controls > AI controls and turn off Code mode under Basic information.
When code mode is active, the portal exposes a single code tool instead of listing every tool from every upstream MCP server. The connected AI agent writes JavaScript that calls typed codemode.* methods for each upstream tool. The generated code runs in an isolated Dynamic Worker environment, keeping authentication credentials and environment variables out of the model context.
To use code mode, append ?codemode=search_and_execute to your portal URL when connecting from an MCP client:
```txt
https://./mcp?codemode=search_and_execute
```
For more information, refer to code mode.
Containers and Sandboxes now support connecting directly to Workers over HTTP. This allows you to call Workers functions and bindings, like KV or R2, from within the container at specific hostnames.

**Run Worker code**

Define an outbound handler to capture any HTTP request, or use outboundByHost to capture requests to individual hostnames and IPs.

```javascript
export class MyApp extends Sandbox {}

MyApp.outbound = async (request, env, ctx) => {
  // You can run arbitrary functions defined in your Worker on any HTTP request
  return await someWorkersFunction(request.body);
};

MyApp.outboundByHost = {
  "my.worker": async (request, env, ctx) => {
    return await anotherFunction(request.body);
  },
};
```

In this example, requests from the container to http://my.worker will run the function defined within outboundByHost, and any other HTTP requests will run the outbound handler. These handlers run entirely inside the Workers runtime, outside of the container sandbox.

**TLS support coming soon**

Containers and Sandboxes currently only intercept HTTP traffic. HTTPS interception is coming soon. This will enable using Workers as a transparent proxy for credential injection. Even though this is just using HTTP, traffic to Workers is secure and runs on the same machine as the container. If needed, you can also upgrade requests to TLS from the Worker itself.

**Access Workers bindings**

Each handler has access to env, so it can call any binding set in Wrangler config. Code inside the container makes a standard HTTP request to that hostname and the outbound Worker translates it into a binding call.

```javascript
export class MyApp extends Sandbox {}

MyApp.outboundByHost = {
  "my.kv": async (request, env, ctx) => {
    const key = new URL(request.url).pathname.slice(1);
    const value = await env.KV.get(key);
    return new Response(value ?? "", { status: value ? 200 : 404 });
  },
  "my.r2": async (request, env, ctx) => {
    const key = new URL(request.url).pathname.slice(1);
    const object = await env.BUCKET.get(key);
    return new Response(object?.body ?? "", { status: object ? 200 : 404 });
  },
};
```

Now, from inside the container sandbox, curl http://my.kv/some-key will access Workers KV and curl http://my.r2/some-object will access R2.

**Access Durable Object state**

Use ctx.containerId to reference the container's automatically provisioned Durable Object.

```javascript
export class MyContainer extends Container {}

MyContainer.outboundByHost = {
  "get-state.do": async (request, env, ctx) => {
    const id = env.MY_CONTAINER.idFromString(ctx.containerId);
    const stub = env.MY_CONTAINER.get(id);
    return stub.getStateForKey(request.body);
  },
};
```

This provides an easy way to associate state with any container instance, and includes a built-in SQLite database.

**Get Started Today**

Upgrade to @cloudflare/containers version 0.2.0 or later, or @cloudflare/sandbox version 0.8.0 or later to use outbound Workers. Refer to Containers outbound traffic and Sandboxes outbound traffic for more details and examples.
Radar ships several improvements to the URL Scanner that make scan reports more informative and easier to share:
- **Live screenshots** — the summary card now includes an option to capture a live screenshot of the scanned URL on demand using the Browser Rendering API.
- **Save as PDF** — a new button generates a print-optimized document aggregating all tab contents (Summary, Security, Network, Behavior, and Indicators) into a single file.
- **Download as JSON** — raw scan data is available as a JSON download for programmatic use.
- **Redesigned summary layout** — page information and security details are now displayed side by side with the screenshot, with a layout that adapts to narrower viewports.
- **File downloads** — downloads are separated into a dedicated card with expandable rows showing each file's source URL and SHA256 hash.
- **Detailed IP address data** — the Network tab now includes additional detail per IP address observed during the scan.
Explore these improvements on the Cloudflare Radar URL Scanner.
HTTP Archive (HAR) files are used by engineering and support teams to capture and share web traffic logs for troubleshooting. However, these files routinely contain highly sensitive data — including session cookies, authorization headers, and other credentials — that can pose a significant risk if uploaded to third-party services without being reviewed or cleaned first.
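For context, here is a minimal, abridged fragment of the standard HAR 1.2 format (all values are illustrative) showing where this kind of sensitive data typically lives inside a capture:

```json
{
  "log": {
    "version": "1.2",
    "entries": [
      {
        "request": {
          "url": "https://app.example.com/api/profile",
          "headers": [
            { "name": "Authorization", "value": "Bearer eyJhbGciOi..." }
          ],
          "cookies": [
            { "name": "session_id", "value": "d41d8cd98f00b204..." }
          ]
        }
      }
    ]
  }
}
```

Anyone who obtains the file can replay these tokens and cookies, which is why unsanitized HAR uploads are risky.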
Gateway now includes a predefined DLP profile called Unsanitized HAR that detects HAR files in HTTP traffic. You can use this profile in a Gateway HTTP policy to either block HAR file uploads entirely or redirect users to a sanitization tool before allowing the upload to proceed.
In the Cloudflare dashboard, go to Zero Trust > Traffic policies > Firewall Policies > HTTP and create a new HTTP policy using the DLP Profile selector:
| Selector | Operator | Value | Action |
| --- | --- | --- | --- |
| DLP Profile | in | Unsanitized HAR | |
Then choose one of the following actions:
- **Block**: Prevents the upload of any HAR file that has not been sanitized by Cloudflare's sanitizer. Use this for strict environments where HAR file sharing must be disallowed entirely.
- **Block with Gateway Redirect**: Intercepts the upload and redirects the user to https://har-sanitizer.pages.dev/, where they can sanitize the file. Once sanitized, the user can re-upload the clean file and proceed with their workflow.
HAR files processed by the Cloudflare HAR sanitizer receive a tamper-evident sanitized marker. DLP recognizes this marker and will not re-trigger the policy on a file that has already been sanitized and has not been modified since. If a previously sanitized file is edited, it will be treated as unsanitized and flagged again.
Gateway logs will reflect whether a detected HAR file was classified as Unsanitized or Sanitized, giving your security team full visibility into HAR file activity across your organization.
For more information, refer to predefined DLP profiles.
The new secrets configuration property lets you declare the secret names your Worker requires in your Wrangler configuration file. Required secrets are validated during local development and deploy, and used as the source of truth for type generation.
**wrangler.jsonc**

```jsonc
{
  "secrets": {
    "required": ["API_KEY", "DB_PASSWORD"],
  },
}
```

**wrangler.toml**

```toml
[secrets]
required = [ "API_KEY", "DB_PASSWORD" ]
```
**Local development**
When secrets is defined, wrangler dev and vite dev load only the keys listed in secrets.required from .dev.vars or .env/process.env. Additional keys in those files are excluded. If any required secrets are missing, a warning is logged listing the missing names.
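For example, assuming the `required` list shown above, a local `.dev.vars` file might look like this (values are placeholders):

```ini
# .dev.vars — only keys listed in secrets.required are loaded
API_KEY=local-dev-api-key
DB_PASSWORD=local-dev-password

# Keys not listed in secrets.required, like this one, are excluded
UNUSED_KEY=ignored
```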
**Type generation**
wrangler types generates typed bindings from secrets.required instead of inferring names from .dev.vars or .env. This lets you run type generation in CI or other environments where those files are not present. Per-environment secrets are supported — the aggregated Env type marks secrets that only appear in some environments as optional.
**Deploy**
wrangler deploy and wrangler versions upload validate that all secrets in secrets.required are configured on the Worker before the operation succeeds. If any required secrets are missing, the command fails with an error listing which secrets need to be set.
For more information, refer to the secrets configuration property reference.
Logpush now supports higher-precision timestamp formats for log output. You can configure jobs to output timestamps at millisecond or nanosecond precision. This is available in both the Logpush UI in the Cloudflare dashboard and the Logpush API. To use the new formats, set timestamp_format in your Logpush job's output_options:
- `rfc3339ms` — 2024-02-17T23:52:01.123Z
- `rfc3339ns` — 2024-02-17T23:52:01.123456789Z
Default timestamp formats apply unless explicitly set. The dashboard defaults to rfc3339 and the API defaults to unixnano. For more information, refer to the Log output options documentation.
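For example, a job's output options would carry the new format like this (a minimal fragment; other output options are omitted for brevity):

```json
{
  "output_options": {
    "timestamp_format": "rfc3339ns"
  }
}
```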
Cloudflare now exposes four new fields in the Transform Rules phase that encode client certificate data in RFC 9440 format. Previously, forwarding client certificate information to your origin required custom parsing of PEM-encoded fields or non-standard HTTP header formats. These new fields produce output in the standardized Client-Cert and Client-Cert-Chain header format defined by RFC 9440, so your origin can consume them directly without any additional decoding logic. Each certificate is DER-encoded, Base64-encoded, and wrapped in colons. For example, :MIIDsT...Vw==:. A chain of intermediates is expressed as a comma-separated list of such values.

**New fields**
| Field | Type | Description |
| --- | --- | --- |
| cf.tls_client_auth.cert_rfc9440 | String | The client leaf certificate in RFC 9440 format. Empty if no client certificate was presented. |
| cf.tls_client_auth.cert_rfc9440_too_large | Boolean | true if the leaf certificate exceeded 10 KB and was omitted. In practice this will almost always be false. |
| cf.tls_client_auth.cert_chain_rfc9440 | String | The intermediate certificate chain in RFC 9440 format as a comma-separated list. Empty if no intermediate certificates were sent or if the chain exceeded 16 KB. |
| cf.tls_client_auth.cert_chain_rfc9440_too_large | Boolean | true if the intermediate chain exceeded 16 KB and was omitted. |

The chain encoding follows the same ordering as the TLS handshake: the certificate closest to the leaf appears first, working up toward the trust anchor. The root certificate is not included.

**Example: Forwarding client certificate headers to your origin server**

Add a request header transform rule to set the Client-Cert and Client-Cert-Chain headers on requests forwarded to your origin server. For example, to forward headers for verified, non-revoked certificates:

Rule expression:

```txt
cf.tls_client_auth.cert_verified and not cf.tls_client_auth.cert_revoked
```

Header modifications:

| Operation | Header name | Value |
| --- | --- | --- |
| Set | Client-Cert | cf.tls_client_auth.cert_rfc9440 |
| Set | Client-Cert-Chain | cf.tls_client_auth.cert_chain_rfc9440 |

To get the most out of these fields, upload your client CA certificate to Cloudflare so that Cloudflare validates the client certificate at the edge and populates cf.tls_client_auth.cert_verified and cf.tls_client_auth.cert_revoked.

**Prevent header injection**

You should ensure that Client-Cert and Client-Cert-Chain headers received by your origin server can only originate from this transform rule — any client could send these headers directly.

- **If you use WAF custom rules to block requests with invalid mTLS connections**: The transform rule is sufficient. For all requests that reach your origin server, the rule will overwrite any existing Client-Cert and Client-Cert-Chain headers.
- **If you do not enforce mTLS at the WAF**: Add another transform rule that removes any incoming Client-Cert and Client-Cert-Chain headers from all requests (use expression true), ordered before the rule above. This ensures your origin server cannot receive client-supplied values for these HTTP headers.
For more information, refer to Mutual TLS authentication, Request Header Transform Rules, and the fields reference.
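As a sketch of what consuming these headers looks like on a Node.js origin, the helper below reverses the RFC 9440 encoding described above. It is illustrative only (the function name and structure are hypothetical, not part of the Cloudflare feature):

```javascript
// RFC 9440 wraps each Base64-encoded DER certificate in colons,
// e.g. ":MIIDsT...Vw==:"; a chain is a comma-separated list of such values.
// Returns an array of Buffers, one DER-encoded certificate each.
function decodeRfc9440(value) {
  return value
    .split(",")
    .map((item) => item.trim().replace(/^:|:$/g, ""))
    .map((b64) => Buffer.from(b64, "base64"));
}
```

Each returned buffer is raw DER that can then be handed to an X.509 parser of your choice.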
Dynamic Workers are now in open beta for all paid Workers users. You can now have a Worker spin up other Workers, called Dynamic Workers, at runtime to execute code on-demand in a secure, sandboxed environment. Dynamic Workers start in milliseconds, making them well suited for fast, secure code execution at scale.

**Use Dynamic Workers for**
- **Code Mode**: LLMs are trained to write code. Run tool-calling logic written in code instead of stepping through many tool calls, which can save up to 80% in inference tokens and cost.
- **AI agents executing code**: Run code for tasks like data analysis, file transformation, API calls, and chained actions.
- **Running AI-generated code**: Run generated code for prototypes, projects, and automations in a secure, isolated sandboxed environment.
- **Fast development and previews**: Load prototypes, previews, and playgrounds in milliseconds.
- **Custom automations**: Create custom tools on the fly that execute a task, call an integration, or automate a workflow.
**Executing Dynamic Workers**

Dynamic Workers support two loading modes:
- `load(code)` — for one-time code execution (equivalent to calling `get()` with a null ID).
- `get(id, callback)` — caches a Dynamic Worker by ID so it can stay warm across requests. Use this when the same code will receive subsequent requests.
**JavaScript**

```javascript
export default {
  async fetch(request, env) {
    const worker = env.LOADER.load({
      compatibilityDate: "2026-01-01",
      mainModule: "src/index.js",
      modules: {
        "src/index.js": `
          export default {
            fetch() {
              return new Response("Hello from a dynamic Worker");
            },
          };
        `,
      },
      // Block all outbound network access from the Dynamic Worker.
      globalOutbound: null,
    });

    return worker.getEntrypoint().fetch(request);
  },
};
```

**TypeScript**

```typescript
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const worker = env.LOADER.load({
      compatibilityDate: "2026-01-01",
      mainModule: "src/index.js",
      modules: {
        "src/index.js": `
          export default {
            fetch() {
              return new Response("Hello from a dynamic Worker");
            },
          };
        `,
      },
      // Block all outbound network access from the Dynamic Worker.
      globalOutbound: null,
    });

    return worker.getEntrypoint().fetch(request);
  },
};
```
Helper libraries for Dynamic Workers
Here are three new libraries to help you build with Dynamic Workers:

- **@cloudflare/codemode**: Replace individual tool calls with a single code() tool, so LLMs write and execute TypeScript that orchestrates multiple API calls in one pass.
- **@cloudflare/worker-bundler**: Resolve npm dependencies and bundle source files into ready-to-load modules for Dynamic Workers, all at runtime.
- **@cloudflare/shell**: Give your agent a virtual filesystem inside a Dynamic Worker with persistent storage backed by SQLite and R2.
**Try it out**

- **Dynamic Workers Starter**: Use this starter to deploy a Worker that can load and execute Dynamic Workers.
- **Dynamic Workers Playground**: Deploy the Dynamic Workers Playground to write or import code, bundle it at runtime with @cloudflare/worker-bundler, execute it through a Dynamic Worker, and see real-time responses and execution logs.

For the full API reference and configuration options, refer to the Dynamic Workers documentation.

**Pricing**

Dynamic Workers pricing is based on three dimensions: Dynamic Workers created daily, requests, and CPU time.
| | Included | Additional usage |
| --- | --- | --- |
| Dynamic Workers created daily | 1,000 unique Dynamic Workers per month | +$0.002 per Dynamic Worker per day |
| Requests ¹ | 10 million per month | +$0.30 per million requests |
| CPU time ¹ | 30 million CPU milliseconds per month | +$0.02 per million CPU milliseconds |

¹ Uses Workers Standard rates and will appear as part of your existing Workers bill, not as separate Dynamic Workers charges.

**Note**: Dynamic Workers requests and CPU time are already billed as part of your Workers plan and will count toward your Workers requests and CPU usage. The Dynamic Workers created daily charge is not yet active — you will not be billed for the number of Dynamic Workers created at this time. Pricing information is shared in advance so you can estimate future costs.
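As a rough sketch of how the request and CPU overage lines follow from the table above (usage numbers are illustrative, and the daily-creation dimension is left out because it is not yet billed):

```javascript
// Included allowances and overage rates from the pricing table.
const INCLUDED_REQUESTS = 10_000_000; // 10 million requests per month
const INCLUDED_CPU_MS = 30_000_000;   // 30 million CPU milliseconds per month
const RATE_PER_M_REQUESTS = 0.3;      // +$0.30 per million additional requests
const RATE_PER_M_CPU_MS = 0.02;       // +$0.02 per million additional CPU ms

function monthlyOverage(requests, cpuMs) {
  const reqOver = Math.max(0, requests - INCLUDED_REQUESTS);
  const cpuOver = Math.max(0, cpuMs - INCLUDED_CPU_MS);
  return {
    requestsCost: (reqOver / 1_000_000) * RATE_PER_M_REQUESTS,
    cpuCost: (cpuOver / 1_000_000) * RATE_PER_M_CPU_MS,
  };
}

// Illustrative month: 20M requests and 50M CPU milliseconds.
// 10M extra requests → $3.00; 20M extra CPU ms → $0.40.
const cost = monthlyOverage(20_000_000, 50_000_000);
```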
You can now control how Cloudflare handles origin responses without changing your origin. Cache Response Rules let you modify Cache-Control directives, manage cache tags, and strip headers like Set-Cookie from origin responses before they reach Cloudflare's cache. Whether traffic is cached or passed through dynamically, these rules give you control over origin response behavior that was previously out of reach.

**What changed**

Cache Rules previously only operated on request attributes. Cache Response Rules introduce a new response phase that evaluates origin responses and lets you act on them before caching. You can now:
- **Modify Cache-Control directives**: Set or remove individual directives like no-store, no-cache, max-age, s-maxage, stale-while-revalidate, immutable, and more. For example, remove a no-cache directive your origin sends so Cloudflare can cache the asset, or set an s-maxage to control how long Cloudflare stores it.
- **Set a different browser Cache-Control**: Send a different Cache-Control header downstream to browsers and other clients than what Cloudflare uses internally, giving you independent control over edge and browser caching strategies.
- **Manage cache tags**: Add, set, or remove cache tags on responses, including converting tags from another CDN's header format into Cloudflare's Cache-Tag header. This is especially useful if you are migrating from a CDN that uses a different tag header or delimiter.
- **Strip headers that block caching**: Remove Set-Cookie, ETag, or Last-Modified headers from origin responses before caching, so responses that would otherwise be treated as uncacheable can be stored and served from cache.
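As an illustrative sketch (not a specific rule configuration), a rule that removes the no-cache directive and sets s-maxage=3600 would rewrite an origin response header like this:

```txt
Origin response (before the rule):
Cache-Control: no-cache, max-age=0

After the rule (no-cache removed, s-maxage set to 3600):
Cache-Control: max-age=0, s-maxage=3600
```

With the modified header, Cloudflare can store the response at the edge for an hour while browsers still revalidate on every request.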
**Benefits**
- **No origin changes required**: Fix caching behavior entirely from Cloudflare, even when your origin configuration is locked down or managed by a different team.
- **Simpler CDN migration**: Match caching behavior from other CDN providers without rewriting your origin. Translate cache tag formats and override directives that do not align with Cloudflare's defaults.
- **Native support, fewer workarounds**: Functionality that previously required workarounds is now built into Cache Rules with full Tiered Cache compatibility.
- **Fine-grained control**: Use expressions to match on request and response attributes, then apply precise cache settings per rule. Rules are stackable and composable with existing Cache Rules.
**Get started**

Configure Cache Response Rules in the Cloudflare dashboard under Caching > Cache Rules, or via the Rulesets API. For more details, refer to the Cache Rules documentation.
Containers now support Docker Hub images. You can use a fully qualified Docker Hub image reference in your Wrangler configuration instead of first pushing the image to Cloudflare Registry.
**wrangler.jsonc**

```jsonc
{
  "containers": [
    {
      // Example: docker.io/cloudflare/sandbox:0.7.18
      "image": "docker.io//:",
    },
  ],
}
```

**wrangler.toml**

```toml
[[containers]]
image = "docker.io//:"
```
Containers also support private Docker Hub images. To configure credentials, refer to Use private Docker Hub images.
For more information, refer to Image management.
The top-level Interconnects page in the Cloudflare dashboard has been removed. Interconnects are now located under Connectors > Interconnects. Your existing configurations and functionality remain the same.
Cloudflare Gateway now supports OIDC Claims as a selector in Firewall, Resolver, and Egress policies. Administrators can use custom OIDC claims from their identity provider to build fine-grained, identity-based traffic policies across all Gateway policy types. With this update, you can:
- Filter traffic in DNS, HTTP, and Network firewall policies based on OIDC claim values.
- Apply custom resolver policies to route DNS queries to specific resolvers depending on a user's OIDC claims.
- Control egress policies to assign dedicated egress IPs based on OIDC claim attributes.
For example, you can create a policy that routes traffic differently for users with department=engineering in their OIDC claims, or restrict access to certain destinations based on a user's role claim. To get started, configure custom OIDC claims on your identity provider and use the OIDC Claims selector in the Gateway policy builder. For more information, refer to Identity-based policies.