Workers AI and AI Gateway have received a series of dashboard improvements to help you get started faster and manage your AI workloads more easily.

Navigation and discoverability

AI now has its own top-level section in the Cloudflare dashboard sidebar, so you can find AI features without digging through menus.

Onboarding and getting started

Getting started with AI Gateway is now simpler. When you create your first gateway, we now show your gateway's OpenAI-compatible endpoint and step-by-step guidance to help you configure it. The Playground also includes helpful prompts, and usage pages have clear next steps if you have not made any requests yet. We've also combined the previously separate code example sections into one view with dropdown selectors for API type, provider, SDK, and authentication method, so you can now customize the exact code snippet you need from one place.

Dynamic Routing

The route builder is now more performant and responsive. You can now copy route names to your clipboard with a single click. Code examples use the Universal Endpoint format, making it easier to integrate routes into your application.
Observability and analytics
Small monetary values now display correctly in cost analytics charts, so you can accurately track spending at any scale.
Accessibility
- Improvements to keyboard navigation within the AI Gateway, specifically when exploring usage by provider.
- Improvements to sorting and filtering components on the Workers AI models page.
For more information, refer to the AI Gateway documentation.
Digital Experience Monitoring (DEX) provides visibility into WARP device connectivity and performance to any internal or external application. Now, all DEX logs are fully compatible with Cloudflare's Customer Metadata Boundary (CMB) setting for the EU (European Union), which ensures that DEX logs are not stored outside the EU when the option is configured. If a Cloudflare One customer using DEX enables CMB EU, they will not see any DEX data in the Cloudflare One dashboard. Customers can instead ingest DEX data via LogPush and build their own analytics and dashboards. If a customer enables CMB in their account, they will see the following message in the Digital Experience dashboard: "DEX data is unavailable because Customer Metadata Boundary configuration is on. Use Cloudflare LogPush to export DEX datasets."
The Server-Timing header now includes a new cfWorker metric that measures time spent executing Cloudflare Workers, including any subrequests performed by the Worker. This helps developers accurately identify whether high Time to First Byte (TTFB) is caused by Worker processing or slow upstream dependencies. Previously, Worker execution time was included in the edge metric, making it harder to identify true edge performance. The new cfWorker metric provides this visibility:
| Metric | Description |
| --- | --- |
| `edge` | Total time spent on the Cloudflare edge, including Worker execution |
| `origin` | Time spent fetching from the origin server |
| `cfWorker` | Time spent in Worker execution, including subrequests but excluding origin fetch time |

Example response:

```
Server-Timing: cdn-cache; desc=DYNAMIC, edge; dur=20, origin; dur=100, cfWorker; dur=7
```

In this example, the edge took 20 ms, the origin took 100 ms, and the Worker added just 7 ms of processing time.

Availability

The cfWorker metric is enabled by default if you have Real User Monitoring (RUM) enabled. Otherwise, you can enable it using Rules. This metric is particularly useful for:
- Performance debugging: Quickly determine if latency is caused by Worker code, external API calls within Workers, or slow origins.
- Optimization targeting: Identify which component of your request path needs optimization.
- Real User Monitoring (RUM): Access detailed timing breakdowns directly from response headers for client-side analytics.
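For client-side analytics, the header can be parsed into structured metrics. Below is a minimal sketch; the helper name is ours, not part of any Cloudflare API:

```javascript
// Parse a Server-Timing header value into an array of { name, dur, desc }
// objects, following the format of the example response above.
function parseServerTiming(headerValue) {
  return headerValue.split(",").map((entry) => {
    const parts = entry.trim().split(";").map((p) => p.trim());
    const metric = { name: parts[0] };
    for (const param of parts.slice(1)) {
      const [key, value] = param.split("=");
      if (key === "dur") metric.dur = Number(value);
      if (key === "desc") metric.desc = value;
    }
    return metric;
  });
}

const metrics = parseServerTiming(
  "cdn-cache; desc=DYNAMIC, edge; dur=20, origin; dur=100, cfWorker; dur=7",
);
```

In a browser, the same data is also exposed programmatically via `PerformanceResourceTiming.serverTiming` when the header is present.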
For more information about Server-Timing headers, refer to the W3C Server Timing specification.
Sandboxes and Containers now support running Docker for "Docker-in-Docker" setups. This is particularly useful when your end users or agents want to run a full sandboxed development environment. This allows you to:
- Develop containerized applications with your Sandbox
- Run isolated test environments for images
- Build container images as part of CI/CD workflows
- Deploy arbitrary images supplied at runtime within a container
For Sandbox SDK users, see the Docker-in-Docker guide for instructions on combining Docker with the Sandbox SDK. For general Containers usage, see the Containers FAQ.
We are updating naming related to some of our Networking products to better clarify their place in the Zero Trust and Secure Access Service Edge (SASE) journey. We are retiring some older brand names in favor of names that describe exactly what the products do within your network. We are doing this to help customers build better, clearer mental models for comprehensive SASE architecture delivered on Cloudflare.

What's changing
- Magic WAN → Cloudflare WAN
- Magic WAN IPsec → Cloudflare IPsec
- Magic WAN GRE → Cloudflare GRE
- Magic WAN Connector → Cloudflare One Appliance
- Magic Firewall → Cloudflare Network Firewall
- Magic Network Monitoring → Network Flow
- Magic Cloud Networking → Cloudflare One Multi-cloud Networking
No action is required by you — all functionality, existing configurations, and billing will remain exactly the same. For more information, visit the Cloudflare One documentation.
The latest release of the Agents SDK adds built-in retry utilities, per-connection protocol message control, and a fully rewritten @cloudflare/ai-chat with data parts, tool approval persistence, and zero breaking changes.
Retry utilities
A new this.retry() method lets you retry any async operation with exponential backoff and jitter. You can pass an optional shouldRetry predicate to bail early on non-retryable errors.
JavaScript

```js
class MyAgent extends Agent {
  async onRequest(request) {
    const data = await this.retry(() => callUnreliableService(), {
      maxAttempts: 4,
      shouldRetry: (err) => !(err instanceof PermanentError),
    });
    return Response.json(data);
  }
}
```

TypeScript

```ts
class MyAgent extends Agent {
  async onRequest(request: Request) {
    const data = await this.retry(() => callUnreliableService(), {
      maxAttempts: 4,
      shouldRetry: (err) => !(err instanceof PermanentError),
    });
    return Response.json(data);
  }
}
```
Retry options are also available per-task on queue(), schedule(), scheduleEvery(), and addMcpServer():
JavaScript

```js
// Per-task retry configuration, persisted in SQLite alongside the task
await this.schedule(
  Date.now() + 60_000,
  "sendReport",
  { userId: "abc" },
  {
    retry: { maxAttempts: 5 },
  },
);

// Class-level retry defaults
class MyAgent extends Agent {
  static options = {
    retry: { maxAttempts: 3 },
  };
}
```

TypeScript

```ts
// Per-task retry configuration, persisted in SQLite alongside the task
await this.schedule(Date.now() + 60_000, "sendReport", { userId: "abc" }, {
  retry: { maxAttempts: 5 },
});

// Class-level retry defaults
class MyAgent extends Agent {
  static options = {
    retry: { maxAttempts: 3 },
  };
}
```
Retry options are validated eagerly at enqueue/schedule time, and invalid values throw immediately. Internal retries have also been added for workflow operations (terminateWorkflow, pauseWorkflow, and others) with Durable Object-aware error detection.
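For intuition, the backoff-and-jitter behavior described above can be sketched as a standalone helper. This is an illustrative sketch with invented delay defaults, not the SDK's implementation:

```javascript
// Simplified sketch of retry with exponential backoff and full jitter.
// The option names mirror this.retry(); baseDelayMs is our own addition.
async function retryWithBackoff(
  fn,
  { maxAttempts = 3, baseDelayMs = 100, shouldRetry = () => true } = {},
) {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      // Give up on the last attempt, or immediately for non-retryable errors
      if (attempt >= maxAttempts || !shouldRetry(err)) throw err;
      // Full jitter: random delay in [0, baseDelayMs * 2^(attempt - 1))
      const delay = Math.random() * baseDelayMs * 2 ** (attempt - 1);
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

Jitter spreads out retries from many concurrent callers so a struggling upstream service is not hit by synchronized retry waves.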
Per-connection protocol message control
Agents automatically send JSON text frames (identity, state, MCP server lists) to every WebSocket connection. You can now suppress these per-connection for clients that cannot handle them — binary-only devices, MQTT clients, or lightweight embedded systems.
JavaScript

```js
class MyAgent extends Agent {
  shouldSendProtocolMessages(connection, ctx) {
    // Suppress protocol messages for MQTT clients
    const subprotocol = ctx.request.headers.get("Sec-WebSocket-Protocol");
    return subprotocol !== "mqtt";
  }
}
```

TypeScript

```ts
class MyAgent extends Agent {
  shouldSendProtocolMessages(connection: Connection, ctx: ConnectionContext) {
    // Suppress protocol messages for MQTT clients
    const subprotocol = ctx.request.headers.get("Sec-WebSocket-Protocol");
    return subprotocol !== "mqtt";
  }
}
```
Connections with protocol messages disabled still fully participate in RPC and regular messaging. Use isConnectionProtocolEnabled(connection) to check a connection's status at any time. The flag persists across Durable Object hibernation.
See Protocol messages for full documentation.
@cloudflare/ai-chat v0.1.0
The first stable release of @cloudflare/ai-chat ships alongside this release with a major refactor of AIChatAgent internals — new ResumableStream class, WebSocket ChatTransport, and simplified SSE parsing — with zero breaking changes. Existing code using AIChatAgent and useAgentChat works as-is.
Key new features:
- Data parts — Attach typed JSON blobs (data-*) to messages alongside text. Supports reconciliation (type+id updates in-place), append, and transient parts (ephemeral via onData callback). See Data parts.
- Tool approval persistence — The needsApproval approval UI now survives page refresh and DO hibernation. The streaming message is persisted to SQLite when a tool enters approval-requested state.
- maxPersistedMessages — Cap SQLite message storage with automatic oldest-message deletion.
- body option on useAgentChat — Send custom data with every request (static or dynamic).
- Incremental persistence — Hash-based cache to skip redundant SQL writes.
- Row size guard — Automatic two-pass compaction when messages approach the SQLite 2 MB limit.
- autoContinueAfterToolResult defaults to true — Client-side tool results and tool approvals now automatically trigger a server continuation, matching server-executed tool behavior. Set autoContinueAfterToolResult: false in useAgentChat to restore the previous behavior.
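The data-part reconciliation rule (a part with a matching type and id is updated in place, anything else is appended) can be sketched as a pure function. This illustrates the described semantics only and is not the package's code; any field beyond type and id is invented for the example:

```javascript
// Reconcile an incoming data part into an existing parts array:
// same type + same id updates in place, otherwise the part is appended.
function reconcileDataParts(parts, incoming) {
  const index = parts.findIndex(
    (p) =>
      incoming.id !== undefined &&
      p.type === incoming.type &&
      p.id === incoming.id,
  );
  if (index === -1) return [...parts, incoming]; // no match: append
  const next = parts.slice();
  next[index] = incoming; // type+id match: update in place
  return next;
}
```

This pattern lets a streaming server repeatedly emit the same logical part (for example, a progress indicator) without the client accumulating duplicates.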
Notable bug fixes:
- Resolved stream resumption race conditions
- Resolved an issue where the setMessages functional updater sent empty arrays
- Resolved an issue where client tool schemas were lost after DO hibernation
- Resolved InvalidPromptError after tool approval (approval.id was dropped)
- Resolved an issue where message metadata was not propagated on broadcast/resume paths
- Resolved an issue where clearAll() did not clear in-memory chunk buffers
- Resolved an issue where reasoning-delta silently dropped data when reasoning-start was missed during stream resumption
Synchronous queue and schedule getters

getQueue(), getQueues(), getSchedule(), dequeue(), dequeueAll(), and dequeueAllByCallback() were unnecessarily async despite only performing synchronous SQL operations. They now return values directly instead of wrapping them in Promises. This is backward compatible — existing code using await on these methods will continue to work.

Other improvements
- Fix TypeScript "excessively deep" error — A depth counter on CanSerialize and IsSerializableParam types bails out to true after 10 levels of recursion, preventing the "Type instantiation is excessively deep" error with deeply nested types like AI SDK CoreMessage[].
- POST SSE keepalive — The POST SSE handler now sends event: ping every 30 seconds to keep the connection alive, matching the existing GET SSE handler behavior. This prevents POST response streams from being silently dropped by proxies during long-running tool calls.
- Widened peer dependency ranges — Peer dependency ranges across packages have been widened to prevent cascading major bumps during 0.x minor releases. @cloudflare/ai-chat and @cloudflare/codemode are now marked as optional peer dependencies.
Upgrade

To update to the latest version:

```sh
npm i agents@latest @cloudflare/ai-chat@latest
```
A new Allow clientless access setting makes it easier to connect users without a device client to internal applications, without using public DNS. Previously, to provide clientless access to a private hostname or IP without a published application, you had to create a separate bookmark application pointing to a prefixed Clientless Web Isolation URL (for example, https://.cloudflareaccess.com/browser/https://10.0.0.1/). This bookmark was visible to all users in the App Launcher, regardless of whether they had access to the underlying application. Now, you can manage clientless access directly within your private self-hosted application. When Allow clientless access is turned on, users who pass your Access application policies will see a tile in their App Launcher pointing to the prefixed URL. Users must have remote browser permissions to open the link.
You can now assign Access policies to bookmark applications. This lets you control which users see a bookmark in the App Launcher based on identity, device posture, and other policy rules. Previously, bookmark applications were visible to all users in your organization. With policy support, you can now:
- Tailor the App Launcher to each user — Users only see the applications they have access to, reducing clutter and preventing accidental clicks on irrelevant resources.
- Restrict visibility of sensitive bookmarks — Limit who can view bookmarks to internal tools or partner resources based on group membership, identity provider, or device posture.
Bookmarks support all Access policy configurations except purpose justification, temporary authentication, and application isolation. If no policy is assigned, the bookmark remains visible to all users (maintaining backwards compatibility). For more information, refer to Add bookmarks.
Cloudflare has deprecated the Workers Quick Editor dev tools inspector and replaced it with a lightweight log viewer. This aligns our logging with wrangler tail and lets us build on the investment we have made in observability. Based on your feedback, the log viewer now supports logging object and array types (class instances are not supported), and you can easily clear the list of logs. Limitations are documented in the Workers Playground docs. If you do need to develop your Worker with a remote inspector, you can still do this using Wrangler locally. To clone a project from the Quick Editor to your computer for local development, use the wrangler init --from-dash command. For more information, refer to Wrangler commands.
When AI systems request pages from any website that uses Cloudflare and has Markdown for Agents enabled, they can express the preference for text/markdown in the request: our network will automatically and efficiently convert the HTML to markdown, when possible, on the fly. This release adds the following improvements:
- The origin response limit was raised from 1 MB to 2 MB (2,097,152 bytes).
- We no longer require the origin to send the content-length header.
- We now support content-encoded responses from the origin.
If you haven’t enabled automatic Markdown conversion yet, visit the AI Crawl Control section of the Cloudflare dashboard and enable Markdown for Agents. Refer to our developer documentation for more details.
This week's release introduces new detections for CVE-2025-68645 and CVE-2025-31125.

Key Findings
- CVE-2025-68645: A Local File Inclusion (LFI) vulnerability in the Webmail Classic UI of Zimbra Collaboration Suite (ZCS) 10.0 and 10.1 allows unauthenticated remote attackers to craft requests to the /h/rest endpoint, improperly influence internal dispatching, and include arbitrary files from the WebRoot directory.
- CVE-2025-31125: Vite, the JavaScript frontend tooling framework, exposes content of non-allowed files via ?inline&import when its development server is network-exposed, enabling unauthorized attackers to read arbitrary files and potentially leak sensitive information.
| Ruleset | Rule ID | Legacy Rule ID | Description | Previous Action | New Action | Comments |
| --- | --- | --- | --- | --- | --- | --- |
| Cloudflare Managed Ruleset | 695d76ff756844d384cab548833761f7 | N/A | Zimbra - Local File Inclusion - CVE:CVE-2025-68645 | Log | Block | This is a new detection. |
| Cloudflare Managed Ruleset | 38fff9f3deba46a2abc10a8f950ed8c8 | N/A | Vite - WASM Import Path Traversal - CVE:CVE-2025-31125 | Log | Block | This is a new detection. |
A new Workers Best Practices guide provides opinionated recommendations for building fast, reliable, observable, and secure Workers. The guide draws on production patterns, Cloudflare internal usage, and best practices observed from developers building on Workers. Key guidance includes:
- Keep your compatibility date current and enable nodejs_compat — Ensure you have access to the latest runtime features and Node.js built-in modules.
wrangler.jsonc

```jsonc
{
  "name": "my-worker",
  "main": "src/index.ts",
  // Set this to today's date
  "compatibility_date": "2026-04-03",
  "compatibility_flags": ["nodejs_compat"]
}
```

wrangler.toml

```toml
name = "my-worker"
main = "src/index.ts"
# Set this to today's date
compatibility_date = "2026-04-03"
compatibility_flags = [ "nodejs_compat" ]
```
- Generate binding types with wrangler types — Never hand-write your Env interface. Let Wrangler generate it from your actual configuration to catch mismatches at compile time.
- Stream request and response bodies — Avoid buffering large payloads in memory. Use TransformStream and pipeTo to stay within the 128 MB memory limit and improve time-to-first-byte.
- Use bindings, not REST APIs — Bindings to KV, R2, D1, Queues, and other Cloudflare services are direct, in-process references with no network hop and no authentication overhead.
- Use Queues and Workflows for background work — Move long-running or retriable tasks out of the critical request path. Use Queues for simple fan-out and buffering, and Workflows for multi-step durable processes.
- Enable Workers Logs and Traces — Configure observability before deploying to production so you have data when you need to debug.
- Avoid global mutable state — Workers reuse isolates across requests. Storing request-scoped data in module-level variables causes cross-request data leaks.
- Always await or waitUntil your Promises — Floating promises cause silent bugs and dropped work.
- Use Web Crypto for secure token generation — Never use Math.random() for security-sensitive operations.
To learn more, refer to Workers Best Practices.
Fine-grained permissions for Access policies and Access service tokens are available. These new resource-scoped roles expand the existing RBAC model, enabling administrators to grant permissions scoped to individual resources.

New roles
- Cloudflare Access policy admin: Can edit a specific Access policy in an account.
- Cloudflare Access service token admin: Can edit a specific Access service token in an account.
These roles complement the existing resource-scoped roles for Access applications, identity providers, and infrastructure targets. For more information:
- Resource-scoped roles
- Role scopes
Note: Resource-scoped roles are currently in beta.
Disclaimer: Please note that v5.0.0-beta.1 is in Beta and we are still testing it for stability.
Full Changelog: v4.3.1...v5.0.0-beta.1

In this release, you'll see a large number of breaking changes. This is primarily due to a change in the OpenAPI definitions our libraries are based on, and to updates in the codegen we rely on to read those definitions and produce our SDK libraries. As the codegen is always evolving and improving, so are our code bases. There may be changes that are not captured in this changelog; feel free to open an issue to report any inaccuracies, and we will make sure they get into the changelog before the v5.0.0 release.

Most of the breaking changes below are caused by improvements to the accuracy of the base OpenAPI schemas, which sometimes translate into breaking changes in downstream clients that depend on those schemas. Please read through the list of changes below and the migration guide before moving to this version - this will help you understand any issues it may cause in your environments.

Breaking Changes

The following resources have breaking changes. See the v5 Migration Guide for detailed migration instructions.
- abusereports
- acm.totaltls
- apigateway.configurations
- cloudforceone.threatevents
- d1.database
- intel.indicatorfeeds
- logpush.edge
- origintlsclientauth.hostnames
- queues.consumers
- radar.bgp
- rulesets.rules
- schemavalidation.schemas
- snippets
- zerotrust.dlp
- zerotrust.networks
Features

New API Resources
- abusereports - Abuse report management
- abusereports.mitigations - Abuse report mitigation actions
- ai.tomarkdown - AI-powered markdown conversion
- aigateway.dynamicrouting - AI Gateway dynamic routing configuration
- aigateway.providerconfigs - AI Gateway provider configurations
- aisearch - AI-powered search functionality
- aisearch.instances - AI Search instance management
- aisearch.tokens - AI Search authentication tokens
- alerting.silences - Alert silence management
- brandprotection.logomatches - Brand protection logo match detection
- brandprotection.logos - Brand protection logo management
- brandprotection.matches - Brand protection match results
- brandprotection.queries - Brand protection query management
- cloudforceone.binarystorage - CloudForce One binary storage
- connectivity.directory - Connectivity directory services
- d1.database - D1 database management
- diagnostics.endpointhealthchecks - Endpoint health check diagnostics
- fraud - Fraud detection and prevention
- iam.sso - IAM Single Sign-On configuration
- loadbalancers.monitorgroups - Load balancer monitor groups
- organizations - Organization management
- organizations.organizationprofile - Organization profile settings
- origintlsclientauth.hostnamecertificates - Origin TLS client auth hostname certificates
- origintlsclientauth.hostnames - Origin TLS client auth hostnames
- origintlsclientauth.zonecertificates - Origin TLS client auth zone certificates
- pipelines - Data pipeline management
- pipelines.sinks - Pipeline sink configurations
- pipelines.streams - Pipeline stream configurations
- queues.subscriptions - Queue subscription management
- r2datacatalog - R2 Data Catalog integration
- r2datacatalog.credentials - R2 Data Catalog credentials
- r2datacatalog.maintenanceconfigs - R2 Data Catalog maintenance configurations
- r2datacatalog.namespaces - R2 Data Catalog namespaces
- radar.bots - Radar bot analytics
- radar.ct - Radar certificate transparency data
- radar.geolocations - Radar geolocation data
- realtimekit.activesession - Real-time Kit active session management
- realtimekit.analytics - Real-time Kit analytics
- realtimekit.apps - Real-time Kit application management
- realtimekit.livestreams - Real-time Kit live streaming
- realtimekit.meetings - Real-time Kit meeting management
- realtimekit.presets - Real-time Kit preset configurations
- realtimekit.recordings - Real-time Kit recording management
- realtimekit.sessions - Real-time Kit session management
- realtimekit.webhooks - Real-time Kit webhook configurations
- tokenvalidation.configuration - Token validation configuration
- tokenvalidation.rules - Token validation rules
- workers.beta - Workers beta features
New Endpoints (Existing Resources)

acm.totaltls
edit() update()
cloudforceone.threatevents
list()
contentscanning
create() get() update()
dns.records
scan_list() scan_review() scan_trigger()
intel.indicatorfeeds
create() delete() list()
leakedcredentialchecks.detections
get()
queues.consumers
list()
radar.ai
summary() timeseries() timeseries_groups()
radar.bgp
changes() snapshot()
workers.subdomains
delete()
zerotrust.networks
create() delete() edit() get() list()
General Fixes and Improvements

Type System & Compatibility
- Type inference improvements: Allow Pyright to properly infer TypedDict types within SequenceNotStr
- Type completeness: Add missing types to method arguments and response models
- Pydantic compatibility: Ensure compatibility with Pydantic versions prior to 2.8.0 when using additional fields
Request/Response Handling
- Multipart form data: Correctly handle sending multipart/form-data requests with JSON data
- Header handling: Do not send headers with default values set to omit
- GET request headers: Don't send Content-Type header on GET requests
- Response body model accuracy: Broad improvements to the correctness of models
Parsing & Data Processing
- Discriminated unions: Correctly handle nested discriminated unions in response parsing
- Extra field types: Parse extra field types correctly
- Empty metadata: Ignore empty metadata fields during parsing
- Singularization rules: Update resource name singularization rules for better consistency
We're excited to announce GLM-4.7-Flash on Workers AI, a fast and efficient text generation model optimized for multilingual dialogue and instruction-following tasks, along with the brand-new @cloudflare/tanstack-ai package and workers-ai-provider v3.1.1. You can now run AI agents entirely on Cloudflare. With GLM-4.7-Flash's multi-turn tool calling support, plus full compatibility with TanStack AI and the Vercel AI SDK, you have everything you need to build agentic applications that run completely at the edge.

GLM-4.7-Flash — Multilingual Text Generation Model

@cf/zai-org/glm-4.7-flash is a multilingual model with a 131,072 token context window, making it ideal for long-form content generation, complex reasoning tasks, and multilingual applications. Key Features and Use Cases:
- Multi-turn Tool Calling for Agents: Build AI agents that can call functions and tools across multiple conversation turns
- Multilingual Support: Built to handle content generation in multiple languages effectively
- Large Context Window: 131,072 tokens for long-form writing, complex reasoning, and processing long documents
- Fast Inference: Optimized for low-latency responses in chatbots and virtual assistants
- Instruction Following: Excellent at following complex instructions for code generation and structured tasks
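To illustrate the multi-turn tool calling flow, here is a hypothetical request body in the OpenAI-style chat format: the model first emits a tool call, the client replies with a tool result, and the conversation is sent back for continuation. The get_weather tool and the conversation are invented for this example; refer to the model page for the exact accepted schema:

```javascript
// Hypothetical second-turn request body for tool calling with GLM-4.7-Flash.
// The tool definition and messages are made up for illustration.
const requestBody = {
  model: "@cf/zai-org/glm-4.7-flash",
  messages: [
    { role: "user", content: "What's the weather in Lisbon?" },
    {
      // The model's previous turn: it asked to call the tool
      role: "assistant",
      tool_calls: [
        {
          id: "call_1",
          type: "function",
          function: { name: "get_weather", arguments: '{"city":"Lisbon"}' },
        },
      ],
    },
    // The client's tool result, keyed by the tool call id
    { role: "tool", tool_call_id: "call_1", content: '{"temp_c":21}' },
  ],
  tools: [
    {
      type: "function",
      function: {
        name: "get_weather",
        description: "Look up current weather for a city",
        parameters: {
          type: "object",
          properties: { city: { type: "string" } },
          required: ["city"],
        },
      },
    },
  ],
};

// The body must round-trip through JSON for a fetch() POST
const serialized = JSON.stringify(requestBody);
```

The model can then answer in natural language using the tool result, or request another tool call, continuing across turns.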
Use GLM-4.7-Flash through the Workers AI binding (env.AI.run()), the REST API at /run or /v1/chat/completions, AI Gateway, or via workers-ai-provider for the Vercel AI SDK. Pricing is available on the model page or pricing page.

@cloudflare/tanstack-ai v0.1.1 — TanStack AI adapters for Workers AI and AI Gateway

We've released @cloudflare/tanstack-ai, a new package that brings Workers AI and AI Gateway support to TanStack AI. This provides a framework-agnostic alternative for developers who prefer TanStack's approach to building AI applications. Workers AI adapters support four configuration modes — plain binding (env.AI), plain REST, AI Gateway binding (env.AI.gateway(id)), and AI Gateway REST — across all capabilities:
- Chat (createWorkersAiChat) — Streaming chat completions with tool calling, structured output, and reasoning text streaming.
- Image generation (createWorkersAiImage) — Text-to-image models.
- Transcription (createWorkersAiTranscription) — Speech-to-text.
- Text-to-speech (createWorkersAiTts) — Audio generation.
- Summarization (createWorkersAiSummarize) — Text summarization.
AI Gateway adapters route requests from third-party providers — OpenAI, Anthropic, Gemini, Grok, and OpenRouter — through Cloudflare AI Gateway for caching, rate limiting, and unified billing. To get started:

```sh
npm install @cloudflare/tanstack-ai @tanstack/ai
```

workers-ai-provider v3.1.1 — transcription, speech, reranking, and reliability

The Workers AI provider for the Vercel AI SDK now supports three new capabilities beyond chat and image generation:
- Transcription (provider.transcription(model)) — Speech-to-text with automatic handling of model-specific input formats across binding and REST paths.
- Text-to-speech (provider.speech(model)) — Audio generation with support for voice and speed options.
- Reranking (provider.reranking(model)) — Document reranking for RAG pipelines and search result ordering.
```ts
import { createWorkersAI } from "workers-ai-provider";
import {
  experimental_transcribe,
  experimental_generateSpeech,
  rerank,
} from "ai";

const workersai = createWorkersAI({ binding: env.AI });

const transcript = await experimental_transcribe({
  model: workersai.transcription("@cf/openai/whisper-large-v3-turbo"),
  audio: audioData,
  mediaType: "audio/wav",
});

const speech = await experimental_generateSpeech({
  model: workersai.speech("@cf/deepgram/aura-1"),
  text: "Hello world",
  voice: "asteria",
});

const ranked = await rerank({
  model: workersai.reranking("@cf/baai/bge-reranker-base"),
  query: "What is machine learning?",
  documents: ["ML is a branch of AI.", "The weather is sunny."],
});
```

This release also includes a comprehensive reliability overhaul (v3.0.5):
- Fixed streaming — Responses now stream token-by-token instead of buffering all chunks, using a proper TransformStream pipeline with backpressure.
- Fixed tool calling — Resolved issues with tool call ID sanitization, conversation history preservation, and a heuristic that silently fell back to non-streaming mode when tools were defined.
- Premature stream termination detection — Streams that end unexpectedly now report finishReason: "error" instead of silently reporting "stop".
- AI Search support — Added createAISearch as the canonical export (renamed from AutoRAG). createAutoRAG still works with a deprecation warning.
To upgrade:

```sh
npm install workers-ai-provider@latest ai
```

Resources
- @cloudflare/tanstack-ai on npm
- workers-ai-provider on npm
- GitHub repository
Workers VPC now supports Cloudflare Origin CA certificates when connecting to your private services over HTTPS. Previously, Workers VPC only trusted certificates issued by publicly trusted certificate authorities (for example, Let's Encrypt, DigiCert). With this change, you can use free Cloudflare Origin CA certificates on your origin servers within private networks and connect to them from Workers VPC using the https scheme. This is useful for encrypting traffic between the tunnel and your service without needing to provision certificates from a public CA. For more information, refer to Supported TLS certificates.
Cloudflare WAN now displays your Anycast IP addresses directly in the dashboard when you configure IPsec or GRE tunnels. Previously, customers received their Anycast IPs during onboarding or had to retrieve them with an API call. The dashboard now pre-loads these addresses, reducing setup friction and preventing configuration errors. No action is required. All Cloudflare WAN customers can see their Anycast IPs in the tunnel configuration form automatically. For more information, refer to Configure tunnel endpoints.
We have significantly upgraded our Logo Matching capabilities within Brand Protection. While matching was previously limited to near-exact (approximately 100%) matches, users can now detect a wider range of brand assets through a redesigned matching model and UI.

What's new
- Configurable match thresholds: Users can set a minimum match score (starting at 75%) when creating a logo query to capture subtle variations or high-quality impersonations.
- Visual match scores: Users can see the exact percentage of the match directly in the results table, highlighted with color-coded lozenges to indicate severity.
- Direct logo previews: Available in the Cloudflare dashboard — similar to string matches — to verify infringements at a glance.
Key benefits
- Expose sophisticated impersonators who use slightly altered logos to bypass basic detection filters.
- Triage the most relevant threats immediately using visual indicators, reducing the time spent manually reviewing matches.
Ready to protect your visual identity? Learn more in our Brand Protection documentation.
Cloudflare's network now supports real-time content conversion at the source for enabled zones, using content negotiation headers. When AI systems request pages from any website that uses Cloudflare and has Markdown for Agents enabled, they can express the preference for text/markdown in the request: our network will automatically and efficiently convert the HTML to markdown, when possible, on the fly.

Here is a curl example with the Accept negotiation header requesting this page from our developer documentation:

```sh
curl https://developers.cloudflare.com/fundamentals/reference/markdown-for-agents/ \
  -H "Accept: text/markdown"
```

The response to this request is now formatted in markdown:

```
HTTP/2 200
date: Wed, 11 Feb 2026 11:44:48 GMT
content-type: text/markdown; charset=utf-8
content-length: 2899
vary: accept
x-markdown-tokens: 725
content-signal: ai-train=yes, search=yes, ai-input=yes

---
title: Markdown for Agents · Cloudflare Agents docs
---

Markdown has quickly become the lingua franca for agents and AI systems
as a whole. The format's explicit structure makes it ideal for AI processing,
ultimately resulting in better results while minimizing token waste.
...
```

Refer to our developer documentation and our blog announcement for more details.
Radar now includes content type insights for AI bot and crawler traffic. The new content_type dimension and filter show the distribution of content types returned to AI crawlers, grouped by MIME type category. The content type dimension and filter are available via the following API endpoints:
- /ai/bots/summary/content_type
- /ai/bots/timeseries_groups/content_type
Content type categories:
- HTML - Web pages (text/html)
- Images - All image formats (image/*)
- JSON - JSON data and API responses (application/json, +json)
- JavaScript - Scripts (application/javascript, text/javascript)
- CSS - Stylesheets (text/css)
- Plain Text - Unformatted text (text/plain)
- Fonts - Web fonts (font/*, application/font-*)
- XML - XML documents and feeds (text/xml, application/xml, application/rss+xml, application/atom+xml)
- YAML - Configuration files (text/yaml, application/yaml)
- Video - Video content and streaming (video/*, application/ogg, mpegurl)
- Audio - Audio content (audio/*)
- Markdown - Markdown documents (text/markdown)
- Documents - PDFs, Office documents, ePub, CSV (application/pdf, application/msword, text/csv)
- Binary - Executables, archives, WebAssembly (application/octet-stream, application/zip, application/wasm)
- Serialization - Binary API formats (application/protobuf, application/grpc, application/msgpack)
- Other - All other content types
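For intuition, the grouping can be sketched as a small classifier. This is a partial, illustrative mapping covering only a few of the categories; Radar's actual categorization logic may differ:

```javascript
// Map a MIME type to one of the Radar content type category names listed
// above. Partial and illustrative only; not Radar's internal implementation.
function categorizeContentType(mime) {
  const type = mime.toLowerCase().split(";")[0].trim(); // drop parameters
  if (type === "text/html") return "HTML";
  if (type.startsWith("image/")) return "Images";
  if (type === "application/json" || type.endsWith("+json")) return "JSON";
  if (type === "application/javascript" || type === "text/javascript") return "JavaScript";
  if (type === "text/css") return "CSS";
  if (type === "text/markdown") return "Markdown";
  if (type.startsWith("audio/")) return "Audio";
  if (type.startsWith("video/")) return "Video";
  return "Other";
}
```

Note that parameters such as charset are stripped before matching, so `text/html; charset=utf-8` still maps to the HTML category.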
Additionally, individual bot information pages now display content type distribution for AI crawlers that exist in both the Verified Bots and AI Bots datasets. Check out the AI Insights page to explore the data.