Firecrawl v2.8.0 is here!
Firecrawl v2.8.0 brings major improvements to agent workflows, developer tooling, and self-hosted deployments across the API and SDKs, including our new Firecrawl Skill.
- Parallel Agents for running thousands of `/agent` queries simultaneously, powered by our new Spark 1 Fast model.
- Firecrawl CLI with full support for scrape, search, crawl, and map commands.
- Firecrawl Skill for enabling AI agents (Claude Code, Codex, OpenCode) to use Firecrawl autonomously.
- Three new models powering `/agent`: Spark 1 Fast for instant retrieval (currently only available in the Playground), Spark 1 Mini for everyday extraction tasks, and Spark 1 Pro for complex research requiring maximum accuracy.
- Agent enhancements including webhooks, model selection, and new MCP Server tools.
- Platform-wide performance improvements including faster search execution and optimized Redis calls.
- SDK improvements including Zod v4 compatibility.
And much more. Check it out below!
New Features
- **Parallel Agents**: Execute thousands of `/agent` queries in parallel with automatic failure handling and intelligent waterfall execution. Powered by Spark 1 Fast for instant retrieval, automatically upgrading to Spark 1 Mini for complex queries requiring full research.
- **Firecrawl CLI**: New command-line interface for Firecrawl with full support for scrape, search, crawl, and map commands. Install with `npm install -g firecrawl-cli`.
- **Firecrawl Skill**: Enables agents like Claude Code, Codex, and OpenCode to use Firecrawl for web scraping and data extraction, installable via `npx skills add firecrawl/cli`.
- **Spark Model Family**: Three new models powering `/agent`: Spark 1 Fast for instant retrieval (currently only available in the Playground), Spark 1 Mini (default) for everyday extraction tasks at 60% lower cost, and Spark 1 Pro for complex multi-domain research requiring maximum accuracy. Spark 1 Pro achieves ~50% recall and Spark 1 Mini ~40% recall, both significantly outperforming tools that cost 4-7x more per task.
- **Firecrawl MCP Server Agent Tools**: New `firecrawl_agent` and `firecrawl_agent_status` tools for autonomous web data gathering via MCP-enabled agents.
- **Agent Webhooks**: The agent endpoint now supports webhooks for real-time notifications on job completion and progress.
- **Agent Model Selection**: The agent endpoint now accepts a `model` parameter and includes model info in status responses.
- **Multi-Arch Docker Images**: Self-hosted deployments now support the linux/arm64 architecture in addition to amd64.
- **Sitemap-Only Crawl Mode**: New crawl option to use sitemap URLs exclusively without following links.
- **`ignoreCache` Map Parameter**: New option to bypass cached results when mapping URLs.
- **Custom Headers for `/map`**: The map endpoint now supports custom request headers.
- **Background Image Extraction**: The scraper now extracts background images from CSS styles.
- **Improved Error Messages**: All user-facing error messages now include detailed explanations to help diagnose issues.
API Improvements
- Search without concurrency limits — scrapes in search now execute directly without queue overhead.
- Return `400` for unsupported actions, with clear errors when requested actions aren't supported by available engines.
- Job ID now included in search metadata for easier tracking.
- Metadata responses now include detected timezone.
- Backfill metadata title from `og:title` or `twitter:title` when missing.
- Preserve the `gid` parameter when rewriting Google Sheets URLs.
- Fixed v2 path in batch scrape status pagination.
- Validate team ownership when appending to existing crawls.
- Screenshots with custom viewport or quality settings now bypass cache.
- Optimized Redis calls across endpoints.
- Reduced excessive `robots.txt` fetching and parsing.
- Minimum request timeout parameter now configurable.
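The `og:title`/`twitter:title` backfill above can be illustrated with a small stand-alone parser. This is a simplified sketch, not Firecrawl's actual implementation:

```python
from html.parser import HTMLParser

class MetaTitleParser(HTMLParser):
    """Collects <title> text plus og:title / twitter:title meta tags."""
    def __init__(self):
        super().__init__()
        self.title = ""
        self.meta = {}
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "meta":
            key = a.get("property") or a.get("name")
            if key in ("og:title", "twitter:title"):
                self.meta[key] = a.get("content", "")

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

def extract_title(html: str) -> str:
    # Prefer <title>; backfill from og:title, then twitter:title.
    p = MetaTitleParser()
    p.feed(html)
    return (p.title.strip()
            or p.meta.get("og:title")
            or p.meta.get("twitter:title")
            or "")

page = '<head><meta property="og:title" content="Hello"><title></title></head>'
```

With an empty `<title>`, `extract_title(page)` falls back to the Open Graph title.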
SDK Improvements
JavaScript SDK
- Zod v4 Compatibility — schema conversion now works with Zod v4 with improved error detection.
- Watcher Exports — `Watcher` and `WatcherOptions` now exported from the SDK entrypoint.
- Agent Webhook Support — new webhook options for agent calls.
- Error Retry Polling — SDK retries polling after transient errors.
- Job ID in Exceptions — error exceptions now include `jobId` for debugging.
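The retry-after-transient-errors polling behavior can be sketched as follows. The function and error names are hypothetical; the real SDKs poll the Firecrawl job-status endpoints:

```python
import time

TRANSIENT = ("timeout", "connection reset")

def poll_until_done(check_status, max_retries=3, delay=0.0):
    """Polls check_status() until it returns a terminal status.

    Transient errors are retried up to max_retries times instead of
    aborting the poll loop (illustrative sketch, not the SDK's code).
    """
    retries = 0
    while True:
        try:
            status = check_status()
        except RuntimeError as exc:
            if str(exc) in TRANSIENT and retries < max_retries:
                retries += 1
                time.sleep(delay)
                continue
            raise
        if status in ("completed", "failed"):
            return status

# Stub status source: one transient error, then progress, then completion.
responses = iter([RuntimeError("timeout"), "processing", "completed"])

def fake_status():
    item = next(responses)
    if isinstance(item, Exception):
        raise item
    return item

result = poll_until_done(fake_status)
```

The key design point is that a transient network hiccup mid-poll no longer surfaces as a failed job to the caller.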
Python SDK
- Manual pagination helpers for iterating through results.
- Agent webhook support added to agent client.
- Agent endpoint now accepts model selection parameter.
- Metadata now includes concurrency limit information.
- Fixed `max_pages` handling in crawl requests.
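Cursor-based manual pagination, as in the helpers above, follows a simple generator pattern. `fetch_page` and its `(items, next_cursor)` return shape are illustrative, not the Python SDK's real helper signatures:

```python
def paginate(fetch_page):
    """Yields items across pages until no next cursor remains."""
    cursor = None
    while True:
        items, cursor = fetch_page(cursor)
        yield from items
        if cursor is None:
            break

# Stub backend with two pages of results.
PAGES = {None: ([1, 2], "p2"), "p2": ([3], None)}

def fake_fetch(cursor):
    return PAGES[cursor]

all_items = list(paginate(fake_fetch))
```

Because `paginate` is a generator, callers can stop early without fetching the remaining pages.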
Dashboard Improvements
- Dark mode is now supported.
- On the usage page, you can now view credit usage broken down by day.
- On the activity logs page, you can now filter by the API key that was used.
- The "images" output format is now supported in the Playground.
- All admins can now manage their team's subscriptions.
Quality & Performance
- Skip markdown conversion checks for large HTML documents.
- Export Google Docs as HTML instead of PDF for improved performance.
- Improved branding format with better logo detection and error messages for PDFs and documents.
- Improved `lopdf` metadata loading performance.
- Updated the `html-to-markdown` module with multiple bug fixes.
- Increased markdown service body limit and added request ID logging.
- Better Sentry filtering for cancelled jobs and engine errors.
- Fixed extract race conditions and RabbitMQ poison pill handling.
- Centralized Firecrawl configuration across the codebase.
- Multiple security vulnerability fixes, including CVE-2025-59466 and lodash prototype pollution.
Self-Hosted Improvements
- CLI custom API URL support via `firecrawl --api-url http://localhost:3002` for local instances.
- ARM64 Docker support via multi-arch images for Apple Silicon and ARM servers.
- Fixed docker-compose database credentials out of the box.
- Fixed Playwright service startup caused by Chromium path issues.
- Updated Node.js to track major version 22 instead of a pinned minor version.
- Added RabbitMQ health check endpoint.
- Fixed PostgreSQL port exposure in docker-compose.
New Contributors
- @gemyago
- @loganaden
- @pcgeek86
- @dmlarionov
Full Changelog: https://github.com/firecrawl/firecrawl/compare/v2.7.0...v2.8.0