- **Faster by default:** Requests are cached with `maxAge` defaulting to 2 days, and sensible defaults such as `blockAds`, `skipTlsVerification`, and `removeBase64Images` are enabled.
- **New summary format:** You can now pass `"summary"` as a format to receive a concise summary of the page content directly.
- **Updated JSON extraction:** JSON extraction and change tracking now use an object format: `{ type: "json", prompt, schema }`. The old `"extract"` format has been renamed to `"json"`.
- **Enhanced screenshot options:** Use the object form: `{ type: "screenshot", fullPage, quality, viewport }`.
- **New search sources:** Search across `"news"` and `"images"` in addition to web results by setting the `sources` parameter.
- **Smart crawling with prompts:** Pass a natural-language `prompt` to crawl and the system derives paths and limits automatically. Use the new `crawl-params-preview` endpoint to inspect the derived options before starting a job.
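The options above can be combined in ordinary request bodies. A minimal sketch (payload construction only, no network call); field names follow the v2 examples elsewhere in this document, and the `maxAge` value assumes milliseconds:

```python
# Sketch of a v2 /scrape request body combining the new options.
scrape_body = {
    "url": "https://docs.firecrawl.dev/",
    "maxAge": 172800000,         # cache TTL (2 days, assuming milliseconds)
    "blockAds": True,            # enabled by default in v2
    "removeBase64Images": True,  # enabled by default in v2
    "formats": [
        "summary",                                                  # new string format
        {"type": "json", "prompt": "Extract the company mission."}, # renamed from "extract"
        {"type": "screenshot", "fullPage": True, "quality": 80},    # object form
    ],
}

# Sketch of a v2 /search request body using the new sources parameter.
search_body = {
    "query": "firecrawl",
    "sources": ["web", "news", "images"],
}
```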
Initialize the v2 client:

```js
const firecrawl = new Firecrawl({ apiKey: 'fc-YOUR-API-KEY' })
```

```python
firecrawl = Firecrawl(api_key='fc-YOUR-API-KEY')
```

Requests go to the https://api.firecrawl.dev/v2/ endpoints. When migrating:

- Use `"summary"` where needed
- Use `{ type: "json", prompt, schema }` for JSON extraction
- Use `startCrawl` + `getCrawlStatus` (or the `crawl` waiter)
- Use `startBatchScrape` + `getBatchScrapeStatus` (or the `batchScrape` waiter)
- Use `startExtract` + `getExtractStatus` (or the `extract` waiter)
- Use `prompt` with `crawl-params-preview`

### Scrape, Search, and Map
| v1 (FirecrawlApp) | v2 (Firecrawl) |
|---|---|
| `scrapeUrl(url, ...)` | `scrape(url, options?)` |
| `search(query, ...)` | `search(query, options?)` |
| `mapUrl(url, ...)` | `map(url, options?)` |
### Crawling

| v1 | v2 |
|---|---|
| `crawlUrl(url, ...)` | `crawl(url, options?)` (waiter) |
| `asyncCrawlUrl(url, ...)` | `startCrawl(url, options?)` |
| `checkCrawlStatus(id, ...)` | `getCrawlStatus(id)` |
| `cancelCrawl(id)` | `cancelCrawl(id)` |
| `checkCrawlErrors(id)` | `getCrawlErrors(id)` |
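The start/status split above implies a simple polling loop (this is what the waiter methods bundle for you). A sketch of that pattern, written against injected callables so it stays SDK-agnostic; the real methods would be `startCrawl`/`getCrawlStatus` (or their snake_case Python counterparts), and the exact status strings are assumptions:

```python
import time
from typing import Callable

def wait_for_job(start: Callable[[], str],
                 get_status: Callable[[str], dict],
                 poll_seconds: float = 2.0,
                 timeout: float = 600.0) -> dict:
    """Start a job, then poll its status until it reaches a terminal state.

    Mirrors the v2 start/get-status pattern for crawl, batch scrape,
    and extract jobs. Status names are illustrative.
    """
    job_id = start()
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status(job_id)
        if status.get("status") in ("completed", "failed", "cancelled"):
            return status
        time.sleep(poll_seconds)
    raise TimeoutError(f"job {job_id} did not finish within {timeout}s")
```

With a real client you would pass thin lambdas over the SDK calls, e.g. `start=lambda: client.start_crawl(url)` and `get_status=client.get_crawl_status` (adjusting for the actual return shapes).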
### Batch Scraping

| v1 | v2 |
|---|---|
| `batchScrapeUrls(urls, ...)` | `batchScrape(urls, opts?)` (waiter) |
| `asyncBatchScrapeUrls(urls, ...)` | `startBatchScrape(urls, opts?)` |
| `checkBatchScrapeStatus(id, ...)` | `getBatchScrapeStatus(id)` |
| `checkBatchScrapeErrors(id)` | `getBatchScrapeErrors(id)` |
### Extraction

| v1 | v2 |
|---|---|
| `extract(urls?, params?)` | `extract(args)` |
| `asyncExtract(urls, params?)` | `startExtract(args)` |
| `getExtractStatus(id)` | `getExtractStatus(id)` |
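Extraction moves from positional `(urls, params)` arguments to a single args object. A hypothetical shim sketching that fold (the `urls` key name is an assumption based on the v1 signature):

```python
def to_v2_extract_args(urls=None, params=None):
    """Fold v1-style extract(urls, params) arguments into a single
    v2-style args object. Key names here are illustrative."""
    args = dict(params or {})
    if urls is not None:
        args["urls"] = urls
    return args
```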
### Other / Removed

| v1 | v2 |
|---|---|
| `generateLLMsText(...)` | (not in v2 SDK) |
| `checkGenerateLLMsTextStatus(id)` | (not in v2 SDK) |
| `crawlUrlAndWatch(...)` | `watcher(jobId, ...)` |
| `batchScrapeUrlsAndWatch(...)` | `watcher(jobId, ...)` |
### Core Document Types

| v1 | v2 |
|---|---|
| `FirecrawlDocument` | `Document` |
| `FirecrawlDocumentMetadata` | `DocumentMetadata` |
### Scrape, Search, and Map Types

| v1 | v2 |
|---|---|
| `ScrapeParams` | `ScrapeOptions` |
| `ScrapeResponse` | `Document` |
| `SearchParams` | `SearchRequest` |
| `SearchResponse` | `SearchData` |
| `MapParams` | `MapOptions` |
| `MapResponse` | `MapData` |
### Crawl Types

| v1 | v2 |
|---|---|
| `CrawlParams` | `CrawlOptions` |
| `CrawlStatusResponse` | `CrawlJob` |
### Batch Operations

| v1 | v2 |
|---|---|
| `BatchScrapeStatusResponse` | `BatchScrapeJob` |
### Action Types

| v1 | v2 |
|---|---|
| `Action` | `ActionOption` |
### Error Types

| v1 | v2 |
|---|---|
| `FirecrawlError` | `SdkError` |
| `ErrorResponse` | `ErrorDetails` |
### Scrape, Search, and Map

| v1 | v2 |
|---|---|
| `scrape_url(...)` | `scrape(...)` |
| `search(...)` | `search(...)` |
| `map_url(...)` | `map(...)` |
### Crawling

| v1 | v2 |
|---|---|
| `crawl_url(...)` | `crawl(...)` (waiter) |
| `async_crawl_url(...)` | `start_crawl(...)` |
| `check_crawl_status(...)` | `get_crawl_status(...)` |
| `cancel_crawl(...)` | `cancel_crawl(...)` |
### Batch Scraping

| v1 | v2 |
|---|---|
| `batch_scrape_urls(...)` | `batch_scrape(...)` (waiter) |
| `async_batch_scrape_urls(...)` | `start_batch_scrape(...)` |
| `get_batch_scrape_status(...)` | `get_batch_scrape_status(...)` |
| `get_batch_scrape_errors(...)` | `get_batch_scrape_errors(...)` |
### Extraction

| v1 | v2 |
|---|---|
| `extract(...)` | `extract(...)` |
| `start_extract(...)` | `start_extract(...)` |
| `get_extract_status(...)` | `get_extract_status(...)` |
### Other / Removed

| v1 | v2 |
|---|---|
| `generate_llms_text(...)` | (not in v2 SDK) |
| `get_generate_llms_text_status(...)` | (not in v2 SDK) |
| `watch_crawl(...)` | `watcher(job_id, ...)` |
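The Python rename tables above can be collapsed into a small lookup, handy for auditing a codebase during migration. This helper is hypothetical (not part of either SDK) and covers only the mappings listed in the tables:

```python
# Hypothetical migration aid: v1 Python SDK method name -> v2 name.
# None marks methods with no v2 equivalent.
V1_TO_V2 = {
    "scrape_url": "scrape",
    "map_url": "map",
    "crawl_url": "crawl",
    "async_crawl_url": "start_crawl",
    "check_crawl_status": "get_crawl_status",
    "batch_scrape_urls": "batch_scrape",
    "async_batch_scrape_urls": "start_batch_scrape",
    "watch_crawl": "watcher",
    "generate_llms_text": None,
    "get_generate_llms_text_status": None,
}

def v2_name(v1_method: str) -> str:
    """Return the v2 method name, or raise if the method was removed.
    Unlisted names (e.g. search, extract) are unchanged."""
    new = V1_TO_V2.get(v1_method, v1_method)
    if new is None:
        raise NotImplementedError(f"{v1_method} has no v2 equivalent")
    return new
```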
AsyncFirecrawl mirrors the same methods (all awaitable).

String formats are `"markdown"`, `"html"`, `"rawHtml"`, `"links"`, and the new `"summary"`. Instead of `parsePDF`, use `parsers: [ { "type": "pdf" } | "pdf" ]`.

```bash
curl -X POST https://api.firecrawl.dev/v2/scrape \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer YOUR_API_KEY' \
  -d '{
    "url": "https://docs.firecrawl.dev/",
    "formats": [{
      "type": "json",
      "prompt": "Extract the company mission from the page."
    }]
  }'
```
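The same `json` format object also accepts a `schema`. A sketch of a payload adding a hypothetical JSON Schema for the extracted object (the schema itself is illustrative):

```python
# Sketch: v2 /scrape body using the json format with a schema.
json_format = {
    "type": "json",
    "prompt": "Extract the company mission from the page.",
    "schema": {  # hypothetical JSON Schema describing the expected output
        "type": "object",
        "properties": {"mission": {"type": "string"}},
        "required": ["mission"],
    },
}
scrape_body = {
    "url": "https://docs.firecrawl.dev/",
    "formats": [json_format],
}
```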
```bash
curl -X POST https://api.firecrawl.dev/v2/scrape \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer YOUR_API_KEY' \
  -d '{
    "url": "https://docs.firecrawl.dev/",
    "formats": [{
      "type": "screenshot",
      "fullPage": true,
      "quality": 80,
      "viewport": { "width": 1280, "height": 800 }
    }]
  }'
```
| v1 | v2 |
|---|---|
| `allowBackwardCrawling` | (removed) use `crawlEntireDomain` |
| `maxDepth` | (removed) use `maxDiscoveryDepth` |
| `ignoreSitemap` (bool) | `sitemap` (e.g., `"only"`, `"skip"`, or `"include"`) |
| (none) | `prompt` |
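A sketch of translating the renamed and removed v1 crawl options above into their v2 equivalents. The `ignoreSitemap`-to-`sitemap` mapping (`True` → `"skip"`, `False` → `"include"`) is an assumption inferred from the option names, not confirmed by this document:

```python
def migrate_crawl_options(v1_opts: dict) -> dict:
    """Translate v1 crawl options to v2 names per the table above.
    Only the listed mappings are handled; other keys pass through."""
    opts = dict(v1_opts)
    if "allowBackwardCrawling" in opts:
        opts["crawlEntireDomain"] = opts.pop("allowBackwardCrawling")
    if "maxDepth" in opts:
        opts["maxDiscoveryDepth"] = opts.pop("maxDepth")
    if "ignoreSitemap" in opts:
        # Assumed mapping: ignoring the sitemap == skipping it.
        opts["sitemap"] = "skip" if opts.pop("ignoreSitemap") else "include"
    return opts
```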
Example `crawl-params-preview` request:

```bash
curl -X POST https://api.firecrawl.dev/v2/crawl-params-preview \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer YOUR_API_KEY' \
  -d '{
    "url": "https://docs.firecrawl.dev",
    "prompt": "Extract docs and blog"
  }'
```
- Reduced `crawl:<id>:visited` size in Redis by 16x by @mogery in https://github.com/firecrawl/firecrawl/pull/1936
- Made `maxAge: 0` explicit in Index tests by @mogery in https://github.com/firecrawl/firecrawl/pull/1946

**Full Changelog**: https://github.com/firecrawl/firecrawl/compare/v1.15.0...v2.0.0