Skill Index

cli/

firecrawl-crawl

OK · verified[skill]

Bulk extract content from an entire website or site section. Use this skill when the user wants to crawl a site, extract all pages from a docs section, bulk-scrape multiple pages following links, or says "crawl", "get all the pages", "extract everything under /docs", "bulk extract", or needs content from many pages on the same site. Handles depth limits, path filtering, and concurrent extraction.

$ /plugin install cli

details

firecrawl crawl

Bulk extract content from a website. Crawls pages by following links, up to a configurable depth and page limit.

When to use

  • You need content from many pages on a site (e.g., all /docs/)
  • You want to extract an entire site section
  • Step 4 in the workflow escalation pattern: search → scrape → map → crawl → interact

Quick start

# Crawl a docs section
firecrawl crawl "<url>" --include-paths /docs --limit 50 --wait -o .firecrawl/crawl.json

# Full crawl with depth limit
firecrawl crawl "<url>" --max-depth 3 --wait --progress -o .firecrawl/crawl.json

# Check status of a running crawl
firecrawl crawl <job-id>

Options

Option                     Description
--wait                     Wait for the crawl to complete before returning
--progress                 Show progress while waiting
--limit <n>                Maximum number of pages to crawl
--max-depth <n>            Maximum link depth to follow
--include-paths <paths>    Only crawl URLs matching these paths
--exclude-paths <paths>    Skip URLs matching these paths
--delay <ms>               Delay between requests, in milliseconds
--max-concurrency <n>      Maximum number of parallel crawl workers
--pretty                   Pretty-print JSON output
-o, --output <path>        Output file path

Tips

  • Always use --wait when you need the results immediately. Without it, crawl returns a job ID for async polling.
  • Use --include-paths to scope the crawl — don't crawl an entire site when you only need one section.
  • Crawl consumes credits per page. Check firecrawl credit-usage before large crawls.
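After a crawl finishes with -o, the results land in a JSON file (e.g. .firecrawl/crawl.json) that you can post-process. A minimal sketch of that step is below; the exact output layout is an assumption here (a list of page objects, each with a "url" and extracted "markdown" field), so check a real crawl.json from your version of the CLI and adjust the keys accordingly.

```python
import json
import tempfile
from pathlib import Path

# Assumed output shape (hypothetical, for illustration): a list of
# page objects, each carrying the page URL and its extracted markdown.
sample = [
    {"url": "https://example.com/docs/intro", "markdown": "# Intro\n..."},
    {"url": "https://example.com/docs/api", "markdown": "# API\n..."},
]

# Stand-in for .firecrawl/crawl.json written by `firecrawl crawl ... -o`.
crawl_json = Path(tempfile.mkdtemp()) / "crawl.json"
crawl_json.write_text(json.dumps(sample))

# Load the crawl results and print a one-line summary per page.
pages = json.loads(crawl_json.read_text())
for page in pages:
    print(page["url"], len(page["markdown"]))
```

In practice you would point crawl_json at the real output path from -o instead of writing sample data.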

See also

technical

github
firecrawl/cli
stars
321
license
unspecified
contributors
8
last commit
2026-04-16T16:19:41Z
file
skills/firecrawl-crawl/SKILL.md
