Web crawler that scans domains for endpoints, secrets, API keys, and file extensions
Cariddi is a web crawler and security scanning tool that takes lists of domains as input and crawls the discovered URLs for endpoints, secrets, API keys, tokens, and files with specific extensions. Targets are read from standard input, either as a single URL or as a list of domains piped from a file, after which the tool performs a comprehensive crawl to flag potentially sensitive information.
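A minimal invocation might look like the sketch below. The target hostnames are placeholders, and the `command -v` guard keeps the script a harmless no-op on machines where cariddi is not installed:

```shell
# Build a small target list (hostnames here are placeholders).
printf 'example.com\ntestphp.vulnweb.com\n' > targets.txt

# Only invoke cariddi if it is actually on PATH.
if command -v cariddi >/dev/null 2>&1; then
  # Single target piped via stdin
  echo "example.com" | cariddi
  # A list of domains from a file
  cat targets.txt | cariddi
fi
```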
The crawler offers multiple scan modes including intensive subdomain crawling, endpoint hunting, secret detection using regex patterns, error page discovery, and file extension analysis with seven levels of sensitivity. It supports custom wordlists for endpoints and secrets, allowing security researchers to tailor scans to specific targets. The tool can ignore specified file extensions and URL patterns to focus scanning efforts.
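The scan modes above map onto command-line flags. The flag names used here (`-e`, `-s`, `-err`, `-ext`, `-intensive`, `-ef`/`-sf` for custom wordlists, `-i` for ignored patterns) are recalled from the tool's help output and should be verified with `cariddi -h`; the wordlist contents are illustrative placeholders:

```shell
# Placeholder wordlists; real ones would hold endpoint paths and secret patterns.
printf 'admin\nbackup\n' > endpoints.txt
printf 'aws_access_key\n' > secrets.txt

if command -v cariddi >/dev/null 2>&1; then
  # Hunt endpoints (-e) and secrets (-s) at the most sensitive extension level
  echo "example.com" | cariddi -e -s -ext 7
  # Intensive subdomain crawling plus error-page discovery
  echo "example.com" | cariddi -intensive -err
  # Custom wordlists, ignoring noisy patterns
  echo "example.com" | cariddi -ef endpoints.txt -sf secrets.txt -i "logout,png,svg"
fi
```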
Cariddi includes extensive configuration options such as proxy support (HTTP and SOCKS5), custom headers, user agent rotation, request delays, and concurrency control. Output formats include plain text, JSON, HTML reports, and HTTP response storage for further analysis. The tool integrates with Burp Suite for intercepting requests and supports caching to improve performance on repeated scans.
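A configuration-heavy run under those options might be sketched as follows. Flag names (`-proxy`, `-rua`, `-d`, `-c`, `-headersfile`, `-json`, `-oh`, `-ot`, `-sr`, `-cache`) are from memory of the tool's help and should be checked with `cariddi -h`; the proxy address assumes a local Burp Suite listener on its default port:

```shell
# Extra request headers in a file (contents are a placeholder).
printf 'X-Scanner: cariddi-demo\n' > headers.txt

if command -v cariddi >/dev/null 2>&1; then
  # Route through a local proxy, rotate user agents (-rua),
  # wait 2 seconds between requests (-d), 20 concurrent workers (-c)
  echo "example.com" | cariddi -proxy http://127.0.0.1:8080 -rua -d 2 -c 20 -headersfile headers.txt
  # JSON on stdout, HTML and TXT reports, stored HTTP responses, cached results
  echo "example.com" | cariddi -json -oh report -ot report -sr -cache
fi
```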
Security researchers, penetration testers, and bug bounty hunters use cariddi for reconnaissance and vulnerability discovery during web application assessments. The tool's ability to systematically crawl and analyze web applications makes it valuable for identifying attack surfaces and potential security issues across multiple domains.
# via Homebrew
brew install cariddi
# via Go
go install -v github.com/edoardottt/cariddi/cmd/cariddi@latest
# via Snap
sudo snap install cariddi
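# After installing by any route above, a quick sanity check; the -version and
# -examples flags are recalled from the tool's help, so confirm with cariddi -h.
if command -v cariddi >/dev/null 2>&1; then
  cariddi -version
  cariddi -examples
fi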
