- get URLs first (hakrawler, gau, waybackurls, otxurls, burp crawl, gospider, cc.py, etc.); a pipeline sketch follows this list
- or scan the domain with JSScanner (you will then have the JS files)
- or getJS --input URLs.txt (or --url <url> --resolve --method <POST/PUT>)
- then scan each JS URL with linkfinder -o cli
- jsubfinder -f <urls.txt> (or -u <url>) -s (search for secrets)
- if you have domains.txt:
cat domains.txt | httpx | bash ./script.sh (JSScanner)
- or, if you already have probed domains, run ./scripthunter.sh <probed.com> for each
- subjs -i <urls.txt> -c 50 (pulls JS file URLs from a list of URLs/subdomains; -c sets concurrency)
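A minimal end-to-end sketch of the steps above, assuming gau, httpx, subjs and LinkFinder are installed; example.com and the file names are placeholders, and any of the URL sources listed above can stand in for gau:
echo example.com | gau > urls.txt                    # collect historical/crawled URLs
cat urls.txt | httpx -silent > alive.txt             # keep only URLs that still respond
subjs -i alive.txt -c 50 | sort -u > jsfiles.txt     # pull JS file URLs out of those pages
while read -r js; do python linkfinder.py -i "$js" -o cli; done < jsfiles.txt > endpoints.txt   # endpoints per JS file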
extract.rb (relative-url-extractor):
cat demo-file.js | ./extract.rb
curl -s https://hackerone.com/hacktivity | ./extract.rb
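If the JS files are already downloaded locally, the same extractor can be looped over a folder (a small sketch; the js/ directory name is only an assumption about where you saved them):
for f in js/*.js; do echo "== $f"; cat "$f" | ./extract.rb; done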
cat target.txt
https://site.com
bash JSFScan.sh -l target.txt --all -r -o jsfs-output
cat alive.txt
https://lol.com
bash script.sh
done; now you can scan the js and db folders for secrets and links
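A rough grep sketch for going through those js and db folders; the keyword list is only a starting point, not any tool's built-in ruleset:
grep -rniE "api[_-]?key|secret|token|passw(or)?d|authorization" js/ db/   # candidate secrets
grep -rhoE "https?://[^\"' ]+" js/ | sort -u                              # absolute links/URLs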
python3 main.py -u URL -n NAME
./scripthunter.sh https://dev.verizon.com
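To run scripthunter over every probed host instead of a single target (a small loop sketch; probed.txt is a placeholder file of probed domains):
while read -r d; do ./scripthunter.sh "$d"; done < probed.txt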
Most basic usage to find endpoints in an online JavaScript file and output the HTML results to results.html: python linkfinder.py -i https://example.com/1.js -o results.html
CLI/STDOUT output (doesn't use jsbeautifier, which makes it very fast): python linkfinder.py -i https://example.com/1.js -o cli
Analyzing an entire domain and its JS files: python linkfinder.py -i https://example.com -d
Burp input (select in target the files you want to save, right click, Save selected items, feed that file as input): python linkfinder.py -i burpfile -b
Enumerating an entire folder for JavaScript files, while looking for endpoints starting with /api/ and finally saving the results to results.html: python linkfinder.py -i 'Desktop/*.js' -r ^/api/ -o results.html
host the repo on Apache (it is PHP-based), run python handler.py, then visit http://localhost:8008
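If Apache with PHP is not set up yet, a minimal Debian-style setup looks roughly like this (package names, the docroot path, and the js-scan folder name are assumptions about the environment, not part of the tool; the handler.py / port 8008 route above does not need it):
sudo apt install apache2 php libapache2-mod-php   # web server plus PHP module
sudo cp -r . /var/www/html/js-scan                # drop the repo into the docroot
sudo systemctl restart apache2                    # then browse to http://localhost/js-scan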