Site crawling can be very useful for gathering information during OSINT work, and a tool like Photon saves time and effort: a few simple commands set it loose to do its magic.
We can gather information on the target such as DNS records and HTTP headers, and also query the Wayback Machine to look for leftover information from older versions of the site.
The tool is very easy to set up. First, clone it from GitHub and follow these steps:
$ git clone https://github.com/s0md3v/Photon.git
$ cd Photon
$ pip3 install -r requirements.txt
$ python3 photon.py -h
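As an optional convenience (not a Photon requirement), you can install the dependencies into a virtual environment instead of system-wide; the environment name "photon-env" below is just an example:

```shell
# Optional: isolate Photon's dependencies in a virtual environment
python3 -m venv photon-env        # create the environment
. photon-env/bin/activate         # activate it (POSIX shells)
python -c 'import sys; print(sys.prefix)'   # confirm the venv is active
```

With the environment active, rerun the `pip3 install -r requirements.txt` step from inside the Photon directory; the packages then land in photon-env rather than the system site-packages.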
The help menu is a useful place to start.
For this demonstration we will scan a sample site with a few flags:
$ python3 photon.py -u https://guides.loc.gov/e-books/ -l 3 -t 100 --dns
The "-l" flag sets the level, i.e. how deep the crawler will follow links; "-t" sets the number of threads used; and "--dns" enumerates subdomains and DNS data.
The best part of "--dns" is that its output is saved as an image, which you can enlarge to inspect the DNS information in detail.
Photon stores its results in a folder named after the target, with the gathered information split into separate text files.
The file "loc.gov.png" contains the DNS information:
We can see some external links associated with the site stored in external.txt, internal links stored in internal.txt, as well as the scripts used on the site.
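Because the output is plain text files, it lends itself to standard shell post-processing. A small sketch below merges and de-duplicates the collected links; the directory name follows Photon's convention of using the target host, and the sample URLs are illustrative stand-ins for a real run:

```shell
# Illustrative stand-in for Photon's output directory (named after the target host)
mkdir -p loc.gov
printf 'https://guides.loc.gov/a\nhttps://guides.loc.gov/b\n' > loc.gov/internal.txt
printf 'https://example.org/x\nhttps://guides.loc.gov/a\n'   > loc.gov/external.txt

# Merge internal and external links, de-duplicate, and count the unique URLs
sort -u loc.gov/internal.txt loc.gov/external.txt > loc.gov/all_links.txt
wc -l < loc.gov/all_links.txt   # prints the number of unique links (3 for this sample)
```

The same approach works for feeding the de-duplicated list into other tools, such as a screenshot utility or a vulnerability scanner, as the next step of an OSINT workflow.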