
RECON

This program finds useful reconnaissance information about a target site. As an example, I ran this tool against https://google.com.
How to use this program:
Before running it, install these libraries:
pip install whois
pip install python-nmap
pip install requests
pip install beautifulsoup4
pip install selenium
(The socket module is part of the Python standard library and does not need to be installed.)
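Before the first run, it can help to confirm that every dependency is importable. The sketch below uses only the standard library; the mapping between import names and pip names is an assumption based on the install list above, since the two are not always identical.

```python
import importlib.util

# Assumed mapping from import name to pip package name, based on the
# install list above (PyPI names and import names can differ).
REQUIRED = {
    "whois": "whois",
    "nmap": "python-nmap",
    "requests": "requests",
    "bs4": "beautifulsoup4",
    "selenium": "selenium",
}

def missing_packages():
    """Return the pip names of required packages that are not importable."""
    return [pip_name for module, pip_name in REQUIRED.items()
            if importlib.util.find_spec(module) is None]

if __name__ == "__main__":
    gaps = missing_packages()
    if gaps:
        print("Missing packages, install with: pip install " + " ".join(gaps))
    else:
        print("All recon dependencies are available.")
```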

Available commands:

1: --start-all : Runs every recon task
2: --find-link : Finds links on the site
3: --find-D2L : Finds links at depth two
4: --find-Subdomain : Finds the links inside the site's subdomains
5: --find-port : Finds the site's open ports
6: --regex : Runs the regex extraction
7: --whois : Performs a whois lookup
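At its core, the link-finding step fetches a page and collects every `<a href>`. Here is a minimal sketch using only the standard library; the tool itself fetches pages with requests and parses them with beautifulsoup4, so this is an illustration of the idea, not the tool's actual code.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkCollector(HTMLParser):
    """Collect absolute URLs from every <a href=...> tag in an HTML page."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # Resolve relative links against the page URL.
                    self.links.append(urljoin(self.base_url, value))

def extract_links(html, base_url):
    collector = LinkCollector(base_url)
    collector.feed(html)
    return collector.links

# Example with a static page instead of a live fetch:
page = '<a href="/about">About</a> <a href="https://example.org/x">X</a>'
print(extract_links(page, "https://example.com"))
# → ['https://example.com/about', 'https://example.org/x']
```

Depth-two link finding (--find-D2L) is the same operation applied again to every link collected in the first pass.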

Project output files:

1: links.html : Links found on the target site are saved in this file.
2: Depthtwo.html : Links found at depth two are stored in this file.
3: SubdomainANDLinks.html : The target site's subdomains, along with the links inside them, are stored in this file.
4: SubdomainANDLinks.txt : Subdomains are stored in this text file, which is used for the regex operations.
5: Port.html : The target site's open ports are stored in this file.
6: Regex.html : Phone numbers and emails found in the target site's subdomains are stored in this file.
7: screenshot.png : A screenshot of the target site is saved in this file.
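The regex step that produces Regex.html boils down to matching email and phone patterns against the collected text. A minimal sketch with the standard `re` module; the patterns here are illustrative assumptions, and the tool's actual patterns may differ.

```python
import re

# Illustrative patterns (assumptions), not the tool's exact regexes.
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
PHONE_RE = re.compile(r"\+?\d[\d\s\-()]{7,}\d")

def extract_contacts(text):
    """Return (emails, phone_numbers) found in a block of text."""
    return EMAIL_RE.findall(text), PHONE_RE.findall(text)

sample = "Reach us at info@example.com or +1 (555) 123-4567."
emails, phones = extract_contacts(sample)
print(emails)  # → ['info@example.com']
print(phones)  # → ['+1 (555) 123-4567']
```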

Note: The function that finds depth-2 subdomain links is commented out because it is time-consuming.
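The open-port scan behind --find-port can be approximated with the standard socket module alone, as sketched below; the tool itself relies on python-nmap, which is far more capable (service detection, scan types, timing control).

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on host."""
    open_ports = []
    for port in ports:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(timeout)
        # connect_ex returns 0 when the connection succeeds (port open).
        if sock.connect_ex((host, port)) == 0:
            open_ports.append(port)
        sock.close()
    return open_ports

# Example: check a few common ports on the local machine.
print(scan_ports("127.0.0.1", [22, 80, 443]))
```

Only scan hosts you are authorized to test.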


(☞゚ヮ゚)☞ FATHI ☜(゚ヮ゚☜)

About

Site scanner
