This repository contains Hunter, an application for parsing and scraping multiple servers/hosts to collect large amounts of web data using custom resources and the RepProject APIs.
Buy VIP API Access RepProject • Why not open source? • Install • Documentation • Credits • Join Telegram
Feel free to contact me to buy access to all RepProject APIs:
Back to what Hunter is: Hunter sends requests across targets based on a config, leading to zero false positives and fast parsing, scanning, and scraping over large numbers of lists and pages. Hunter supports multiple method parameters for use in template configs. With powerful and flexible config templates, Hunter can model all kinds of web scraping. Since no one has been helping me with this project, I decided not to open-source it :). Click here to learn more about customizing config templates
Hunter requires no dependencies because it is already compiled. It runs cross-platform on Unix/Linux, Windows, and macOS.
Unix/Linux System
git clone https://github.com/t101804/Hunter.git
cd Hunter
chmod +x hunter_linux
./hunter_linux
If you get the error "/lib/x86_64-linux-gnu/libc.so.6: version not found", run:
sudo add-apt-repository 'deb http://cz.archive.ubuntu.com/ubuntu jammy main'
sudo apt update
sudo apt install libc6
Windows
Download directly
macOS
Same as the Unix/Linux system, but run ./hunter_macos instead.
Here are the 3 documentation sections:
Basic config documentation for the application (config.json):
"application": {
"apikey_repproject": "your_apikey_that_you_got_from_buy_access",
"reqtimeout": 60,
"reqmaxbytes": "50mb"
},
- "apikey_repproject" is the API key sent to the RepCyber API; if your IP is registered in the system, you can access the server. To buy an API key and have your IP authorized, you can buy from me here, starting at only $20.
- "reqtimeout" is the per-request timeout: if the server takes longer than 60 seconds to respond, the request is skipped.
- "reqmaxbytes" is the per-request response size cap: if a result is larger than 50 MB, only the first 50 MB is kept and the rest is automatically skipped.
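The "reqmaxbytes" behavior can be sketched in Python. This is only an illustration of the capping logic; the helper name and chunk size are assumptions, since Hunter's actual implementation is closed-source:

```python
import io

def read_capped(stream, max_bytes, chunk_size=8192):
    """Read at most max_bytes from a file-like response body.

    Mirrors "reqmaxbytes": keep the first max_bytes of a result and
    skip the rest instead of failing the whole request.
    """
    buf = bytearray()
    while len(buf) < max_bytes:
        chunk = stream.read(min(chunk_size, max_bytes - len(buf)))
        if not chunk:  # stream ended before the cap was reached
            break
        buf.extend(chunk)
    return bytes(buf)

# "50mb" in config.json corresponds to a cap of:
CAP_50MB = 50 * 1024 * 1024
```

A request library's timeout option (e.g. a 60-second socket timeout) would cover the "reqtimeout" side.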
Basic config documentation for the RepProject server:
"server": {
"server_reverse_ip": {
"all records ( grab all of records )": "https://repcyber.com/allreverse/{ip}",
"a records ( a tld domains only )": "https://repcyber.com/reverse/{ip}",
"ns records ( nameserver only )": "https://repcyber.com/revip_ns/{ip}",
"mx records ( mailserver only )": "https://repcyber.com/revip_mx/{ip}",
"cname records ( cname only )": "https://repcyber.com/revip_cname/{ip}"
},
"server_grabber": {
"grab ip records": "https://repcyber.com/vipgrab/ip/{total}",
"grab amazon records": "https://repcyber.com/vipgrab/amazonaws.com/total/{total}",
"grab tld domain records": "https://repcyber.com/vipgrab/tld_domains/{total}"
},
"server_utility": {
"analyze site ( check cms and tech )": "https://repcyber.com/analyzer/{site}",
"cpanel checker": "https://repcyber.com/checker/cpanel/{raw_lists}"
}
},
For this section, check https://t.me//repproject; you can only use these endpoints if you have already bought access and received the API key.
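Each endpoint above embeds its input in the URL path. Filling the {ip} placeholder can be sketched like this (the dictionary keys are shortened labels, not part of the real config):

```python
# A few of the reverse-IP endpoints from the "server" config above
SERVER_REVERSE_IP = {
    "all_records": "https://repcyber.com/allreverse/{ip}",
    "a_records": "https://repcyber.com/reverse/{ip}",
    "ns_records": "https://repcyber.com/revip_ns/{ip}",
}

def build_url(template, ip):
    # Substitute the {ip} placeholder in a config URL template
    return template.format(ip=ip)
```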
Before using them, check whether the server is alive, and always check the channel for more info.
"custom_server": {
"urcustomname": {
"custom_header": [],
"method": "",
"post_data": "",
"regex": "",
"url": ""
},
"examples_custom_config_for_reverse_rapid_methods_GET": {
"custom_header": [
"Content-Type:*/*",
"User-agent:Mozilla/5.0"
],
"method": "get",
"post_data": "",
"regex": "<td>(.*?)</td>",
"url": "https://rapiddns.io/sameip/{ip}?full=1"
},
"examples_custom_config_for_reverse_seoaudit_methods_POST": {
"custom_header": [
"referer:tools.seo-auditor.com.ru/check-ip/"
],
"method": "post",
"post_data": "url={ip}",
"regex": "<th align=\"left\" nowrap>\u0414\u043E\u043C\u0435\u043D:<\/th>\\s+<td width=\"100%\">(.*?)<\/td>",
"url": "https://tools.seo-auditor.com.ru/tools/check-ip/"
}
}
Make sure you add a "," when you append another custom name. If you get a JSON error, check that the file is valid JSON.
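A quick way to catch such JSON errors (a missing or extra comma, for example) is to run the config through Python's json module; the helper name here is made up:

```python
import json

def validate_config(text):
    """Parse a config string, or exit with the line/column of the JSON error."""
    try:
        return json.loads(text)
    except json.JSONDecodeError as exc:
        raise SystemExit(
            f"Invalid JSON at line {exc.lineno}, column {exc.colno}: {exc.msg}"
        )

# A minimal valid custom_server entry parses cleanly:
good = '{"custom_server": {"urcustomname": {"method": "get"}}}'
config = validate_config(good)
```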
You can add as many custom servers as you want.
You can set any URL, and if the URL takes a parameter, include its placeholder.
You can fill in whatever you want; there are 5 parameters you can use in URL or POST_DATA:
- {page} : loops over pages; for example, if you set 50 pages it loops 1-50 with threading
- {total} : the total number of pages, using manual looping (if you are confused, please watch the video)
- {site} : make sure the list is in site-list format, then run with this parameter
- {ip} : if the list contains sites, they are automatically converted to IPs before running
- {raw_lists} : runs the list without any filtering
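The {page} behavior described above (e.g. 50 pages looped 1-50 with threading) can be sketched like this; the function names and the example.com URL are hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

def expand_page(url_template, page):
    # Substitute the {page} placeholder for one page number
    return url_template.format(page=page)

def loop_pages(url_template, total_pages, workers=10):
    """Build the URLs for pages 1..total_pages using a thread pool,
    as the {page} parameter does."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda p: expand_page(url_template, p),
                             range(1, total_pages + 1)))

urls = loop_pages("https://example.com/list?page={page}", 50)
```

In the real tool each expanded URL would be fetched by a worker thread; here only the URL expansion is shown.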
Only 2 methods are supported, "get" and "post" (make sure they are lowercase).
Example usage:
"custom_header": ["Content-Type:*/*", "User-agent:Mozilla/5.0"],
Example usage:
"post_data": "param1=value&param2=value2",
You can also use a parameter:
"post_data": "ip={ip}",
If your regex contains a ", escape it with \ so it becomes \". Example usage:
"regex": "/site/(.*?)\""
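To see why the escape is needed: the regex lives inside a JSON string, so a literal " must be written as \". This can be checked in Python (the sample HTML below is made up for illustration):

```python
import json
import re

# The config value as written in config.json, with the quote escaped:
config_text = '{"regex": "/site/(.*?)\\""}'
pattern = json.loads(config_text)["regex"]
# After JSON decoding, the pattern contains a real double quote:
# /site/(.*?)"

# Hypothetical sample page to match against
html = '<a href="/site/example.com">visit</a>'
matches = re.findall(pattern, html)
```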