A simple web crawler built in Go. Much like a simplified Google, it takes a link and returns the page content associated with that link.
- Clone the repo or download the zip file, then `cd` into the project folder and run the server:

```
cd go_web_crawler
go run main.go
```
- After the server starts, open any browser and go to http://localhost:8000/
/webCrawler/          # Contains the web crawler logic.
/webCrawler/model     # Struct definitions (Page, Content).
/webCrawler/endpoint  # Endpoint handler functions (/crawl, /numWorkers, /speedPerHour).
/webCrawler/retry     # Page retry logic.
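The repository's actual crawler implementation lives in `/webCrawler/`, so the details above are all a reader sees here. As a rough illustration of the core idea (fetching a page and pulling out the links to follow next), here is a minimal, self-contained sketch; `extractLinks` is a hypothetical helper, not a function from this project, and a real crawler would likely use a proper HTML parser instead of string scanning.

```go
package main

import (
	"fmt"
	"strings"
)

// extractLinks is a hypothetical helper sketching what a crawler does with a
// fetched page: scan the raw HTML for href="..." attributes and collect the
// link targets so they can be queued for crawling.
func extractLinks(html string) []string {
	var links []string
	rest := html
	for {
		i := strings.Index(rest, `href="`)
		if i < 0 {
			break
		}
		rest = rest[i+len(`href="`):]
		j := strings.Index(rest, `"`)
		if j < 0 {
			break
		}
		links = append(links, rest[:j])
		rest = rest[j:]
	}
	return links
}

func main() {
	page := `<a href="https://example.com/a">A</a> <a href="https://example.com/b">B</a>`
	for _, l := range extractLinks(page) {
		fmt.Println(l)
	}
	// Prints:
	// https://example.com/a
	// https://example.com/b
}
```

In the real project, pages like this would be fetched by the worker pool (tunable via /numWorkers) and retried via the `/webCrawler/retry` package on failure.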