An automated job scraper for finding the best remote software engineering jobs worldwide, with a focus on high-paying locations like the US, Dubai, and other top markets.
- **API-Based Scraping**: Uses reliable public APIs from multiple job boards:
  - RemoteOK - one of the largest remote job boards
  - Remotive - curated remote jobs in tech
  - Arbeitnow - European and global remote opportunities
- **Smart Filtering**:
  - Focuses on full-stack and related software engineering roles
  - Prioritizes jobs from high-paying locations (US, Dubai, Switzerland, UK, etc.)
  - Removes duplicate listings
  - Tags jobs with relevant technologies
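Duplicate removal can be as simple as keeping the first listing seen for each URL. A minimal sketch, assuming a trimmed stand-in for the real `models.Job` type:

```go
package main

import "fmt"

// Job is a trimmed stand-in for the real models.Job.
type Job struct {
	Title string
	URL   string
}

// dedupe keeps the first occurrence of each job, keyed by URL.
func dedupe(jobs []Job) []Job {
	seen := make(map[string]bool, len(jobs))
	var out []Job
	for _, j := range jobs {
		if seen[j.URL] {
			continue
		}
		seen[j.URL] = true
		out = append(out, j)
	}
	return out
}

func main() {
	jobs := []Job{
		{"Full Stack Engineer", "https://example.com/1"},
		{"Full Stack Engineer", "https://example.com/1"}, // duplicate
		{"Backend Engineer", "https://example.com/2"},
	}
	fmt.Println(len(dedupe(jobs))) // 2
}
```

Keying on the listing URL (rather than title) survives minor title differences across job boards.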
- **Multiple Output Formats**:
  - JSON for easy parsing
  - CSV for spreadsheet analysis
- **Web Interface**:
  - Embedded web server to browse and filter jobs
  - Real-time filtering by location, company, and technology
  - Statistics dashboard
- **Fast & Reliable**:
  - API-based scraping is more stable than HTML parsing
  - Concurrent fetching from all sources
  - Typically finds 400+ full-stack jobs in seconds
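Concurrent fetching in Go is typically a goroutine per source with results merged under a mutex. A minimal sketch (the `Scraper` function type is a simplified stand-in for the real scraper interface):

```go
package main

import (
	"fmt"
	"sync"
)

// Scraper is a simplified stand-in for the real scraper interface;
// each source returns a batch of job titles here for brevity.
type Scraper func() []string

// fetchAll runs every scraper concurrently and merges the results.
func fetchAll(scrapers []Scraper) []string {
	var (
		wg      sync.WaitGroup
		mu      sync.Mutex
		results []string
	)
	for _, s := range scrapers {
		wg.Add(1)
		go func(s Scraper) {
			defer wg.Done()
			jobs := s()
			mu.Lock()
			results = append(results, jobs...)
			mu.Unlock()
		}(s)
	}
	wg.Wait()
	return results
}

func main() {
	scrapers := []Scraper{
		func() []string { return []string{"RemoteOK job"} },
		func() []string { return []string{"Remotive job", "Arbeitnow job"} },
	}
	fmt.Println(len(fetchAll(scrapers))) // 3
}
```

Because each API call is network-bound, total runtime is roughly that of the slowest source rather than the sum of all of them.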
- Go 1.21 or higher
```bash
# Clone the repository
git clone <your-repo-url>
cd jobseeker

# Build the binary
go build -o jobseeker ./cmd/jobseeker

# Optional: install globally
go install ./cmd/jobseeker
```

Scrape jobs and save to a JSON file:
```bash
./jobseeker scrape

# Custom output file
./jobseeker scrape -o my-jobs.json

# Export to CSV as well
./jobseeker scrape --csv

# Custom timeout (in seconds)
./jobseeker scrape -t 600

# Verbose logging
./jobseeker scrape -v

# Custom data directory
./jobseeker scrape -d ./my-data
```

Start the web server to browse collected jobs:
```bash
./jobseeker serve
```

Then open your browser to http://localhost:8080

```bash
# Custom port
./jobseeker serve -p 3000

# Custom data directory
./jobseeker serve -d ./my-data
```

A typical end-to-end workflow:

```bash
# 1. Scrape jobs with CSV export
./jobseeker scrape --csv -v

# 2. Browse the results in your browser
./jobseeker serve

# 3. Open http://localhost:8080 and filter by:
#    - Location (e.g., "United States", "Dubai")
#    - Company
#    - Technology (e.g., "react", "golang")
#    - Priority locations only
```

Jobs from these locations are marked as priority and sorted first:
- United States
- Dubai / UAE
- Switzerland
- Canada
- United Kingdom
- Australia
- Germany
- Netherlands
- Singapore
Jobs marked as "Anywhere", "Worldwide", or "Global" are also treated as priority.
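The priority check can be a case-insensitive substring match against a marker list. A minimal sketch, with the marker list mirroring the locations above (illustrative, not necessarily the tool's exact list):

```go
package main

import (
	"fmt"
	"strings"
)

// priorityMarkers flag a job's location as priority; the list mirrors
// the locations named above and is illustrative, not exhaustive.
var priorityMarkers = []string{
	"united states", "usa", "dubai", "uae", "switzerland", "canada",
	"united kingdom", "australia", "germany", "netherlands", "singapore",
	"anywhere", "worldwide", "global",
}

// isPriority reports whether a free-text location matches a priority marker.
func isPriority(location string) bool {
	loc := strings.ToLower(location)
	for _, m := range priorityMarkers {
		if strings.Contains(loc, m) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(isPriority("Remote - United States")) // true
	fmt.Println(isPriority("Worldwide"))              // true
	fmt.Println(isPriority("Brazil"))                 // false
}
```

Substring matching handles the messy free-text locations job boards return ("Remote - United States", "UK/EU only") without needing an exact-match table.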
The scraper automatically tags jobs with detected technologies:
- Frontend: React, Vue, Angular
- Backend: Node, Golang, Python, Ruby, Java, PHP, Rust
- Languages: JavaScript, TypeScript
- Seniority: Senior, Junior, Mid-level, Lead, Architect
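Tag detection can be the same idea in reverse: scan the title and description for known keywords. A minimal sketch, assuming a keyword list built from the categories above:

```go
package main

import (
	"fmt"
	"strings"
)

// extractTags scans text for known technology and seniority keywords.
// The keyword list is illustrative; the real scraper may use a richer one.
func extractTags(text string) []string {
	keywords := []string{
		"react", "vue", "angular", "node", "golang", "python", "ruby",
		"java", "php", "rust", "javascript", "typescript",
		"senior", "junior", "lead", "architect",
	}
	lower := strings.ToLower(text)
	var tags []string
	for _, k := range keywords {
		if strings.Contains(lower, k) {
			tags = append(tags, k)
		}
	}
	return tags
}

func main() {
	fmt.Println(extractTags("Senior Full Stack Engineer (React + Node)"))
	// [react node senior]
}
```

Note that naive substring matching has false positives (e.g. "java" matches "javascript"); a production version would match on word boundaries.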
```json
[
  {
    "id": "abc123",
    "title": "Senior Full Stack Engineer",
    "company": "Acme Corp",
    "location": "United States",
    "salary": "$150k - $200k",
    "description": "",
    "url": "https://...",
    "source": "WeWorkRemotely",
    "posted_at": "2025-11-21T10:00:00Z",
    "scraped_at": "2025-11-21T12:30:00Z",
    "tags": ["senior", "react", "node", "full-stack"],
    "is_priority": true
  }
]
```

Project layout:

```
jobseeker/
├── cmd/jobseeker/        # CLI application
│   ├── main.go           # Entry point
│   ├── scrape.go         # Scrape command
│   └── serve.go          # Serve command
├── internal/
│   ├── models/           # Data models
│   ├── scrapers/         # Job board scrapers
│   ├── filters/          # Filtering logic
│   ├── storage/          # Data persistence
│   └── server/           # Web server
│       └── templates/    # HTML templates
├── data/                 # Job data (created on first run)
└── README.md
```
To add support for a new job board:

1. Create a new file in `internal/scrapers/`, e.g. `newsite.go`
2. Implement the `Scraper` interface:

```go
type NewSiteScraper struct{}

func (s *NewSiteScraper) Name() string {
	return "NewSite"
}

func (s *NewSiteScraper) Scrape(ctx context.Context) ([]models.Job, error) {
	// Implement scraping logic
	return nil, nil
}
```

3. Register it in `cmd/jobseeker/scrape.go`:

```go
registry.Register(scrapers.NewNewSiteScraper())
```

Already implemented:

- API-based scraping (RemoteOK, Remotive, Arbeitnow)
- Smart filtering for full-stack roles
- Priority location marking and sorting
- JSON and CSV export
- Web interface for browsing jobs
- **Add More API Sources**
  - JustRemote API (free)
  - Himalayas API (free)
  - FindRemote.jobs API
  - WeWorkRemotely API (if available)
  - Explore other APIs (see list below)
- **Email Notifications**
  - Alert on new high-priority jobs
  - Daily digest of new jobs
  - Custom filters per user
- **SQLite Storage & History**
  - Track job history and changes
  - Mark jobs as "seen" or "applied"
  - Search through historical data
- **Rate Limiting & API Management**
  - Respect API rate limits
  - API key management for paid services
  - Retry logic with exponential backoff
- **Scheduling & Automation**
  - Cron job support
  - Auto-run daily scraping
  - Detect and notify on new jobs only
- LinkedIn Jobs API integration
- Job application tracking
- Salary range normalization and comparison
- Company research integration (Glassdoor, Crunchbase)
- Chrome extension for one-click saving
- Job change detection (track new vs seen jobs)
Contributions are welcome! Please feel free to submit a Pull Request.
MIT License
This tool is for educational and personal use. Please respect the terms of service of the websites being scraped and use responsibly. Consider using official APIs where available.