Add Web Scraping Script for scraping jobs from devjobsscanner.com #386

Merged · 4 commits · May 30, 2024
21 changes: 21 additions & 0 deletions WEB SCRAPING/devJobsScanner_Scraper/LICENSE
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2024 Asib Hossen

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
81 changes: 81 additions & 0 deletions WEB SCRAPING/devJobsScanner_Scraper/ReadME.md
@@ -0,0 +1,81 @@
# devJobsScanner Job Scraper

## Description
This repository contains two scripts designed to scrape job listings from devjobsscanner.com. Users can input their desired job title, remote work preference, and sorting preference, and choose how to save the output (CSV, TXT, or both).

## Scripts

### Script 1: `job_scraper_static.py`
- Scrapes job listings using the `requests` library and `BeautifulSoup`.
- Displays job details in the console.
- Saves job details in CSV and/or TXT format.
- Suitable for static page scraping; a minimal sketch of the approach follows below.
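
A minimal sketch of the static approach (illustrative only; the actual `job_scraper_static.py` in this PR may differ, and the CSS classes are borrowed from the dynamic script, so they reflect the site's markup at the time of writing):

```python
import requests
from bs4 import BeautifulSoup

def fetch_jobs(search_url):
    """Fetch one page of job cards and return title/URL pairs."""
    response = requests.get(search_url, timeout=30)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    jobs = []
    # Same card selector the dynamic scraper uses; an assumption about the site's markup
    for card in soup.find_all("div", class_="flex p-3 rounded group relative overflow-hidden"):
        title_tag = card.find("h2")
        link_tag = card.find("a", class_="jbs-text-hover-link")
        if title_tag and link_tag:
            jobs.append({"title": title_tag.text.strip(), "job_url": link_tag["href"]})
    return jobs

if __name__ == "__main__":
    for job in fetch_jobs("https://www.devjobsscanner.com/search/?search=python"):
        print(f"{job['title']} - {job['job_url']}")
```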

### Script 2: `job_scraper_dynamic.py`
- Enhanced to use `SeleniumBase` for dynamic page interaction.
- Supports infinite scrolling to load more job listings.
- Users can specify the number of job listings to scrape.
- More robust handling of dynamically loaded content.

## Requirements

### Common Requirements
- Python 3.x
- `beautifulsoup4` library
- `requests` library

### Dynamic Script Additional Requirements
- `seleniumbase` library
- WebDriver for your browser (e.g., ChromeDriver for Chrome)
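
Taken together, a plausible `requirements.txt` (the file itself is not shown in this diff, so treat this as an assumption) would be:

```text
beautifulsoup4
requests
seleniumbase
```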

## Installation
1. Clone the repository:
```bash
git clone https://github.com/asibhossen897/devJobsScanner-job-scraper.git
cd devJobsScanner-job-scraper
```

2. Install the required libraries:
```bash
pip install -r requirements.txt
```

3. For `job_scraper_dynamic.py`, ensure you have the appropriate WebDriver installed and available in your PATH.

## Usage

### Static Scraper (`job_scraper_static.py`)
1. Run the script:
```bash
python job_scraper_static.py
```
(**If `python` does not work, use `python3`**)

2. Follow the prompts to input your job search criteria and preferences.

### Dynamic Scraper (`job_scraper_dynamic.py`)
1. Run the script:
```bash
python job_scraper_dynamic.py
```
(**If `python` does not work, use `python3`**)

2. Follow the prompts to input your job search criteria, number of jobs to scrape, and preferences.

## File Structure
- `job_scraper_static.py`: Script for static job scraping.
- `job_scraper_dynamic.py`: Script for dynamic job scraping with SeleniumBase.
- `requirements.txt`: List of required Python libraries.
- `outputFiles/`: Directory where output files (CSV, TXT) are saved.
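
Based on the fields the scraper collects, a saved CSV begins with this header row (the `tags` column holds a Python-list string, since `csv.DictWriter` writes the list value as-is):

```text
title,company,company_url,tags,date_posted,salary,job_url
```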

## Disclaimer
These scripts are for educational and personal use only. Scraping websites can be against the terms of service of the website being scraped. Always check the website’s terms and conditions before scraping any content. The author is not responsible for any misuse of these scripts. Use at your own risk.

## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

## Author
Asib Hossen

## Date
May 21, 2024
184 changes: 184 additions & 0 deletions WEB SCRAPING/devJobsScanner_Scraper/job_scraper_dynamic.py
@@ -0,0 +1,184 @@
# Author: Asib Hossen
# Date: May 21, 2024
# Description: This script scrapes job listings from https://www.devjobsscanner.com/ based on user input, displays the job details, and optionally saves them as CSV and/or TXT files.
# Version: 1.1


import os
import re
import csv
import time
from seleniumbase import Driver
from bs4 import BeautifulSoup

def get_user_input():
"""
Prompt user for job title, remote job preference, number of jobs to scrape,
sorting preference, and save option.

Returns:
tuple: A tuple containing job title (str), remote job preference (bool),
number of jobs to scrape (int), save option (str), and sorting preference (str).
"""
job = input("Enter the job title: ")
remote = input("Do you want remote jobs only? (yes/no): ").lower() == 'yes'
num_jobs = int(input("Enter the number of jobs you want to scrape: "))
sort_options = ['matches', 'newest', 'salary']
print(f"Sort options: {sort_options}")
sort_by = input("Enter the sorting preference (matches/newest/salary): ")
    save_option = input("Do you want to save the output as CSV, TXT, or both? (csv/txt/both): ").lower()
return job, remote, num_jobs, save_option, sort_by

def construct_url(job, remote, sort_by):
"""
Construct the URL based on the job title, remote preference, and sorting preference.

Args:
job (str): The job title.
remote (bool): True if user wants remote jobs only, False otherwise.
sort_by (str): The sorting preference.

Returns:
str: The constructed URL.
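
    Example (illustrative):
        construct_url("python", True, "newest")
        # -> "https://www.devjobsscanner.com/search/?search=python&remote=true&sort=newest"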
"""
base_url = "https://www.devjobsscanner.com/search/"
search_params = f"?search={job}"
if remote is not None:
search_params += f"&remote={str(remote).lower()}"
if sort_by is not None:
search_params += f"&sort={sort_by}"
url = base_url + search_params
return url

def scrape_jobs(url, num_jobs):
"""
Scrape job listings from the provided URL using SeleniumBase.

Args:
url (str): The URL to scrape job listings from.
num_jobs (int): The number of jobs to scrape.

Returns:
list: A list of dictionaries containing job details.
"""
    jobs = []
    seen_urls = set()  # Track job URLs already collected across scroll passes
    driver = None
    try:
        driver = Driver(browser="firefox", headless=False)
        driver.get(url)
        time.sleep(5)  # Initial wait for page load

        last_count = 0
        while len(jobs) < num_jobs:
            soup = BeautifulSoup(driver.page_source, 'html.parser')
            # NOTE: these CSS class names (including the site's own spelling
            # "separeted" below) mirror devjobsscanner.com's markup at the time
            # of writing and will break if the site restyles.
            job_divs = soup.find_all('div', class_='flex p-3 rounded group relative overflow-hidden')

            for job_div in job_divs:
                if len(jobs) >= num_jobs:
                    break
                job_url = job_div.find('a', class_='jbs-text-hover-link')['href']
                if job_url in seen_urls:
                    continue  # Card was already collected on an earlier pass
                seen_urls.add(job_url)

                title = job_div.find('h2').text.strip()
                company = job_div.find('div', class_='jbs-dot-separeted-list').find('a').text.strip()
                tags = [tag.text.strip() for tag in job_div.find_all('a', class_='tag')]
                date_posted = job_div.find('span', class_='text-primary-text').text.strip()
                salary = job_div.find('span', class_='text-gray-text').text.strip()

                # Treat a "salary" without at least two consecutive digits as unspecified
                if not re.search(r'\d{2}', salary):
                    salary = "Not mentioned"

                jobs.append({
                    'title': title,
                    'company': company,
                    'company_url': f"https://www.devjobsscanner.com/company/{company.lower()}",
                    'tags': tags,
                    'date_posted': date_posted,
                    'salary': salary,
                    'job_url': job_url
                })

            if len(jobs) == last_count:
                break  # Scrolling produced no new listings; stop instead of looping forever
            last_count = len(jobs)

            # Scroll down to load more jobs
            driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
            time.sleep(5)  # Wait for new jobs to load

        return jobs[:num_jobs]
    except Exception as e:
        print("Error scraping jobs:", e)
        return []
    finally:
        if driver is not None:
            driver.quit()  # Always close the browser, even if scraping failed

def display_jobs(jobs):
"""
Display job details to the console.

Args:
jobs (list): A list of dictionaries containing job details.
"""
for job in jobs:
print(f"Title: {job['title']}")
print(f"Company: {job['company']}")
print(f"Company URL: {job['company_url']}")
print(f"Tags: {', '.join(job['tags'])}")
print(f"Date Posted: {job['date_posted']}")
print(f"Salary: {job['salary']}")
print(f"Job URL: {job['job_url']}")
print("-" * 40)

def save_as_csv(jobs, filename):
"""
    Save job details to a CSV file.

Args:
jobs (list): A list of dictionaries containing job details.
filename (str): The name of the CSV file to save.
"""
    # Ensure ./outputFiles/ exists; the filenames passed in point there
    output_dir = os.path.join(os.getcwd(), "outputFiles")
    os.makedirs(output_dir, exist_ok=True)
keys = jobs[0].keys()
try:
with open(filename, 'w', newline='', encoding='utf-8') as output_file:
dict_writer = csv.DictWriter(output_file, fieldnames=keys)
dict_writer.writeheader()
dict_writer.writerows(jobs)
except IOError as e:
print("Error saving as CSV:", e)

def save_as_txt(jobs, filename):
"""
    Save job details to a text file.

Args:
jobs (list): A list of dictionaries containing job details.
filename (str): The name of the text file to save.
"""
    output_dir = os.path.join(os.getcwd(), "outputFiles")
    os.makedirs(output_dir, exist_ok=True)  # Create ./outputFiles/ if missing, as save_as_csv does
    try:
with open(filename, 'w', encoding='utf-8') as output_file:
for job in jobs:
output_file.write(f"Title: {job['title']}\n")
output_file.write(f"Company: {job['company']}\n")
output_file.write(f"Company URL: {job['company_url']}\n")
output_file.write(f"Tags: {', '.join(job['tags'])}\n")
output_file.write(f"Date Posted: {job['date_posted']}\n")
output_file.write(f"Salary: {job['salary']}\n")
output_file.write(f"Job URL: {job['job_url']}\n")
output_file.write("-" * 40 + "\n")
except IOError as e:
print("Error saving as TXT:", e)

if __name__ == '__main__':
    job, remote, num_jobs, save_option, sort_by = get_user_input()
    url = construct_url(job, remote, sort_by)
    print(f"Scraping URL: {url}")
    jobs = scrape_jobs(url, num_jobs)
    if jobs:
        display_jobs(jobs)
        file_name = f"./outputFiles/{job}_jobs_remote_{str(remote).lower()}_sorted_by_{sort_by}"
        if save_option == 'csv':
            save_as_csv(jobs, f"{file_name}.csv")
        elif save_option == 'txt':
            save_as_txt(jobs, f"{file_name}.txt")
        elif save_option == 'both':
            save_as_csv(jobs, f"{file_name}.csv")
            save_as_txt(jobs, f"{file_name}.txt")
        if save_option in ('csv', 'txt', 'both'):
            print(f"Jobs saved as {save_option.upper()} file(s).")
        else:
            print(f"Unknown save option '{save_option}'; nothing was saved.")
    else:
        print("No jobs found. Exiting.")