jrdegbe/Reddit-Data-Pipeline

Reddit Batch Data Pipeline

A data pipeline to extract Reddit data from r/dataengineering.

The output is a Google Data Studio report, providing insight into the official Data Engineering subreddit.

Motivation

The project was based on an interest in Data Engineering and the types of questions and answers found on the official subreddit.

It also provided a good opportunity to develop skills and experience with a range of tools. As such, the project is more complex than strictly required, utilising dbt, Airflow, Docker, and cloud-based storage.

Architecture

  1. Extract data using the Reddit API
  2. Load into AWS S3
  3. Copy into AWS Redshift
  4. Transform using dbt
  5. Create Google Data Studio Dashboard
  6. Orchestrate with Airflow in Docker
  7. Create AWS resources with Terraform
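
Steps 1–2 above can be sketched in a few lines of Python. This is a minimal illustration, not the repository's actual code: it assumes `praw` and `boto3` are installed and that Reddit API and AWS credentials are configured, and the field list, bucket, and S3 key layout are placeholders.

```python
# Minimal sketch of steps 1-2: pull recent r/dataengineering posts and
# land them in S3 as a CSV. Field list, S3 key layout, and credentials
# are illustrative placeholders, not the project's real configuration.
import csv
import datetime as dt
import io

FIELDS = ["id", "title", "score", "num_comments", "author", "created_utc"]

def flatten_post(post) -> dict:
    """Reduce a praw Submission (any object with these attrs) to a flat row."""
    return {field: str(getattr(post, field)) for field in FIELDS}

def rows_to_csv(rows) -> str:
    """Serialise flattened rows to a headered CSV string."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

def extract_and_load(bucket: str, limit: int = 100) -> str:
    """Extract recent posts and upload them to S3; returns the object key."""
    import praw    # third-party; imported lazily so the helpers above stay testable
    import boto3
    reddit = praw.Reddit(client_id="...", client_secret="...",
                         user_agent="reddit-pipeline")  # placeholder credentials
    rows = [flatten_post(p) for p in reddit.subreddit("dataengineering").new(limit=limit)]
    key = f"reddit/{dt.date.today():%Y%m%d}.csv"
    boto3.client("s3").put_object(Bucket=bucket, Key=key,
                                  Body=rows_to_csv(rows).encode("utf-8"))
    return key
```

Keeping the flattening and CSV serialisation separate from the API/S3 calls makes those pieces unit-testable without credentials.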

Setup

Follow the steps below to set up the pipeline. I've tried to explain each step where I can. Feel free to make improvements or changes.

NOTE: This was developed on an M1 MacBook Pro. If you're on Windows or Linux, you may need to amend certain components if you encounter issues.

As AWS offers a free tier, this shouldn't cost you anything unless you amend the pipeline to extract large amounts of data, or keep the infrastructure running for 2+ months. However, please check the AWS free tier limits, as they may change.

First, clone the repository into your home directory and follow the steps.

```
git clone <repository URL>
cd Reddit-Data-Pipeline
```

  1. Overview
  2. Reddit API Configuration
  3. AWS Account
  4. Infrastructure with Terraform
  5. Configuration Details
  6. Docker & Airflow
  7. dbt
  8. Dashboard
  9. Final Notes & Termination
  10. Improvements
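
Once files have landed in S3, the load into Redshift (step 3 of the architecture) is a COPY statement. The sketch below shows how such a statement might be built and executed — the table name, IAM role ARN, and connection parameters are placeholders, and `psycopg2` is assumed as the client (an Airflow Postgres hook would work equally well).

```python
# Sketch of step 3: copy a headered CSV from S3 into a Redshift table.
# Table, bucket/key, and IAM role are illustrative placeholders.
def build_copy_sql(table: str, bucket: str, key: str, iam_role: str) -> str:
    """Build a Redshift COPY statement for a headered CSV landed in S3."""
    return (
        f"COPY {table} "
        f"FROM 's3://{bucket}/{key}' "
        f"IAM_ROLE '{iam_role}' "
        "FORMAT AS CSV IGNOREHEADER 1 TIMEFORMAT 'epochsecs';"
    )

def run_copy(conn_params: dict, sql: str) -> None:
    """Execute the COPY against Redshift via psycopg2 (assumed client)."""
    import psycopg2  # third-party; imported lazily
    with psycopg2.connect(**conn_params) as conn:
        with conn.cursor() as cur:
            cur.execute(sql)
```

`TIMEFORMAT 'epochsecs'` matches the Unix-epoch `created_utc` values the Reddit API returns; drop it if your table stores the raw number instead of a timestamp.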
