IMF Economic Data Pipeline

This project fetches, transforms, and stores IMF economic indicators. Ingestion is orchestrated with Prefect, data is stored in PostgreSQL, transformations run through dbt, and dashboards are built in Metabase.


🏗️ Project Structure

imf-data-pipeline
├── create_table.sql
├── docker-compose.yml
├── ingestion/
│   └── fetch_data.py
├── dbt/
│   └── ...                      # dbt project
├── flows/
│   └── imf_ingestion_flow.py
├── requirements.txt
└── README.md

🚀 Getting Started

1. Clone this repository

git clone https://github.com/xiaolong0728/imf-data-pipeline.git
cd imf-data-pipeline

2. Start Services

docker-compose up -d

PostgreSQL runs on localhost:5433. Metabase runs at http://localhost:3000.
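To confirm the database is reachable before going further, a quick connection check from the host can help. This is a minimal sketch, assuming the credentials listed in the Dashboard section below and that psycopg2 is installed (e.g. via requirements.txt); adjust it to match your docker-compose.yml.

```python
# check_db.py: quick sanity check that the PostgreSQL container is up.
# Host port 5433 and the imf_data credentials are taken from this README;
# change them if your docker-compose.yml differs.
import psycopg2

conn = psycopg2.connect(
    host="localhost",
    port=5433,            # host port mapped to the container's 5432
    dbname="imf_data",
    user="xiaolong",
    password="xiaolong",
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone()[0])
conn.close()
```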

3. Install Dependencies

Create a virtual environment and install Python packages:

python -m venv venv
source venv/bin/activate
pip install -r requirements.txt

4. Run the Pipeline

python flows/imf_ingestion_flow.py

This will (see the flow sketch below):

  • Create tables
  • Fetch indicators, countries, regions, and groups
  • Fetch and insert time-series economic data
  • Run dbt transformations
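For orientation, here is a rough sketch of what such a Prefect flow might look like. The task names, the IMF endpoint, and the dbt invocation are illustrative assumptions; the project's actual logic lives in flows/imf_ingestion_flow.py and ingestion/fetch_data.py.

```python
# Illustrative Prefect flow mirroring the steps above (hypothetical names).
import subprocess

import requests
from prefect import flow, task


@task
def create_tables() -> None:
    # In this project the DDL lives in create_table.sql; a real task would
    # execute that file against PostgreSQL.
    print("Would execute create_table.sql")


@task
def fetch_indicators() -> dict:
    # Assumption: indicator metadata comes from the public IMF DataMapper API.
    url = "https://www.imf.org/external/datamapper/api/v1/indicators"
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    return resp.json()


@task
def run_dbt() -> None:
    # Assumption: dbt is invoked via its CLI from the dbt/ directory.
    subprocess.run(["dbt", "run"], cwd="dbt", check=True)


@flow
def imf_ingestion_flow() -> None:
    create_tables()
    indicators = fetch_indicators()
    print(f"Fetched {len(indicators.get('indicators', {}))} indicators")
    run_dbt()


if __name__ == "__main__":
    imf_ingestion_flow()
```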

🧠 Technologies Used

  • Python + Prefect for orchestrating the ingestion flow
  • PostgreSQL (in Docker) for storage
  • dbt for SQL transformations
  • Metabase for dashboards
  • Docker Compose for running the services locally

📊 Dashboard (Metabase)

  • URL: http://localhost:3000

  • Set up the database connection using PostgreSQL with:

    • Host: postgres_imf (the database container's Docker service name)
    • Port: 5432
    • DB Name: imf_data
    • User/Password: xiaolong / xiaolong

📦 Deployment Notes

  • PostgreSQL data is persisted via Docker volume postgres_data.

  • create_table.sql uses range partitioning for better performance.

  • dbt models are located in the /dbt directory.
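As a point of reference, range partitioning in PostgreSQL declares a parent table partitioned on a key column plus one child table per range. The snippet below is a hypothetical illustration of that pattern applied from Python; the real table and column names are whatever create_table.sql defines.

```python
# Hypothetical illustration of the range-partitioning pattern; the actual
# schema is defined in create_table.sql.
import psycopg2

DDL = """
CREATE TABLE IF NOT EXISTS economic_data (
    country_code   TEXT    NOT NULL,
    indicator_code TEXT    NOT NULL,
    year           INT     NOT NULL,
    value          NUMERIC,
    PRIMARY KEY (country_code, indicator_code, year)
) PARTITION BY RANGE (year);

CREATE TABLE IF NOT EXISTS economic_data_2000s
    PARTITION OF economic_data FOR VALUES FROM (2000) TO (2010);
CREATE TABLE IF NOT EXISTS economic_data_2010s
    PARTITION OF economic_data FOR VALUES FROM (2010) TO (2020);
"""

conn = psycopg2.connect(host="localhost", port=5433, dbname="imf_data",
                        user="xiaolong", password="xiaolong")
with conn, conn.cursor() as cur:
    cur.execute(DDL)
conn.close()
```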
