Commits (23)
9fc0e08
Add gitignore file
Priscaruso Jan 26, 2025
da1a045
Add pip file in gitignore
Priscaruso Jan 26, 2025
c02c4f5
Add airflow in docker compose file
Priscaruso Jan 27, 2025
e4a6832
fix: python version in airflow
Priscaruso Jan 28, 2025
77cd1b8
add: meltano dag file
Priscaruso Jan 28, 2025
5c55a9b
add: meltano project files
Priscaruso Jan 28, 2025
820d837
add: postgres extractor to meltano
Priscaruso Jan 28, 2025
8df32c7
add: csv extractor to meltano
Priscaruso Jan 28, 2025
447e829
add: csv loader to meltano
Priscaruso Jan 28, 2025
08af69c
add: postgres loader to meltano
Priscaruso Jan 28, 2025
c6f2c0e
add: postgres database as final-db service in docker-compose file
Priscaruso Jan 28, 2025
a71ae0f
add: extractors and loaders config in meltano.yml file
Priscaruso Jan 28, 2025
2693b66
fix: steps at meltano dag file
Priscaruso Jan 29, 2025
64297eb
fix: remove airflow service
Priscaruso Jan 29, 2025
054ae00
add: separate airflow docker compose file
Priscaruso Jan 29, 2025
1b91880
fix: remove meltano dag file from main dag folder
Priscaruso Jan 29, 2025
ffe56d8
add: meltano dag file into moved dag folder
Priscaruso Jan 29, 2025
2498448
add: .env file to git ignore
Priscaruso Jan 29, 2025
f5d6285
add: logs file to airflow project
Priscaruso Jan 29, 2025
74abaf0
add: pycache files to git ignore
Priscaruso Jan 29, 2025
b67cab5
add: airflow logs and scheduler files to git ignore
Priscaruso Jan 29, 2025
5868716
add: project README.md
Priscaruso Jan 29, 2025
bae89b5
fix: git ignore logs
Priscaruso Jan 29, 2025
20 changes: 20 additions & 0 deletions .gitignore
@@ -0,0 +1,20 @@
# Virtual environments
venvs/

# Pip package
get-pip.py

# environment variables
.env

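# Python cache and bytecode files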
__pycache__/
*.pyc
*.pyo

# Airflow logs
logs/*
airflow.log

# Airflow scheduler and pid files
scheduler/
*.pid
95 changes: 34 additions & 61 deletions README.md
@@ -1,84 +1,57 @@
# Indicium Tech Code Challenge

Code challenge for a Software Developer with a focus on data projects.
## Objective

The goal of this project is to build an ETL pipeline using PostgreSQL, Meltano, and Airflow, following the architecture shown below:

## Context

At Indicium we have many projects where we develop the whole data pipeline for our client, from extracting data from many data sources to loading this data at its final destination, with this final destination ranging from a data warehouse for a Business Intelligence tool to an API for integrating with third-party systems.

As a software developer with a focus on data projects, your mission is to plan, develop, deploy, and maintain a data pipeline.


## The Challenge

We are going to provide two data sources: a PostgreSQL database and a CSV file.
![image](docs/diagrama_embulk_meltano.jpg)

The CSV file represents details of orders from an e-commerce system.
In the first step, data is extracted from two different sources: a CSV file named 'order_details', containing details of orders from an e-commerce system, and a PostgreSQL database (Northwind), which contains all the other tables with information about that same e-commerce system. This data is written to local disk in CSV format, since the volume of data is small and the format is simple, which makes it easy to work with in the second step.

The database provided is a sample database made available by Microsoft for educational purposes, called Northwind; the only difference is that the **order_details** table does not exist in the database you are being provided with. This order_details table is represented by the CSV file we provide.
In the second step, this locally stored CSV data is extracted again and loaded into a final PostgreSQL database, where a query is run to generate a table with all orders and their details, also exported in CSV format, since the amount of data is small and this format makes it easy to inspect tables.

Schema of the original Northwind Database:
The image below shows the original schema of the Northwind database:

![image](https://user-images.githubusercontent.com/49417424/105997621-9666b980-608a-11eb-86fd-db6b44ece02a.png)

Your challenge is to build a pipeline that extracts the data every day from both sources and writes the data first to local disk, and second to a PostgreSQL database. For this challenge, the CSV file and the database will be static, but in any real-world project, both data sources would be changing constantly.

It's important that all writing steps (writing data from the inputs to the local filesystem and writing data from the local filesystem to the PostgreSQL database) are isolated from each other; you should be able to run any step without executing the others.

For the first step, where you write data to local disk, you should write one file for each table. This pipeline will run every day, so there should be a separation in the file paths you create for each source (CSV or Postgres), table, and execution day combination, e.g.:
## Prerequisites

```
/data/postgres/{table}/2024-01-01/file.format
/data/postgres/{table}/2024-01-02/file.format
/data/csv/2024-01-02/file.format
```
To run the project, the following prerequisites must be met:
- Docker installed, to create the containers (https://docs.docker.com/get-started/get-docker/)
- A virtual environment created for installing Meltano and its packages (a sketch follows this list)
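
A minimal sketch of the second prerequisite, assuming Python 3 and pip are available on the host. The directory name follows the `venvs/` entry in this PR's .gitignore, and no specific Meltano version is pinned here:

```bash
# create and activate a virtual environment for Meltano
python3 -m venv venvs/meltano
source venvs/meltano/bin/activate

# install Meltano inside the virtual environment
pip install meltano
```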

You are free to choose the naming and the format of the file you are going to save.

In step 2, you should load the data you created on the local filesystem into the final database.
## Project structure

The final goal is to be able to run a query that shows the orders and their details. The orders are stored in a table called **orders** in the Postgres Northwind database. The details are in the CSV file provided, and each line has an **order_id** field pointing to the **orders** table. A sketch of such a query is shown below.
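
For illustration, a hedged sketch of running that query against the final database and exporting the result as CSV. The service name `final-db`, the `postgres` user, the database name, and the loaded table names are assumptions that depend on docker-compose.yml and the target-postgres configuration, which are not shown in this diff (psql's `--csv` flag requires PostgreSQL 12+):

```bash
# join orders with their details and export the result as a CSV file
docker compose exec final-db psql -U postgres -d postgres --csv -c \
  "SELECT o.*, od.* FROM orders o JOIN order_details od ON od.order_id = o.order_id;" \
  > orders_with_details.csv
```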
The project is composed of the following folder structure:

## Solution Diagram

As Indicium uses some standard tools, the challenge was designed to be done using some of these tools.

The following tools should be used to solve this challenge.

Scheduler:
- [Airflow](https://airflow.apache.org/docs/apache-airflow/stable/installation/index.html)

Data Loader:
- [Embulk](https://www.embulk.org) (Java Based)
**OR**
- [Meltano](https://docs.meltano.com/?_gl=1*1nu14zf*_gcl_au*MTg2OTE2NDQ4Mi4xNzA2MDM5OTAz) (Python Based)

Database:
- [PostgreSQL](https://www.postgresql.org/docs/15/index.html)

The solution should be based on the diagrams below:
![image](docs/diagrama_embulk_meltano.jpg)
- airflow-project: Airflow directory, containing the dags folder (which holds the 'meltano-dag' DAG that runs the Meltano ETL), the config, plugins and logs folders, and the docker-compose.yml file with the Airflow image and its configuration
- data:
  - dbdata: metadata of the source PostgreSQL database
- docs: where the project architecture image is stored
- meltano-project: Meltano directory, containing the extractor and loader plugins, the data extracted by the ETL in the 'output' folder, and the 'meltano.yml' file with the extractor and loader configuration (see the sketch after this list)
- output-dbdata: metadata of the destination PostgreSQL database
- .gitignore: file listing the paths ignored by git
- docker-compose.yml: file containing the image and configuration of the source and destination PostgreSQL databases
- README.md: project documentation
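
For reference, a rough sketch of how the extractor and loader plugins referenced in meltano.yml could be added, mirroring the commits in this PR; run inside meltano-project with the virtual environment active. The exact plugin variants and their settings live in meltano.yml and are not reproduced here:

```bash
# extractors: source Northwind database and the order_details CSV
meltano add extractor tap-postgres
meltano add extractor tap-csv

# loaders: local CSV files (step 1) and the final PostgreSQL database (step 2)
meltano add loader target-csv
meltano add loader target-postgres
```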


### Requirements
## Installing the source and destination PostgreSQL databases
This project uses two PostgreSQL databases: one as the data source, which contains the Northwind database, and one as the destination, which stores the final data.
Inside the 'code-challenge' folder, run the following command:
`docker-compose up -d`

- You **must** use the tools described above to complete the challenge.
- All tasks should be idempotent; you should be able to run the pipeline every day and, in this case where the data is static, the output should be the same.
- Step 2 depends on both tasks of step 1, so you should not be able to run step 2 for a day if the tasks from step 1 did not succeed.
- You should extract all the tables from the source database, it does not matter that you will not use most of them for the final step.
- You should be able to tell where the pipeline failed clearly, so you know from which step you should rerun the pipeline.
- You have to provide clear instructions on how to run the whole pipeline. The easier the better.
- You must provide evidence that the process has been completed successfully, i.e. you must provide a CSV or JSON file with the result of the query described above.
- You should assume that it will run for different days, everyday.
- Your pipeline should be prepared to run for past days, meaning you should be able to pass an argument to the pipeline with a day from the past, and it should reprocess the data for that day. Since the data for this challenge is static, the only difference for each day of execution will be the output paths.

### Things that Matter
## Installing Airflow using docker-compose
Inside the 'airflow-project' folder, run the following commands:
`docker-compose up airflow-init`
`docker-compose up -d`

- Clean and organized code.
- Good decisions at each step (which database, which file format...) and good arguments to back those decisions up.
- The aim of the challenge is not only to assess technical knowledge in the area, but also the ability to search for information and use it to solve problems with tools that are not necessarily known to the candidate.
- Point and click tools are not allowed.

## Executing the pipeline

Thank you for participating!
To run the pipeline, activate the virtual environment created for Meltano and run the commands below inside the 'meltano-project' folder:
`meltano elt tap-csv target-postgres`
`meltano elt tap-postgres target-postgres`
34 changes: 34 additions & 0 deletions airflow-project/dags/meltano-dag.py
@@ -0,0 +1,34 @@
from airflow import DAG
from airflow.operators.bash import BashOperator
from datetime import datetime, timedelta

default_args = {
    'owner': 'airflow',
    'depends_on_past': False,
    'retries': 1,
    'retry_delay': timedelta(minutes=5),
}

dag = DAG(
    'meltano-daily-extraction',
    default_args=default_args,
    description='A simple DAG to run meltano ETL daily',
    schedule_interval='@daily',
    start_date=datetime(2025, 1, 1),
    catchup=False,
)

# step 1: extract all tables from the source Postgres database and the
# order_details CSV and write them to local disk as CSV files
step1_meltano = BashOperator(
    task_id='step1_meltano_etl',
    # assumes meltano is on PATH and the command runs from inside meltano-project
    bash_command='meltano elt tap-postgres target-csv && meltano elt tap-csv target-csv',
    dag=dag,
)

# step 2: load the locally stored CSV files into the final Postgres database
step2_meltano = BashOperator(
    task_id='step2_meltano_etl',
    bash_command='meltano elt tap-csv target-postgres',
    dag=dag,
)

# task dependencies: step 2 runs only after step 1 succeeds
step1_meltano >> step2_meltano