From a37dff13f36c16a2f996baa2775b318f5b534cb4 Mon Sep 17 00:00:00 2001
From: Guilherme Zanotelli
Date: Mon, 5 Feb 2024 19:10:31 -0300
Subject: [PATCH 1/5] adding destination database

---
 docker-compose.yml | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/docker-compose.yml b/docker-compose.yml
index 111b1acb40..b35104a4e7 100644
--- a/docker-compose.yml
+++ b/docker-compose.yml
@@ -1,7 +1,7 @@
 version: '3'
 services:
-  db:
+  source_database:
     image: postgres:12
     environment:
       POSTGRES_DB: northwind
       POSTGRES_USER: northwind_user
       POSTGRES_PASSWORD: thewindisblowing
     volumes:
       - ./dbdata:/var/lib/postgresql/data
       - ./data/northwind.sql:/docker-entrypoint-initdb.d/northwind.sql
     ports:
-      - 5432:5432
\ No newline at end of file
+      - 5433:5432
+
+  destination_database:
+    image: postgres:12
+    environment:
+      POSTGRES_DB: indicium
+      POSTGRES_USER: admin
+      POSTGRES_PASSWORD: password
+    ports:
+      - 5434:5432

From 51a97945a83af091229aea3ebadee1ac0b75cd79 Mon Sep 17 00:00:00 2001
From: Guilherme Zanotelli
Date: Mon, 5 Feb 2024 19:12:17 -0300
Subject: [PATCH 2/5] fixing some grammar issues

---
 README.md | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/README.md b/README.md
index cc9b6504bc..1420659fc1 100644
--- a/README.md
+++ b/README.md
@@ -1,13 +1,13 @@
 # Indicium Tech Code Challenge
 
-Code challenge for Software Developer with focus in data projects.
+Code challenge for Software Developer with a focus on data projects.
 
 
 ## Context
 
-At Indicium we have many projects where we develop the whole data pipeline for our client, from extracting data from many data sources to loading this data at its final destination, with this final destination varying from a data warehouse for a Business Intelligency tool to an api for integrating with third party systems.
+At Indicium we have many projects where we develop the whole data pipeline for our client, spanning from extracting data from diverse data sources to loading the extracted data at its final destination, which varies from a data warehouse for a Business Intelligence (BI) tool to an API that integrates with third-party systems or software.
 
-As a software developer with focus in data projects your mission is to plan, develop, deploy, and maintain a data pipeline.
+As a software developer with a focus on data projects, your mission is to plan, develop, deploy, and maintain data pipelines.
 
 
 ## The Challenge
@@ -63,22 +63,22 @@ The solution should be based on the diagrams below:
 
 ### Requirements
 
-- You **must** use the tools described above to complete the challenge.
-- All tasks should be idempotent, you should be able to run the pipeline everyday and, in this case where the data is static, the output shold be the same.
-- Step 2 depends on both tasks of step 1, so you should not be able to run step 2 for a day if the tasks from step 1 did not succeed.
+- You **must** use a combination of the tools described above to complete the challenge.
+- All tasks should be idempotent, you should be able to run the pipeline every day and, in this case, where the data is static, the output should be the same.
+- Step 2 depends on both Step 1 tasks, therefore Step 2 should not run in case *any of Step 1 do not succeed*.
 - You should extract all the tables from the source database, it does not matter that you will not use most of them for the final step.
-- You should be able to tell where the pipeline failed clearly, so you know from which step you should rerun the pipeline.
+- You should be able to tell exactly where the pipeline failed, so you know from where to rerun the pipeline.
 - You have to provide clear instructions on how to run the whole pipeline. The easier the better.
-- You must provide evidence that the process has been completed successfully, i.e. you must provide a csv or json with the result of the query described above.
-- You should assume that it will run for different days, everyday.
-- Your pipeline should be prepared to run for past days, meaning you should be able to pass an argument to the pipeline with a day from the past, and it should reprocess the data for that day. Since the data for this challenge is static, the only difference for each day of execution will be the output paths.
+- You must provide evidence that the process has been completed successfully, i.e. you must provide a CSV or JSON with the result of the query described above.
+- You should assume that the pipeline will run for different days, every day.
+- Your pipeline should be prepared to run for past days, meaning you should be able to pass an argument to the pipeline with a day from the past, and it should reprocess the data for that day. Since the data for this challenge is static, the only difference for each day of execution will be local file system paths.
 
 ### Things that Matters
 
 - Clean and organized code.
 - Good decisions at which step (which database, which file format..) and good arguments to back those decisions up.
-- The aim of the challenge is not only to assess technical knowledge in the area, but also the ability to search for information and use it to solve problems with tools that are not necessarily known to the candidate.
-- Point and click tools are not allowed.
+- The aim of the challenge is not only to assess technical knowledge in the field but also the ability to search for information and use it to solve problems with tools that are not necessarily known to the candidate.
+- Point-and-click tools are not allowed.
 
 Thank you for participating!
\ No newline at end of file

From f9a90332f5675063a51a415e6c069eebe9a04e3a Mon Sep 17 00:00:00 2001
From: Guilherme Zanotelli
Date: Mon, 5 Feb 2024 19:12:38 -0300
Subject: [PATCH 3/5] making text more ludic

---
 README.md | 47 ++++++++++-------------------------------------
 1 file changed, 10 insertions(+), 37 deletions(-)

diff --git a/README.md b/README.md
index 1420659fc1..ca910f6a4e 100644
--- a/README.md
+++ b/README.md
@@ -12,56 +12,29 @@ As a software developer with a focus on data projects, your mission is to plan,
 
 ## The Challenge
 
-We are going to provide 2 data sources, a PostgreSQL database and a CSV file.
+Consider the Northwind business, which has most of its data in a single database, a PostgreSQL instance. Here is an entity-relationship (ER) diagram of the database:
 
-The CSV file represents details of orders from an ecommerce system.
+![Northwind ER Diagram](https://user-images.githubusercontent.com/49417424/105997621-9666b980-608a-11eb-86fd-db6b44ece02a.png)
 
-The database provided is a sample database provided by microsoft for education purposes called northwind, the only difference is that the **order_detail** table does not exists in this database you are beeing provided with. This order_details table is represented by the CSV file we provide.
+The database has all the company's data, apart from the details of Northwind's orders, which come from a separate e-commerce system. This system outputs all details of the orders daily as a CSV file; this is the only format and frequency at which the system can operate.
 
-Schema of the original Northwind Database:
+Furthermore, Clyde, a new Northwind data analyst, has shown the CEO some bad-ass dashboards he made using the company database. Since then, the CEO has become very fond of the information from the dashboards, such that now he is interested in seeing a panel which Clyde has determined requires the details of the orders. To this end, the CEO asked the IT team to provision a data warehouse (a secondary PostgreSQL), since he does not wish to undermine the production database with an analytical processing load.
 
-![image](https://user-images.githubusercontent.com/49417424/105997621-9666b980-608a-11eb-86fd-db6b44ece02a.png)
+Now Clyde needs you to join the data from the production (source) database along with the CSV file containing the details of the orders that the system outputs. He insisted it is important to do this in two steps: first, extract the data from its source into the local filesystem, and then load the data into the data warehouse (destination database).
 
-Your challenge is to build a pipeline that extracts the data everyday from both sources and write the data first to local disk, and second to a PostgreSQL database. For this challenge, the CSV file and the database will be static, but in any real world project, both data sources would be changing constantly.
+Clyde is no expert and he is open to new ideas, but he has **lots** of experience working with other data engineers in the past. He knows they mostly use something called Airflow to orchestrate the data pipelines, and they also like to use tools such as Embulk and Meltano for these extraction/loading tasks. Finally, he made this visual schematic to clear the air in case you have any doubts:
 
-Its important that all writing steps (writing data from inputs to local filesystem and writing data from local filesystem to PostgreSQL database) are isolated from each other, you shoud be able to run any step without executing the others.
+![Solution diagram](docs/diagrama_embulk_meltano.jpg)
 
-For the first step, where you write data to local disk, you should write one file for each table. This pipeline will run everyday, so there should be a separation in the file paths you will create for each source(CSV or Postgres), table and execution day combination, e.g.:
+Clyde also said it would be nice if the extraction files did not get overwritten with new data or deleted every day the pipeline runs. This would ensure an extraction backup in any event and would also help to debug any issues in the future. Finally, he managed to gather some extra links to help you get started:
 
-```
-/data/postgres/{table}/2024-01-01/file.format
-/data/postgres/{table}/2024-01-02/file.format
-/data/csv/2024-01-02/file.format
-```
-
-You are free to chose the naming and the format of the file you are going to save.
-
-At step 2, you should load the data from the local filesystem, which you have created, to the final database.
-
-The final goal is to be able to run a query that shows the orders and its details. The Orders are placed in a table called **orders** at the postgres Northwind database. The details are placed at the csv file provided, and each line has an **order_id** field pointing the **orders** table.
-
-## Solution Diagram
-
-As Indicium uses some standard tools, the challenge was designed to be done using some of these tools.
-
-The following tools should be used to solve this challenge.
-
-Scheduler:
-
+- [Docker](https://www.docker.com/)
 - [Airflow](https://airflow.apache.org/docs/apache-airflow/stable/installation/index.html)
-
-Data Loader:
 - [Embulk](https://www.embulk.org) (Java Based)
-**OR**
 - [Meltano](https://docs.meltano.com/?_gl=1*1nu14zf*_gcl_au*MTg2OTE2NDQ4Mi4xNzA2MDM5OTAz) (Python Based)
-
-Database:
 - [PostgreSQL](https://www.postgresql.org/docs/15/index.html)
 
-The solution should be based on the diagrams below:
-![image](docs/diagrama_embulk_meltano.jpg)
-
-
-### Requirements
+Now it is up to you! You can show Clyde the output of a select query to demonstrate that the data from the details of the orders are in the provisioned data warehouse.
 
 - You **must** use a combination of the tools described above to complete the challenge.
 - All tasks should be idempotent, you should be able to run the pipeline every day and, in this case, where the data is static, the output should be the same.

From b569bd0af062bdd458c77fda96276ca6885a2751 Mon Sep 17 00:00:00 2001
From: Guilherme Zanotelli
Date: Mon, 5 Feb 2024 19:13:26 -0300
Subject: [PATCH 4/5] adding getting started section

---
 README.md | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/README.md b/README.md
index ca910f6a4e..58bdf09c27 100644
--- a/README.md
+++ b/README.md
@@ -36,6 +36,19 @@
 Now it is up to you! You can show Clyde the output of a select query to demonstrate that the data from the details of the orders are in the provisioned data warehouse.
 
+## Getting started
+
+First, ensure you have [Docker](https://www.docker.com/) installed on your system (alongside the [compose plugin](https://docs.docker.com/compose/install/linux/) - not sure it is installed? run `docker compose version`). Now you may deploy both PostgreSQL instances using:
+```shell
+docker compose up -d
+```
+
+This will deploy two containers representing the Northwind database and the data warehouse. You are free to add further services to this specification but are not allowed to modify existing configurations.
+
+Follow the tutorials or devise your own Airflow deployment, but remember to document the steps required to get your solution up and running on other people's hardware (document the constraints you have tested on - Windows, Linux, etc.). Although it is advised to follow Clyde's guidelines, you are free to design your own solution to the Northwind issue.
+
+Hint: inspect the docker-compose file to find useful information.
+
 - You **must** use a combination of the tools described above to complete the challenge.
 - All tasks should be idempotent, you should be able to run the pipeline every day and, in this case, where the data is static, the output should be the same.
 - Step 2 depends on both Step 1 tasks, therefore Step 2 should not run in case *any of Step 1 do not succeed*.

From 1363a39a830e505b70677f7b998f25ae23905393 Mon Sep 17 00:00:00 2001
From: Guilherme Zanotelli
Date: Mon, 5 Feb 2024 19:13:39 -0300
Subject: [PATCH 5/5] adding some trivia info from previous sections

---
 README.md | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 58bdf09c27..31d502b8ee 100644
--- a/README.md
+++ b/README.md
@@ -49,6 +49,13 @@ Follow the tutorials or devise your own Airflow deployment, but remember to docu
 Hint: inspect the docker-compose file to find useful information.
 
+## Trivia
+
+The actual Northwind database is a sample database provided by Microsoft for educational purposes. It differs from the copy provided here only in the `orders_details` table, which has been extracted as a CSV and provided as an external file.
+
+
+## Requirements
+
 - You **must** use a combination of the tools described above to complete the challenge.
 - All tasks should be idempotent, you should be able to run the pipeline every day and, in this case, where the data is static, the output should be the same.
 - Step 2 depends on both Step 1 tasks, therefore Step 2 should not run in case *any of Step 1 do not succeed*.
 - You should extract all the tables from the source database, it does not matter that you will not use most of them for the final step.
 - You should be able to tell exactly where the pipeline failed, so you know from where to rerun the pipeline.
 - You have to provide clear instructions on how to run the whole pipeline. The easier the better.
 - You must provide evidence that the process has been completed successfully, i.e. you must provide a CSV or JSON with the result of the query described above.
 - You should assume that the pipeline will run for different days, every day.
 - Your pipeline should be prepared to run for past days, meaning you should be able to pass an argument to the pipeline with a day from the past, and it should reprocess the data for that day. Since the data for this challenge is static, the only difference for each day of execution will be local file system paths.
 
-### Things that Matters
+### Yes, it matters...
 
 - Clean and organized code.
 - Good decisions at which step (which database, which file format..) and good arguments to back those decisions up.
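For reference, here is a minimal sketch of the kind of evidence query described above, assuming the extracted tables end up in the destination database under the names `orders` and `order_details` (these names, and the output file name, are placeholders to adapt to your own pipeline). The connection values follow the `destination_database` service declared in `docker-compose.yml`, and the `--csv` switch of the `psql` client shipped with PostgreSQL 12 writes the result straight to a CSV file:

```shell
# Minimal sketch, assuming the loaded tables are named "orders" and "order_details";
# adjust table/column names and the output path to whatever your pipeline creates.
# Host, port, user, password and database come from the destination_database
# service in docker-compose.yml.
PGPASSWORD=password psql -h localhost -p 5434 -U admin -d indicium --csv -c "
  SELECT o.order_id,
         o.customer_id,
         o.order_date,
         d.product_id,
         d.unit_price,
         d.quantity,
         d.discount
  FROM orders AS o
  JOIN order_details AS d ON d.order_id = o.order_id
  ORDER BY o.order_id
  LIMIT 10;
" > evidence_orders_with_details.csv
```

Any equivalent query, or a JSON export of the same result, works just as well as evidence.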