As a Data Engineer, you will join our ambitious and forward-thinking Data Engineering team. At HomeToGo we capture, process and store hundreds of gigabytes of new data every day using technologies such as Apache Kafka and Apache Spark. Our data lake holds hundreds of terabytes of data in AWS S3, which different teams use in various ways through data jobs running in Apache Airflow: from building self-service analytics dashboards with AWS Redshift, Apache Druid, Redash and Tableau, to training ML models that make thousands of decisions per second on our websites every day. You will contribute to HomeToGo's data platform by developing our Data Warehouse, which must meet challenging scalability, performance and usability requirements. Your work will be critical to ensuring that everyone at HomeToGo can make data-driven decisions efficiently and reliably on a daily basis.
