Do you want to join a fast-paced startup and work on the bleeding edge of recommendation systems, where your work immediately impacts the user experience for millions of users? We are looking for a Data Engineer who thrives in a fast-moving environment.
What you will be building:
Recommendation systems that reliably surface relevant recommendations in every scenario, and that are easy to maintain, deploy, and improve.
We need someone who we can count on to:
🙋 Own: Data pipelines that are easy to work with, along with the tooling we use to deploy, introspect, and monitor our recommendation systems so we can constantly make them better.
💻 Teach: What data we should collect and when; the tradeoffs between iteration speed and data platform complexity; and how to maintain data warehouse structures and schemas that are easy to work with.
🚴 Improve: How we operate our data pipelines, our iteration speed, and the robustness of the entire system, to delight the end users of our product.
Desired skills and qualities
- GCP data engineering stack (including BigQuery, Airflow, Pub/Sub, Cloud Functions, Dataflow, Kubernetes)
- Ability to set up (managed) infrastructure that runs data pipeline jobs
- Experience working in mature data engineering codebases
- Basic data science knowledge: able to debug issues by running simple analyses on log data
💎We expect you to:
- Own data pipelines that ingest our events reliably into tables that are easy to work with
- Own monitoring and alerting around the data pipelines
- Design and maintain data warehouse structures and schemas that are easy for the rest of the team to work with
- Identify and communicate shortcomings in how we currently operate our data pipelines and push for improvements
- Educate the rest of the tech team on how to take our event tracking to the next level