Senior Data Engineer

US - Remote

About Us

Since 2016, dbt Labs has been on a mission to help analysts create and disseminate organizational knowledge. dbt Labs pioneered the practice of analytics engineering, built the primary tool in the analytics engineering toolbox, and has been fortunate enough to see a fantastic community coalesce to help push the boundaries of the analytics engineering workflow. Today there are 30,000 companies using dbt every week, 100,000 dbt Community members, and over 4,100 dbt Cloud customers. You can learn more about our values here.  

About the role:

As a Senior Data Engineer at dbt Labs, you will take the lead in building and owning our data ecosystem (e.g. infrastructure, pipelines, data products). This ecosystem is crucial for powering analyses, guiding business decisions, accelerating growth, and driving efficiency across business operations. The team combines strategic, operational, and problem-solving skills with a pragmatic sense of how to get things done and drive change across the organization.

In this role, you can expect to:

  • Design, build and manage our data pipelines, ensuring all user and product event data is seamlessly integrated into our data warehouse.
  • Develop canonical datasets to track key product metrics including user growth, engagement, and revenue.
  • Work collaboratively with various teams, including Infrastructure, Product, Marketing, Finance, and GTM to understand their data needs and provide solutions.
  • Implement robust and fault-tolerant systems for data ingestion and processing.
  • Participate in data architecture and engineering decisions, bringing your strong experience and knowledge to the table.
  • Ensure the security, integrity, and compliance of data according to industry and company standards.

You are a good fit if you have:

  • Worked asynchronously as part of a fully-remote, distributed team.
  • 5+ years of experience as a data engineer and 8+ years of overall software engineering experience (including data engineering).
  • Proficiency in at least one programming language commonly used within Data Engineering, such as Python, Scala, or Java.
  • Strong data infrastructure and data architecture skills.
  • Expertise with ETL schedulers such as Airflow, Dagster, Prefect, or similar frameworks.
  • Solid understanding of Spark and ability to write, debug and optimize Spark code.
  • A bias for action and urgency, not letting perfect be the enemy of the effective.
  • A “full-stack mindset”, not hesitating to do what it takes to solve a problem end-to-end, even if it requires …