Senior Data Engineer
Remote (US & Canada)
Company Overview
Totango + Catalyst have joined forces to build a leading customer growth platform that helps businesses protect and grow their revenue. Built by an experienced team of industry leaders, our software integrates with all the tools CS teams already use to provide one centralized view of customer data. Our modern and intuitive dashboards help CS leaders develop impactful workflows and take the right actions to understand health, prevent churn, increase adoption, and drive expansion.
Position Overview
Insights and intelligence are the cornerstones of our product offering. We ingest and process massive amounts of data from a variety of sources to help our users understand the overall health of their customers at each stage of their journey. As a Senior Data Engineer, you will be directly responsible for designing and implementing the next-generation data architecture leveraging technologies such as Databricks, TiDB, and Kafka.
This role is open to remote work anywhere within Canada and the U.S.
What You’ll Do
- Drive high-impact, cross-functional data engineering projects built on top of a modern, best-in-class data stack, working with a variety of open-source and cloud technologies
- Solve interesting and unique data problems at high volume and large scale
- Build batch-, stream-, and queue-based solutions and optimize their performance, using technologies such as Kafka and Apache Spark
- Collaborate with stakeholders from different teams to drive forward the data roadmap
- Implement data retention, security, and governance standards
- Work with all engineering teams to help drive best practices for ownership and self-serve data processing
- Support and expand standards, guidelines, tooling, and best practices for data engineering at Catalyst
- Support other data engineers in delivering our critical pipelines
- Focus on data quality, cost-effective scalability, and distributed-system reliability, and establish automated mechanisms to uphold them
- Work cross-functionally with application engineers, SREs, product, data analysts, data scientists, and ML engineers
What You’ll Need
- 3+ years of experience successfully implementing modern data architectures
- Strong project management skills
- Demonstrated experience implementing ETL pipelines with Spark (we use PySpark)
- Proficiency in Python, SQL, and/or other modern programming languages
- Deep understanding of SQL/NewSQL relational data stores such as Postgres and MySQL
- A strong desire to take ownership of problems you identify
- Experience with modern data warehouses and data lakes such as Redshift, Snowflake, …
Benefits/Perks
- Competitive compensation
- Comprehensive benefits
- Equity
- Mental health days
- Remote-first company
- Remote work
- Unlimited PTO
Skills
Airflow, Apache Spark, Change Data Capture, CI/CD, Databricks, data engineering, dbt, Delta Lake, DevOps, Elasticsearch, ETL, IaC, Kafka, monitoring, MySQL, Postgres, Python, Redis, Redshift, Snowflake, SQL