Staff Data Engineer
Remote - US
About the team & opportunity
What’s so great about working on Calendly’s Engineering team? We make things possible for our customers through innovation.
Why do we need you? We are looking for a Staff Data Engineer to help lead the next phase of our data platform evolution. In this role, you’ll bring deep expertise in building and maintaining both batch and streaming data pipelines, supporting critical use cases for internal teams and external customers alike. You’ll ensure high standards of reliability, accuracy, and observability across our data infrastructure.
You will report to the Data Platform Engineering Manager and serve as the technical lead for the Data Platform team. You’ll partner closely with Product and Engineering to design scalable systems, mentor other engineers, and drive key architectural decisions. You’ll be hands-on, building new capabilities using technologies like Apache Flink and Google Cloud Platform (e.g., BigQuery, Kubernetes, Dataflow, Pub/Sub, GCS, and more).
A day in the life of a Staff Data Engineer at Calendly
On a typical day, you will be working on:
- Designing and building net-new batch and streaming data pipelines that power analytics, product features, and customer-facing experiences as the company scales
- Serving as the technical lead on the centralized Data Platform team, mentoring engineers, driving architectural best practices, and raising the bar for code and design reviews
- Building for scale and reliability by ensuring robust monitoring, alerting, and self-healing systems—identifying issues before they affect users or the business
- Working hands-on with our modern data stack: Apache Flink, Beam, Airflow, Kubernetes, Google Cloud Storage (GCS), BigQuery, and Datadog
- Partnering closely with data consumers across product, engineering, and analytics to understand evolving needs and deliver scalable, reusable data solutions
- Helping lay the technical foundations for a platform that can support increasing data volume, complexity, and business use cases
- Contributing to a culture of ownership, quality, and continuous improvement, ensuring the Data Platform is a trusted, high-leverage layer for the company
What do we need from you?
- 5+ years of experience with streaming and messaging systems like Beam, Flink, Spark, Kafka, and/or Pub/Sub
- 8+ years of experience managing enterprise-grade cloud data warehouses (BigQuery, Snowflake, Databricks, etc.) using open-source, self-managed change-data-capture systems (e.g., Debezium)
- Expertise in SQL, Python, and ideally Java
- Experience mentoring high-potential data engineers and contributing to team culture and best practices
- Availability to participate in an on-call rotation, ensuring prompt and effective responses to business-critical alerts outside of regular working hours
- Authorized to work lawfully in the United States of America as Calendly does not engage in immigration sponsorship at this time
What’s in it for you?
Ready to make a serious impact? Millions of people already rely on Calendly’s products, and we’re still in the midst of our growth curve — it’s a fantastic time to join us. Everything you’ll work on here will accelerate your career to the next level. If you want to learn, grow, and do the best work of your life alongside the best people you’ve ever worked with, then we hope you’ll consider allowing Calendly to be a part of your professional journey.
If you are an individual with a disability and would like to request a reasonable accommodation as part of the application or recruiting process, please contact us at recruiting@calendly.com .
Calendly is registered as an employer in many, but not all, states. If you are located in Alaska, Alabama, Delaware, Hawaii, Idaho, Montana, North Dakota, South Dakota, Nebraska, Iowa, West Virginia, and Rhode Island, you will not be eligible for employment. Note that all individual roles will specify location eligibility.
All candidates can find our Candidate Privacy Statement here
Candidates residing in California may visit our Notice at Collection for California Candidates here: Notice at Collection
The ranges listed below are the expected annual base salary for this role, subject to change.
Calendly takes a number of factors into consideration when determining an employee’s starting salary, including relevant experience, relevant skills sets, interview performance, location/metropolitan area, and internal pay equity.
Base salary is just one component of Calendly’s total rewards package. All full-time (30 hours/week) employees are also eligible for our Quarterly Corporate Bonus program (or Sales incentive), equity awards, and competitive benefits.
Calendly uses the zip code of an employee’s remote work location, or the onsite building location if hybrid, to determine which metropolitan pay range we use. Current geographic zones are as follows:
- Tier 1: San Francisco, CA, San Jose, CA, New York City, NY
- Tier 2: Chicago, IL, Austin, TX, Denver, CO, Boston, MA, Washington D.C., Philadelphia, PA, Portland, OR, Seattle, WA, Miami, FL, and all other cities in CA.
- Tier 3: All other locations not in Tier 1 or Tier 2