Senior Data Engineer - Ames


The Senior Data Engineer at Workiva will be an instrumental part of data workflows throughout the organization. You will build distributed services to support multiple data analytics teams and business intelligence engineers reliably and at scale using AWS cloud environments. You will provide cutting-edge, reliable, and easy-to-use systems for ingesting and processing data and help the teams that build data-intensive applications be successful. 

In this role, you will collaborate with many cross-functional teams on the planning, execution, and successful completion of technical projects, with the ultimate goal of improving the customer experience. You will build and maintain batch and real-time data flows used for business intelligence, analytics, and machine learning across all organizations at Workiva. This also involves storing and exposing data via databases, a data lake, and other APIs. Senior Data Engineers work primarily with other Data Engineers, but also with Data Scientists, ML Engineers, and business partners to ensure quality, reliability, and performance at the highest level.

What You'll Do

  • Develop data extraction and integration code modules for batch and incremental data flow from various data sources using new and existing patterns

  • Use existing tools and processes to deploy to integration and production environments

  • Maintain the deployment processes

  • Maintain the health of the data ecosystem by configuring monitors, defining alerts on common failure points, and giving feedback on data quality to data owners and business partners

  • Test software, validate data, and write automated tests (unit, integration, functional, etc.)

  • Review peer code and submit thorough and actionable feedback based on team standards and industry best practices

  • Triage and resolve production issues. Communicate with individual business partners on status and escalate as needed

  • Design data lake storage and access patterns to match customer requirements and conform to naming standards

  • Understand the data at a deep level, apply security appropriately, and escalate as needed

  • Tune processes and SQL to reduce cost and wait time. Implement systems to balance data volume, latency and customer requirements

  • Work with business partners to write requirements and test deployed code

  • Join rotation to support production workflows during off hours

What You'll Need

Minimum qualifications

  • 4+ years of relevant experience in a data engineering role, including data warehousing and business intelligence tools, techniques, and technology, or experience in analytics, business analysis, or comparable consumer analytics solutions

  • Undergraduate Degree or equivalent combination of education and experience in a related field


Preferred qualifications

  • Bachelor’s degree in Computer Science, Engineering, Math, Finance, Statistics or related discipline

  • Experience in big data processing and using databases in a business environment with large-scale, complex datasets (SQL, Hadoop, Spark, Flink, Beam, etc.)

  • Experience with AWS cloud technologies, including S3, Redshift, Spark, Lambda, and Kinesis

  • Knowledge of and direct experience using business intelligence reporting tools (QuickSight, Tableau, Splunk, etc.)

  • Extensive knowledge of SQL query design and tuning for performance and accuracy

  • Experience with Python, R, or other data-relevant scripting languages

  • Experience with Data Lake design and philosophy

  • Experience in an Agile/Sprint working environment

  • Proficient research skills to locate market information using numerous internal and external sources of data

  • Excellent communication (verbal and written) and interpersonal skills and an ability to effectively communicate with both business and technical teams

  • Strong planning and organizing skills to prioritize numerous projects and ensure data is delivered in an accurate and understandable manner to the end user


Working Conditions & Travel

  • Reliable internet access required when working remotely outside a Workiva office

  • Travel: less than 10%

How You’ll Be Rewarded

✅ Salary range in the US: $79,000 - $134,000

✅ A discretionary bonus typically paid annually

✅ Restricted Stock Units granted at time of hire

✅ 401(k) match and comprehensive employee benefits package

This range represents the low and high end of the salary range for this job in the US. Minimums and maximums may vary based on location. The actual salary offer will carefully consider a wide range of factors, including your skills, qualifications, experience, and other relevant factors.

Workiva is an Equal Employment Opportunity and Affirmative Action Employer. We believe that great minds think differently. We value diversity of backgrounds, beliefs, and interests, and we recognize diversity as an important source of intellectual thought, varied perspective, and innovation. Employment decisions are made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status, or any other protected characteristic. We strongly encourage and welcome people from historically marginalized groups to apply.

Workiva is committed to working with and providing reasonable accommodations to applicants with disabilities. To request assistance with the application process, please email

Workiva employees are required to undergo comprehensive security and privacy training tailored to their roles, ensuring adherence to company policies and regulatory standards.

Workiva supports employees in working where they work best - either from an office or remotely from any location within their country of employment.

#LI-LP1