FreshRemote.Work

AI/ML Data Engineer (Databricks)

US - Work From Home, United States

The Opportunity

 

QuidelOrtho unites the strengths of Quidel Corporation and Ortho Clinical Diagnostics, creating a world-leading in vitro diagnostics company with award-winning expertise in immunoassay and molecular testing, clinical chemistry and transfusion medicine. We are more than 6,000 strong and do business in over 130 countries, providing answers with fast, accurate and consistent testing where and when they are needed most – home to hospital, lab to clinic.

Our culture puts our team members first and prioritizes actions that support happiness, inspiration and engagement. We strive to build meaningful connections with each other as we believe that employee happiness and business success are linked. Join us in our mission to transform the power of diagnostics into a healthier future for all.

The Role

As we continue to grow as QuidelOrtho, we are seeking an AI/ML Data Engineer to support our Global Data and Analytics team. The AI/ML Data Engineer will be responsible for designing, building, and optimizing data pipelines and infrastructure using Databricks to support AI and machine learning (ML) initiatives. This role will involve working closely with business stakeholders to identify high-value AI/ML use cases and translating business requirements into technical solutions. The engineer will work to ensure the successful deployment of AI/ML solutions at scale, leveraging Azure services and Databricks tools.

This position is remote-eligible.

The Responsibilities

  • Work directly with business stakeholders to identify and define AI/ML use cases, translating business needs into technical requirements.
  • Design, develop, and optimize scalable data pipelines in Databricks for AI/ML applications, ensuring efficient data ingestion, transformation, and storage.
  • Build and manage Apache Spark-based data processing jobs in Databricks, ensuring performance optimization and resource efficiency.
  • Implement ETL/ELT processes and orchestrate workflows using Azure Data Factory, integrating various data sources such as Azure Data Lake, Blob Storage, and Microsoft Fabric.
  • Collaborate with Data Engineering teams to meet data infrastructure needs for model training, tuning, and deployment within Databricks and Azure Machine Learning.
  • Monitor, troubleshoot, and resolve issues within Databricks workflows, ensuring smooth operation and minimal downtime.
  • Implement best practices for data security, governance, and compliance within Databricks and Azure environments.
  • Automate data and machine learning workflows using CI/CD pipelines through Azure DevOps.
  • Maintain documentation of workflows, processes, and best practices to ensure knowledge sharing across teams.
  • Perform other work-related duties as assigned.
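The pipeline responsibilities above follow the standard extract-transform-load shape: ingest raw data, clean and normalize it, then write it to a managed store. A minimal sketch in plain Python, purely illustrative (the role itself uses Databricks, Spark, and Azure services; the data fields and function names here are assumptions, not QuidelOrtho's actual pipeline):

```python
import csv
import io

def extract(raw_csv: str) -> list[dict]:
    """Ingest: parse raw CSV rows into dictionaries."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows: list[dict]) -> list[dict]:
    """Clean: drop rows missing a result value and cast numerics."""
    cleaned = []
    for row in rows:
        if not row["result"]:
            continue  # skip incomplete records
        cleaned.append({"sample_id": row["sample_id"],
                        "result": float(row["result"])})
    return cleaned

def load(rows: list[dict], sink: list) -> None:
    """Store: append transformed rows to a sink (stand-in for a table/Delta Lake)."""
    sink.extend(rows)

# Usage: run the three stages end to end on toy data
raw = "sample_id,result\nS1,4.2\nS2,\nS3,5.0\n"
sink: list = []
load(transform(extract(raw)), sink)
print(len(sink))  # 2 valid records survive the transform
```

In a Databricks setting the same stages would typically be Spark DataFrame reads, transformations, and writes orchestrated by Azure Data Factory, but the stage boundaries are the same.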

The Individual

Required:

  • Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
  • 3+ years of experience …

Job Profile

Regions

North America

Countries

United States

Benefits/Perks

  • Employee happiness focus
  • Remote-eligible
  • Team member support
  • Work from home

Tasks
  • Automate workflows using CI/CD
  • Build and manage data processing jobs
  • Collaborate with data engineering teams
  • Design and optimize data pipelines
  • Identify AI/ML use cases
  • Implement ETL/ELT processes
  • Monitor and troubleshoot workflows
Skills

AI, Apache Spark, Azure Blob Storage, Azure Data Factory, Azure Data Lake, Azure DevOps, Azure Synapse Analytics, Databricks, Databricks Delta Lake, data engineering, machine learning, MLflow, Python, Scala

Experience

3 years

Education

Bachelor's degree in Computer Science, Engineering, or a related field

Timezones

America/Anchorage, America/Chicago, America/Denver, America/Los_Angeles, America/New_York, Pacific/Honolulu (UTC-10 through UTC-5)