FreshRemote.Work

Senior Data Engineer

Remote USA

Softrams is one of the fastest-growing digital services firms in the Washington Metropolitan region, crafting human-centered solutions and empowering digital services with a focus on HX, AI, cloud, DevOps, and cybersecurity. Our offices are located in Leesburg VA, Baltimore MD, and Plano TX, and our teams are spread across the U.S.
  • Recognized as a Top Workplace USA (2024) 
  • Recognized as one of the Top Workplaces in Technology (2023, 2021) 
  • INC 5000 Fastest Growing Companies in America (2023, 2022) 
  • Washington Business Journal Top 75 Fastest Growing Companies in Greater Washington Area (2020) 
  • NXT UP - Top Federal Emerging Technology and Consulting Firms (2020) 
  • Inaugural DC Metro’s Most Successful Companies (2020) 
  • Washington Technology Fast 50 
  • NVTC Tech 100 (2020, 2019) 
Job Description

Softrams is seeking a Data Engineer for a position in federal health IT solutions. The selected candidate will wrangle large, complex datasets, set up data pipelines to provide select data for quality analysis, and network with appropriate internal sources to gather and exchange data on specialized matters. This position requires a combination of technical expertise, strong problem-solving skills, and a comprehensive understanding of data engineering within a cloud environment. The ideal candidate will play a key role in building and maintaining robust data pipelines and infrastructure, ensuring the availability, quality, and security of data to support business intelligence and advanced analytics initiatives. 

Federal Requirements:  

  • Ability to obtain a U.S. Federal Position of Trust clearance designation.  
  • Must reside in and be able to perform work in the United States.  
  • Must have lived in the United States for 3 of the last 5 years.  

Required Qualifications:

  • Master's degree in Computer Science, Data Engineering, or a related field, with a minimum of 4 years of experience in data engineering (PhD is a plus). 
  • At least 5 years of experience in programming with Python, focusing on data engineering tasks and scripting. 
  • A minimum of 3 years of hands-on experience with Apache Spark for large-scale data processing, including building data visualizations using PySpark and Jupyter Notebooks. 
  • Proficiency in data manipulation and analysis using Python libraries such as NumPy and Pandas. 
  • Proven expertise in designing and managing data pipelines using AWS services, including AWS EMR and AWS S3. 
  • 4+ years of experience working with relational databases and AWS Redshift, with a strong understanding of SQL for data manipulation and querying. 
  • At least 2 years of experience utilizing Jupyter Notebooks for …