
Cloud Data Integration and Interoperability Engineer (DevOps Engineer)

6314 Remote/Teleworker US

Leidos seeks a Cloud Data Integration and Interoperability Engineer (DevOps Engineer) to support its multi-billion-dollar Health & Civil Sector opportunity pipeline. Working for the Sector CTO’s Innovation Lab, this highly visible position will support growth across several of Leidos’ strategic programs and capture opportunities, including cloud and data modernization efforts at the Department of Defense, the Department of Health and Human Services, and the Department of Justice, among others. Our focus is on supporting new and innovative applications of cloud infrastructure, data analytics, AI/ML, and cybersecurity, all of which require input from the next generation of technology leaders, like you.

WHAT YOU WILL BE DOING
The Cloud Data Integration and Interoperability Engineer will join the Capabilities & Integration team as a key team member responsible for the team’s DevSecOps toolsets and activities across all projects. Your efforts will support the team’s portfolio of leading-edge projects, including tasks related to the implementation, deployment, and sustainment of new capabilities.

Primary responsibilities include:
•    Provide build and operations engineering support for digital health application infrastructure environments in AWS Cloud, as well as other cloud service providers such as Azure, Oracle Cloud, and Google Cloud (see the sketch after this list).
•    Plan, test, and deploy COTS, OS, and infrastructure updates and patches to the project environments.
•    Understand and administer IdAM solutions (JumpCloud, Keycloak, Okta, Microsoft Entra, etc.) supporting team access.
•    Configure and maintain user accounts and network access to our DevSecOps environment, including supporting requests for new or modified Jira and Confluence project tools.
•    Support resolution of operational issues and employ best practices in troubleshooting and resolving tickets received from the user community.
•    Support the scheduling and execution of pipeline build and deployment activities (GitLab) within the project cloud computing environments.
•    Coordinate with tiered engineering teams and COTS vendor engineering, as required, to resolve build and operational tickets.
•    Contribute to the design and use of Infrastructure as Code (analyze, design, build, test, deploy, support) for large scale infrastructure implementations.
•    Assist in the introduction of new capabilities and the enhancement or troubleshooting of existing capabilities; ability to understand and learn new technologies as they are encountered.
•    Integrate commercial products with semi-custom developed solutions to meet functional requirements while working within the guidelines of the client’s infrastructure.
•    Communicate with Product Team Owners, application development and integration teams, network teams, and application security teams.
•    100% remote work with occasional travel to our Reston, Virginia corporate headquarters.
•    Work independently within a matrixed organization.
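
The build and operations support described in the first bullet above often begins with routine health checks of the cloud environment. As a rough illustration only, here is a minimal sketch in Python, assuming the boto3 AWS SDK is installed and credentials are already configured; the "Project" tag key, region, and project name are hypothetical placeholders, not details from this posting.

import boto3

def report_instance_states(project: str, region: str = "us-east-1") -> None:
    """Print the name and state of every EC2 instance tagged for one project."""
    ec2 = boto3.client("ec2", region_name=region)
    pages = ec2.get_paginator("describe_instances").paginate(
        Filters=[{"Name": "tag:Project", "Values": [project]}]  # hypothetical tag key
    )
    for page in pages:
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                # Prefer the Name tag; fall back to the instance ID.
                name = next(
                    (t["Value"] for t in instance.get("Tags", []) if t["Key"] == "Name"),
                    instance["InstanceId"],
                )
                print(f"{name}: {instance['State']['Name']}")

if __name__ == "__main__":
    report_instance_states("digital-health-dev")  # hypothetical project name

In practice a check like this would more likely run inside the GitLab pipeline and ticketing workflow described above rather than ad hoc from a workstation.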

REQUIRED Qualifications:
•    Requires a BS degree and 8+ years of prior relevant experience, or a Master’s degree and 6+ years of prior relevant experience.
•    3+ years of hands-on, full-time cloud experience required – this is a hands-on position.
•    US Citizenship 
•    At least 1 AWS certification
•    Experience includes: 
    4+ years of AWS configuration and operations experience (EC2, Lambda, EKS, ECS, S3, etc.)
    Large enterprise AWS Kubernetes (EKS or ECS) implementation experience
    Hands on experience at command line with Kubernetes (kubectl)
    Infrastructure as Code (IaC) design, deployment, migration, and sustainment (Terraform, Ansible, GitLab pipeline automation, etc.)
    GitLab, Jenkins, AWS CodePipeline (CI/CD pipeline toolchain)
    REST APIs – build, design, manage documentation, etc.
    JSON / XML – programmatically read, write, and transform data objects in JSON/XML format (see the sketch after this list)
    Linux Systems Administration
    Windows Server (2016, 2019) Administration
•    Strong verbal and written communication skills; ability to communicate technical concepts.
•    Demonstrated self-starter and team contributor with strong analytical and problem-solving skills. 
•    Understanding of software development (including testing) methodologies, and cross-functional experience throughout life-cycle phases.
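
As a rough illustration of the JSON/XML line above, the sketch below reads a JSON object and rewrites it as XML using only the Python standard library; the record fields and element names are hypothetical, not taken from this posting.

import json
import xml.etree.ElementTree as ET

def json_to_xml(payload: str, root_tag: str = "record") -> str:
    """Parse a flat JSON object and serialize it as a simple XML document."""
    data = json.loads(payload)
    root = ET.Element(root_tag)          # hypothetical root element name
    for key, value in data.items():
        child = ET.SubElement(root, key)
        child.text = str(value)
    return ET.tostring(root, encoding="unicode")

example = json.dumps({"id": "12345", "status": "active"})  # hypothetical fields
print(json_to_xml(example))  # <record><id>12345</id><status>active</status></record>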

PREFERRED Qualifications:
•    6+ years of hands-on, full-time AWS cloud experience highly desirable
•    Implementation Experience with:
    Docker and other container orchestration tools
    Significant Python experience
    Scripting: Bash/shell, Python, or Golang (Go)
    MuleSoft (Anypoint, DataWeave, etc.)
    InterSystems
    FHIR / HL7
    HashiCorp applications (Packer, Terraform, Consul, Vault)
    Oracle Cerner, Epic or other electronic health record system
•    Large scale infrastructure implementation and sustainment
•    Cybersecurity best practices and remediation
 

Original Posting Date:

2024-09-16

While subject to change based on business needs, Leidos reasonably anticipates that this job requisition will remain open for at least 3 days with an anticipated close date of no earlier than 3 days after the original posting date as listed above.

Pay Range:

$101,400.00 - $183,300.00

The Leidos pay range for this job level is a general guideline only and not a guarantee of compensation or salary. Additional factors considered in extending an offer include (but are not limited to) responsibilities of the job, education, experience, knowledge, skills, and abilities, as well as internal equity, alignment with market data, applicable bargaining agreement (if any), or other law.
