Data Engineer

Remote, USA

Applications have closed
Science 37

Posted 1 month ago

Science 37 is accelerating the research and development of breakthrough biomedical treatments by bringing clinical trials to patients' homes. Backed by venture investors such as Glynn Capital, Google Ventures, Redmile Group, dRx Capital and Lux Capital, we are revolutionizing the clinical trial industry one patient at a time. To help us achieve our goal, we are seeking a razor-sharp Data Engineer eager to make an impact within a mission-driven organization. 

By leveraging the latest innovations in mobile technology, cloud services, and telemedicine, we are breaking down traditional geographic barriers to patient trial participation while shortening the time needed to bring new treatments to market.

As part of the Science 37 Tech team, the Data Engineer collaborates with motivated, energetic, and entrepreneurial individuals working together to achieve Science 37’s mission of changing the world of clinical research through patient-centered design. The role is hands-on, building and developing the data pipeline and platform that enable Science 37’s groundbreaking clinical research model, and collaborating with Product, Data, Clinical Operations, and other relevant stakeholders to define study-specific platform requirements.

The Data Engineer helps drive data democracy at Science 37. This position reports to the Data Architect and works with our architects, software engineers, product managers, and DevOps engineers to help design and build data solutions and architecture. They will learn how Science 37 data is used and help make that data accessible where it is needed, keeping in mind regulations and data privacy policies. They will use data processing libraries and tools to help the end users of our data get the insights they need.
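
For a concrete flavor of that last point, here is a minimal sketch, assuming a hypothetical PostgreSQL database with a study_visits table and the pandas/SQLAlchemy libraries, of how an engineer in this role might pull a quick aggregate for an analytics user:

    # Minimal sketch: pull an aggregate from a hypothetical clinical
    # database into pandas for an analytics user. The connection string,
    # table, and column names are illustrative, not Science 37's schema.
    import pandas as pd
    from sqlalchemy import create_engine

    engine = create_engine("postgresql://user:password@localhost:5432/clinical")

    # Completed visits per study -- the kind of quick insight an
    # internal stakeholder might request.
    df = pd.read_sql(
        """
        SELECT study_id, COUNT(*) AS completed_visits
        FROM study_visits
        WHERE status = 'completed'
        GROUP BY study_id
        """,
        engine,
    )
    print(df.head())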

Duties include but are not limited to:

  1. Install, configure, monitor, and maintain databases in production, development, and testing environments
  2. Work with cloud vendors such as AWS or GCP
  3. Work with cloud distributed file systems, data lakes, and data warehouses
  4. Create data pipelines that serve internal and external analytics users (a sketch follows this list)
  5. Define and implement database schemas and configurations in collaboration with our development teams
  6. Optimize database performance by identifying and resolving application bottlenecks, tuning DB queries, implementing stored procedures, conducting performance tests, troubleshooting, and integrating new elements
  7. Work with the development team to design and implement reporting capabilities
  8. Implement solutions for database performance monitoring and tuning
  9. Recommend operational efficiencies, eliminate duplicate work efforts, and remove unnecessary complexities; create and implement new procedures and workflows
  10. Process database change requests, including the creation and modification of databases, tables, views, stored procedures, triggers, jobs, etc., in accordance with change control policies
  11. Apply an understanding of Agile management to help the team with all release and configuration tasks around software builds into preproduction and production environments
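
As an illustration of duty 4, a minimal extract-and-load sketch, assuming boto3, pandas with pyarrow, and hypothetical source table and bucket names; orchestration, error handling, and incremental logic are omitted:

    # Minimal pipeline sketch: extract rows from an operational database
    # and land them in an S3 data lake as Parquet. All names (database,
    # table, bucket, key prefix) are hypothetical placeholders.
    import io
    from datetime import date, timedelta

    import boto3
    import pandas as pd
    from sqlalchemy import create_engine

    engine = create_engine("postgresql://user:password@localhost:5432/clinical")
    s3 = boto3.client("s3")

    # Extract: yesterday's enrollment events from the source database.
    df = pd.read_sql(
        "SELECT * FROM enrollment_events WHERE event_date = CURRENT_DATE - 1",
        engine,
    )

    # Load: write Parquet into a date-partitioned prefix in the lake.
    dt = (date.today() - timedelta(days=1)).isoformat()
    buf = io.BytesIO()
    df.to_parquet(buf, index=False)  # needs pyarrow or fastparquet installed
    s3.put_object(
        Bucket="example-data-lake",
        Key=f"raw/enrollment_events/dt={dt}/part-0.parquet",
        Body=buf.getvalue(),
    )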

Qualifications

  1. Bachelor's degree in Computer Science or equivalent
  2. Strong architectural and database design skills
  3. Experience using SQL, NoSQL, and graph databases
  4. Must have experience with AWS (other cloud providers are a plus)
  5. Scripting experience with Python or Bash required
  6. Proficient with SQL and programming languages such as Python, Java, or Scala
  7. Understanding of data architecture for microservices
  8. Experience across different database platforms and tools such as MySQL, PostgreSQL, SQL Server, DynamoDB, MongoDB, AWS Neptune, Cassandra, and Neo4j
  9. Experience designing and building data lake and data warehouse solutions
  10. Basic hands-on Linux server administration experience
  11. Some experience with cloud computing management on the AWS platform
  12. Experience with monitoring/alert planning for data services (a sketch follows this list)
  13. Some experience with highly available database technologies such as clustering, replication, and mirroring
  14. Knowledge of administration, replication, backup, and restore of relational databases
  15. Experience with data tools such as Jupyter
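
To illustrate qualification 12, a minimal monitoring sketch using boto3's CloudWatch client; the RDS instance identifier and SNS topic ARN are hypothetical placeholders:

    # Minimal sketch of alert planning for a data service: alarm when a
    # hypothetical RDS instance sustains high CPU. The instance ID and
    # SNS topic ARN are placeholders.
    import boto3

    cloudwatch = boto3.client("cloudwatch")

    cloudwatch.put_metric_alarm(
        AlarmName="clinical-db-high-cpu",
        Namespace="AWS/RDS",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "clinical-db"}],
        Statistic="Average",
        Period=300,               # 5-minute windows
        EvaluationPeriods=3,      # sustained for 15 minutes
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:data-eng-alerts"],
    )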

Preferred Qualifications

  1. Experience with MuleSoft Anypoint Platform and DataWeave
  2. Experience in clinical trials and/or the life sciences industry
  3. Understanding of the regulatory framework for software delivery
  4. Experience with operational efficiency improvement initiatives
  5. Experience with Computer Systems Validation (CSV)
  6. Experience with the SAFe methodology
  7. Experience with JIRA, Confluence, and SpiraTest

Competencies

  1. Thrives in fast-paced, agile environments and learns new areas quickly
  2. Broad knowledge of common infrastructure technologies such as web servers and load balancers
  3. Excellent troubleshooting skills and the ability to understand complex relationships between components of multi-tiered and distributed applications
  4. Solid understanding of load balancing and high-volume, high-availability environments
  5. Knowledge of SDLC and project management methodologies (JIRA experience is a plus)
  6. Able to analyze and review current functionality to determine potential areas of improvement and cost savings
  7. Ability to work independently with minimal guidance in a fast-paced environment
  8. Excellent communication skills, including the ability to communicate effectively with internal and external customers
  9. Strong work ethic and good time management, with the ability to work with diverse teams and lead meetings
  10. Ability to work with all levels of the organization
  11. Experience using SQL, NoSQL, and graph databases
  12. Experience with automation and automation tools such as Jenkins, Puppet, and Chef
  13. Experience with database technologies such as Amazon RDS, Aurora, Athena, DocumentDB, DynamoDB, and Neptune; Apache Cassandra; Neo4j; and Snowflake
  14. Experience programming in Python, Java, or Scala

Supervision

The incumbent reports to the Data Architect, who will assign projects and provide general direction and guidance. The incumbent is expected to perform duties and responsibilities with minimal supervision.

Direct Reports

None

We value employee well-being and aim to provide team members with everything they need to succeed.

Submit your resume to apply!

Job tags: AWS Chef DevOps Entrepreneurial Java Jenkins Jira Linux Mobile MongoDB MySQL NoSQL PostgreSQL Project Management Puppet Python Research Scala SQL
Job region(s): North America