Senior Data Engineer
United States of America : Remote
JOB DESCRIPTION:
Interested in applying your technical knowledge and experience to an opportunity in the medical field, improving the lives of people with diabetes? The candidate will be responsible for big data engineering, data wrangling, and data analysis in the cloud. The role will also contribute to defining and implementing the organization's big data strategy and drive the implementation of IT solutions for the business. The candidate will work with other data engineers, data analysts, and data scientists, applying data engineering, data science, and machine learning approaches to solve business problems.
As a senior member of the Data Engineering & Analytics team, you will build big data collection and analytics capabilities to uncover customer, product, and operational insights. You should be able to work on a geographically distributed team to develop data pipelines that handle complex data sets quickly and securely, and to operationalize data science solutions. You will work in a technology-driven environment utilizing the latest tools and techniques, such as Databricks, Redshift, S3, Lambda, DynamoDB, Spark, and Python.
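For illustration only, the snippet below is a minimal sketch of the kind of PySpark ingestion work this role involves, assuming a Spark environment such as Databricks with S3 access; the bucket names, columns, and paths are hypothetical placeholders, not actual Abbott resources.

    # Minimal PySpark sketch; all names and paths below are hypothetical placeholders.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("device-readings-ingest").getOrCreate()

    # Ingest raw JSON device readings landed in S3 (hypothetical bucket/prefix).
    raw = spark.read.json("s3://example-raw-bucket/device-readings/")

    # Light cleanup: drop incomplete rows, standardize the timestamp, add a partition date.
    cleaned = (
        raw.dropna(subset=["device_id", "reading_ts"])
           .withColumn("reading_ts", F.to_timestamp("reading_ts"))
           .withColumn("reading_date", F.to_date("reading_ts"))
    )

    # Stage curated data as partitioned Parquet for downstream analytics
    # (e.g., loading into Redshift via COPY or querying from Databricks).
    cleaned.write.mode("overwrite").partitionBy("reading_date").parquet(
        "s3://example-curated-bucket/device-readings/"
    )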
The candidate should have a passion for software engineering and an eagerness to help shape the direction of the team. Highly sought-after qualities include versatility and a desire to continuously learn, improve, and empower other team members. The candidate will support building scalable, highly available, efficient, and secure software solutions for big data initiatives.
Responsibilities
Design and implement data pipelines that allow data to be processed and visualized across a variety of projects and initiatives
Develop and maintain optimal data pipeline architecture by designing and implementing data ingestion solutions on AWS using native AWS services
Design and optimize data models on AWS Cloud using Databricks and AWS data stores such as Redshift, RDS, S3
Integrate and assemble large, complex data sets that meet a broad range of business requirements
Read, extract, transform, stage, and load data to selected tools and frameworks as required
Customize and manage integration tools, databases, warehouses, and analytical systems
Process unstructured data into a form suitable for analysis and assist in analysis of the processed data
Work directly with the technology and engineering teams to integrate data processing and business objectives
Monitor and optimize data performance, uptime, and scale; maintain high standards of code quality and thoughtful design
Create software architecture and design documentation for the supported solutions and overall best practices and patterns
Support the team with technical planning, design, and code reviews, including peer code reviews
Provide architecture and technical knowledge training and support for the solution groups
Develop good working relationships with other solution teams and groups, such as Engineering, Marketing, Product, Test, and QA
Stay current with emerging trends, making recommendations as needed to help the organization innovate
Required Qualifications
Bachelor's degree in Computer Science, Information Technology, or other relevant field
2 to 6 years of recent experience in Software Engineering, Data Engineering, or Big Data
Ability to work effectively within a team in a fast-paced changing environment
Knowledge of or direct experience with Databricks and/or Spark
Software development experience, ideally with Python, PySpark, Kafka, or Go, and a willingness to learn new languages and tools to meet goals and objectives
Knowledge of strategies for processing large amounts of structured and unstructured data, including integrating data from multiple sources
Knowledge of data cleaning, wrangling, visualization and reporting
Ability to explore alternative approaches to data mining problems, drawing on a combination of industry best practices, data innovations, and experience
Familiarity with databases, BI applications, data quality, and performance tuning
Excellent written, verbal and listening communication skills
Comfortable working asynchronously with a distributed team
Preferred Qualifications
Knowledge of or direct experience with the following AWS services: S3, RDS, Redshift, DynamoDB, EMR, Glue, and Lambda
Experience working in an agile environment
Practical knowledge of Linux
The base pay for this position is $72,700.00 – $145,300.00. In specific locations, the pay range may vary from the range posted.
JOB FAMILY:
Product Development
DIVISION:
ADC Diabetes Care
LOCATION:
United States of America : Remote
ADDITIONAL LOCATIONS:
WORK SHIFT:
Standard
TRAVEL:
Yes, 5 % of the Time
MEDICAL SURVEILLANCE:
No
SIGNIFICANT WORK ACTIVITIES:
Continuous sitting for prolonged periods (more than 2 consecutive hours in an 8-hour day), keyboard use (greater or equal to 50% of the workday)
Abbott is an Equal Opportunity Employer of Minorities/Women/Individuals with Disabilities/Protected Veterans.
EEO is the Law link - English: http://webstorage.abbott.com/common/External/EEO_English.pdf
EEO is the Law link - Español: http://webstorage.abbott.com/common/External/EEO_Spanish.pdf