
Principal Data Engineer

Remote (US, Canada)

What You'll Do:
As a pivotal member of the team, you will lead the design and development of a robust data architecture that sets standards for data modeling, integration, processing, and delivery, enabling modern data product development at Scribd.
You will also serve as a data and analytics solution architect, leading architecture initiatives encompassing data warehousing, data pipeline development, data integrations, and data modeling. You will shape Scribd’s data strategy, guiding stakeholders in how they consume and act on data.
We're looking for someone with proven experience architecting, designing, and developing batch and real-time streaming infrastructure and workloads. Your expertise will help establish standards for data modeling, integration, processing, and delivery, and help translate business requirements into technical specifications.
At Scribd, we leverage deep data insights to inform every aspect of our business, from product development and experimentation to understanding subscriber engagement and tracking key performance indicators. You'll join a data engineering team tackling complex challenges within a rich domain encompassing three distinct brands – Scribd, Everand, and Slideshare – all serving a massive user base with over 200 million monthly visitors and 2 million paying subscribers. You'll have the opportunity to make a real impact: we are heavily investing in improving our core data layer, and this new role puts you at the forefront of that initiative.
Depending on the project, this might involve cross-functional work with the Data Science, Analytics, and other Engineering and Business teams to design cohesive data models, database schemas, data storage solutions, and consumption strategies and patterns. Almost everything you work on will aim to increase satisfaction for the internal customers of Scribd data.
Required Skills:
• 7+ years of experience in data strategy, data architecture, modeling, solution design, data engineering, or a similar role
• Hands-on experience with data lake technologies (Databricks, Snowflake, etc.), data storage formats (Parquet, Avro, etc.), query engines (Athena, Presto, etc.), data schemas, and query optimization, with a focus on building optimized solutions at scale
• Strong understanding of distributed systems, RESTful APIs, and data consumption patterns
• Proficiency in data modeling, ETL processes, and real-time and batch analytics frameworks
• Proficiency with at least one dialect of SQL
• Hands-on experience in Scala or Python
Desired Skills:
• Experience and working knowledge of streaming platforms, typically based around Kafka
• Strong grasp of AWS data platform services and their strengths/weaknesses
• Hands-on experience in implementing data …