Lead Data Engineer
Remote - USA
About Us
Wizard is revolutionizing the shopping experience using the power of generative AI and rich messaging technologies to build a personalized shopping assistant for every consumer. We scour the entire internet of products and ratings across brands and retailers to find the best products for every consumer’s personalized needs. Using an effortless text-based interface, Wizard AI is always just a text away. The future of shopping is here. Shop smarter with Wizard.
The Role
We seek a Lead Data Engineer to take charge of our data engineering initiatives, focusing on enhancing data collection, storage, and analysis across all of Wizard's dynamic services. This senior position is pivotal to our data infrastructure, enabling data-driven decision-making and supporting our ambitious growth objectives.
Key Responsibilities:
- Architect and scale a state-of-the-art data infrastructure capable of handling batch and real-time data processing needs with unparalleled performance.
- Collaborate closely with the data science team to oversee data systems, ensuring accurate monitoring and insightful analysis of business processes.
- Design and implement robust ETL (Extract, Transform, Load) data pipelines, optimizing data flow and accessibility.
- Develop comprehensive backend data solutions to support Wizard's microservices architecture, ensuring seamless data integration and management.
- Engineer and manage integrations with third-party e-commerce platforms, expanding Wizard's data ecosystem and capabilities.
You
- Bachelor's degree in Computer Science or a related field, with a solid foundational knowledge of data engineering principles.
- 7-10 years of software development experience, with a significant focus on data engineering.
- Proficiency in Python or Java, with a deep understanding of software engineering best practices.
- Expertise in distributed computing and data modeling, capable of designing scalable data systems.
- Demonstrated experience in building ETL pipelines using tools such as Apache Spark, Databricks, or Hadoop.
- Extensive experience with NoSQL databases, including MongoDB, Cassandra, DynamoDB, and CosmosDB.
- Proficiency in real-time stream processing systems such as Kafka, AWS Kinesis, or GCP Dataflow.
- Skilled in utilizing caching and search technologies like Redis, Elasticsearch, or Solr.
- Experience with message queuing systems, including RabbitMQ, AWS SQS, or GCP Cloud Tasks.
- Familiarity with Delta Lake, Parquet files, AWS, GCP, or Azure cloud services.
- A strong advocate for Test-Driven Development (TDD), experienced in version control using Git platforms such as GitHub or Bitbucket.
Additional Preferred Qualifications
- Exceptional written and verbal communication skills, capable of articulating complex technical concepts clearly and concisely.
- A collaborative team player, eager to share knowledge and learn from peers, passionate about mentoring junior team members, and leading by …
Job Profile
Benefits
- 401(k) plan
- Competitive compensation packages
- Equity
- Health and dental insurance
- Life & disability insurance
Skills
Apache Spark, AWS, Azure, Bitbucket, Cassandra, Databricks, Delta Lake, DynamoDB, Elasticsearch, GCP, Generative AI, Git, GitHub, Hadoop, Java, Kafka, MongoDB, NoSQL databases, Python, RabbitMQ, Redis
Experience
7-10 years
Timezones
America/Anchorage, America/Chicago, America/Denver, America/Los_Angeles, America/New_York, Pacific/Honolulu (UTC-10 to UTC-5)