FreshRemote.Work

Lead Data Engineer - Remote - USA

About Us

Wizard is revolutionizing the shopping experience using the power of generative AI and rich messaging technologies to build a personalized shopping assistant for every consumer. We scour the entire internet of products and ratings across brands and retailers to find the best products for every consumer’s personalized needs. Using an effortless text-based interface, Wizard AI is always just a text away. The future of shopping is here. Shop smarter with Wizard.

The Role

We seek a Lead Data Engineer to take charge of our data engineering initiatives, focusing on enhancing data collection, storage, and analysis across all of Wizard's dynamic services. This senior position is pivotal to our data infrastructure, enabling data-driven decision-making and supporting our ambitious growth objectives.

Key Responsibilities:

  • Architect and scale a state-of-the-art data infrastructure capable of handling batch and real-time data processing needs with unparalleled performance.
  • Collaborate closely with the data science team to oversee data systems, ensuring accurate monitoring and insightful analysis of business processes.
  • Design and implement robust ETL (Extract, Transform, Load) data pipelines, optimizing data flow and accessibility.
  • Develop comprehensive backend data solutions to bolster microservices architecture, ensuring seamless data integration and management.
  • Engineer and manage integrations with third-party e-commerce platforms, expanding Wizard's data ecosystem and capabilities.

You

  • Bachelor's degree in Computer Science or a related field, with a solid foundational knowledge of data engineering principles.
  • 7-10 years of software development experience, with a significant focus on data engineering.
  • Proficiency in Python or Java, with a deep understanding of software engineering best practices.
  • Expertise in distributed computing and data modeling, capable of designing scalable data systems.
  • Demonstrated experience in building ETL pipelines using tools such as Apache Spark, Databricks, or Hadoop.
  • Extensive experience …
