Senior Data Engineer

Remote - USA

About Us

Wizard is revolutionizing the shopping experience using the power of generative AI and rich messaging technologies to build a personalized shopping assistant for every consumer. We scour the entire internet of products and ratings across brands and retailers to find the best products for every consumer’s personalized needs. Using an effortless text-based interface, Wizard AI is always just a text away. The future of shopping is here. Shop smarter with Wizard.

The Role

We are seeking a Senior Data Engineer to join our dynamic data engineering team and play a pivotal role in enhancing Wizard's data collection, storage, and analysis capabilities. This position is critical to strengthening our data infrastructure to support data-driven decision making and ambitious growth initiatives.

Key Responsibilities:

  • Develop and maintain scalable data infrastructure to support batch and real-time data processing with high performance and reliability.
  • Build and optimize ETL pipelines for efficient data flow and accessibility.
  • Collaborate with data scientists and cross-functional teams to ensure accurate monitoring and insightful analysis of business processes.
  • Design backend data solutions to support a microservices architecture, ensuring seamless data integration and management.
  • Implement and manage integrations with third-party e-commerce platforms to enhance Wizard's data ecosystem.

You

  • 5+ years of professional experience in software development with a strong focus on data engineering.
  • Bachelor's degree in Computer Science or a related field, or equivalent practical experience.
  • Proficiency in Python with experience implementing software engineering best practices.
  • Strong expertise in building ETL pipelines using tools such as Apache Spark, Databricks, or Hadoop (Spark experience is required).
  • Solid understanding of distributed computing and data modeling for scalable systems.
  • Hands-on experience with NoSQL databases like MongoDB, Cassandra, DynamoDB, or Azure Cosmos DB.
  • Proficiency in real-time stream processing systems such as Kafka, AWS Kinesis, or GCP Dataflow.
  • Experience with Delta Lake, Parquet files, and cloud platforms (AWS, GCP, or Azure).
  • Familiarity with caching and search technologies such as Redis, Elasticsearch, or Solr.
  • Knowledge of message queuing systems like RabbitMQ, AWS SQS, or GCP Cloud Tasks.
  • An advocate for Test-Driven Development (TDD), with experience using version control platforms such as GitHub or Bitbucket.

Additional Preferred Qualifications:

  • Exceptional written and verbal communication skills, capable of articulating complex technical concepts clearly and concisely.
  • A collaborative team player, eager to share knowledge and learn from peers.
  • Passionate about problem-solving, with a proactive approach to finding innovative solutions.

The expected salary for this role is $165,000–$210,000, depending on experience and level.


