Sensor Fusion Engineer, Autonomy
USA (remote)
At Serve Robotics, we’re reimagining how things move in cities. Our personable sidewalk robot is our vision for the future. It’s designed to take deliveries away from congested streets, make deliveries available to more people, and benefit local businesses.
The Serve fleet has been delighting merchants, customers, and pedestrians along the way in Los Angeles while doing commercial deliveries. We’re looking for talented individuals who will grow robotic deliveries from surprising novelty to efficient ubiquity.
Who We Are
We are tech industry veterans in software, hardware, and design who are pooling our skills to build the future we want to live in. We are solving real-world problems leveraging robotics, machine learning and computer vision, among other disciplines, with a mindful eye towards the end-to-end user experience. Our team is agile, diverse, and driven. We believe that the best way to solve complicated dynamic problems is collaboratively and respectfully.
Serve Robotics aims to develop dependable and proficient sidewalk autonomy software. Our Autonomy team is looking for a highly skilled Sensor Fusion Engineer. In this role, you will be responsible for developing and maintaining our onboard sensor-to-obstacle pipeline, ensuring robust and reliable perception for our sidewalk delivery robots. You will work with a diverse set of sensor modalities, including monocular and stereo cameras and LiDAR, and dive deep into the mechanics of sensor optics, firmware, and calibration. This role demands strong production-level C++ expertise; additional skills in CUDA, edge-device perception, machine learning, and sensor calibration are considered a plus.
Responsibilities
Lead the design, implementation, and optimization of sensor fusion algorithms to reliably detect and track obstacles.
Develop robust pipelines that integrate data from multiple sensor modalities (cameras, LiDAR, RealSense) in real time.
Develop geometric perception pipelines, such as point cloud labeling, occupancy grid map generation, and obstacle detection, tailored for resource-constrained, real-time edge devices.
Develop ML-based sensor fusion and obstacle detection pipelines. Fine-tune and deploy learning-based perception models using data-centric techniques.
Validate sensor fusion outputs against ground-truth data and refine models as necessary.
Write, maintain, and optimize production-level C++ code for real-time sensor data processing while maintaining strict latency requirements.
Implement parallel computing solutions using CUDA where applicable to enhance performance on edge devices.
Analyze and understand sensor specifications, operating principles, and firmware intricacies, working closely with systems and hardware teams.
Collaborate with systems and hardware teams, firmware developers, and ML specialists to ensure seamless integration of sensors with the overall perception system.
Work closely with sensor calibration teams to design strategies that ensure high precision and reliability.
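To give candidates a flavor of the geometric perception work above, here is a minimal sketch of building a 2D occupancy grid from obstacle points in the robot frame. The grid dimensions, resolution, and `Point` type are illustrative assumptions, not Serve's actual pipeline, which would use probabilistic (e.g. log-odds) updates and real sensor data:

```cpp
#include <array>
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical obstacle point in the robot frame (meters).
struct Point { float x, y; };

constexpr std::size_t kGridDim = 100;                           // 100 x 100 cells
constexpr float kResolution = 0.1f;                             // 10 cm per cell
constexpr float kOriginOffset = kGridDim * kResolution / 2.0f;  // robot at grid center

using Grid = std::array<std::array<bool, kGridDim>, kGridDim>;

// Mark grid cells containing obstacle points as occupied.
void markOccupied(Grid& grid, const std::vector<Point>& points) {
  for (const Point& p : points) {
    // Shift so the robot sits at the grid center, then discretize.
    int col = static_cast<int>(std::floor((p.x + kOriginOffset) / kResolution));
    int row = static_cast<int>(std::floor((p.y + kOriginOffset) / kResolution));
    if (col >= 0 && col < static_cast<int>(kGridDim) &&
        row >= 0 && row < static_cast<int>(kGridDim)) {
      grid[row][col] = true;  // binary occupancy; production code would use log-odds
    }
  }
}
```

In production, the same idea runs under strict latency budgets on edge hardware, which is where the C++ and CUDA skills listed below come in.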
Qualifications
3+ years of demonstrated industry experience working on sensor fusion applications for robotic or autonomous systems in a fast-paced, innovative environment.
Deep understanding of sensor technologies (cameras, LiDAR, RealSense) including optics, firmware, and data characteristics.
Experience in developing real-time applications and low-latency systems.
Production-quality programming experience in C++ and Python.
Demonstrated skill in designing large systems, gathering system requirements, and identifying key stakeholders.
Strong communication and collaboration skills.
What Makes You Stand Out
5+ years of demonstrated industry experience working on sensor fusion applications for robotic or autonomous systems in a fast-paced, innovative environment.
Hands-on experience with CUDA for parallel processing and performance optimization.
Experience with state-of-the-art ML approaches for obstacle detection and tracking.
Exposure to integrating ML models within sensor fusion pipelines.