Member of Technical Staff | Robotics (Computer Vision / VLA / ML Infrastructure)
San Mateo, CA | April 1st, 2026
About the Company
DeepReach is building next-generation data infrastructure for robotics. We bridge the gap between promising robot models and real-world deployment by building the systems, data pipelines, and learning loops needed to make robots improve in production. We believe robotics progress will be driven not just by better models but by better data engines: how data is collected, filtered, evaluated, and turned into measurable gains on real tasks. Our team works across robot deployment, teleoperation, data generation, model training, and evaluation, with a strong bias toward hands-on execution and fast iteration.

About the Role
This combined opening covers three core technical tracks on our robotics team:
- Computer Vision (Perception)
- VLA & Robot Learning
- ML Infrastructure
After you apply, you will be matched to the track that aligns with your background. All roles focus on applied research, deployment on real physical robots, closing the loop between data, models, and production performance, and fast iteration in a startup environment.

Responsibilities

Track 1: Member of Technical Staff, Computer Vision
- Build and improve computer vision pipelines for robotics, including perception for manipulation, scene understanding, tracking, and multi-camera systems
- Work on camera calibration, synchronization, sensor integration, and data quality improvement for real-world robot setups
- Develop tools and pipelines for generating, filtering, curating, and validating robotics vision datasets
- Support data collection and annotation workflows by improving visual quality, consistency, and task relevance
- Design experiments to measure how perception improvements affect downstream robotic performance
- Debug perception failures in real environments, including issues caused by lighting, motion blur, occlusion, calibration drift, or sensor noise
- Read and implement recent vision research, reproduce promising methods, and adapt them to production robotics workflows

Track 2: Member of Technical Staff, VLA
- Train and fine-tune VLA, diffusion-policy, or related robot learning models for real-world tasks
- Build data and training pipelines that turn deployment and teleoperation data into better policy performance
- Design experiments to identify what actually improves task success rates on real robots
- Collaborate with data and deployment teams to close the loop between model failures and data collection strategy
- Deploy and debug learned policies on physical robot systems, including robot arms, grippers, and multi-camera setups
- Define internal evaluation frameworks tied to real operational tasks rather than benchmark-only performance
- Read and implement recent papers, reproduce promising results, and adapt them to our stack and constraints

Track 3: Member of Technical Staff, ML Infrastructure
- Build and maintain the infrastructure for robotics data processing, model training, evaluation, and experiment management
- Develop scalable pipelines for ingesting, filtering, curating, versioning, and serving robotics datasets
- Improve internal tooling for training runs, distributed jobs, checkpointing, dataset management, and metrics tracking
- Build systems that connect deployment data, teleoperation data, and model evaluation into a fast iteration loop
- Collaborate with research and deployment teammates to remove bottlenecks in training and evaluation workflows
- Design internal benchmarks and experiment infrastructure that make model progress measurable and reproducible
- Read and adapt ideas from research and open-source tooling to improve our internal platform

Qualifications
Bachelor's degree, or equivalent practical experience, in Computer Science, Robotics, Electrical Engineering, or a related field

Required Skills
- Strong background in computer vision for real-world systems
- Experience with one or more of: multi-view geometry, calibration, visual tracking, segmentation, detection, 3D vision, point clouds, depth sensing, video understanding
- Strong coding skills in Python and solid experience with PyTorch or related vision tooling
- Ability to move from research ideas to robust working systems
- Real-world experience with physical camera and sensor systems in robotics
- High-ownership mindset and comfort in a fast-paced startup

Preferred Skills
- Strong hands-on background in robot learning: imitation learning, RL, diffusion policies, VLA, visuomotor policies
- Strong PyTorch skills and experience with modern model training workflows
- Ability to move from paper to implementation quickly and independently
- Strong systems intuition across perception, policy, and control
- Real-world experience deploying or debugging policies on physical robots
- High-ownership mindset and comfort in an early-stage, fast-changing environment

Pay Range and Compensation Package
Salary reference: $130K - $180K per year

Application Review
We screen resumes from both application channels on the same schedule. If your profile passes the initial review, our team will email you an interview invitation. After shortlisting, we will match your background to the suitable track (Computer Vision / VLA / ML Infrastructure) and share the full role details accordingly.

How to Apply
You may apply in either of two ways:
- Submit your resume directly by applying to this position on the current job platform; or
- Visit our official website, talex.ai, to find and submit your application for the corresponding combined role.