Member of Technical Staff - Distributed Training Engineer
Millbrae, CA | March 25th, 2026
About Liquid AI
Spun out of MIT CSAIL, we build general-purpose AI systems that run efficiently across deployment targets, from data center accelerators to on-device hardware, ensuring low latency, minimal memory usage, privacy, and reliability. We partner with enterprises across consumer electronics, automotive, life sciences, and financial services. We are scaling rapidly and need exceptional people to help us get there.

The Opportunity
Our Training Infrastructure team is building the distributed systems that power our next-generation Liquid Foundation Models. As we scale, we need to design, implement, and optimize the infrastructure that enables large-scale training.

This is a high-ownership training systems role focused on runtime, performance, and reliability (not a general platform/SRE role). You'll work on a small team with fast feedback loops, building critical systems from the ground up rather than inheriting mature infrastructure.

While San Francisco and Boston are preferred, we are open to other locations.

What We're Looking For
We need someone who:
- Loves distributed systems complexity: Our team builds systems that keep long training runs stable, debugs training failures across GPU clusters, and improves performance.
- Wants to build: We need builders who find satisfaction in robust, fast, reliable infrastructure.
- Thrives in ambiguity: Our systems support model architectures that are still evolving. We make decisions with incomplete information and iterate quickly.
- Aligns with team priorities and delivers: Our best engineers align with team priorities while pushing back with data when they see problems.

The Work
- Design and build core systems that make large training runs fast and reliable
- Build scalable distributed training infrastructure for GPU clusters
- Implement and tune parallelism/sharding strategies for evolving architectures
- Optimize distributed efficiency (topology-aware collectives, comm/compute overlap, straggler mitigation)
- Build data loading systems that eliminate I/O bottlenecks for multimodal datasets
- Develop checkpointing mechanisms balancing memory constraints with recovery needs
- Create monitoring, profiling, and debugging tools for training stability and performance

Desired Experience
Must-have:
- Hands-on experience building distributed training infrastructure (PyTorch Distributed DDP/FSDP, DeepSpeed ZeRO, Megatron-LM TP/PP)
- Experience diagnosing performance bottlenecks and failure modes (profiling, NCCL/collectives issues, hangs, OOMs, stragglers)
- Understanding of hardware accelerators and networking topologies
- Experience optimizing data pipelines for ML workloads

Nice-to-have:
- MoE (Mixture of Experts) training experience
- Large-scale distributed training (100+ GPUs)
- Open-source contributions to training infrastructure projects

What Success Looks Like (Year One)
- Training throughput has increased
- Overall training efficiency/cost has improved
- Training stability has improved (fewer failures, faster recovery)
- Data loading bottlenecks are eliminated for multimodal workloads

What We Offer
- Greenfield challenges: Build systems from scratch for novel architectures. High ownership from day one.
- Compensation: Competitive base salary with equity in a unicorn-stage company
- Health: We pay 100% of medical, dental, and vision premiums for employees and dependents
- Financial: 401(k) matching up to 4% of base pay
- Time Off: Unlimited PTO plus company-wide Refill Days throughout the year
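One responsibility listed above, checkpointing that balances memory constraints with recovery needs, rests on a basic reliability primitive: a crash mid-write must never corrupt the last good checkpoint. A minimal sketch in plain Python of atomic checkpoint writes via write-to-temp-then-rename (the names `save_checkpoint`/`load_checkpoint` and the JSON payload are illustrative only; a real training stack would serialize sharded model and optimizer state, e.g. with `torch.save`):

```python
import json
import os
import tempfile


def save_checkpoint(state, path):
    """Write a checkpoint atomically: serialize to a temp file in the
    same directory, fsync it, then rename over the target. A crash at
    any point leaves either the old or the new complete checkpoint."""
    directory = os.path.dirname(path) or "."
    fd, tmp_path = tempfile.mkstemp(dir=directory, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(state, f)
            f.flush()
            os.fsync(f.fileno())   # ensure bytes reach disk before rename
        os.replace(tmp_path, path)  # atomic rename on POSIX filesystems
    except BaseException:
        os.unlink(tmp_path)         # drop the partial temp file
        raise


def load_checkpoint(path):
    """Recover the last complete checkpoint, or None if none exists."""
    if not os.path.exists(path):
        return None
    with open(path) as f:
        return json.load(f)
```

The rename-based scheme is why recovery needs no repair step: readers only ever observe a fully written file, so restart logic reduces to loading whatever is at `path`.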