AI Systems & Inference Frameworks Engineer
New York, NY · March 25th, 2026
About Us

Most AI is frozen in place: it doesn't adapt to the world. We think that's backwards. Our mandate is to build efficient intelligence that evolves in real time. Our vision is AI systems that are flexible, personalized, and accessible to everyone. We believe efficiency is what makes this possible; it's how we expand access and ensure innovation benefits the many, not the few. We believe in talent density: bringing together the best and most driven individuals to push the boundaries of continual adaptation. We're looking for builders and creative thinkers ready to shape the next era of intelligence.

The Role

You'll work directly with our founders to design and build the inference and optimization systems that power our core product. This role bridges research and production, combining deep exploration of inference techniques with hands-on ownership of scalable, high-performance serving infrastructure. You'll own the full lifecycle of LLM inference, from experimentation and performance analysis to deployment and iteration in production, thriving in a zero-to-one environment and helping define the technical foundations of our inference stack.

Responsibilities

- Inference Research & Systems: design and build our LLM inference stack from zero to one, exploring and implementing advanced techniques for low-latency, high-throughput serving of language and multimodal models.
- Frameworks & Optimization: develop and optimize inference using modern frameworks (e.g., vLLM, SGLang, TensorRT-LLM), experimenting with batching strategies, KV-cache management, parallelism, and GPU utilization to push performance and cost efficiency.
- Software-Hardware Co-Design: collaborate closely with founders and model developers to analyze bottlenecks across the stack, co-optimizing model execution, infrastructure, and deployment pipelines.

Qualifications

- Strong experience building and optimizing LLM inference systems in production or research environments
- Hands-on expertise with inference frameworks such as vLLM, SGLang, TensorRT-LLM, or similar
- Deep performance mindset with experience in GPU-backed systems, latency/throughput optimization, and resource efficiency
- Solid understanding of transformer inference, serving architectures, and KV-cache-based execution
- Strong programming skills in Python; experience with CUDA, Triton, or C++ a plus
- Comfort working in ambiguous, zero-to-one environments and driving research ideas into production systems
- Nice to have: experience with model quantization or pruning, speculative decoding, multimodal inference, open-source contributions, or prior work in systems or ML research labs

Above all, we're looking for great teammates who make work feel lighter and aren't afraid to go out on a limb with bold ideas. You don't need to be perfect, but you do need to be adaptable. We encourage you to apply even if you don't check every box.

Benefits

- Flexible work: in-person collaboration in the Bay Area, a distributed global-first team, and quarterly offsites.
- Adaption Passport: annual travel stipend to explore a country you've never visited. We're building intelligence that evolves alongside you, so we encourage you to keep expanding your horizons.
- Lunch Stipend: weekly meal allowance for take-out or grocery delivery.
- Well-Being: comprehensive medical benefits and generous paid time off.
Similar matching jobs in Springbrook, ND:
- AI Engineer
- AI Frameworks Engineer (OpenVINO, GenAI)
- Director of AI
- Senior Machine Learning Engineer - Ranking & Recommendations (Generative AI)
- AI Engineer - Data and AI
- Senior ML Infra Engineer - Scalable AI for Drug Discovery
- Senior AI Engineer - Data and AI
- Principal Software Engineer
- Generative AI Engineer - LLMs, Fine-Tuning & Deployment
- Principal AI Application Engineer
- Senior Gen AI & Scalable ML Platform Engineer
- Founding ML Systems Engineer - RL & Scalable Training
- KP AI/ML Engineer
- Senior Manager, Applied AI and Data Science
- AI Forward Deployed Engineer - SF
- Senior Manager, Applied AI
- Staff AI Engineer - Production ML/AI Architect
- Staff, Software Engineer - Backend
- Principal Software Engineer – Marketplace Platforms
- Founding Full Stack Engineer (AI-Native)
- Senior ML Framework Performance Engineer - AI for Science at Scale
- Principal, Software Engineer – Conversational AI
- (USA) Principal, Software Engineer - AI Evangelist
- Fraud Prevention Staff Engineer (AI & Decision Systems)
- Staff Software Engineer – Marketplace
- Staff Machine Learning Engineer - Generative AI (Remote)
- Senior Machine Learning Engineer - Generative AI (Remote)
- Hybrid AI/ML Architect: Design & Lead Scalable AI Systems
- Web3/AI Engineer Intern - Path to Junior Role
- Staff Software Engineer
- Senior ML Inference Systems Engineer
- Production AI Agent Engineer
- LLM/AI Agent Backend Engineer