
Luma AI

ML Engineer - Inference Serving

Posted 24 Days Ago
Hybrid
London, Greater London, England, GBR
Mid level
About Luma AI
Luma's mission is to build multimodal AI to expand human imagination and capabilities. We believe that multimodality is critical for intelligence. To go beyond language models and build more aware, capable and useful systems, the next step-function change will come from vision. So we are working on training and scaling up multimodal foundation models for systems that can see and understand, show and explain, and eventually interact with our world to effect change. We know we are not going to reach our goal without reliable and scalable infrastructure, which is going to become the differentiating factor between success and failure.

Role & Responsibilities
  • Ship new model architectures by integrating them into our inference engine
  • Collaborate closely across research, engineering and infrastructure to streamline and optimize model efficiency and deployments
  • Build internal tooling to measure, profile, and track the lifetime of inference jobs and workflows
  • Automate, test and maintain our inference services to ensure maximum uptime and reliability
  • Optimize deployment workflows to scale across thousands of machines
  • Manage and optimize our inference workloads across different clusters & hardware providers
  • Build sophisticated scheduling systems to optimally leverage our expensive GPU resources while meeting internal SLOs
  • Build and maintain CI/CD pipelines for processing/optimizing model checkpoints, platform components, and SDKs for internal teams to integrate into our products/internal tooling
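One of the responsibilities above is building tooling to measure, profile, and track the lifetime of inference jobs. As an illustrative sketch only (not Luma's actual tooling; all names here are hypothetical), a stage-level tracing decorator in Python could record the duration and outcome of each stage of an inference workflow:

```python
import functools
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("inference-tracing")

def traced(stage: str):
    """Record wall-clock duration and outcome of one stage of an inference job."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            status = "error"
            try:
                result = fn(*args, **kwargs)
                status = "ok"
                return result
            finally:
                elapsed_ms = (time.perf_counter() - start) * 1000
                log.info("stage=%s status=%s duration_ms=%.1f",
                         stage, status, elapsed_ms)
        return wrapper
    return decorator

# Hypothetical pipeline stage, purely for illustration.
@traced("preprocess")
def preprocess(batch):
    return [x * 2 for x in batch]
```

A production system would ship these spans to a tracing backend rather than a local logger, but the decorator pattern is the same.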

Background
  • Strong Python and system architecture skills
  • Experience with model deployment using PyTorch, Hugging Face, vLLM, SGLang, TensorRT-LLM, or similar
  • Experience with queues, scheduling, traffic control, and fleet management at scale
  • Experience with Linux, Docker, and Kubernetes
  • Bonus points: 
    • Experience with modern networking stacks, including RDMA (RoCE, Infiniband, NVLink)
    • Experience with high performance large scale ML systems (>100 GPUs)
    • Experience with FFmpeg and multimedia processing

Example Projects
  • Create a resilient artifact store that manages all checkpoints across multiple versions of multiple models
  • Enable hotswapping of models for our GPU workers based on live traffic patterns
  • Build a robust queueing system for our jobs that takes into account cluster availability and user priority
  • Architect an end-to-end model serving deployment pipeline for a custom vendor
  • Integrate our inference stack into an online reinforcement learning pipeline
  • Run regression and precision testing across different hardware platforms
  • Build a full tracing system to trace the end-to-end lifetime of any inference workload
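Several of the example projects involve queueing jobs against cluster availability and user priority. A minimal in-memory sketch of that idea (hypothetical names; a real deployment would back this with Redis or similar rather than a process-local heap) might look like:

```python
import heapq
import itertools
from dataclasses import dataclass, field

@dataclass(order=True)
class Job:
    priority: int                          # lower value = higher priority
    seq: int                               # tiebreaker: FIFO within a priority level
    payload: dict = field(compare=False)   # job data, excluded from ordering
    cluster: str = field(compare=False)    # target cluster, excluded from ordering

class JobQueue:
    """Priority queue that only dispatches jobs whose target cluster has capacity."""

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()

    def submit(self, payload, cluster, priority):
        heapq.heappush(self._heap, Job(priority, next(self._seq), payload, cluster))

    def next_job(self, available_clusters):
        """Pop the highest-priority job runnable on an available cluster."""
        skipped = []
        job = None
        while self._heap:
            candidate = heapq.heappop(self._heap)
            if candidate.cluster in available_clusters:
                job = candidate
                break
            skipped.append(candidate)  # cluster busy; hold for a later pass
        for s in skipped:
            heapq.heappush(self._heap, s)
        return job
```

For example, a priority-0 job targeting a saturated cluster is held while a priority-1 job on a free cluster dispatches first; the held job runs as soon as its cluster has capacity.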

Tech stack

Must have
  • Python
  • Redis
  • S3-compatible Storage
  • Model serving (one of: PyTorch, vLLM, SGLang, Hugging Face)
  • Understanding of large-scale orchestration, deployment, scheduling (via Kubernetes or similar)
Nice to have
  • CUDA
  • FFmpeg

Compensation
The base pay range for this role is $187,500 – $395,000 per year.

Top Skills

CUDA
FFmpeg
Hugging Face
Kubernetes
Python
PyTorch
Redis
S3-compatible Storage
SGLang
vLLM


