Platform engineer, MLOps

London, Greater London, England
5-7 Years Experience
Artificial Intelligence • Marketing Tech • Software • Generative AI

✍🏽 About Writer

Writer is the full-stack generative AI platform delivering transformative ROI for the world’s leading enterprises. Named one of the top 50 companies in AI by Forbes, Writer empowers hundreds of customers like Accenture, Intuit, L’Oreal, and Vanguard to transform the way they work. 

Our all-in-one solution makes it easy to deploy customized AI apps and workflows that accelerate growth, increase productivity, and ensure compliance. Designed to provide enterprise-grade accuracy, security, and efficiency, Writer’s suite of development tools is supported by Palmyra – Writer’s state-of-the-art family of LLMs – alongside our industry-leading graph-based RAG and customizable AI guardrails. 

Founded in 2020 with offices in San Francisco, New York City, and London, Writer is backed by strategic investors, including ICONIQ Growth, Insight Partners, WndrCo, Balderton Capital, and Aspect Ventures. 

Our team of over 200 employees thinks big and moves fast, and we’re looking for smart, hardworking builders and scalers to join us on our journey to create a better future of work.

📐 About this role 

As a Platform engineer, MLOps, you will deploy and manage the infrastructure at the heart of our AI/ML operations, collaborating with AI/ML engineers and researchers to build a robust CI/CD pipeline that supports safe and reproducible experiments. You will also set up and maintain the monitoring, logging, and alerting systems that oversee large-scale training runs and client-facing APIs, keep training environments available and efficiently managed across multiple clusters, and strengthen our containerization and orchestration systems with tools like Docker and Kubernetes.

This role demands a proactive approach to maintaining large Kubernetes clusters, optimizing system performance, and providing operational support for our suite of software solutions. If you're driven by challenges and motivated by the continuous pursuit of innovation, you'll have the opportunity to make a significant impact in a dynamic, fast-paced environment.

🦸🏻‍♀️ Your responsibilities:

  • Work closely with AI/ML engineers and researchers to design and deploy a CI/CD pipeline that ensures safe and reproducible experiments.

  • Set up and manage monitoring, logging, and alerting systems for extensive training runs and client-facing APIs.

  • Ensure training environments are consistently available and prepared across multiple clusters.

  • Develop and manage containerization and orchestration systems utilizing tools such as Docker and Kubernetes.

  • Operate and oversee large Kubernetes clusters with GPU workloads.

  • Improve reliability, quality, and time-to-market of our suite of software solutions.

  • Measure and optimize system performance, with an eye toward pushing our capabilities forward, getting ahead of customer needs, and innovating for continual improvement.

  • Provide primary operational support and engineering for multiple large-scale distributed software applications.

⭐️ Is this you? 

  • You have professional experience with: 

    • Model training

    • Hugging Face Transformers

    • PyTorch

    • vLLM

    • TensorRT

    • Infrastructure-as-code tools like Terraform

    • Scripting languages such as Python or Bash

    • Cloud platforms such as Google Cloud, AWS, or Azure

    • Git and GitHub workflows

    • Tracing and monitoring

  • You're familiar with high-performance, large-scale ML systems

  • You have a knack for troubleshooting complex systems and enjoy solving challenging problems

  • You're proactive in identifying problems, performance bottlenecks, and areas for improvement

  • You take pride in building and operating scalable, reliable, and secure systems

  • You're comfortable with ambiguity and rapid change

Preferred skills and experience:

  • Familiar with monitoring tools such as Prometheus, Grafana, or similar

  • 5+ years building core infrastructure 

  • Experience running inference clusters at scale

  • Experience operating orchestration systems such as Kubernetes at scale

Curious to learn more about who we are and how we operate? Visit us here

🍩 Benefits & perks

  • Generous PTO, plus company holidays

  • Medical, dental, and vision coverage for you and your family

  • Paid parental leave for all parents (12 weeks)

  • Fertility and family planning support

  • Early-detection cancer testing through Galleri

  • Flexible spending account and dependent FSA options

  • Health savings account for eligible plans with company contribution

  • Annual work-life stipends for:

    • Home office setup, cell phone, internet

    • Wellness stipend for gym, massage/chiropractor, personal training, etc.

    • Learning and development stipend

  • Company-wide off-sites and team off-sites

  • Competitive compensation, company stock options, and 401(k)

Writer is an equal-opportunity employer and is committed to diversity. We don't make hiring or employment decisions based on race, color, religion, creed, gender, national origin, age, disability, veteran status, marital status, pregnancy, sex, gender expression or identity, sexual orientation, citizenship, or any other basis protected by applicable local, state or federal law. Under the San Francisco Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records.
