Ohme is on a mission to accelerate the global transition to clean, affordable energy. We do that by serving as an integrated hardware and software smart-grid platform, focused on the residential EV charging market.
The worlds of energy, transport and artificial intelligence are colliding, and Ohme is at the heart of this new era. By using technology and data integrations to connect cars, chargers, people, energy providers and more, we have built a powerful platform that puts the consumer at the core.
Ohme has been selling its chargers to consumers since mid-2019 and has seen exponential growth ever since. We now operate in multiple countries and have partnerships with the likes of VW, Mercedes, Octopus Energy and other innovative brands.
We are scaling up the business and building out the team for rapid growth. If you’re interested in joining a fast-growing cleantech venture on a data- and AI-first journey to speed up the global transition to clean, affordable energy, read on!
We’re hiring an AI Engineer to help us build, run and scale the agentic platform powering Ohme’s applied AI work. You’ll own and evolve our agent runtime: the middleware that orchestrates LLM calls, tool execution, MCP connections, memory and routing. You’ll also contribute across the wider platform: integration layers, internal MCP servers, tool design, evaluation, observability and the developer experience our team relies on.
Our agents are already live with internal and external users, and we’re scaling fast. This year’s work is hardening what’s already running, opening new use cases, and turning real production traffic into the feedback loop that shapes what we ship next.
This is a hands-on, engineering-led role at the intersection of applied AI, cloud infrastructure on AWS, modern agent protocols, and the energy, grid, EV charging and customer outcomes domains we work in every day. Our stack is Python and TypeScript on AWS, and MCP is central to how our agents reach the systems they need.
Key Responsibilities
- Own the design, build and operation of our agent runtime: the host service that orchestrates models, tools, MCP connections and memory for everything we ship.
- Build and harden the integration layer between agents and the systems they reach: our internal MCP server, third-party connectors, our data platform and our APIs.
- Contribute across the full stack (backend services, integration glue, AWS infrastructure-as-code, dev experience) to keep the platform scaling reliably as load and use cases grow.
- Maintain and evolve the evaluation, tracing and observability we rely on to measure agent quality and operate confidently in production.
- Keep agent services secure, cost-efficient and operationally healthy as they grow.
- Productionise patterns from the wider MCP and agent ecosystem (advanced tool use, code orchestration, agent skills, MCP server design, elicitation and sampling) and translate them into our stack.
- Partner with data scientists, AI engineers, applied AI analysts and product to ship end-to-end across applied AI use cases.
- Stay close to the fast-moving agent ecosystem (frontier model releases, MCP spec evolution, new client capabilities) and bring practical, high-leverage patterns back to the team. Set engineering standards as we go.
About You
Must-haves
- Production engineering rigour. Several years building and operating backend services in Python and/or TypeScript. You’ve shipped to real users, written tests that mattered, run on-call, and reasoned about cost and reliability, not just built prototypes.
- Hands-on with agents, with a production mindset. You’ve built seriously with LLMs: tool-using agents, multi-step workflows, agent memory, evaluation harnesses. That might be in production, PoCs, hackathon projects, internal tools or ambitious side projects. You think about what would break under real load, how you’d know, and how you’d fix it. If you’ve shipped to real users, even better; if you haven’t yet, show us you’ve been pushing on the hard parts anyway.
- Deep MCP fluency, beyond "tools". You understand MCP as a protocol, not just a buzzword. You know the difference between tools, resources, prompts and sampling, you’ve used or built servers with elicitation, and you’re tracking newer capabilities like the MCP Apps extension. You can explain when MCP is the right choice over a direct API call or a CLI, and when it isn't.
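To make the tools/resources/prompts distinction concrete, here is a minimal sketch in plain Python. It does not use any real MCP SDK; every name below (the `McpServer` class, `get_tariff`, `charger://status`) is illustrative only, invented to show which party drives each primitive: tools are model-invoked actions, resources are application-attached context, and prompts are user-selected templates.

```python
# Illustrative sketch of MCP's three core primitives (plain Python, no SDK;
# all names here are made up for clarity and are not part of any real API).
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class McpServer:
    tools: dict[str, Callable] = field(default_factory=dict)  # model-invoked actions
    resources: dict[str, str] = field(default_factory=dict)   # app-attached context
    prompts: dict[str, str] = field(default_factory=dict)     # user-selected templates


server = McpServer()

# A tool: the *model* decides when to call it, and it may have side effects.
server.tools["get_tariff"] = lambda region: {"region": region, "p_per_kwh": 7.5}

# A resource: read-only context the *host application* attaches to a conversation.
server.resources["charger://status"] = "online, 7.4 kW, idle"

# A prompt: a reusable template the *user* picks from a menu, not the model.
server.prompts["summarise_session"] = "Summarise the charging session for {vehicle}."

result = server.tools["get_tariff"]("UK-South")
print(result["p_per_kwh"])  # 7.5
```

The design point this sketch illustrates is who initiates each interaction; that difference, plus sampling and elicitation, is what separates MCP fluency from treating the protocol as "just tools".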
- Daily driver of frontier coding agents. You use coding agents (e.g. Claude Code, OpenCode, Pi, Hermes, similar) as a core part of how you work. You can show what your workflow looks like, what you’ve automated, and where you’ve pushed the tooling, including authoring skills, sub-agents or plugins of your own.
- Cloud-native delivery. Comfortable shipping on AWS with infrastructure-as-code. You don’t need to have used our exact stack; pragmatic familiarity with cloud primitives and IaC of any flavour is what we care about.
- Obsessive curiosity and a builder’s instinct. The agent space is moving weekly and you’re the kind of engineer who keeps up because you can’t help it. You read the specs, try the new clients, automate the annoying parts of your own dev loop, and turn weekend experiments into things your team actually uses. Show us the receipts: dev-ex automations, PoCs, hackathon wins, side projects, anything where you saw a sharp edge and went after it.
Nice-to-haves
- You’ve built and shipped MCP servers or contributed to the open MCP ecosystem.
- You’ve thought about agent context efficiency: progressive disclosure, tool search, programmatic tool calling, code execution sandboxes, REPL-style tool patterns.
- You’ve designed agent skills/playbooks for procedural knowledge, separately from tools.
- You’ve built or used agent eval harnesses, golden datasets, or production tracing for agents.
- Frontend / UI experience: chat surfaces, internal tooling, agent-facing dashboards.
- AWS specifics: CDK, Bedrock, AgentCore, Lambda, DynamoDB, API Gateway.
- Background in energy, grid, EV charging, IoT-heavy or other data-rich domains.
We care about evidence over claims. Things that catch our attention: agents or services you’ve shipped to users, MCP servers or skills you’ve open-sourced, blog posts or talks where you’ve broken down an agent design, contributions to the MCP spec or SDKs, or a workflow setup you’d be proud to demo. If your best work isn’t public, tell us about it in the application.
Our values
- Customer Obsessed: The customer is at the heart of everything we do.
- Brave: We try new things and lead through possibility.
- Collaborative: We believe our success is built on strong relationships.
- Do Good: We care about people and the environment.
- Progressive: We innovate, disrupt and are always learning.
Benefits
You’ll get to work in a fast-paced, rapidly growing scale-up with global ambitions that is cutting-edge, passionate about sustainability and committed to making the world a better place.
- Competitive salary and bonus
- Hybrid working (office-based with the option to work 2 days remotely)
- Beautiful central London office
- Private Health Insurance
- Aegon Pension Scheme
- Life Assurance Scheme with death in service benefit of 4x salary
- Income Protection Scheme for long term illness
- Ride to Work Scheme
- Payroll Giving Scheme
- Season Ticket Loan to spread the cost of travel over 12 months
Diversity, Equity and Inclusion are at the heart of what we do, and we encourage a culture where everyone can be themselves at work. We actively seek out a diverse range of talent, and our policies ensure that every job application and employee is treated fairly, with equal opportunity to succeed and to feel included.
Ohme London, England Office
125-130 The Strand, London, United Kingdom, WC2R 0AP



