Langfuse is an open source LLM engineering platform that helps teams build useful AI applications via tracing, evaluation, and prompt management (mission, product). We are now part of ClickHouse.
We're building the "Datadog" of this category: model capabilities keep improving, but building useful applications remains hard, in startups and enterprises alike
Largest open source solution in this category: trusted by 19 of the Fortune 50, >2k customers, >26M monthly SDK downloads, >6M Docker pulls
We joined ClickHouse in January 2026 because LLM observability is fundamentally a data problem and Langfuse already ran on ClickHouse. Together we can move faster on product while staying true to open source and self-hosting, and join forces on GTM and sales to accelerate revenue
Previously backed by Y Combinator, Lightspeed, and General Catalyst
We're a small, engineering-heavy, and experienced team in Berlin and San Francisco. We are also hiring for engineering in EU timezones and expect one week per month in our Berlin office (how we work).
In Short: We're looking for a senior frontend engineer to build the UI for the most widely adopted open source LLM engineering platform — where the core challenge is rendering massive, data-intensive views fast and making complex workflows feel simple.
Why Frontend Engineering at Langfuse
Your work will have an outsized impact.
Thousands of users sign into Langfuse daily to inspect traces, update prompts, and run experiments. They obsess over what they see in Langfuse to improve the systems they just shipped to production. Megabytes of JSON data should be available at their fingertips, searchable and presented in a readable way. Your job is to make that experience fast and intuitive.
When you ship an improvement to a view, eager users often find your PR and give feedback before you've even released it. They'll tweet about it before you write a changelog post. Everything you build is open source (MIT-licensed) and immediately visible.
You will also have direct exposure to how cutting-edge LLM applications are built. The users you serve are some of the best and most ambitious software engineers in the world, and they leverage Langfuse to understand and improve their applications. Deeply understanding their workflows will help you design the best possible UI, and along the way it will make you an expert in LLM engineering yourself.
What you'll do
Build performant rendering for large LLM data — Langfuse displays full LLM inputs and outputs, which can be tens of thousands of tokens per observation. You'll own the performance of our trace views: finding bottlenecks in rendering and data loading, and reducing re-renders so that even the largest agent traces feel as snappy as a local app. If you love diving deep into render profiling, this is a match (see the sketch after this list).
Own our React setup — you shape and own how we build components, structure our code, and write tests. Keep our React setup working well for agents and up to date with new features (for example, the React Compiler).
Build complex interactive features — recent examples: agent graph visualization that infers execution flow from observation timings, a redesigned trace view with tree/timeline toggle and search, inline comments anchored to specific text selections (like Google Docs), and dashboard widgets for tool usage analytics. These are the kinds of features you'll own end-to-end.
Own UI/UX across the product — you'll work closely with a designer to improve the experience across all of Langfuse. This means thinking about visual consistency and beginning to build out a design system. You'll think deeply about how to present complex data in our product.
Ship features end-to-end — while your focus is the frontend, you'll connect to our tRPC APIs and sometimes extend them. We don't have hard boundaries — if a feature needs a backend change, you'll make it.
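To make the performance work above concrete, here is a minimal sketch of the virtualize-and-memoize pattern a large trace view relies on. It assumes react-window and uses made-up Observation / TraceObservations names; it illustrates the technique, not our actual implementation.

```tsx
import { memo, type CSSProperties } from "react";
import { FixedSizeList, areEqual } from "react-window";

// Hypothetical shape of one observation inside a trace.
type Observation = { id: string; name: string; output: unknown };

// Memoized row: only re-renders when its own data or position changes,
// not every time the surrounding trace view re-renders.
const ObservationRow = memo(function ObservationRow({
  index,
  style,
  data,
}: {
  index: number;
  style: CSSProperties;
  data: Observation[];
}) {
  const observation = data[index];
  // Render a bounded preview instead of the full (potentially huge) output.
  const preview = JSON.stringify(observation.output)?.slice(0, 500);
  return (
    <div style={style}>
      <strong>{observation.name}</strong> <code>{preview}</code>
    </div>
  );
}, areEqual);

// Virtualized list: only the rows currently in view are mounted, so a trace
// with thousands of observations stays responsive while scrolling.
export function TraceObservations({ observations }: { observations: Observation[] }) {
  return (
    <FixedSizeList
      height={600}
      width="100%"
      itemCount={observations.length}
      itemSize={32}
      itemData={observations}
      itemKey={(index, data) => data[index].id}
    >
      {ObservationRow}
    </FixedSizeList>
  );
}
```

The real trace view has to combine this kind of windowing with lazy data loading and careful memoization of expensive formatting, which is exactly the work described above.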
Senior frontend engineer who cares deeply about performance — you love profiling React renders, have debugged layout thrashing, and know when to virtualize, memoize, or restructure
Strong in React, TypeScript, and modern CSS — you write clean, maintainable component code
Like to collaborate with a designer to turn complex requirements into intuitive interfaces
Someone who can product-manage and project-manage their own work — you have strong opinions about what to build and how to ship it. Talking to users makes you happy.
Interest in open source software and genuine enjoyment talking to users about their experience
Thrives in a small, accountable team where your output is visible and matters
CS or quantitative degree preferred
Bonus points:
Experience with data-heavy applications and rendering large datasets on the frontend
Former founder
Background in data visualization or building developer tools
Feature you could have built: Agent Graph Visualization
Feature you could have built: Redesigned Trace View
Feature you could have built: Inline Comments on Observations
Our codebase: GitHub Repository
React Performance Tracks (you'll link to this article often)
Ben Kuhn on Impact, Agency and Taste
Who we are & how we work
We can run the full process to your offer letter in less than 7 days (hiring process).
Tech Stack
We run a TypeScript monorepo: Next.js on the frontend, Express workers for background jobs, PostgreSQL for transactional data, ClickHouse for tracing at scale, S3 for file storage, and Redis for queues and caching. You should be familiar with a good chunk of this, but we trust you'll pick up the rest quickly (Stack, Architecture).
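As a hedged illustration of how these pieces connect — and of what "connect to our tRPC APIs and sometimes extend them" can look like in practice — here is a minimal sketch of a tRPC procedure on the Next.js server reading trace data from ClickHouse. The traceRouter, table, and column names are hypothetical, not our actual schema or code.

```ts
import { initTRPC } from "@trpc/server";
import { createClient } from "@clickhouse/client";
import { z } from "zod";

const t = initTRPC.create();

// ClickHouse holds the high-volume tracing data; Postgres (not shown) holds
// transactional data such as users and projects.
const clickhouse = createClient({ url: process.env.CLICKHOUSE_URL });

export const traceRouter = t.router({
  // Fetch a bounded page of observations for one trace, newest first.
  observations: t.procedure
    .input(
      z.object({
        traceId: z.string(),
        limit: z.number().int().max(1000).default(100),
      }),
    )
    .query(async ({ input }) => {
      const result = await clickhouse.query({
        query: `
          SELECT id, name, start_time
          FROM observations
          WHERE trace_id = {traceId: String}
          ORDER BY start_time DESC
          LIMIT {limit: UInt32}
        `,
        query_params: { traceId: input.traceId, limit: input.limit },
        format: "JSONEachRow",
      });
      return result.json();
    }),
});
```

On the client, a procedure like this would typically be consumed through tRPC's generated React Query hooks, so a feature stays fully typed from the database to the UI.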
How we ship (handbook)
We trust you to take ownership of your area (ownership overview). You identify what to build, propose solutions (RFCs), and ship them. Everyone here thinks about the user experience and the technical implementation at the same time. Everyone manages their own Linear.
You're never alone. Anyone on the team is happy to jump into a whiteboard session with you; fifteen minutes of shared discussion can meaningfully improve the output.
We protect maker schedule in how we communicate. There are only two recurring meetings a week: a Monday check-in on priorities (15 min) and a Friday demo session (60 min).
Code reviews are mentorship. New joiners get all PRs reviewed to learn the codebase, patterns, and how the systems work (onboarding guide).
We use AI as much as possible in our workflows to make our users happy. We encourage everyone to experiment with new tooling and AI workflows.
This role puts you at the forefront of the AI revolution, partnering with engineering teams who are building the technology that will define the next decade(s).
This is an open-source devtools company. We ship daily, talk to customers constantly, and fight for great DX. Reliability and performance are central requirements.
Your work ships under your name. You'll appear on changelog posts for the features you build, and during launch weeks, you'll produce videos to announce what you've shipped to the community. You’ll own the full delivery end to end.
We're solving hard engineering problems: figuring out which features actually help users improve AI product performance, building SDKs developers love, visualizing data-rich traces, rendering massive LLM prompts and completions efficiently in the UI, and processing terabytes of data per day through our ingestion pipeline.
You'll work closely with the ClickHouse team and learn how they build a world-class infrastructure company. We're in a period of strong growth: Langfuse is growing organically and accelerating through ClickHouse's GTM. (Why we joined ClickHouse)
If you wonder what to build next, our users are a Slack message or a GitHub Discussions post away.
You’re on a continuous learning journey. The AI space develops at breakneck speed and our customers are at the forefront. We need to be ready to meet them where they are and deliver the tools they need just-in-time.


