Langfuse is an open-source LLM engineering platform that helps teams build useful AI applications via tracing, evaluation, and prompt management (mission, product). We are now part of ClickHouse.
We're building the "Datadog" of this category: model capabilities keep improving, but building useful applications remains hard, in startups and enterprises alike.
Largest open source solution in this category: trusted by 19 of the Fortune 50, >2k customers, >26M monthly SDK downloads, >6M Docker pulls
We joined ClickHouse in January 2026 because LLM observability is fundamentally a data problem and Langfuse already ran on ClickHouse. Together we can move faster on product while staying true to open source and self-hosting, and join forces on GTM and sales to accelerate revenue.
Previously backed by Y Combinator, Lightspeed, and General Catalyst
We're a small, engineering-heavy, and experienced team in Berlin and San Francisco. We are also hiring engineers across EU time zones and expect one week per month in our Berlin office (how we work).
Why Product Engineering at Langfuse
Your work will have an outsized impact. Everything you build is open source (MIT-licensed) and immediately visible. Langfuse has 21,900+ GitHub stars and our SDKs are downloaded 23M+ times per month. When you ship a new feature, the community notices in GitHub issues, on X, and in our community channels.
Product engineers at Langfuse are the people who deliver new features end-to-end to our users. You own the product thinking from understanding the user problem to deciding what to build to shipping the solution.
You will also have direct exposure to how cutting-edge LLM applications are built. The users
you serve are some of the best and most ambitious software engineers leveraging Langfuse to
improve their applications. Deeply understanding the problems they solve will help you design the best possible product and, along the way, make you an expert on LLM engineering yourself.
You will grow at Langfuse by
Own features end-to-end: from talking to users, to writing the spec, to shipping the code and docs, to writing the changelog post with your name on it. To give you a sense: recent features a single engineer owned include the Langfuse for Agents release (new trace views, agent graphs, semantic span labels), the TypeScript SDK v4 rewrite on OpenTelemetry, and the hosted MCP server for prompt management.
Work across the full stack, and we mean full. Our surface area is wide: React frontend, Next.js backend, ClickHouse analytics, Python and TypeScript SDKs, public docs, Terraform modules, and more. You'll touch whatever the feature needs. One week you might be writing a new ClickHouse query for cost analytics, the next you're designing an SDK API, the next you're building a new React view for agent traces.
Talk to users constantly: jump on calls, respond in community channels, and route support requests. At Langfuse, support is how we build conviction about what to build next. Our handbook says it best: support by engineers is a core value proposition.
Write public documentation about the features you own. At Langfuse, docs are product.
Ship publicly during Launch Weeks: we've run four Launch Weeks so far, dropping a new feature every day for a week. Engineers present the features they built to the community.
What we're looking for
Someone who has built features or even entire products from scratch, end-to-end
Experience building and shipping user-facing products, ideally in TypeScript/React and with exposure to data-intensive backends
Someone who can run projects independently: you know how to develop strong conviction about what to build and how to ship it
Taste for great developer experience: you care about API design, UI details, and clear documentation
Interest in open source software and genuine excitement about talking to developers about their problems
Thrives in a small, accountable team where your output is visible and matters
CS or quantitative degree preferred
Bonus points:
Experience with ClickHouse, analytics databases, or building complex data visualizations
Former founder or early startup experience
Contributions to popular open source projects
ML/AI background or familiarity with the LLM framework ecosystem
No candidate checks every box. If you feel you are a good fit for this role, please go ahead and apply.
Process
We can run the full process through to your offer letter in less than 7 days (hiring process).
Tech Stack
We run a TypeScript monorepo: Next.js on the frontend, Express workers for background jobs, PostgreSQL for transactional data, ClickHouse for tracing at scale, S3 for file storage, and Redis for queues and caching. You should be familiar with a good chunk of this, but we trust you'll pick up the rest quickly (Stack, Architecture).
How we ship (handbook)
We trust you to take ownership of your area (ownership overview). You identify what to build, propose solutions (RFCs), and ship them. Everyone here thinks about the user experience and the technical implementation at the same time. Everyone manages their own Linear.
You're never alone. Anyone on the team is happy to jump into a whiteboard session with you: 15 minutes of shared discussion can meaningfully improve the overall output.
We protect maker schedules and keep communication lightweight. There are two recurring meetings a week: a Monday check-in on priorities (15 min) and a demo session on Fridays (60 min).
Code reviews are mentorship. New joiners get all PRs reviewed to learn the codebase, patterns, and how the systems work (onboarding guide).
We use AI as much as possible in our workflows to make our users happy. We encourage everyone to experiment with new tooling and AI workflows.
This role puts you at the forefront of the AI revolution, partnering with engineering teams who are building the technology that will define the next decade(s).
This is an open-source devtools company. We ship daily, talk to customers constantly, and fight for great DX. Reliability and performance are central requirements.
Your work ships under your name. You'll appear on changelog posts for the features you build, and during launch weeks, you'll produce videos to announce what you've shipped to the community. You’ll own the full delivery end to end.
We're solving hard engineering problems: figuring out which features actually help users improve AI product performance, building SDKs developers love, visualizing data-rich traces, rendering massive LLM prompts and completions efficiently in the UI, and processing terabytes of data per day through our ingestion pipeline.
You'll work closely with the ClickHouse team and learn how they build a world-class infrastructure company. We're in a period of strong growth: Langfuse is growing organically and accelerating through ClickHouse's GTM (why we joined ClickHouse).
If you wonder what to build next, our users are a Slack message or a GitHub Discussions post away.
You’re on a continuous learning journey. The AI space develops at breakneck speed and our customers are at the forefront. We need to be ready to meet them where they are and deliver the tools they need just-in-time.



