Langfuse is an open-source LLM engineering platform that helps teams build useful AI applications via tracing, evaluation, and prompt management (mission, product). We are now part of ClickHouse.
We're building the "Datadog" of this category: model capabilities keep improving, but building useful applications is still hard, in both startups and enterprises.
Largest open-source solution in this category: trusted by 19 of the Fortune 50, >2k customers, >26M monthly SDK downloads, and >6M Docker pulls.
We joined ClickHouse in January 2026 because LLM observability is fundamentally a data problem and Langfuse already ran on ClickHouse. Together we can move faster on product while staying true to open source and self-hosting, and join forces on GTM and sales to accelerate revenue
Previously backed by Y Combinator, Lightspeed, and General Catalyst
We're a small, engineering-heavy, and experienced team in Berlin and San Francisco. We are also hiring for engineering in EU timezones and expect one week per month in our Berlin office (how we work).
Why Growth Engineering at Langfuse
Your work will have an outsized impact.
Langfuse is growing on an exponential curve: we keep adding new users, while existing users increase their token usage and build more LLM-based applications and features.
We sit at the center, enabling our users to build LLM applications faster, more reliably, and more safely.
It’s your chance to jump on the curve and increase its slope directly through any wild ideas you have. Be at the forefront of agentic development and show the world how to use Langfuse to develop even faster.
Maybe you believe the future users of Langfuse will be agents and most growth stems from them? Amazing - run an experiment to validate this hypothesis.
You have real analytics data to work with, and our users are among the most agile, up-to-date developers building AI applications: the perfect environment to try big bets and shape them into successes, all while learning how the best agentic applications are built.
Your impact
Your work directly impacts our most important growth metrics:
Top of Funnel: More customers want to try Langfuse
Activation: Customers see faster time to value
Adoption: More customers adopt all features across our product suite
Customers require less handholding because the product explains it all
Expansion: Customers expand from one product area to our full offering
You build and maintain our data stack (PostHog, BigQuery, dbt, Metabase) to generate insights into user behavior
Map our core funnel for each feature and make it easy to monitor (we use PostHog for this; see the sketch after this list)
You are the advocate within the team for how customers actually adopt/use our product
Ship constant improvements that reduce onboarding friction for users
Experiment with changes to signup flows
Collaborate with feature-focused Product Engineers and Designers
Build loops that help customers expand into all of our product areas
Experiment with loops that make people tweet about Langfuse (like Langfuse Wrapped, but better)
Experiment with features that turn our customers' accounts into user-generated content (e.g., let customers share their prompts from Langfuse in a public prompt library)
Jump on hot topics to integrate into Langfuse (e.g. ship an OpenClaw integration) and write about it
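To make the funnel work above concrete: below is a minimal sketch, assuming the posthog-node client, of what emitting funnel-step events could look like. It is illustrative only, not our actual instrumentation; the event names, step union, and environment variable are hypothetical.

```typescript
// Minimal sketch (illustrative, not Langfuse's actual instrumentation):
// emit one PostHog event per funnel step so each feature's core funnel
// can be monitored as a funnel insight. Event names are hypothetical.
import { PostHog } from "posthog-node";

const posthog = new PostHog(process.env.POSTHOG_API_KEY!, {
  host: "https://eu.i.posthog.com", // assumption: EU cloud instance
});

// One event per step; PostHog stitches steps together by distinctId.
export function trackFunnelStep(
  userId: string,
  step: "signup_completed" | "first_trace_ingested" | "first_eval_created",
  properties: Record<string, string | number> = {}
) {
  posthog.capture({ distinctId: userId, event: step, properties });
}

// Flush queued events before the process exits (e.g. in a shutdown hook).
export async function flushAnalytics() {
  await posthog.shutdown();
}
```

Because every step event carries the same distinctId, PostHog can assemble them into a per-feature funnel and surface where users drop off.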
What we're looking for
Drive to learn new things: you experiment across many different topics and come up with your own ideas
A knack for business: you envision how Langfuse succeeds through product-led growth
Experience in full-stack development: you can confidently ship changes that touch both the frontend and the backend of our product (frontend-heavy, collaborating on backend)
SQL skills (for analytics)
You are excited to talk to and think like our users
Experience with devtools / OSS ecosystems and developer-centric go-to-market
A taste for UX/design is a big plus
Familiarity with AI/LLM Engineering
Former founders are very welcome in this role!
Links
We are big fans of PostHog; check out how engineers on their Growth team work
See how we work in our Handbook, especially “Who are our customers?”
We can run the full process to your offer letter in less than 7 days (hiring process).
Tech Stack
We run a TypeScript monorepo: Next.js on the frontend, Express workers for background jobs, PostgreSQL for transactional data, ClickHouse for tracing at scale, S3 for file storage, and Redis for queues and caching. You should be familiar with a good chunk of this, but we trust you'll pick up the rest quickly (Stack, Architecture).
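To give a feel for how the queue and worker pieces fit together, here is a minimal sketch assuming a Redis-backed queue library such as BullMQ (an assumption for illustration, not necessarily what we use); the queue name and job payload are made up.

```typescript
// Illustrative sketch only: enqueue work from the web layer and process it
// in a separate worker process, using BullMQ over Redis (assumption).
import { Queue, Worker } from "bullmq";

const connection = { host: "localhost", port: 6379 };

// Producer side (e.g. a Next.js API route): enqueue instead of doing the work inline.
const ingestionQueue = new Queue("trace-ingestion", { connection });

export async function enqueueTraceBatch(projectId: string, s3Key: string) {
  await ingestionQueue.add("process-batch", { projectId, s3Key });
}

// Consumer side (a worker process): pull the raw batch from S3,
// transform it, and write it to ClickHouse.
new Worker(
  "trace-ingestion",
  async (job) => {
    const { projectId, s3Key } = job.data as { projectId: string; s3Key: string };
    // ... fetch from S3, parse, insert into ClickHouse ...
    console.log(`processed batch ${s3Key} for project ${projectId}`);
  },
  { connection }
);
```

The pattern keeps request latency low: the web layer only enqueues, and the heavy ingestion work happens asynchronously in the worker.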
How we ship (handbook)
We trust you to take ownership (ownership overview) for your area. You identify what to build, propose solutions (RFCs), and ship them. Everyone here thinks about the user experience and the technical implementation at the same time. Everyone manages their own Linear.
You're never alone. Anyone on the team is happy to jump into a whiteboard session with you; 15 minutes of shared discussion can meaningfully improve the output.
We structure our communication around the maker schedule. There are two recurring meetings a week: a Monday check-in on priorities (15 min) and a Friday demo session (60 min).
Code reviews are mentorship. New joiners get all PRs reviewed to learn the codebase, patterns, and how the systems work (onboarding guide).
We use AI as much as possible in our workflows to make our users happy. We encourage everyone to experiment with new tooling and AI workflows.
This role puts you at the forefront of the AI revolution, partnering with engineering teams who are building the technology that will define the next decade(s).
This is an open-source devtools company. We ship daily, talk to customers constantly, and fight for great DX. Reliability and performance are central requirements.
Your work ships under your name. You'll appear on changelog posts for the features you build, and during launch weeks, you'll produce videos to announce what you've shipped to the community. You’ll own the full delivery end to end.
We're solving hard engineering problems: figuring out which features actually help users improve AI product performance, building SDKs developers love, visualizing data-rich traces, rendering massive LLM prompts and completions efficiently in the UI, and processing terabytes of data per day through our ingestion pipeline.
You'll work closely with the ClickHouse team and learn how they build a world-class infrastructure company. We're in a period of strong growth: Langfuse is growing organically and accelerating through ClickHouse's GTM. (Why we joined ClickHouse)
If you wonder what to build next, our users are a Slack message or a GitHub Discussions post away.
You’re on a continuous learning journey. The AI space develops at breakneck speed and our customers are at the forefront. We need to be ready to meet them where they are and deliver the tools they need just-in-time.



