Langfuse is an open source LLM engineering platform that helps teams build useful AI applications via tracing, evaluation, and prompt management (mission, product). We are now part of ClickHouse.
We're building the "Datadog" of this category; model capabilities continue to improve but building useful applications is really hard, both in startups and enterprises
Largest open source solution in this category: trusted by 19 of the Fortune 50, >2k customers, >26M monthly SDK downloads, >6M Docker pulls
We joined ClickHouse in January 2026 because LLM observability is fundamentally a data problem and Langfuse already ran on ClickHouse. Together we can move faster on product while staying true to open source and self-hosting, and join forces on GTM and sales to accelerate revenue
Previously backed by Y Combinator, Lightspeed, and General Catalyst
We're a small, engineering-heavy, and experienced team in Berlin and San Francisco. We are also hiring for engineering in EU timezones and expect one week per month in our Berlin office (how we work).
Why SDK Engineering at Langfuse
Your work will have an outsized impact.
Our SDKs are downloaded 26M+ times per month. When you ship a new integration or improve SDK performance, thousands of developers benefit the same day — and they'll tell you about it in GitHub issues, on Twitter, and in our community channels. Everything you build is open source (MIT-licensed) and immediately visible.
The SDK is often the very first thing a developer touches when they try Langfuse, so your work directly shapes their first impression.
You will also get direct exposure to how cutting-edge LLM applications are built. The users you serve are some of the best and most ambitious software engineers in the world, and they leverage Langfuse to improve their applications. Deeply understanding the problems they solve will help you design the best possible SDK experience and, along the way, make you an expert on LLM engineering yourself.
You will grow at Langfuse by
Owning our SDKs. We maintain SDKs in Python and TypeScript, both built on OpenTelemetry. You'll be responsible for their reliability, performance, and developer experience. These SDKs run inside our users' production systems, which means they must never fail, must have minimal impact on CPU and memory, and must work flawlessly across every environment — from serverless functions to long-running agent loops. You'll think deeply about the exposed interfaces, error handling, batching, async flushing, and graceful degradation (see the batching sketch after this list).
Designing APIs that developers love. How are functions named, how are parameters structured, how are breaking changes communicated, how does the documentation read? Developer experience is a core differentiator for Langfuse and you'll own every detail of it. When we shipped the Python SDK v3 and TypeScript SDK v4, both were full rewrites on OpenTelemetry — that's the kind of thoughtful, high-impact SDK work you'll lead. Beyond tracing, our SDKs also support the core workflows of engineers building with LLMs; see our Experiment Runner SDK, for example.
Integrating with every major AI framework: Langfuse integrates with 40+ frameworks and model providers: OpenAI SDK, Vercel AI SDK, LangChain, LlamaIndex, Pydantic AI, OpenAI Agents, CrewAI, Amazon Bedrock AgentCore, LiveKit, and many more. You'll maintain these integrations, ensure new Langfuse features are supported across all of them, and add new integrations as the ecosystem evolves. When a new framework gains traction, you'll be among the first to instrument it (a drop-in example follows this list).
Maintaining our OpenTelemetry endpoint: some users send traces directly to our OTLP endpoint instead of using our SDKs. You'll ensure this path is robust, well-documented, and supports the full range of Langfuse features. This also extends our language support to Java, Go, and any other language with an OpenTelemetry SDK. You can work with the OpenTelemetry community to help steer the GenAI semantic conventions (see the OTLP configuration sketch after this list).
Writing public documentation: you'll own the SDK docs, integration guides, and migration paths. At Langfuse, docs are part of our core product. When you ship a new SDK version or integration, the docs ship with it.
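To make the reliability requirements above concrete, here is a simplified, illustrative sketch of the background batching-and-flushing pattern such an SDK needs. This is not Langfuse's actual implementation; `BatchingSpanQueue` and `flush_fn` are hypothetical names, and the real SDKs build on OpenTelemetry's processors instead.

```python
# Illustrative sketch, not Langfuse's internals. Shows the pattern: buffer
# events in memory, flush on a background thread, never block or crash the
# host application.
import atexit
import queue
import threading
import time


class BatchingSpanQueue:
    def __init__(self, flush_fn, max_batch=100, flush_interval=1.0, max_queue=10_000):
        self._queue = queue.Queue(maxsize=max_queue)  # bounded: caps memory use
        self._flush_fn = flush_fn  # e.g. an HTTP POST to an ingestion API
        self._max_batch = max_batch
        self._interval = flush_interval
        worker = threading.Thread(target=self._run, daemon=True)
        worker.start()
        atexit.register(self.flush)  # drain remaining events on shutdown

    def add(self, event):
        try:
            self._queue.put_nowait(event)  # graceful degradation: drop rather
        except queue.Full:                 # than block the caller's hot path
            pass

    def _run(self):
        while True:
            batch = [self._queue.get()]  # wait for at least one event
            while len(batch) < self._max_batch:
                try:
                    batch.append(self._queue.get_nowait())
                except queue.Empty:
                    break
            self._send(batch)
            time.sleep(self._interval)

    def flush(self):
        batch = []
        while True:
            try:
                batch.append(self._queue.get_nowait())
            except queue.Empty:
                break
        self._send(batch)

    def _send(self, batch):
        if not batch:
            return
        try:
            self._flush_fn(batch)  # network failures must never propagate
        except Exception:
            pass  # a real SDK would retry with backoff and log internally
```

Even in this toy version the trade-offs are visible: a bounded queue caps memory, dropped events beat blocked request handlers, and send errors are contained so observability can never take down the app.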
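For a feel of what "integration" means in practice, this is the drop-in style of the Python OpenAI integration as described in our docs (check the integration docs for current import paths and required environment variables):

```python
# Assumes LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY, LANGFUSE_HOST and
# OPENAI_API_KEY are set in the environment.
from langfuse.openai import openai  # drop-in replacement for `import openai`

# This call works exactly like the vanilla OpenAI SDK, but the request,
# response, token usage, and latency are traced to Langfuse automatically.
completion = openai.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello, Langfuse!"}],
)
print(completion.choices[0].message.content)
```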
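And for the SDK-less path: the sketch below sends spans straight to the OTLP endpoint using the vanilla OpenTelemetry Python SDK. The endpoint path, Basic-auth scheme, and attribute names are assumptions based on the OpenTelemetry Integration guide linked below; verify them there.

```python
import base64
import os

from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# Langfuse authenticates OTLP requests with Basic auth (public:secret key).
auth = base64.b64encode(
    f"{os.environ['LANGFUSE_PUBLIC_KEY']}:{os.environ['LANGFUSE_SECRET_KEY']}".encode()
).decode()

exporter = OTLPSpanExporter(
    endpoint="https://cloud.langfuse.com/api/public/otel/v1/traces",  # see docs
    headers={"Authorization": f"Basic {auth}"},
)
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)

# Any OpenTelemetry-instrumented code now reports to Langfuse, in any language
# with an OTel SDK.
tracer = trace.get_tracer("my-llm-app")
with tracer.start_as_current_span("chat-completion") as span:
    span.set_attribute("gen_ai.request.model", "gpt-4o-mini")  # GenAI semconv
```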
What we're looking for
Strong engineer who gets excited about developer experience, API design, and writing code that runs reliably in other people's production systems
Experience building or maintaining SDKs, client libraries, or developer tools — ideally in Python and/or TypeScript
Deep understanding of performance: you know how to profile, benchmark, and minimize CPU/memory overhead in hot paths
Someone who can manage projects themselves — you know how to develop strong conviction about what to build and how to ship it
Thoughtful about versioning, backwards compatibility, and migration paths
Interest in open source software and genuine excitement about talking to developers about their integration challenges
Thrives in a small, accountable team where your output is visible and matters
CS or quantitative degree preferred
Bonus points:
Experience with OpenTelemetry internals or observability instrumentation
Former founder or early startup experience
Contributions to popular open source SDKs or developer tools
ML/AI background or familiarity with the LLM framework ecosystem
No candidate checks all boxes. If you feel you are a good fit for this role, please go ahead and apply.
Learn more about
SDK Overview — architecture and design of our Python and TypeScript SDKs
All Integrations — the full list of frameworks, model providers, and gateways we support
OpenTelemetry Integration — how we use OTEL as the foundation
SDK Upgrade Paths — how we think about versioning and breaking changes
Project you could have built: TypeScript SDK v4 (GA)
Project you could have built: Python SDK v3 (GA)
Our codebase: GitHub Repository
Ben Kuhn on Impact, Agency and Taste
Who we are & how we work
We can run the full process to your offer letter in less than 7 days (hiring process).
Tech Stack
We run a TypeScript monorepo: Next.js on the frontend, Express workers for background jobs, PostgreSQL for transactional data, ClickHouse for tracing at scale, S3 for file storage, and Redis for queues and caching. You should be familiar with a good chunk of this, but we trust you'll pick up the rest quickly (Stack, Architecture).
How we ship (handbook)
We trust you to take ownership (ownership overview) for your area. You identify what to build, propose solutions (RFCs), and ship them. Everyone here thinks about the user experience and the technical implementation at the same time. Everyone manages their own Linear.
You're never alone. Anyone on the team is happy to jump into a whiteboard session with you; 15 minutes of shared discussion can meaningfully improve the overall output.
We protect the maker schedule in how we communicate. There are two recurring meetings a week: a Monday check-in on priorities (15 min) and a demo session on Fridays (60 min).
Code reviews are mentorship. New joiners get all PRs reviewed to learn the codebase, patterns, and how the systems work (onboarding guide).
We use AI as much as possible in our workflows to make our users happy. We encourage everyone to experiment with new tooling and AI workflows.
This role puts you at the forefront of the AI revolution, partnering with engineering teams who are building the technology that will define the next decade(s).
This is an open-source devtools company. We ship daily, talk to customers constantly, and fight for great DX. Reliability and performance are central requirements.
Your work ships under your name. You'll appear on changelog posts for the features you build, and during launch weeks, you'll produce videos to announce what you've shipped to the community. You’ll own the full delivery end to end.
We're solving hard engineering problems: figuring out which features actually help users improve AI product performance, building SDKs developers love, visualizing data-rich traces, rendering massive LLM prompts and completions efficiently in the UI, and processing terabytes of data per day through our ingestion pipeline.
You'll work closely with the ClickHouse team and learn how they build a world-class infrastructure company. We're in a period of strong growth: Langfuse is growing organically and accelerating through ClickHouse's GTM. (Why we joined ClickHouse)
If you're wondering what to build next, our users are a Slack message or a GitHub Discussions post away.
You’re on a continuous learning journey. The AI space develops at breakneck speed and our customers are at the forefront. We need to be ready to meet them where they are and deliver the tools they need just-in-time.



