Company Description
We’re Checkout.com – you might not know our name, but companies like eBay, ASOS, Klarna, Uber Eats, and Sony do. That moment when you check out online? We make it happen.
Checkout.com is where the world checks out. Our global network powers billions of transactions every year, making money move without making a fuss. We spent years perfecting a service most people will never notice. Because when digital payments just work, businesses grow, customers stay, and no one stops to think about why.
With 19 offices spanning six continents, we feel at home everywhere – but London is our HQ. Wherever our people work their magic, they’re fast-moving, performance-obsessed, and driven by being better every day. A role here isn’t just another job; it’s a career-defining opportunity to build the future of fintech.
Job Description
As a Data Engineer within the Data Analytics Team at Checkout, you will be the primary technical owner of the Payment Lifecycle streaming pipelines. Currently, our domain logic lives in dbt; your mission is to migrate and implement this logic within our upstream Flink pipelines. By moving processing into the streaming layer, you will help us eliminate architectural bottlenecks, reduce data latency, and ensure our Payment Lifecycle data product is domain-owned, robust, scalable, and perfectly aligned with the downstream needs of our users, from product data scientists to merchants.
How You’ll Make An Impact
Shift left processing by migrating existing business logic from dbt/SQL into Flink stream-processing jobs (Java/Python) – see the sketch after this list
Take the lead on the Payment Lifecycle Flink jobs, moving them from Data Platform ownership into the Analytics domain
Own the health, monitoring (Datadog), and deployment (ArgoCD/Terraform) of these critical pipelines
Work with Apache Iceberg to manage state and lookups for long payment lifecycles
Take initiative to improve and optimise engineering workflows and platforms
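To give a concrete flavour of the “shift left” work described above, here is a minimal, illustrative Flink job in Java: it reads payment events from Kafka so that lifecycle logic can run in the streaming layer rather than in a downstream dbt model. The topic, consumer group, and class names are hypothetical placeholders, not Checkout.com’s actual setup.

    // Minimal sketch of a "shift left" Flink job: processing that might otherwise
    // live in a dbt model, expressed as a streaming job instead.
    // Topic, group ID, and class names are illustrative only.
    import org.apache.flink.api.common.eventtime.WatermarkStrategy;
    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.connector.kafka.source.KafkaSource;
    import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class PaymentLifecycleJob {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // Periodic state snapshots every 30 seconds so the job can recover after failures.
            env.enableCheckpointing(30_000);

            KafkaSource<String> source = KafkaSource.<String>builder()
                    .setBootstrapServers("kafka:9092")
                    .setTopics("payment-events")              // hypothetical topic
                    .setGroupId("payment-lifecycle")          // hypothetical consumer group
                    .setStartingOffsets(OffsetsInitializer.earliest())
                    .setValueOnlyDeserializer(new SimpleStringSchema())
                    .build();

            DataStream<String> events = env.fromSource(
                    source, WatermarkStrategy.noWatermarks(), "payment-events");

            // In practice each event would be parsed and keyed by payment ID,
            // with lifecycle state held in keyed state rather than rebuilt in dbt.
            events.print();

            env.execute("payment-lifecycle");
        }
    }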
What We’re Looking For
Proven delivery experience in data or software engineering
Comfortable writing and debugging production-grade Java or Python
Hands-on experience with Apache Flink (or similar engines like Spark Streaming/Kafka Streams) and Apache Kafka
A solid understanding of checkpoints, watermarking, and state management (illustrated in the sketch after this list)
Familiarity with the modern DevOps stack: Docker, Kubernetes, Terraform, and CI/CD principles
A strong focus on data integrity
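As a hedged illustration of the checkpointing, watermarking, and state-management concepts named above, the sketch below shows a bounded-out-of-orderness watermark strategy and per-key state in a Flink KeyedProcessFunction. The PaymentEvent type and its fields are hypothetical stand-ins, not part of Checkout.com’s codebase.

    import java.time.Duration;
    import org.apache.flink.api.common.eventtime.WatermarkStrategy;
    import org.apache.flink.api.common.state.ValueState;
    import org.apache.flink.api.common.state.ValueStateDescriptor;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
    import org.apache.flink.util.Collector;

    public class LifecycleConceptsSketch {

        // Hypothetical event type used only for this illustration.
        public static class PaymentEvent {
            public String paymentId;
            public String status;
            public long occurredAtMillis;
        }

        // Watermarking: tolerate events arriving up to 5 minutes out of order
        // before event time advances past them.
        static WatermarkStrategy<PaymentEvent> watermarks() {
            return WatermarkStrategy
                    .<PaymentEvent>forBoundedOutOfOrderness(Duration.ofMinutes(5))
                    .withTimestampAssigner((event, ts) -> event.occurredAtMillis);
        }

        // State management: the latest lifecycle status per payment lives in keyed
        // state, which Flink snapshots on each checkpoint so it survives restarts.
        public static class LatestStatus
                extends KeyedProcessFunction<String, PaymentEvent, PaymentEvent> {

            private transient ValueState<String> status;

            @Override
            public void open(Configuration parameters) {
                status = getRuntimeContext().getState(
                        new ValueStateDescriptor<>("status", String.class));
            }

            @Override
            public void processElement(PaymentEvent event, Context ctx,
                                       Collector<PaymentEvent> out) throws Exception {
                status.update(event.status);
                out.collect(event);
            }
        }
    }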
Bring all of you to work
We create the conditions for high performers to thrive – through real ownership, fewer blockers, and work that makes a difference from day one.
Here, you’ll move fast, take on meaningful challenges, and be recognised for the impact you deliver. It’s a place where ambition gets met with opportunity – and where your growth is in your hands.
We work as one team, and we back each other to succeed. So whatever your background or identity, if you’re ready to grow and make a difference, you’ll be right at home here.
It’s important we set you up for success and make our process as accessible as possible. So let us know in your application, or tell your recruiter directly, if you need anything to make your experience or working environment more comfortable.
Life at Checkout.com
We understand that work is just one part of your life. Our hybrid working model offers flexibility, with three days per week in the office to support collaboration and connection.
Curious about what it’s like to be part of our team? Visit our Careers Page to learn more about our culture, open roles, and what drives us.
For a closer look at daily life at Checkout.com, follow us on LinkedIn and Instagram.