Why work at Nebius
Nebius is leading a new era in cloud computing to serve the global AI economy. We create the tools and resources our customers need to solve real-world challenges and transform industries, without massive infrastructure costs or the need to build large in-house AI/ML teams. Our employees work at the cutting edge of AI cloud infrastructure alongside some of the most experienced and innovative leaders and engineers in the field.
Where we work
Headquartered in Amsterdam and listed on Nasdaq, Nebius has a global footprint with R&D hubs across Europe, North America, and Israel. The team of over 1400 employees includes more than 400 highly skilled engineers with deep expertise across hardware and software engineering, as well as an in-house AI R&D team.
We are looking for a Senior Software Engineer to work on the runtime systems of a novel search engine tailored for agentic AI consumption. In this role, you will focus on building low-latency, high-throughput systems that serve search queries in real time. You will work on the critical path of user-facing requests, where performance, predictability, and efficiency directly impact product quality. You will design and operate systems that handle thousands of requests per second under strict latency budgets, optimizing every layer from request handling to data access and response assembly.
In this position, you will:
- Design, implement, and operate core runtime services for serving search queries
- Build and optimize request flows (query processing, retrieval orchestration, response assembly)
- Develop systems that meet strict latency budgets under high load
- Optimize CPU, memory, and data access patterns in performance-critical paths
- Ensure reliability, observability, and predictability in production
- Build well-tested systems with clear boundaries while allowing architectural evolution
- Define observability primitives (logs, metrics, traces, latency breakdowns)
- Monitor and improve latency, throughput, and cost efficiency
- Collaborate with indexing and ML teams to integrate retrieval and ranking components
- Support experimentation through controlled rollouts and benchmarking
You may be a good fit if you:
- Have 5+ years of experience building production backend systems
- Have strong expertise in C++ or Rust
- Have experience with high-load, low-latency user-facing systems
- Have worked on systems handling thousands of RPS under strict latency constraints
- Understand performance at a systems level (CPU, memory, networking)
- Have operated services in production and handled incidents and debugging
- Understand distributed systems fundamentals and tradeoffs
- Think end-to-end about request flows rather than isolated components
- Can balance correctness, latency, and development speed
- Collaborate effectively across engineering, ML, and product
Strong candidates may also have experience with:
- DBMS (OSS or SaaS) and cloud infrastructure
- High-load web applications or large-scale APIs
- Performance-critical systems (e.g. trading platforms)
- Low-level performance tuning and optimization
- Open-source contributions
- Competitive programming or CTFs
- SHAD or similar advanced programs
- Conference talks or technical publications
We conduct coding interviews as part of the process.
What we offer
- Competitive salary and comprehensive benefits package
- Opportunities for professional growth within Nebius
- Flexible working arrangements
- A dynamic and collaborative work environment that values initiative and innovation
We’re growing and expanding our products every day. If you’re up to the challenge and as excited about AI and ML as we are, join us!

