
NVIDIA

HPC and AI Software Architect

Sorry, this job was removed at 08:16 p.m. (GMT) on Wednesday, Jun 11, 2025
In-Office or Remote
5 Locations


NVIDIA has been redefining computer graphics, PC gaming, and accelerated computing for more than 25 years. Today, we lead in artificial intelligence, driving advances in natural language processing, computer vision, autonomous systems, and scientific research. We are looking for a forward-thinking HPC and AI Inference Software Architect to help shape the future of scalable AI infrastructure—focusing on distributed training, real-time inference, and communication optimization across large-scale systems. Join our world-class team of researchers and engineers building next-generation software and hardware systems that power the most demanding AI workloads on the planet. 

 

What you will be doing: 

  • Design and prototype scalable software systems that optimize distributed AI training and inference—focusing on throughput, latency, and memory efficiency. 

  • Develop and evaluate enhancements to communication libraries such as NCCL, UCX, and UCC, tailored to the unique demands of deep learning workloads. 

  • Collaborate with AI framework teams (e.g., TensorFlow, PyTorch, JAX) to improve integration, performance, and reliability of communication backends. 

  • Co-design hardware features (e.g., in GPUs, DPUs, or interconnects) that accelerate data movement and enable new capabilities for inference and model serving. 

  • Contribute to the evolution of runtime systems, communication libraries, and AI-specific protocol layers. 
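To make the communication-library work above concrete: the core collective behind distributed gradient reduction is allreduce, which libraries such as NCCL typically implement as a bandwidth-optimal ring (reduce-scatter followed by all-gather). The pure-Python simulation below is a hypothetical sketch of that algorithm only, not any library's actual implementation; it models each rank's buffer as a plain list and assumes the buffer length divides evenly by the rank count.

```python
def ring_allreduce(buffers):
    """Simulate a ring allreduce: every rank ends with the elementwise sum.

    buffers[r] is rank r's local data. Phase 1 (reduce-scatter) leaves each
    rank owning one fully reduced chunk; phase 2 (all-gather) circulates the
    reduced chunks so every rank holds the complete result.
    """
    p = len(buffers)                     # number of ranks in the ring
    n = len(buffers[0])
    assert n % p == 0, "buffer length must divide evenly into p chunks"
    c = n // p
    chunks = [[buf[i * c:(i + 1) * c] for i in range(p)] for buf in buffers]

    # Reduce-scatter: after p-1 steps, rank r owns reduced chunk (r+1) % p.
    for s in range(p - 1):
        # Snapshot the simultaneous sends before any rank mutates its state.
        sent = [chunks[r][(r - s) % p] for r in range(p)]
        for r in range(p):
            idx = (r - 1 - s) % p        # chunk arriving from the left neighbor
            recv = sent[(r - 1) % p]
            chunks[r][idx] = [a + b for a, b in zip(chunks[r][idx], recv)]

    # All-gather: forward each fully reduced chunk around the ring.
    for s in range(p - 1):
        sent = [chunks[r][(r + 1 - s) % p] for r in range(p)]
        for r in range(p):
            chunks[r][(r - s) % p] = sent[(r - 1) % p]

    return [[x for ch in chunks[r] for x in ch] for r in range(p)]
```

Each of the 2(p-1) steps moves only n/p elements per rank, which is why the ring pattern stays bandwidth-optimal as the ring grows; the latency cost of the extra steps is exactly the kind of trade-off this role would tune.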

 

What we need to see: 

  • Ph.D. or equivalent industry experience in computer science, computer engineering, or a closely related field. 

  • 2+ years of experience in systems programming, parallel or distributed computing, or high-performance data movement. 

  • Strong programming background in C++, Python, and ideally CUDA or other GPU programming models. 

  • Practical experience with AI frameworks (e.g., PyTorch, TensorFlow) and familiarity with how they use communication libraries under the hood. 

  • Experience in designing or optimizing software for high-throughput, low-latency systems. 

  • Strong collaboration skills in a multi-national, interdisciplinary environment. 

 

Ways to stand out from the crowd: 

  • Expertise with NCCL, Gloo, UCX, or similar libraries used in distributed AI workloads. 

  • Background in networking and communication protocols, RDMA, collective communications, or accelerator-aware networking. 

  • Deep understanding of large model training, inference serving at scale, and associated communication bottlenecks. 

  • Knowledge of quantization, tensor/activation fusion, or memory optimization for inference. 

  • Familiarity with infrastructure for deployment of LLMs or transformer-based models, including sharding, pipelining, or hybrid parallelism.  
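As a small illustration of the quantization point above: symmetric per-tensor int8 quantization is one common way to shrink inference memory traffic. The sketch below is a minimal, dependency-free example of the technique; the function names are hypothetical and not drawn from any particular library.

```python
def quantize_int8(values):
    """Symmetric per-tensor int8 quantization: map floats onto [-127, 127].

    The scale is chosen so the largest-magnitude value maps to +/-127;
    all-zero inputs fall back to scale 1.0 to avoid division by zero.
    """
    amax = max(abs(v) for v in values)
    scale = amax / 127.0 if amax > 0 else 1.0
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale


def dequantize_int8(q, scale):
    """Recover approximate float values from the int8 codes."""
    return [x * scale for x in q]
```

The round trip loses at most half a quantization step per element, which is the accuracy/bandwidth trade-off that per-channel scales and finer-grained schemes refine.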

At NVIDIA, you’ll work alongside some of the brightest minds in the industry, pushing the boundaries of what’s possible in AI and high-performance computing. If you're passionate about distributed systems, AI inference, and solving problems at scale, we want to hear from you. 
NVIDIA is at the forefront of breakthroughs in Artificial Intelligence, High-Performance Computing, and Visualization. Our teams are made up of driven, innovative professionals dedicated to advancing the state of the art. We offer highly competitive salaries, an extensive benefits package, and a work environment that promotes diversity, inclusion, and flexibility. As an equal opportunity employer, we are committed to fostering a supportive and empowering workplace for all.

NVIDIA London, England Office

13th Floor One Angel Court, London, United Kingdom, EC2R 7HJ

What you need to know about the London Tech Scene

London isn't just a hub for established businesses; it's also a nursery for innovation. Boasting one of the most recognized fintech ecosystems in Europe and attracting billions in investment each year, London has become a go-to destination for startups looking to make their mark. Top U.K. companies like Hopin, Moneybox and Marshmallow have already made the city their base — yet fintech is just the beginning. From healthtech to renewable energy to cybersecurity and beyond, the city's startups are breaking new ground across a range of industries.
