
NVIDIA

HPC and AI Software Architect

Sorry, this job was removed at 04:09 p.m. (GMT) on Wednesday, Dec 10, 2025
In-Office or Remote
Hiring Remotely in UK


NVIDIA has been redefining computer graphics, PC gaming, and accelerated computing for more than 25 years. Today, we lead in artificial intelligence, driving advances in natural language processing, computer vision, autonomous systems, and scientific research. We are looking for a forward-thinking HPC and AI Inference Software Architect to help shape the future of scalable AI infrastructure—focusing on distributed training, real-time inference, and communication optimization across large-scale systems. Join our world-class team of researchers and engineers building next-generation software and hardware systems that power the most demanding AI workloads on the planet. 

 

What you will be doing: 

  • Design and prototype scalable software systems that optimize distributed AI training and inference—focusing on throughput, latency, and memory efficiency. 

  • Develop and evaluate enhancements to communication libraries such as NCCL, UCX, and UCC, tailored to the unique demands of deep learning workloads. 

  • Collaborate with AI framework teams (e.g., TensorFlow, PyTorch, JAX) to improve integration, performance, and reliability of communication backends. 

  • Co-design hardware features (e.g., in GPUs, DPUs, or interconnects) that accelerate data movement and enable new capabilities for inference and model serving. 

  • Contribute to the evolution of runtime systems, communication libraries, and AI-specific protocol layers. 
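The collective-communication work described above centers on patterns like ring all-reduce, which libraries such as NCCL implement over NVLink, InfiniBand, and Ethernet. As an illustration only — a single-process Python simulation of the data-movement schedule, not NVIDIA's implementation — the algorithm's two phases can be sketched as:

```python
def ring_allreduce(vectors):
    """Simulate ring all-reduce: one vector per rank, length divisible
    by the number of ranks.  Returns the summed vector each rank holds
    at the end (one copy per rank, all identical)."""
    world = len(vectors)
    n = len(vectors[0])
    assert n % world == 0, "vector length must split evenly into chunks"
    chunk = n // world
    # Each rank's buffer, split into `world` contiguous chunks.
    buf = [[list(v[c * chunk:(c + 1) * chunk]) for c in range(world)]
           for v in vectors]

    # Phase 1: reduce-scatter.  In step s, rank r sends chunk (r - s)
    # to rank (r + 1), which accumulates it.  After world - 1 steps,
    # rank r holds the fully reduced chunk (r + 1) mod world.
    for s in range(world - 1):
        sent = [list(buf[r][(r - s) % world]) for r in range(world)]
        for r in range(world):
            dst, c = (r + 1) % world, (r - s) % world
            for i, v in enumerate(sent[r]):
                buf[dst][c][i] += v

    # Phase 2: all-gather.  In step s, rank r forwards chunk
    # (r + 1 - s) to rank (r + 1), which overwrites its stale copy.
    for s in range(world - 1):
        sent = [list(buf[r][(r + 1 - s) % world]) for r in range(world)]
        for r in range(world):
            dst, c = (r + 1) % world, (r + 1 - s) % world
            buf[dst][c] = list(sent[r])

    # Flatten each rank's chunks back into a vector.
    return [[x for c in rank for x in c] for rank in buf]
```

Real implementations overlap each step's sends and receives with computation and move the chunks over GPU-to-GPU links; the simulation captures only the communication schedule, which is what makes the pattern bandwidth-optimal.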

 

What we need to see: 

  • Ph.D. or equivalent industry experience in computer science, computer engineering, or a closely related field. 

  • 2+ years of experience in systems programming, parallel or distributed computing, or high-performance data movement. 

  • Strong programming background in C++, Python, and ideally CUDA or other GPU programming models. 

  • Practical experience with AI frameworks (e.g., PyTorch, TensorFlow) and familiarity with how they use communication libraries under the hood. 

  • Experience in designing or optimizing software for high-throughput, low-latency systems. 

  • Strong collaboration skills in a multi-national, interdisciplinary environment. 

 

Ways to stand out from the crowd: 

  • Expertise with NCCL, Gloo, UCX, or similar libraries used in distributed AI workloads. 

  • Background in networking and communication protocols, RDMA, collective communications, or accelerator-aware networking. 

  • Deep understanding of large model training, inference serving at scale, and associated communication bottlenecks. 

  • Knowledge of quantization, tensor/activation fusion, or memory optimization for inference. 

  • Familiarity with infrastructure for deployment of LLMs or transformer-based models, including sharding, pipelining, or hybrid parallelism.  
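Of the inference-optimization topics listed above, symmetric per-tensor int8 quantization is the simplest to illustrate. The sketch below is illustrative only — production stacks typically use per-channel scales and calibration data rather than a single max-abs scale:

```python
def quantize_int8(values):
    """Symmetric per-tensor int8 quantization: map floats onto
    [-127, 127] with a single scale derived from the max |value|."""
    amax = max(abs(v) for v in values) or 1.0
    scale = amax / 127.0
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate floats from the int8 codes."""
    return [x * scale for x in q]
```

Quantizing a weight tensor this way cuts its memory footprint to a quarter of fp32, at the cost of the rounding error visible on dequantization — the trade-off the role would analyze at scale.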

At NVIDIA, you’ll work alongside some of the brightest minds in the industry, pushing the boundaries of what’s possible in AI and high-performance computing. If you're passionate about distributed systems, AI inference, and solving problems at scale, we want to hear from you. 
NVIDIA is at the forefront of breakthroughs in Artificial Intelligence, High-Performance Computing, and Visualization. Our teams are composed of driven, innovative professionals dedicated to pushing the boundaries of technology. We offer highly competitive salaries, an extensive benefits package, and a work environment that promotes diversity, inclusion, and flexibility. As an equal opportunity employer, we are committed to fostering a supportive and empowering workplace for all. 

NVIDIA London, England Office

13th Floor One Angel Court, London, United Kingdom, EC2R 7HJ
