As ADAS/AD moves toward model-driven intelligence, industry value is extending from map delivery to model training and validation. HERE is converting its map and drive data into a scalable AI model-creation platform – reusable spatial intelligence that powers training, validation, and next-generation ADAS/AD performance.
The role
As Engineering Director – Perception & Spatial AI, you'll lead the architecture and delivery of perception models that run from cloud training through to deployment on automotive-grade hardware. A key part of the challenge is designing for deployment from day one: building models that meet strict latency and memory constraints on embedded/edge platforms without compromising real-world performance. You will define the architecture, build a world-class team, and own the full journey from research through to deployment-ready models at scale.
What you’ll do
- Define and lead the end-to-end perception architecture—from cloud training to deployment-ready model variants for automotive-grade SoCs (e.g., Qualcomm Snapdragon Ride, NVIDIA Orin, or similar).
- Drive a deployment-first approach across architecture decisions, including quantization, latency targets, and memory constraints.
- Turn state-of-the-art perception research into reliable, scalable production pipelines (cloud + edge model variants).
- Guide BEV / multi-camera perception focused on road infrastructure (lanes, boundaries, signs, traffic lights, road surface attributes).
- Define evaluation and validation standards, including hardware-aware metrics (latency vs accuracy trade-offs, memory footprint, throughput on reference hardware).
- Partner closely with research, simulation, product, and customer/partner teams to ensure outputs are usable by downstream systems and meet real deployment needs.
- Stay hands-on by building and reviewing architectures, debugging critical issues, and prototyping new approaches.
- Mentor and grow a high-performing team (hiring, mentoring, setting technical direction, and establishing strong engineering practices).
Must-Have Experience
- 10+ years of experience in ML, AI, computer vision, robotics, autonomous driving, spatial AI, or related fields.
- 5+ years of hands-on experience with computer vision, perception, or scene understanding systems.
- Proven experience taking ML or computer vision models from the research or prototype stage into production systems.
- Strong understanding of perception tasks such as object detection, semantic segmentation, instance segmentation, lane detection, road boundaries, signs, traffic lights, or road surface attributes.
- Experience with deep learning frameworks, preferably PyTorch.
- Strong understanding of modern computer vision architectures, including multi-task learning and spatial scene understanding.
- Experience working with large-scale training pipelines, including distributed training, experiment tracking, and model versioning.
- Practical experience optimizing ML models for production, including latency, memory, throughput, and accuracy trade-offs.
- Familiarity with model deployment workflows such as ONNX export, TensorRT or similar inference optimization frameworks.
- Experience working with edge, embedded, automotive, robotics, mobile, or other hardware-constrained deployment environments.
- Strong technical leadership experience, including leading engineering or applied research teams, setting technical direction, mentoring engineers, and hiring talent.
- Ability to work across research, engineering, product, platform, and customer-facing teams.
- Strong communication skills, with the ability to explain technical trade-offs to both technical and non-technical stakeholders.
- Experience with large-scale training setups (multi-GPU/multi-node), and the ability to set practical MLOps standards (experiment tracking, model versioning, reproducibility).
- Curious and hands-on enough to stay close to emerging trends in perception, spatial AI, efficient models, and edge deployment.
Nice-to-Have Experience
- Experience with BEV, multi-camera perception, 3D perception, lidar-camera fusion, or occupancy prediction.
- Experience with architectures such as BEVFormer, BEVFusion, or similar spatial perception models.
- Experience with automotive-grade SoCs such as NVIDIA Orin, Qualcomm Snapdragon Ride, TI TDA4, or similar platforms.
- Hands-on experience with quantization-aware training, post-training quantization, pruning, distillation, mixed-precision inference, or model compression.
- Experience benchmarking models on real hardware and working with latency, memory, and throughput constraints.
- Familiarity with QNN, TensorRT, graph optimization, operator compatibility, or hardware-specific compilation workflows.
- Experience with geospatial data, map priors, road topology, HD maps, or spatial data structures.
- Experience with synthetic data, simulation pipelines, or sim-to-real validation.
- Experience with large-scale driving or robotics datasets such as nuScenes, Waymo Open Dataset, KITTI, Argoverse, or similar.
- Exposure to automotive safety standards such as ISO 26262 or SOTIF.
- Publications or strong research contributions in computer vision, perception, robotics, or machine learning.
- Experience in high-growth, scale-up, or fast-moving product environments.
HERE is an equal opportunity employer. We evaluate qualified applicants without regard to race, color, age, gender identity, sexual orientation, marital status, parental status, religion, sex, national origin, disability, veteran status, and other legally protected characteristics.
As part of HERE Technologies' employment process, candidates will be required to successfully complete a pre-employment screening. This offer and any related claims are subject to the successful completion of that screening, which will involve employment, education, and, if applicable, criminal verification.
#LI-SS1 #LI-HYBRID
Who are we?
HERE Technologies is a location data and technology platform company. We empower our customers to achieve better outcomes – from helping a city manage its infrastructure or a business optimize its assets, to guiding drivers to their destination safely.
At HERE we take it upon ourselves to be the change we wish to see. We create solutions that fuel innovation, provide opportunity and foster inclusion to improve people’s lives. If you are inspired by an open world and driven to create positive change, join us. Learn more about us on our YouTube Channel.
HERE Technologies London, England Office
20 Eastbourne Terrace, 11th floor, London, United Kingdom, W2 6LA