IDBS helps BioPharma organizations unlock the potential of AI/ML to improve the lives of patients. As a trusted long-term partner to 80% of the top 20 global BioPharma companies, IDBS delivers powerful cloud software and services specifically designed to meet the evolving needs of the BioPharma sector.
IDBS, a Danaher company, leverages 35 years of scientific informatics expertise to help organizations design, execute and orchestrate processes, manage, contextualize and structure data and gain valuable insights throughout the product lifecycle, from R&D through manufacturing. Known for its signature IDBS E-WorkBook software, IDBS has extended its flexible, scalable solutions to the IDBS Polar and PIMS cloud platforms to help scientists make smarter decisions with assured confidence in both GxP and non-GxP environments.
Do you want to work in a dynamic, fast-paced, high-performing, safe-to-fail and fun environment founded on trust, empowerment and autonomy? Do you enjoy solving complex customer problems as a team?
The Principal AI Test Engineer plays a critical role in shaping and advancing IDBS’s AI test strategy across our engineering organisation. Initially operating within the team developing our first AI product, this role provides technical leadership in AI testing and related tooling, growing in scope as our products and customer demands evolve. The role combines hands‑on expertise with strategic influence, supporting the organisation as our AI product portfolio continues to expand.
What we’ll get you doing:
- AI Quality Strategy & Governance: Own and define the end‑to‑end testing strategy for AI/ML systems, establishing standards for model quality, robustness, fairness, explainability, and regulatory compliance across the organisation.
- Advanced Validation & Risk Assessment: Design and lead sophisticated validation approaches for AI models and data pipelines, including bias detection, drift monitoring, adversarial testing, and failure‑mode analysis in production environments.
- Test Architecture & Automation Leadership: Architect scalable, automated test frameworks for AI systems (models, data, pipelines, and integrations), leveraging MLOps, CI/CD, synthetic data, and observability tooling.
- Technical Leadership & Mentorship: Act as the subject‑matter expert for AI testing, mentoring engineers, influencing engineering best practices, and partnering closely with data science, platform, legal, and product teams.
- Stakeholder Influence & Continuous Improvement: Translate complex AI risks and quality metrics into clear insights for senior stakeholders, shaping product decisions and continuously improving AI quality, reliability, and trustworthiness at scale.
Here is what success in this role looks like:
- High-quality testing of our AI systems, first within specific teams and then across the organisation.
- Our AI applications are consistent, robust, fair, explainable and observable, and deliver the results that our customers expect.
- Test outcomes are transparent, highly visible, immediately available and acted on, and an integral part of the SDLC and automated release process.
- Continued expansion of the breadth, depth and quality of our AI product portfolio.
- The wider organisation, and our customers, have full confidence in our AI products and the results that are produced.
What you’ll bring:
- Previous experience testing AI/ML systems (essential).
- Experience automating AI testing, plus a good knowledge of other forms of testing and of working in an Agile environment.
- A background in influencing standards and practices across multiple teams without direct line management responsibility.
#LI-Hybrid
Join our winning team today. Together, we’ll accelerate the real-life impact of tomorrow’s science and technology. We partner with customers across the globe to help them solve their most complex challenges, architecting solutions that bring the power of science to life.
For more information, visit www.danaher.com.