Data Scientist, Responsible Development and Innovation

London, Greater London, England
3-5 Years Experience
Artificial Intelligence

At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.

About us

Artificial Intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and responsibility are the highest priority.

Snapshot

As a data scientist in the Responsible Development and Innovation (ReDI) team, you’ll be an integral team member in both developing and delivering our approach to safety evaluations of Google DeepMind’s most groundbreaking models.   

You will work with teams across Google DeepMind and with internal and external partners to ensure that our work is conducted in line with responsibility and safety best practices, helping Google DeepMind progress towards its mission.

The role

As a data scientist working in ReDI, you’ll be part of a team working on safety evaluations, using your expertise to help gather specialised data for training and evaluating our models across numerous modalities, delivering new evaluations and refining existing ones. In this role, you’ll collaborate with other members of this critical team, responding to the needs of the business in a timely manner and prioritising projects accordingly.

Key responsibilities

  • Contributing to the design and development of new evaluations, particularly focussing on content policy coverage of sensitive content (such as child safety) where dataset development, rater quality and pattern analysis are primary needs
  • Proactively engaging with prompt dataset curation, analysis and refinement to provide feedback for iteration with third-party (3P) vendors and to identify opportunities for data enhancement
  • Investigating the behaviour of our latest models to inform evaluation design
  • Investigating the accuracy of, and patterns in, human rating of evaluation outputs (an illustrative sketch of this kind of analysis follows this list)
  • Assessing the quality and coverage of safety datasets
  • Contributing to developing and running quantitative analyses for evaluations
  • Working collaboratively alongside a team of multidisciplinary specialists to deliver on priority projects
  • Communicating with wider stakeholders across ReDI, GDM/Google and third-party vendors where appropriate
  • Supporting improvements to how evaluation findings are visualised to key stakeholders and leadership
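
For illustration only, here is a minimal sketch of the kind of quantitative analysis of human ratings referenced above: computing inter-rater agreement with Cohen’s kappa. The dataset, column names and "safe"/"unsafe" rating scheme are invented for this example and are not part of the role or of any Google DeepMind tooling.

  # Illustrative sketch only: the data, column names and rating scheme
  # below are hypothetical, invented for this example.
  import pandas as pd
  from sklearn.metrics import cohen_kappa_score

  # Hypothetical ratings: two raters labelling the same five model outputs.
  ratings = pd.DataFrame({
      "prompt_id": [1, 2, 3, 4, 5],
      "rater_a": ["safe", "unsafe", "safe", "safe", "unsafe"],
      "rater_b": ["safe", "unsafe", "unsafe", "safe", "unsafe"],
  })

  # Cohen's kappa corrects raw agreement for agreement expected by chance.
  kappa = cohen_kappa_score(ratings["rater_a"], ratings["rater_b"])
  print(f"Cohen's kappa: {kappa:.2f}")

  # Raw percent agreement, for comparison.
  agreement = (ratings["rater_a"] == ratings["rater_b"]).mean()
  print(f"Raw agreement: {agreement:.0%}")

A kappa value well below the raw agreement rate is a common signal that rater guidelines or rater training need refinement.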

About you

In order to set you up for success as a data scientist in the ReDI team, we look for the following skills and experience:

  • Strong analytical and statistical skills, including data curation, data collection design, and prompt dataset curation and validation (an illustrative sketch of dataset validation follows this list)
  • Familiarity with sociotechnical considerations of generative AI, including content safety (such as child safety) and fairness
  • Ability to thrive in a fast-paced, live environment where decisions are made in a timely fashion 
  • Demonstrated ability to work within cross-functional teams, foster collaboration, and influence outcomes
  • Significant experience presenting and communicating data science findings to non-data science audiences, including senior stakeholders
  • Strong command of Python
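
Purely as an illustration of the kind of prompt dataset validation mentioned above, here is a minimal Python sketch; the schema, field names and policy categories are hypothetical, not an actual Google DeepMind pipeline.

  # Illustrative sketch only: the schema, field names and policy categories
  # are hypothetical examples, not an actual Google DeepMind pipeline.
  import pandas as pd

  def validate_prompt_dataset(df: pd.DataFrame) -> list[str]:
      """Return a list of human-readable validation issues found in df."""
      issues = []
      required = {"prompt_id", "prompt_text", "policy_category"}
      missing = required - set(df.columns)
      if missing:
          return [f"missing columns: {sorted(missing)}"]
      if df["prompt_id"].duplicated().any():
          issues.append("duplicate prompt_id values")
      if df["prompt_text"].str.strip().eq("").any():
          issues.append("empty prompt_text entries")
      # Coverage check against an invented set of expected policy categories.
      expected = {"child_safety", "hate_speech", "self_harm"}
      uncovered = expected - set(df["policy_category"])
      if uncovered:
          issues.append(f"no prompts for categories: {sorted(uncovered)}")
      return issues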

In addition, the following would be an advantage: 

  • Experience of working with sensitive data, access control, and procedures for data worker wellbeing
  • Prior experience working in product development or similar agile settings
  • Experience in sociotechnical research and content safety 
  • Demonstrated prior experience designing and implementing audits or evaluations of cutting-edge AI systems
  • Experience working at a technology company on ethics and safety topics associated with AI development, such as child safety, privacy, representational harms and discrimination, misinformation, or other areas of content or model risk

Application deadline: end of day, 25th of September 2024 

Note: In the event your application is successful and an offer of employment is made to you, any offer of employment will be conditional on the results of a background check, performed by a third party acting on our behalf. For more information on how we handle your data, please see our Applicant and Candidate Privacy Policy.


The Company
Mountain View, CA
1,218 Employees
On-site Workplace
Year Founded: 2010

What We Do

Our long-term aim is to solve intelligence, developing more general and capable problem-solving systems, known as artificial general intelligence (AGI).

Guided by safety and ethics, this invention could help society find answers to some of the world’s most pressing and fundamental scientific challenges.

We have a track record of breakthroughs in fundamental AI research, published in journals like Nature and Science. Our programs have learned to diagnose eye diseases as effectively as the world’s top doctors, to save 30% of the energy used to keep data centres cool, and to predict the complex 3D shapes of proteins, which could one day transform how drugs are invented.
