At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.
Snapshot
As a Research Engineer on the Sociotechnical Analysis of Model Behaviour (SAMBA) team, you will conduct cutting-edge research on model behaviours that affect people and society. You will work closely with Research Scientists on the team to operationalise research on model behaviour relevant to Responsible AI. As part of this work, you will be responsible for building out frontier evaluations of model behaviour and helping to integrate them into existing infrastructure.
About Us
Artificial Intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.
The Role
You will contribute to empirical and conceptual research that helps to identify and define appropriate model behaviour in context. You will work closely with Research Scientists to transform research findings into concrete, measurable metrics and systematic evaluations. This position is a 12-month, fixed-term contract.
Key responsibilities:
- Contribute to research design to conduct experiments on model behaviour
- Automate evaluation processes and improve their performance
- Build and maintain evaluation pipelines
- Perform data analysis to synthesise and visualise research findings and evaluation results
- Contribute to research publications
About You
You are a Research Engineer who is interested in applying your skills to better understand model behaviour in areas with large impacts on people and society. You are passionate about evaluation and quantification, and you are able to work with technical teams to integrate your work into existing platforms and infrastructure.
In order to set you up for success as a Research Engineer at Google DeepMind, we look for the following skills and experience:
- A master's degree or equivalent experience in Computer Science, ML, Data Science, or a similar technical field
- Strong programming skills: proficiency in Python (with experience in relevant libraries such as pandas, NumPy, and scikit-learn) or other scripting languages
- A solid understanding of statistical concepts, and the ability to perform basic data analysis and visualisation
- Experience with or an interest in AI evaluation: familiarity with designing and conducting evaluations of AI models, applying benchmarks, etc.
- Good communication and collaboration skills: an ability to effectively communicate and work with researchers from a wide variety of backgrounds, including non-technical backgrounds
- Comfortable working in a fast-paced environment
In addition, the following would be an advantage:
- A history of high-quality research touching on topics relevant to Responsible AI
Note: In the event your application is successful and an offer of employment is made to you, any offer of employment will be conditional on the results of a background check, performed by a third party acting on our behalf. For more information on how we handle your data, please see our Applicant and Candidate Privacy Policy.
What We Do
Our long-term aim is to solve intelligence, developing more general and capable problem-solving systems, known as artificial general intelligence (AGI).
Guided by safety and ethics, this invention could help society find answers to some of the world’s most pressing and fundamental scientific challenges.
We have a track record of breakthroughs in fundamental AI research, published in journals including Nature and Science. Our programs have learned to diagnose eye diseases as effectively as the world's top doctors, to save 30% of the energy used to keep data centres cool, and to predict the complex 3D shapes of proteins, which could one day transform how drugs are invented.