
Nucleus Global

Data Engineer

Reposted 7 Days Ago
Easy Apply
Remote
Hiring Remotely in United Kingdom
Mid level
The Data Engineer will build and maintain data pipelines, architecture, and CI/CD infrastructure, collaborate with stakeholders, and support AI components to enhance data management capabilities.

Inizio is the world’s leading healthcare and communications group, providing marketing and medical communications services to healthcare clients. The group has five main divisions: Medical, Advisory, Engage, Evoke and Biotech. Our Medical division focuses on communicating evidence on new scientific and drug developments and on educating healthcare professionals and payers on the appropriate use of therapy.


We have a fantastic opportunity for a Data Engineer to support the build-out of AI capabilities across Inizio Medical.

 

Key Responsibilities

  • Build scalable and efficient data pipelines.
  • Design the data architecture (including data models, schemas, and data pipelines) to process complex data from a variety of sources.
  • Build and maintain the CI/CD infrastructure used to host and run data pipelines.
  • Build and maintain data APIs.
  • Set up, support, interact with, and maintain AI components, including generative and machine learning models.
  • Build mechanisms for monitoring data quality and accuracy to ensure the reliability and integrity of data.
  • Evaluate data technologies and make technical decisions on the most suitable option for business needs (including security, cost, etc.).
  • Collaborate with Data Scientists, Data Analysts, Software Developers, and other stakeholders to understand data requirements.
  • Work closely with System Admins and Infrastructure teams to integrate data engineering platforms effectively into wider group platforms.
  • Stay abreast of new and emerging data engineering technologies and be an active champion of data engineering.
  • Monitor and optimise the performance of data systems, troubleshoot issues, and implement solutions to improve efficiency and reliability.
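By way of illustration only (this sketch is not part of the role description), a data-quality gate of the kind described above might validate records before they are loaded downstream. All function and field names here are hypothetical:

```python
# Illustrative sketch of a minimal data-quality check a pipeline step might
# run before loading data downstream. Field names are hypothetical examples.

def check_quality(records, required_fields):
    """Split records into (valid, invalid) based on required, non-null fields."""
    valid, invalid = [], []
    for rec in records:
        if all(rec.get(f) is not None for f in required_fields):
            valid.append(rec)
        else:
            invalid.append(rec)  # quarantined for review, not loaded
    return valid, invalid

if __name__ == "__main__":
    rows = [
        {"id": 1, "name": "aspirin", "dose_mg": 75},
        {"id": 2, "name": None, "dose_mg": 100},  # missing name: fails the check
    ]
    good, bad = check_quality(rows, ["id", "name", "dose_mg"])
    print(len(good), len(bad))  # → 1 1
```

In practice such checks would typically run inside a framework such as Spark or a dedicated data-quality tool, but the principle of gating and quarantining bad records is the same.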

To succeed, you will need:

  • Strong proficiency in Python.
  • Experience working with generative AI models, including their deployment and orchestration.
  • A solid understanding of database technologies and modelling techniques, including relational and NoSQL databases.
  • Experience setting up and managing Databricks environments.
  • Competence working with Spark.
  • A solid understanding of data warehouse modelling techniques.
  • Competence in setting up CI/CD and DevOps pipelines.
  • Experience with the Azure and AWS cloud platforms and their associated data technologies (essential).
  • Experience with and an understanding of graph technologies and modelling techniques (desirable).
  • Experience with GCP and Scala (desirable).
  • Excellent communication skills, with the ability to explain complex data and technical concepts to stakeholders with varying levels of technical awareness.
  • The ability to work collaboratively.

In addition to a great compensation and benefits package, including private medical insurance and a company pension, we are happy to talk dynamic working. We are also known for our friendly and informal working environment and offer excellent opportunities for career and personal development.



Don't meet every job requirement? That's okay! Our company is dedicated to building a diverse, inclusive, and authentic workplace. If you're excited about this role, but your experience doesn't perfectly fit every qualification, we encourage you to apply anyway. You may be just the right person for this role or others.


Top Skills

AWS
Azure
Databricks
GCP
Python
Spark


