Job Title: Data Engineer Manager

Job Description:
Mission: create a repeatable, governed, AI-ready data factory on Azure + Snowflake, built around event-driven architectures, so that every dollar invested turns into reliable, high-performance data products and services the trading business can trust at market speed.
Reporting line: Global Head of Data
Scope: Enterprise-wide data engineering across oil, gas and power trading; teams in multiple regions; close partnership with Enterprise Data Architecture, Data Governance, Platform/Operations, and Security.
Own the strategy and execution of best-in-class data engineering to deliver state-of-the-art data products and services at scale. Build and operate a modern estate on Azure + Snowflake, centred on event-driven architectures and high-throughput ingestion pipelines that feed analytics, risk, and AI/ML safely and cost-effectively. Establish the standards, tooling and talent model that convert complex trading data into fast, reliable, governed, and reusable products, aligned to the firm’s semantic/knowledge-graph backbone.
Main Responsibilities
Engineering Strategy & Roadmap
Define and execute the global data engineering strategy (ingest → govern → serve → observe), aligned with enterprise architecture and governance.
Standardise event patterns (Kafka/Flink), ELT (dbt/Spark/SQL), and serving layers (APIs/SQL/Graph) across regions.
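For illustration only, a minimal sketch of the kind of standardised event-consumption pattern this implies, assuming the confluent_kafka Python client; the broker address, topic name and sink function are hypothetical, not a prescribed implementation:

```python
import json

from confluent_kafka import Consumer

def load_to_warehouse(event: dict) -> None:
    """Stand-in for the real ELT sink (e.g. a Snowflake stage/stream); hypothetical."""
    print(f"loaded {event.get('curve_id', '?')}")

conf = {
    "bootstrap.servers": "broker:9092",  # hypothetical broker
    "group.id": "curves-ingest",         # hypothetical consumer group
    "auto.offset.reset": "earliest",
    "enable.auto.commit": False,         # commit only after the downstream write succeeds
}

consumer = Consumer(conf)
consumer.subscribe(["market.curves.v1"])  # versioned topic name supports schema evolution

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None:
            continue
        if msg.error():
            raise RuntimeError(msg.error())  # surface broker errors to observability
        event = json.loads(msg.value())
        load_to_warehouse(event)             # validate and land, then commit (at-least-once)
        consumer.commit(message=msg, asynchronous=False)
finally:
    consumer.close()
```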
Platform & Product Delivery
Industrialise scalable pipelines for market/curve, ETRM/CTRM, SCADA/time-series, logistics and finance/settlement data with SLOs, lineage and DR/BCP.
Enable AI/ML at scale: feature/label pipelines, vector stores, policy-aware retrieval, evaluation hooks and model/LLM registry integration.
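To make "policy-aware retrieval" concrete, a toy sketch that filters candidates by an entitlement tag before similarity ranking; the embeddings, tags and payloads are invented for the example:

```python
import numpy as np

# toy in-memory "vector store": (embedding, policy tag, payload); all invented
DOCS = [
    (np.array([0.9, 0.1, 0.0]), "desk:power", "EU power curve commentary"),
    (np.array([0.1, 0.9, 0.0]), "desk:gas",   "TTF storage note"),
    (np.array([0.8, 0.2, 0.1]), "desk:power", "Outage-driven spread analysis"),
]

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query_vec: np.ndarray, allowed_tags: set[str], k: int = 2) -> list[str]:
    """Rank only documents the caller is entitled to see (policy before similarity)."""
    candidates = [(emb, payload) for emb, tag, payload in DOCS if tag in allowed_tags]
    ranked = sorted(candidates, key=lambda c: cosine(query_vec, c[0]), reverse=True)
    return [payload for _, payload in ranked[:k]]

# a power-desk user never sees gas-desk documents, regardless of similarity
print(retrieve(np.array([1.0, 0.0, 0.0]), allowed_tags={"desk:power"}))
```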
Standards & Quality
Mandate data contracts, DQ rules, OpenLineage, security baselines (RBAC/ABAC, masking, retention) and FinOps tagging; codify golden paths/templates.
Drive performance engineering (p95/p99 targets, replay, back-pressure) and cost optimisation (tiering, compression, autoscaling).
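To make the two bullets above concrete, a minimal sketch of how a data contract with DQ rules, a retention policy and a p95 latency target might be codified as a golden-path template; all field names, thresholds and retention periods are invented:

```python
import statistics
from dataclasses import dataclass

@dataclass
class DataContract:
    """Illustrative contract: required schema, retention, and a latency SLO."""
    name: str
    required_fields: set[str]
    retention_days: int
    p95_latency_ms: float  # serving SLO, checked against observed samples

    def check_record(self, record: dict) -> list[str]:
        missing = self.required_fields - record.keys()
        return [f"missing field: {f}" for f in sorted(missing)]

    def check_latency(self, samples_ms: list[float]) -> bool:
        # statistics.quantiles(n=100) yields 99 cut points; index 94 is the p95
        p95 = statistics.quantiles(samples_ms, n=100)[94]
        return p95 <= self.p95_latency_ms

curve_contract = DataContract(
    name="market.curves.v1",  # hypothetical product name
    required_fields={"curve_id", "as_of", "tenor", "price"},
    retention_days=2555,      # e.g. a 7-year retention policy
    p95_latency_ms=250.0,
)

print(curve_contract.check_record({"curve_id": "TTF", "as_of": "2024-01-02"}))
print(curve_contract.check_latency([120.0, 180.0, 210.0, 90.0, 300.0] * 20))
```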
People, Capacity & Sourcing
Build and coach high-performing squads; manage the engineering capacity plan; anticipate peaks and scale out via vetted staff-augmentation partners without lowering the bar.
Run an objective skills framework, hiring rubric, and career paths; ensure global follow-the-sun support on critical flows.
Run & Reliability
Own operational excellence: observability (metrics/logs/traces), incident management, post-incident reviews, and continuous hardening of critical paths (market close, risk runs, nominations).
Stakeholder Leadership
Partner with Trading, Risk, Ops/Logistics, Finance/Settlement and Compliance to prioritise a value-based backlog; communicate trade-offs on latency, cost and control.
Align with Architecture on ontology/knowledge-graph mapping; with Governance on evidence and controls; with Platform/Operations on environments, access and DR.
What “Good” Looks Like (Outcomes & KPIs)
Reliability: SLOs met on market-critical paths; deterministic replay proven quarterly; MTTR trending down.
Speed & Reuse: Time-to-first-value for new products reduced by >50%; adoption of golden paths/templates across squads >60%.
Cost: Unit economics (cost per product/feature/inference) visible; 15–25% cost-to-serve reduction through optimisation/deprecation.
Compliance: Zero critical audit findings on lineage, access, retention; automated evidence packs.
Talent & Capacity: Bench strength in core skills; surge capacity activated without quality or security regressions.
Profile
Bachelor’s degree or higher in Computer Science, Engineering, Applied Mathematics, or a related field.
12+ years in data engineering/platform roles, including 5+ years leading multi-region teams in real-time, regulated environments (ideally commodity trading/energy/financial markets).
Track record delivering at scale on Azure (Identity/Key Vault/AKS/Functions/ADF) and Snowflake (performance, security, cost controls).
Deep hands-on leadership in Kafka/Flink, dbt/Spark/SQL, API/stream serving, and performance/DR design.
Proven enablement of AI/ML foundations (feature pipelines, vector/RAG datasets, evaluation, registries) integrated with governance.
Demonstrated vendor management and staff augmentation leadership (selection, onboarding, QA, and exit/portability).
English (fluent); any additional language is an asset.
Core Competencies & Skills
Engineering excellence: event design, time-series/curve patterns, schema evolution, replay, SLAs/SLOs.
Governance by design: data contracts, DQ/lineage, RBAC/ABAC, masking/retention, SoD; audit-ready automation.
FinOps literacy: tagging discipline, capacity planning, rightsizing and lifecycle policies; clear cost storytelling.
People leadership: hiring, coaching, performance management; builds inclusive, high-accountability culture.
Executive communication: crisp updates, escalation discipline, clear trade-offs; trusted by desk heads and control functions.
Key Collaboration Interfaces
Global Head of Data: strategy, budget, risk appetite, and executive reporting.
Lead Data Solution Architect & team: domain roadmaps, solution assurance, reuse/adoption metrics.
Platform Ops and Data Engineering: CI/CD, observability, identity/secrets, DR/BCP and FinOps.
Data and AI Governance, Risk, Compliance & Internal Audit: model risk, evidence automation, regulatory readiness, and fine-grained FinOps enablement.
Business Lines (Trading, Risk, Ops/Logistics, Asset Ops/SCADA, Finance/Settlement, Market Analysis): value mapping and SLO reviews.
Signature Deliverables (first 6–12 months)
90 days: baseline reliability/cost on critical domains (curves, positions); publish engineering playbook (golden paths, SLOs, lineage, DR); confirm capacity plan and surge provider roster.
180 days: deliver a platform uplift (e.g., unified ingestion + versioned curve service) and one AI-enablement uplift (feature/vector platform + policy-aware retrieval); achieve measurable gains (≥20% latency or ≥15% cost improvement) on a key path; standard templates adopted by ≥3 squads.
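For flavour, a "versioned curve service" usually implies as-of lookups so risk reruns are deterministic; a toy sketch with invented names and data, not the actual service design:

```python
from bisect import bisect_right
from datetime import datetime

# toy versioned curve store: curve_id -> (published_at, curve_points) versions,
# kept sorted by publication time; names and numbers are invented
VERSIONS: dict[str, list[tuple[datetime, dict[str, float]]]] = {
    "TTF": [
        (datetime(2024, 1, 2, 9),  {"M1": 30.1, "M2": 30.8}),
        (datetime(2024, 1, 2, 15), {"M1": 29.7, "M2": 30.2}),
    ],
}

def curve_as_of(curve_id: str, as_of: datetime) -> dict[str, float]:
    """Return the latest curve version published at or before `as_of`."""
    versions = VERSIONS[curve_id]
    idx = bisect_right([ts for ts, _ in versions], as_of)
    if idx == 0:
        raise LookupError(f"no version of {curve_id} before {as_of}")
    return versions[idx - 1][1]

# a risk rerun sees exactly what the desk saw at 10:00, not the later republish
print(curve_as_of("TTF", datetime(2024, 1, 2, 10)))  # -> morning version
```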
Nice to Have (Differentiators)
Commodity depth: power/gas, oil, derivatives, and time-series operational dashboards.
Knowledge-graph awareness: semantic layers (entity/relationship drill-through, lineage/impact views, consistent business terms).
Advanced geospatial: Mapbox/Leaflet, tiling strategies, clustering, and projection choices for assets, routes, and weather overlays.
LLM-assisted UX: Patterns for in-workflow assistants, retrieval-augmented explanations, and safe inline summarisation of alerts/incidents.
Design for low-latency streams: Live updates, batching, and diff-only rendering for high-frequency market data.
BI engineering partnership: Custom visual specs, semantic model constraints (star/snowflake), and row-level security/RBAC considerations.
Security basics: Understanding of ABAC/RBAC, PII handling, export controls, and auditability of user actions.
Performance tuning: p95/p99 render targets, bundle hygiene, virtualisation of large grids, and caching strategies.
Quant empathy: Comfortable discussing VaR/PFE math at a conceptual level to avoid misrepresenting risk semantics.
Prototyping breadth: Interactive prototypes wired to mock APIs; ability to script lightweight data fixtures.
Change management: Training kits, walkthroughs, and adoption campaigns for front-office and operations users.
Additional Skills
Highly numerate, rigorous, and resilient in problem-solving.
Ability to prioritize, multitask, and deliver under time constraints.
Strong written and verbal communication in English; able to explain technical topics clearly.
Self-motivated, proactive, and detail-oriented.
Comfortable working under pressure in a fast-paced environment.
Team player with ability to collaborate across engineering, quant, and trading teams.
If you think this open position is right for you, we encourage you to apply!
Our people make all the difference in our success.

