Company Overview
Soros Fund Management LLC (SFM) is a global asset manager and family office founded by George Soros in 1970. With $28 billion in assets under management (AUM), SFM serves as the principal asset manager for the Open Society Foundations, one of the world’s largest charitable foundations dedicated to advancing justice, human rights, and democracy.
Distinct from other investment platforms, SFM thrives on agility, acting decisively when conviction is high and exercising patience when it’s not. With permanent capital, a select group of major clients, and an unconstrained mandate, we invest opportunistically with a long-term view in a wide range of strategies and asset classes, including public and private equity and credit, fixed income, foreign exchange, and alternative assets. Our teams operate with autonomy, while cross-team collaboration strengthens our conviction and empowers us to capitalize on market dislocations.
At SFM, we foster an ownership mindset, encouraging professionals to challenge the status quo, innovate, and take initiative. We prioritize development, enabling team members to push beyond their roles, voice bold ideas, and contribute to our long-term success. This culture of continuous growth and constructive debate fuels innovation and drives efficiencies.
Our impact is measured by both the returns we generate and the values we uphold, from environmental stewardship to social responsibility. Operating as a unified team across geographies and mandates, we remain committed to our mission, ensuring a meaningful, lasting impact.
Headquartered in New York City with offices in Greenwich, Garden City, London, and Dublin, SFM employs 200 professionals.
Team Overview:
Reports To: Head of Cloud SRE & DevOps Engineering
Other Key Relationships: Cybersecurity analysts, Software Development engineers
Job Overview:
We are seeking a mid-to-senior level engineer to join our Cloud SRE & DevOps Engineering team in London, focused on building, operating, and evolving the cloud infrastructure and delivery platforms that support our trading and investment systems.
This role sits at the intersection of Cloud Engineering, SRE, DevOps, Platform Engineering, and Production Engineering, and is designed for individuals who take ownership of systems running in production. You will be responsible for designing and operating resilient, scalable environments across AWS and Kubernetes, while enabling engineering teams through modern DevOps and GitOps practices.
The position reflects a strong Site Reliability Engineering (SRE) mentality, with emphasis on reliability, observability, automation, and operational excellence. You will play a key role in advancing the firm’s cloud transformation strategy, ensuring systems are built for performance, stability, and scale.
This is a hands-on role requiring deep technical expertise, accountability for production systems, and a mindset oriented toward continuous improvement, risk management, and engineering efficiency. The role also contributes to evolving platform capabilities supporting AI and data-driven workloads.
You will work closely with software engineering, cybersecurity, and data teams to deliver secure, scalable, and high-performing systems, while improving developer experience and platform maturity across the organization.
Major Responsibilities:
Cloud Infrastructure & Kubernetes
Design, build, and operate scalable AWS-based infrastructure supporting trading systems
Work across hybrid or multi-cloud environments (AWS and Azure)
Manage and optimize Kubernetes environments (EKS), ensuring resilience, scalability, and performance
Utilize Kubernetes ecosystem tooling (e.g., Helm) to support application deployment and lifecycle management
Develop reusable infrastructure using Terraform and infrastructure as code principles
Database Cloud Management & Administration
Manage the automated delivery of Snowflake configurations through CI/CD pipelines
Assist with the administration of Microsoft SQL Server, Snowflake, and AWS Aurora
CI/CD & DevOps Practices
Design and maintain CI/CD pipelines using tools such as GitHub Actions
Promote Git-based workflows and support GitOps practices (e.g., ArgoCD)
Improve deployment reliability, consistency, and engineering velocity
Production Engineering & Reliability
Own and operate business-critical production systems with a strong focus on uptime, performance, and risk mitigation
Troubleshoot and resolve complex issues across distributed systems, cloud infrastructure, and Kubernetes environments
Implement and enhance monitoring, logging, and alerting using tools such as Datadog, AWS CloudWatch, Geneos, and LogicMonitor
Apply cloud security best practices, including IAM, secrets management, and vulnerability scanning
Support and optimize relational database systems (e.g., PostgreSQL, MySQL, SQL Server, Aurora), ensuring performance and high availability
Contribute to backup, recovery, and resilience strategies across infrastructure and data layers
Drive improvements aligned with SRE principles, including reliability, observability, and operational maturity
Developer Enablement
Improve developer experience through automation, standardization, and self-service capabilities
Partner with engineering teams to streamline development and deployment workflows
AI & Emerging Workloads
Support infrastructure for AI/ML and data-driven workloads, including scalable compute and data processing patterns
Enable deployment patterns for modern, data-intensive applications
Evaluate emerging technologies relevant to AI-enabled platforms
What We’re Looking For:
Core Technical Expertise
Strong hands-on experience with AWS in production environments
Experience working in hybrid or multi-cloud environments (AWS and Azure)
Deep experience with Kubernetes and containerized systems (Docker, EKS)
Strong familiarity with Kubernetes tooling, including Helm
Proven experience with infrastructure as code (Terraform preferred)
Strong experience building and managing CI/CD pipelines (GitHub Actions or similar)
Proficiency in Python, Shell, or PowerShell for automation
Production & Systems Engineering
Strong understanding of distributed systems, networking, and cloud architecture
Experience operating and supporting production systems with high availability and performance requirements
Experience diagnosing and resolving complex production incidents in distributed environments
Experience supporting relational databases (e.g., PostgreSQL, MySQL, SQL Server, Aurora) in cloud environments
Monitoring & Observability
Hands-on experience with observability tools such as Datadog, AWS CloudWatch, Geneos, or LogicMonitor (or comparable enterprise monitoring platforms), including designing metrics, alerts, and dashboards for production systems
DevOps & Engineering Practices
Strong grounding in DevOps methodologies, including automation, continuous delivery, and infrastructure standardization
Experience working with Git-based workflows and modern software engineering practices
Ability to balance speed, stability, and risk in production environments
Professional Attributes
Strong analytical and troubleshooting skills
Ability to operate effectively in high-performance, time-sensitive environments
Clear communication across technical and non-technical stakeholders
Ownership mindset with accountability for critical systems
Nice to Have:
AI & Data
Experience supporting AI/ML workloads or data platforms
Familiarity with LLM APIs, vector databases, or GPU-based environments
Exposure to modern data platforms such as Databricks, Google BigQuery, or Amazon Redshift
Financial Services
Experience supporting trading or investment systems
Understanding of trading workflows (OMS, order entry, trade booking), market data, or FIX protocol
What We Value:
Bachelor's degree in Engineering, Computer Science, Information Systems, or equivalent experience.
Hands-on experience with cloud platforms such as AWS, Azure, and Snowflake, with a solid understanding of IaaS, PaaS, and SaaS in hybrid environments.
Experience in cloud infrastructure or server-side development, with a fundamental understanding of containerization and Kubernetes.
Proficiency in programming languages like PowerShell and Python, with experience in DevOps tools such as Terraform, Docker, and Kubernetes.
Deep understanding of CI/CD tools (e.g., Octopus, Bamboo, Azure DevOps, GitHub Actions) and configuration management.
Expertise in automation scripting, infrastructure as code (IaC), and immutable infrastructure using tools like AWS CloudFormation or Terraform.
Familiarity with observability frameworks (e.g., Datadog, OpenSearch, and LogicMonitor).
Familiarity with networking, storage, and database concepts.
Ability to craft clear, effective communications for both technical and non-technical audiences.
Strong analytical and troubleshooting skills, attention to detail, ability to multitask, and work effectively in a fast-paced environment.
Proficiency in writing scripts using Python, PowerShell, or Bash, and experience with technologies like Docker, AWS Lambda, and Kubernetes.
Hands-on experience with Terraform, Packer, Git, and Jira is considered a plus, as is experience integrating tooling with Jira and GitHub.
Experience in designing, developing, and implementing ETL/ELT processes using Snowflake and related technologies.
In addition to a base salary, the successful candidate will also be eligible to receive a discretionary year-end bonus.
In all respects, candidates need to reflect the following SFM core values:
Integrity // Teamwork // Smart risk-taking // Owner’s Mindset // Humility
Discover how SFM continues to drive impactful investments and supports the global mission of the Open Society Foundations.

