Formation Bio is a tech- and AI-driven pharma company differentiated by radically more efficient drug development.
Advancements in AI and drug discovery are creating more candidate drugs than the industry can progress because of the high cost and long timelines of clinical trials. Recognizing that this development bottleneck may ultimately limit the number of new medicines that can reach patients, Formation Bio, founded in 2016 as TrialSpark Inc., has built technology platforms, processes, and capabilities to accelerate all aspects of drug development and clinical trials. Formation Bio partners with pharma companies, research organizations, and biotechs to acquire or in-license drugs, developing programs through clinical proof of concept and beyond, ultimately helping to bring new medicines to patients. The company is backed by investors across pharma and tech, including a16z, Sequoia, Sanofi, Thrive Capital, Sam Altman, John Doerr, Spark Capital, SV Angel Growth, and others.
You can read more at the following links:
- Our Vision for AI in Pharma
- Our Current Drug Portfolio
- Our Technology & Platform
At Formation Bio, our values are the driving force behind our mission to revolutionize the pharma industry. Every team and individual at the company shares these same values, and every team and individual plays a key part in our mission to bring new treatments to patients faster and more efficiently.
We're looking for a Data Engineer to join the Scientific Data Intelligence (SDI) team at Formation Bio to help build and scale the data infrastructure that powers our drug development platform. In this role, your primary focus will be supporting the Data Science team: building the coherent, well-structured data models, feature engineering pipelines, and evidence layers that underpin their scientific work. You'll also be responsible for ingesting data from a wide variety of sources (APIs, files, databases, and licensed third-party datasets) and transforming it into clean, analytics-ready assets using modern data tooling.
This is a foundational engineering role for someone who takes pride in building things right: reliable pipelines, well-structured data models, and systems that others can trust and build on. You'll work closely with Data Scientists and other engineers across the SDI team to ensure that high-quality data is consistently available where and when it's needed, and structured in a way that directly accelerates scientific discovery.
The ideal candidate is deeply fluent in Snowflake and dbt, has strong opinions about data modeling best practices, and has experience handling large and complex datasets across diverse ingestion patterns. You thrive in environments where data quality and engineering rigor are treated as first-class concerns.
Responsibilities
- Partner closely with Data Scientists and other teams to build coherent data models, feature engineering pipelines, and structured evidence layers that directly support scientific analysis and machine learning workflows.
- Design and build scalable ingestion pipelines to onboard data from diverse sources including REST APIs, flat files, relational databases, and licensed third-party datasets.
- Develop and maintain robust data models in dbt, adhering to best practices around modularity, testing, documentation, and layered architecture (staging, intermediate, mart).
- Orchestrate pipelines using Dagster to ensure reliable, observable, and maintainable workflows.
- Implement data quality checks, validation frameworks, and monitoring to ensure trustworthiness of datasets across the platform.
- Collaborate with Data Scientists and analysts to understand data needs and translate them into well-structured, reusable models.
- Handle large-scale data movement and transfer scenarios, applying appropriate tooling and patterns to ensure efficiency and reliability at scale.
- Document data models, pipeline logic, and transformation assumptions to support discoverability and data governance.
Qualifications
- You have 5+ years of experience in data engineering, with a strong track record of building and maintaining production-grade pipelines.
- You are deeply fluent in dbt and follow best practices rigorously—including modular model design, sources and refs, testing (schema + data), documentation, and environment management.
- You have hands-on expertise with Snowflake, including schema design, performance tuning, and data governance patterns.
- You have experience with at least one modern orchestration tool—we use Dagster, but experience with Airflow or Prefect is equally welcome.
- You have broad ingestion experience across source types: REST APIs, flat files (CSV, JSON, Parquet), relational databases, and vendor-licensed datasets.
- You have worked with large datasets (TB to PB scale) and understand the engineering considerations that come with scale: partitioning, incremental loading, efficient data movement, and storage optimization.
- You're a strong data modeler who thinks carefully about how data is structured, named, and layered for downstream usability.
- You value documentation, testability, and building things others can maintain and extend.
- You can balance upfront design with speed to execution, slowing down when it counts without getting stuck in the details.
- Bonus: You have experience with data quality and observability tooling such as Elementary, Great Expectations, or similar frameworks — and you treat data quality as a first-class engineering concern, not an afterthought.
- Bonus: You have experience with Spark or Databricks for large-scale data processing workloads.
- Bonus: You have experience with large-scale data transfer tooling such as AWS DataSync, AWS S3 Transfer Acceleration, or equivalent cloud-native data movement services.
- Bonus: You have experience in healthcare or life sciences data environments, including familiarity with EHR, claims, or other biomedical datasets.
Formation Bio is prioritizing hiring in key hubs, primarily the New York City and Boston metro areas. These positions will follow a hybrid work model with 1-3 days required at the office. Applicants from the Research Triangle (NC) and San Francisco Bay Area may also be considered. Please only apply if you reside in these locations or are willing to relocate.
Compensation Range: $177,500 - $232,000
Salary ranges are informed by a number of factors including geographic location. The range provided includes base salary only. In addition to base salary, we offer equity, comprehensive benefits, generous perks, hybrid flexibility, and more. If this range doesn't match your expectations, please still apply because we may have something else for you.
You will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.