IXIS

Data Analytics Engineer

Posted 2 Days Ago
Remote
Hiring Remotely in United States
Mid level
As a Data Analytics Engineer, you will design and manage data pipelines, integrate diverse datasets, and enhance data quality and performance within our cloud environment. You'll collaborate with teams and contribute to code reviews and technical direction.

About IXIS

IXIS is a digital consultancy and technology company that empowers organizations to make smarter, faster decisions through the seamless integration of strategy, technology, and analytics. Since 2012, we’ve helped leading brands harness their marketing, advertising, and customer experience data to unlock insights, improve performance, and achieve digital transformation. Our expert team spans media strategy, data governance, analytics enablement, and platform implementation.

At the heart of our offering is ATLAS, our proprietary data activation platform, which simplifies complex data challenges by consolidating, transforming, and delivering data across tools, teams, and workflows. With ATLAS, our clients gain full visibility and control over their data ecosystems, driving measurable results and operational efficiency.

We offer competitive compensation packages including health, dental, and vision insurance, short-term and long-term disability coverage, a 401(k) with company match, flexible work schedules, a wellness plan, and exceptional growth opportunities.

This is a full-time remote position with the option of working from our office in Burlington, VT.

Overview: IXIS is seeking a senior-level Data Analytics Engineer to join our Data Analytics Engineering (DAE) team. You will play a key role in managing and evolving our data ingestion, sanitization, and transformation pipelines. Our team handles complex client data, joining Adobe and GA4 clickstream data with social, CRM, and other business data to create metrics and segments that power our data visualization products.

This is a hands-on individual contributor (IC) role. You will be expected to lead by example through high-quality design, coding, and problem-solving, and by contributing to technical direction.

You will be complementing an experienced team lead and will collaborate with other team members while contributing your own perspective and best practices. We are particularly interested in candidates who have seen different ways of doing things and can help us evolve by improving our data quality, scalability, and overall pipeline performance.

This is a high-impact role where you’ll help shape our technical direction, improve existing systems, and introduce new tools and workflows to make our data products even better.

Success in this role looks like:

  • Designing performant data pipelines for ingestion and transformation of complex datasets into usable data products.
  • Building scalable infrastructure that supports hourly, daily, and weekly update cycles.
  • Implementing automated QA checks and monitoring that catch data anomalies before they reach clients.
  • Re-architecting parts of our system to improve performance or reduce cost.
  • Supporting team members through code reviews and collaboration.

Team & Collaboration:
You’ll work alongside a senior team lead who sets technical direction, while also collaborating with other engineers, QA, data scientists, and client teams. You’ll be expected to contribute both as a builder and a mentor (everyone is a mentor; it’s part of our culture).

Responsibilities:

  • Build enterprise-grade batch and real-time data processing pipelines using AWS with a focus on serverless architectures.
  • Design and implement automated ELT processes to integrate disparate datasets.
  • Work across multiple teams to ingest, extract, and process data using Python, R, zsh, and SQL, as well as REST and GraphQL APIs.
  • Join and transform clickstream and CRM data into meaningful metrics and segments for visualization.
  • Create automated acceptance, QA, and reliability checks for business logic and data integrity.
  • Design appropriately normalized schemas and determine when to use SQL vs NoSQL solutions.
  • Optimize infrastructure and schema design for performance, scalability, and cost.
  • Help define and maintain CI/CD and deployment pipelines for data infrastructure.
  • Containerize and deploy solutions using Docker and AWS ECS.
  • Proactively identify and resolve data discrepancies and implement safeguards to prevent recurrence.
  • Contribute to documentation, onboarding materials, and cross-team enablement.

Required Education and Skills:

  • B.A./B.S. in Computer Science, Software Engineering, or a related field; training in statistics/mathematics/machine learning is a plus.
  • 3-5 years of experience building scalable, reliable data pipelines and data products in a cloud environment (AWS preferred).
  • Deep understanding of ELT processes and data modeling principles.
  • Strong programming skills in Python (or similar scripting languages).
  • Advanced SQL skills and intermediate to advanced relational database design experience.
  • Familiarity with joining large behavioral datasets like Adobe and GA4 clickstream data.
  • Excellent problem-solving skills and attention to data detail.
  • Experience managing and prioritizing multiple initiatives with minimal supervision.

Additional Desired Skills:

  • Experience with dbt or other transformation-layer tools.
  • Familiarity with Docker containerization and orchestration.
  • Experience with statistical programming (R or Python preferred).
  • API design or integration experience for data pipelines.
  • Experience developing in a Linux or Mac environment.
  • Exposure to data QA frameworks or observability tools (e.g., Great Expectations, Monte Carlo).

If you’re passionate about turning raw data into reliable, actionable insight—and want to help shape the future of data engineering at a growing SaaS company—we’d love to hear from you.

Top Skills

AWS
Docker
GraphQL
NoSQL
Python
REST
SQL

