Responsibilities
- Act as the primary steward of data across all our applications, understanding and managing various data types and sources
- Design, implement, and maintain scalable data pipelines and ETL processes
- Develop and manage data integrations between different systems and applications
- Perform complex data analysis to derive insights and support decision-making
- Build and maintain dashboards for data visualization and reporting
- Prepare and manage datasets for machine learning and analytics projects
- Collaborate with cross-functional teams to understand data needs and provide solutions
- Implement data quality checks and ensure data integrity across systems
- Optimize data storage and retrieval for performance and cost efficiency
- Stay current with emerging technologies and best practices in data engineering and analytics
Skills and Experience Required
- Bachelor’s or Master’s degree in Computer Science, Data Science, or a related field
- 4+ years of experience in data engineering, data science, or a similar role
- Proven track record of building and maintaining data systems at scale
- Strong analytical and problem-solving skills
- Excellent communication skills, able to translate complex technical concepts to non-technical stakeholders
Required Technical Skills:
- Strong proficiency in Python or another relevant programming language for data manipulation and analysis
- Experience with big data technologies (e.g., Hadoop, Spark)
- Expertise in SQL and working with relational databases (e.g., PostgreSQL, MySQL)
- Familiarity with NoSQL databases (e.g., MongoDB, Cassandra)
- Experience with data warehousing solutions and ETL tools
- Proficiency in building data pipelines using workflow management tools (e.g., Airflow, Luigi)
- Knowledge of data visualization tools (e.g., Tableau, Power BI, or similar)
- Understanding of statistical analysis and machine learning concepts
- Experience with cloud platforms (AWS, GCP, or Azure) for data processing and storage
Nice to Have:
- Experience with stream processing technologies (e.g., Kafka, Flink)
- Knowledge of data modeling and dimensional modeling concepts
- Familiarity with data governance and compliance requirements
- Experience with containerization and orchestration (e.g., Docker, Kubernetes)
- Understanding of data security best practices
- Exposure to machine learning operations (MLOps) practices
Key Attributes:
- Passionate about data and its potential to drive business value
- Detail-oriented with a strong focus on data quality and integrity
- Proactive in identifying and solving data-related challenges
- Adaptable and quick to learn new technologies and methodologies
- Collaborative mindset, able to work effectively with diverse teams
- Self-motivated and able to manage multiple priorities in a fast-paced environment
What We Do
At Tag, we turn big ideas into high-impact marketing, working with leading brands and agencies to deliver content at speed and scale across channels, cultures and regions.
With intelligent, sustainable and technology-driven solutions at the heart of everything we do, we enable brands to operate more efficiently and effectively to stand out, sell more and waste less.
Every decision at Tag is made in consideration of our clients, our people, our planet, and our communities. With 2,700 experts in 29 countries across the world, our distributed hub model has allowed us to be the always-on, end-to-end production partner of choice for over half a century.
Find out more at tagww.com