EPAM Systems • Argentina
Role & seniority: Senior Data Quality Engineer
Languages: Python (data validation/automation), Java/Scala (nice-to-have)
Big Data: Hadoop ecosystem (HDFS, Hive, Spark), streaming (Kafka/Flume/Kinesis)
Data stores: NoSQL (Cassandra, MongoDB, HBase), RDBMS (PostgreSQL, MSSQL, MySQL, Oracle)
ETL / data quality: Talend, Informatica (or similar), data validation frameworks, MDM integration, JMeter
Cloud / architecture: AWS, Azure, GCP; multi-cloud
BI / analytics: Tableau, Power BI, Tibco Spotfire
CI/CD / VCS: Jenkins, GitHub Actions, Git, GitLab, SVN
Testing / governance: TDD/DDT/BDT, automated validation pipelines, testing frameworks
Lead data quality strategies and governance; ensure accuracy and reliability across data products
Develop, implement, and scale automated data quality validation pipelines and testing frameworks
Manage complex data quality tasks, coordinate with cross-functional teams, and mentor junior members; maintain documentation
3+ years in Data Quality Engineering
Python for data validation/automation
Expertise with Hadoop ecosystem, Spark, Kafka, and NoSQL/SQL databases
Experience with cloud platforms (AWS/Azure/GCP) and multi-cloud architectures
ETL experience (Talend/Informatica), data governance/MDM integration
Strong analytics, stakeholder communication, and English proficiency (B2+)
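To illustrate the kind of automated data quality validation the role centers on, here is a minimal, rule-based check in plain Python. This is an illustrative sketch only, not EPAM's tooling: the `RULES` table, the `validate_rows` helper, and the column names are all hypothetical; production pipelines would typically run such checks inside Spark jobs or a dedicated framework.

```python
# Hypothetical rule-based data quality check (illustrative sketch, stdlib only).

def not_null(value):
    """Reject missing or empty values."""
    return value is not None and value != ""

def positive(value):
    """Accept only values that parse as a number greater than zero."""
    try:
        return float(value) > 0
    except (TypeError, ValueError):
        return False

# Map each column to the checks it must pass (hypothetical schema).
RULES = {
    "customer_id": [not_null],
    "amount": [not_null, positive],
}

def validate_rows(rows, rules=RULES):
    """Return (row_index, column, rule_name) for every failed check."""
    failures = []
    for i, row in enumerate(rows):
        for column, checks in rules.items():
            for check in checks:
                if not check(row.get(column)):
                    failures.append((i, column, check.__name__))
    return failures
```

For example, `validate_rows([{"customer_id": "", "amount": "-5"}])` flags both the empty `customer_id` and the non-positive `amount`, while a clean row produces no failures. Scaling this pattern to production data is where frameworks, Spark, and CI/CD integration come in.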
We are looking for a skilled and experienced Senior Data Quality Engineer to join our team. In this role, you will play a critical part in ensuring the accuracy, reliability, and efficiency of our data systems and processes at scale. If you are passionate about leading impactful data quality initiatives and working with cutting-edge technologies, this position will allow you to shape the future of our data ecosystem.

Responsibilities
Lead the development and execution of data quality strategies, ensuring accuracy and reliability across data products and processes
Drive data quality initiatives while promoting best practices across teams and projects
Develop and implement advanced testing frameworks and methodologies to meet enterprise data quality standards
Manage and prioritize complex data quality tasks, ensuring efficiency under tight deadlines and competing priorities
Design and maintain comprehensive testing strategies for evolving system architectures and data pipelines
Provide guidance on resource allocation and prioritize testing efforts to align with business and regulatory requirements
Establish and continuously improve a data quality governance framework to ensure compliance with industry standards
Build, scale, and optimize automated data quality validation pipelines for production environments
Collaborate with cross-functional teams to address infrastructure challenges and enhance system performance
Mentor junior team members and maintain detailed documentation for test strategies, plans, and frameworks

Requirements
At least 3 years of professional experience in Data Quality Engineering
Advanced programming skills in Python for data validation and automation
Expertise in Big Data platforms, including tools from the Hadoop ecosystem such as HDFS, Hive, and Spark, as well as modern streaming platforms like Kafka, Flume, or Kinesis
Practical experience with NoSQL databases such as Cassandra, MongoDB, or HBase, managing large-scale datasets
Proficiency in data visualization tools like Tableau, Power BI, or Tibco Spotfire to support analytics and decision-making
Extensive experience with cloud platforms such as AWS, Azure, or GCP, with a strong understanding of multi-cloud architectures
Advanced knowledge of relational databases and SQL (PostgreSQL, MSSQL, MySQL, Oracle) in high-volume, real-time environments
Proven experience in implementing and scaling ETL processes using tools like Talend, Informatica, or similar platforms
Familiarity with deploying and integrating MDM tools into workflows, as well as performance testing tools like JMeter
Advanced experience with version control systems such as Git, GitLab, or SVN, and expertise in automation for large-scale systems
Comprehensive understanding of modern testing frameworks (TDD, DDT, BDT) and their application in data environments
Experience with CI/CD practices, including pipeline implementation using tools like Jenkins or GitHub Actions
Strong analytical and problem-solving skills, with the ability to translate complex datasets into actionable insights
Exceptional English communication skills (B2 level or higher), with experience engaging stakeholders and leading discussions

Nice to have
Hands-on experience with additional programming languages like Java, Scala, or advanced Bash scripting for production data solutions
Advanced knowledge of XPath and its use in data validation and transformation workflows
Experience designing custom data generation tools and synthetic data techniques for advanced testing scenarios

We offer
International projects with top brands
Work with global teams of highly skilled, diverse peers
Healthcare benefits
Employee financial programs
Paid time off and sick leave
Upskilling, reskilling and certification courses
Unlimited access to the LinkedIn Learning library and 22,000+ courses
Global career opportunities
Volunteer and community involvement opportunities
EPAM Employee Groups
Award-winning culture recognized by Glassdoor, Newsweek and LinkedIn
Seniority level: Mid-level. Employment type: Full-time. Job function: Information Technology, Engineering, and Quality Assurance. Industries: Software Development, IT Services and Consulting, and Technology, Information and Internet.