
EPAM Systems • Brazil
Role & seniority: Senior Data Quality Engineer (mid-to-senior level)
Stack/tools: Python; Big Data (HDFS, Hive, Spark); streaming (Kafka, Flume, Kinesis); NoSQL (Cassandra, MongoDB, HBase); data viz (Tableau, Power BI, Tibco Spotfire); cloud (AWS, Azure, GCP); relational DBs (PostgreSQL, MSSQL, MySQL, Oracle); ETL (Talend, Informatica); MDM integration; performance testing (JMeter); version control (Git, GitLab, SVN); CI/CD (Jenkins, GitHub Actions); testing frameworks (TDD/DDT/BDT)
Must-have skills: ≥3 years in Data Quality Engineering; Python for validation/automation; deep experience with Hadoop ecosystem and Spark; Kafka/Kinesis/Flume; NoSQL and large-scale data management; SQL in high-volume real-time contexts; multi-cloud architecture; ETL tooling; data validation frameworks; CI/CD and automation; strong analytical/communication skills
Nice-to-haves: Java/Scala or advanced Bash scripting; XPath for data validation; custom data generation/synthetic data tooling
Location & work type: Full-time; location not specified; international/global projects; remote/hybrid options may apply
We are looking for a skilled and experienced Senior Data Quality Engineer to join our team. In this role, you will play a critical part in ensuring the accuracy, reliability, and efficiency of our data systems and processes at scale. If you are passionate about leading impactful data quality initiatives and working with cutting-edge technologies, this position will allow you to shape the future of our data ecosystem.

Responsibilities:
- Lead the development and execution of data quality strategies, ensuring accuracy and reliability across data products and processes
- Drive data quality initiatives while promoting best practices across teams and projects
- Develop and implement advanced testing frameworks and methodologies to meet enterprise data quality standards
- Manage and prioritize complex data quality tasks, ensuring efficiency under tight deadlines and competing priorities
- Design and maintain comprehensive testing strategies for evolving system architectures and data pipelines
- Provide guidance on resource allocation and prioritize testing efforts to align with business and regulatory requirements
- Establish and continuously improve a data quality governance framework to ensure compliance with industry standards
- Build, scale, and optimize automated data quality validation pipelines for production environments
- Collaborate with cross-functional teams to address infrastructure challenges and enhance system performance
- Mentor junior team members and maintain detailed documentation for test strategies, plans, and frameworks

Requirements:
- At least 3 years of professional experience in Data Quality Engineering
- Advanced programming skills in Python for data validation and automation
- Expertise in Big Data platforms, including tools from the Hadoop ecosystem such as HDFS, Hive, and Spark, as well as modern streaming platforms like Kafka, Flume, or Kinesis
- Practical experience with NoSQL databases such as Cassandra, MongoDB, or HBase, managing large-scale datasets
- Proficiency in data visualization tools like Tableau, Power BI, or Tibco Spotfire to support analytics and decision-making
- Extensive experience with cloud platforms such as AWS, Azure, or GCP, with a strong understanding of multi-cloud architectures
- Advanced knowledge of relational databases and SQL (PostgreSQL, MSSQL, MySQL, Oracle) in high-volume, real-time environments
- Proven experience in implementing and scaling ETL processes using tools like Talend, Informatica, or similar platforms
- Familiarity with deploying and integrating MDM tools into workflows, as well as performance testing tools like JMeter
- Advanced experience with version control systems such as Git, GitLab, or SVN, and expertise in automation for large-scale systems
- Comprehensive understanding of modern testing frameworks (TDD, DDT, BDT) and their application in data environments
- Experience with CI/CD practices, including pipeline implementation using tools like Jenkins or GitHub Actions
- Strong analytical and problem-solving skills, with the ability to turn complex datasets into actionable insights
- Exceptional English communication skills (B2 level or higher), with experience engaging stakeholders and leading discussions

Nice to have:
- Hands-on experience with additional programming languages like Java or Scala, or advanced Bash scripting for production data solutions
- Advanced knowledge of XPath and its use in data validation and transformation workflows
- Experience designing custom data generation tools and synthetic data techniques for advanced testing scenarios

We offer:
- International projects with top brands
- Work with global teams of highly skilled, diverse peers
- Healthcare benefits
- Employee financial programs
- Paid time off and sick leave
- Upskilling, reskilling and certification courses
- Unlimited access to the LinkedIn Learning library and 22,000+ courses
- Global career opportunities
- Volunteer and community involvement opportunities
- EPAM Employee Groups
- Award-winning culture recognized by Glassdoor, Newsweek and LinkedIn
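To give candidates a concrete sense of the "Python for data validation and automation" duties described above, here is a minimal sketch of an automated data quality check. All names here (`check_not_null`, `check_unique`, `run_checks`) are hypothetical illustrations, not part of the posting or any specific framework; production pipelines would typically run checks like these inside a scheduler or a dedicated data quality tool.

```python
# Hypothetical sketch of rule-based data quality checks over tabular rows.

def check_not_null(rows, column):
    """Flag rows where the given column is missing or None."""
    bad = [i for i, r in enumerate(rows) if r.get(column) is None]
    return {"check": f"not_null:{column}", "passed": not bad, "bad_rows": bad}

def check_unique(rows, column):
    """Flag rows whose value in the given column duplicates an earlier row."""
    seen, dupes = set(), []
    for i, r in enumerate(rows):
        value = r.get(column)
        if value in seen:
            dupes.append(i)
        seen.add(value)
    return {"check": f"unique:{column}", "passed": not dupes, "bad_rows": dupes}

def run_checks(rows, checks):
    """Run every (check_fn, column) pair; return overall pass flag and details."""
    results = [fn(rows, col) for fn, col in checks]
    return all(r["passed"] for r in results), results

rows = [
    {"id": 1, "email": "a@example.com"},
    {"id": 2, "email": None},            # fails not_null on "email"
    {"id": 2, "email": "c@example.com"}, # fails unique on "id"
]
ok, results = run_checks(rows, [(check_not_null, "email"), (check_unique, "id")])
```

In a real pipeline the same pattern scales up: checks become configuration, run against Spark or SQL sources, and feed pass/fail results into CI/CD gates and alerting.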
Experience level: Mid-Senior. Employment type: Full-time. Job function: Information Technology, Engineering, and Quality Assurance. Industries: Software Development, IT Services, and Technology, Information and Internet.