
UnionBank of the Philippines • Pasig, Metro Manila, Philippines
Role & seniority
Data Quality Test Engineer
Stack / tools
SQL (advanced), Python or scripting for test automation
ETL/ELT, data warehouses/lakes, data modeling, data contracts, lineage/metadata
Automated testing frameworks (Great Expectations, Soda Core, dbt tests, pytest)
CI/CD integration (GitHub Actions, GitLab CI, Azure DevOps)
Data observability and tooling (optional: Great Expectations, Monte Carlo, Datadog, etc.)
Top 3 responsibilities
Design, execute, and maintain data quality test strategies across ETL/ELT pipelines, warehouses, lakes, and analytics layers
Develop automated data validation suites (schema checks, reconciliation, referential integrity, duplicates, boundary tests) with CI/CD integration; monitor quality metrics
Collaborate with data engineers, analytics engineers, product analysts, and stakeholders to translate data requirements into test cases, define SLAs, and drive defect remediation; document plans, data dictionaries, and playbooks
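The automated validation suites described in these responsibilities can be sketched with plain Python against an in-memory SQLite database standing in for the warehouse. The table and column names (`customers`, `orders`, `amount`) are hypothetical, chosen only for illustration:

```python
import sqlite3

def make_conn() -> sqlite3.Connection:
    """Stand-in for a warehouse connection; a real suite would target the DW."""
    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE customers (customer_id INTEGER PRIMARY KEY, email TEXT);
        CREATE TABLE orders (order_id INTEGER, customer_id INTEGER, amount REAL);
        INSERT INTO customers VALUES (1, 'a@example.com'), (2, 'b@example.com');
        INSERT INTO orders VALUES (10, 1, 99.50), (11, 2, 12.00);
    """)
    return db

def duplicate_order_ids(db):
    """Duplicate detection: order_ids appearing more than once."""
    return db.execute(
        "SELECT order_id FROM orders GROUP BY order_id HAVING COUNT(*) > 1"
    ).fetchall()

def orphan_orders(db):
    """Referential integrity: orders whose customer_id has no parent row."""
    return db.execute("""
        SELECT o.order_id FROM orders o
        LEFT JOIN customers c ON o.customer_id = c.customer_id
        WHERE c.customer_id IS NULL
    """).fetchall()

def out_of_range_amounts(db, lo=0, hi=100_000):
    """Boundary/value-range test on the amount column."""
    return db.execute(
        "SELECT order_id FROM orders WHERE amount <= ? OR amount > ?", (lo, hi)
    ).fetchall()
```

In a real suite each function would become a pytest test asserting an empty result, wired into the CI/CD pipeline so failures block deployment.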
Must-have skills
Advanced SQL (joins, window functions, profiling, reconciliation) and at least one scripting language (Python)
Strong understanding of ETL/ELT, dimensional/normalized modeling, data contracts, and lineage concepts
Experience creating automated tests with Great Expectations, Soda Core, dbt tests, or pytest-based tooling; able to integrate with CI/CD and alerting
Ability to diagnose data quality defects via logs, lineage, and profiling; strong communication to translate business rules into testable specifications and present findings to technical and non-technical audiences
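Several of the checks named above (uniqueness, nulls, referential integrity, accepted values) can be declared rather than hand-coded when using dbt tests. A minimal `schema.yml` sketch, with hypothetical model and column names:

```yaml
version: 2
models:
  - name: orders
    columns:
      - name: order_id
        tests:
          - unique
          - not_null
      - name: customer_id
        tests:
          - relationships:
              to: ref('customers')
              field: customer_id
      - name: status
        tests:
          - accepted_values:
              values: ['pending', 'shipped', 'cancelled']
```

`dbt test` then compiles each declaration into a SQL query that must return zero rows.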
Company: Union Bank of the Philippines
Position: Data Quality Test Engineer
Office Location: UnionBank Plaza - Ortigas, Pasig City
As a Data Quality Test Engineer, you will design, execute, and maintain comprehensive data quality test strategies across ETL/ELT pipelines, data warehouses, data lakes, and analytics layers to ensure accuracy, completeness, consistency, timeliness, and lineage integrity. You will develop automated data validation suites, including schema checks, reconciliation tests, referential integrity validations, duplicate detection, boundary/value range tests, and regression suites that run in CI/CD.

You will collaborate closely with data engineers, analytics engineers, product analysts, and business stakeholders to translate data requirements and business rules into verifiable test cases, define quality SLAs, and prioritize defect remediation. You will create synthetic and anonymized test datasets where needed, analyze production data drifts and anomalies, and monitor quality metrics and alerts. You will document test plans, test evidence, data dictionaries, and quality playbooks; drive root-cause analysis for defects; and advocate for quality engineering best practices, including shift-left testing, test data management, and observability.

You will contribute to tooling selection and integration, ensure test environments mimic production constraints, and support post-release validation, incident response, and continuous improvement of data quality controls.
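The synthetic and anonymized test datasets mentioned above can be produced with nothing beyond the standard library; one common approach is deterministic hashing of identifiers plus seeded random generation. A sketch, with hypothetical field names:

```python
import hashlib
import random

def anonymize_email(email: str, salt: str = "test-env") -> str:
    """Replace a real email with a stable pseudonym: same input, same output."""
    digest = hashlib.sha256((salt + email).encode()).hexdigest()[:12]
    return f"user_{digest}@example.com"

def synthetic_orders(n: int, seed: int = 42) -> list:
    """Generate reproducible synthetic rows for boundary/value-range tests."""
    rng = random.Random(seed)
    return [
        {"order_id": i, "amount": round(rng.uniform(0.01, 500.0), 2)}
        for i in range(1, n + 1)
    ]
```

Determinism matters in both helpers: stable pseudonyms preserve join keys across anonymized tables, and seeded generation makes test failures reproducible.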
Bachelor’s degree in Computer Science, Information Systems, Data Engineering, Statistics, or a related technical field (or equivalent practical experience). Foundational understanding of databases, data modeling, and software testing principles is essential. Certifications that validate data engineering or testing capabilities—such as ISTQB Foundation/Advanced, AWS/Azure/GCP data/analytics certifications (e.g., AWS Data Analytics Specialty, Azure Data Engineer Associate, Google Professional Data Engineer), or the dbt Analytics Engineering certification—are strongly preferred. Candidates should demonstrate experience with SQL-driven testing and at least one scripting language for test automation.
Preferred candidates bring hands-on experience with modern data stacks (e.g., Snowflake/BigQuery/Redshift, Databricks, dbt), workflow orchestration (Airflow/Azure Data Factory/GCP Cloud Composer), and data observability platforms (Monte Carlo, Datadog, Great Expectations, Soda, or similar). Exposure to streaming data validation (Kafka, Kinesis, or Pub/Sub), event models, and CDC pipelines is valued. Experience establishing data quality frameworks (dimensions, SLAs, KPIs), implementing data contracts, and embedding tests within CI/CD pipelines (GitHub Actions, GitLab CI, Azure DevOps) will stand out. Familiarity with privacy-by-design, PII handling, and test data management in regulated environments (e.g., BFSI, healthcare, or public sector) is a plus. A background in BI/report testing (Tableau/Power BI/Looker) and statistical validation of metrics is advantageous.
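Embedding tests within CI/CD as described might look like the following GitHub Actions sketch. The step commands and paths are illustrative assumptions, not a prescribed pipeline:

```yaml
name: data-quality
on:
  pull_request:
  schedule:
    - cron: "0 2 * * *"   # nightly run against a staging environment
jobs:
  dq-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install pytest dbt-core
      # Hypothetical test directory; fail fast so alerts fire early
      - run: pytest tests/data_quality --maxfail=5
      # Running only modified models requires stored state artifacts
      - run: dbt test --select state:modified+
```

Running the suite on both pull requests and a nightly schedule covers code changes and upstream data drift, respectively.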
Proficient in SQL (complex joins, window functions, profiling, reconciliation) and comfortable with a scripting language such as Python for building test harnesses, data generators, and assertion libraries. Strong understanding of ETL/ELT pipelines, dimensional and normalized modeling, data contracts, and lineage/metadata concepts. Skilled at creating automated tests with frameworks such as Great Expectations, Soda Core, dbt tests, or custom pytest-based tooling, and integrating them with CI/CD and alerting. Adept at diagnosing data quality defects through log analysis, lineage tracing, and profiling; able to use query planners and cost-based optimization insights to validate performance-related quality issues. Excellent communication skills to translate business rules into testable specifications, produce clear documentation, and present findings to technical and non-technical audiences. Demonstrated problem-solving, curiosity, and a quality-first mindset, with the ability to prioritize risk-based testing and drive cross-functional remediation to closure.
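The SQL skills listed here (window functions, reconciliation) can be shown in a small runnable sketch, again with SQLite standing in for the warehouse and hypothetical `src_orders`/`tgt_orders` tables:

```python
import sqlite3

def demo_db() -> sqlite3.Connection:
    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE src_orders (order_id INT, customer_id INT,
                                 amount REAL, order_ts TEXT);
        CREATE TABLE tgt_orders (order_id INT, customer_id INT,
                                 amount REAL, order_ts TEXT);
        INSERT INTO src_orders VALUES
            (1, 100, 10.0, '2024-01-01'), (2, 100, 20.0, '2024-01-02'),
            (3, 200, 30.0, '2024-01-01');
        INSERT INTO tgt_orders SELECT * FROM src_orders;
    """)
    return db

def reconcile_totals(db) -> dict:
    """Post-load reconciliation: compare source vs. target aggregates."""
    src_count, src_sum = db.execute(
        "SELECT COUNT(*), COALESCE(SUM(amount), 0) FROM src_orders").fetchone()
    tgt_count, tgt_sum = db.execute(
        "SELECT COUNT(*), COALESCE(SUM(amount), 0) FROM tgt_orders").fetchone()
    return {"count_match": src_count == tgt_count,
            "sum_match": abs(src_sum - tgt_sum) < 1e-9}

def latest_order_per_customer(db) -> list:
    """Window function: keep only the most recent order per customer."""
    return db.execute("""
        SELECT customer_id, order_id FROM (
            SELECT customer_id, order_id,
                   ROW_NUMBER() OVER (
                       PARTITION BY customer_id ORDER BY order_ts DESC
                   ) AS rn
            FROM src_orders
        ) WHERE rn = 1
        ORDER BY customer_id
    """).fetchall()
```

The aggregate comparison catches dropped or duplicated rows after a load, while the `ROW_NUMBER()` pattern is a common deduplication and profiling building block (SQLite 3.25+ supports window functions).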