Jobs via Dice • Cincinnati, Ohio, United States
Role & seniority: Data-focused Quality Engineer (mid-level)
Stack / tools: Databricks, Delta Lake, Delta Live Tables, PySpark, SQL; Databricks Workflows; CI/CD tools (Azure DevOps, GitHub Actions, or Jenkins)
Key responsibilities: build and maintain automated data test frameworks for Databricks environments; validate end-to-end data pipelines (ETL/ELT) and ensure data quality (accuracy, completeness, consistency); integrate data quality checks into CI/CD; tune Spark jobs for scalability and performance
Must-have: 2+ years' hands-on Databricks experience with Delta Lake; proficiency in PySpark and complex SQL; experience building automated test harnesses for data pipelines or APIs; solid understanding of data warehousing, schema validation, and Spark optimization; familiarity with CI/CD tools (Azure DevOps, GitHub Actions, or Jenkins)
Nice-to-have: Databricks Certified Data Engineer or Associate; experience with data quality tools like Great Expectations or Deequ; local to Cincinnati or willing to relocate (initial meet-and-greet expenses covered)
Location & work type: Cincinnati, OH; full-time, on-site with potential relocation support for initial meet-and-greet
Dice is the leading career destination for tech experts at every stage of their careers. Our client, Millennium Software, Inc., is seeking the following. Apply via Dice today!
We are seeking a Data-focused Quality Engineer to join our team in Cincinnati. This is not a traditional UI/Web testing role. You will be responsible for building automated test frameworks for massive data pipelines, ensuring the reliability of Databricks notebooks, Delta tables, and complex ETL processes.
What You Will Do
Automate Data Testing: Develop and execute automated frameworks for Databricks environments using PySpark and SQL.
Validate Pipelines: Perform end-to-end testing of data ingestion and transformation layers (ETL/ELT).
Ensure Data Quality: Implement automated checks for data accuracy, completeness, and consistency using Delta Live Tables and Databricks Workflows.
CI/CD Integration: Integrate data quality tests into the automated CI/CD pipeline.
Performance Tuning: Troubleshoot and optimize Spark jobs to ensure scalability across large datasets.
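To give a flavor of the row-level checks this role automates, here is a minimal sketch in plain Python. Column names, rules, and thresholds are hypothetical; in the Databricks environment described above, the same rules would typically be expressed as PySpark filters or Delta Live Tables expectations rather than pure-Python loops.

```python
# Sketch of two common data quality rules: completeness (required
# fields are non-null) and accuracy (values fall in a valid range).
# Field names ("order_id", "amount") are illustrative only.

REQUIRED_FIELDS = ("order_id", "amount")

def check_row(row: dict) -> list:
    """Return the list of rule violations for one record."""
    violations = []
    for field in REQUIRED_FIELDS:
        if row.get(field) is None:
            violations.append(f"{field}: missing")
    amount = row.get("amount")
    if amount is not None and amount < 0:
        violations.append("amount: negative")
    return violations

def quality_report(rows: list) -> dict:
    """Aggregate pass/fail counts across a batch of records."""
    failed = sum(1 for r in rows if check_row(r))
    total = len(rows)
    return {
        "total": total,
        "failed": failed,
        "pass_rate": 1 - failed / total if total else 1.0,
    }
```

A CI job would run checks like these against a sample or staging table and fail the build when the pass rate drops below an agreed threshold.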
Required Skills
Platform: 2+ years of hands-on experience with Databricks and Delta Lake.
Languages: Strong proficiency in PySpark and complex SQL.
Testing: Experience building automated test harnesses for data pipelines or APIs.
Engineering: Solid understanding of data warehousing, schema validation, and Spark optimization.
DevOps: Familiarity with CI/CD tools (Azure DevOps, GitHub Actions, or Jenkins).
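As one example of the CI/CD integration mentioned above, a data quality test suite might be wired into GitHub Actions along these lines. The workflow below is a hypothetical sketch: the repository layout, test path, and dependency list are assumptions, not part of this posting.

```yaml
# Hypothetical GitHub Actions workflow that runs a data quality
# test suite on every push; paths and dependencies are illustrative.
name: data-quality
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install pytest pyspark
      - run: pytest tests/data_quality   # assumed location of the checks
```

Equivalent pipelines can be defined in Azure DevOps or Jenkins; the principle is the same — quality checks gate the merge, not just production runs.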
Preferred Qualifications
Certification: Databricks Certified Data Engineer or Associate.
Tooling: Experience with data quality tools like Great Expectations or Deequ.
Location: Local to Cincinnati or willing to relocate (expenses covered for initial meet-and-greet).