symplr • Karnataka, India
Role & seniority
Performance testing/engineering lead; 12+ years' experience required, including performance-team leadership.
Stack/tools
Performance testing: JMeter, LoadRunner, Gatling
Programming: Python, Java, JavaScript
Cloud/containers: AWS, Docker, Kubernetes
Monitoring/observability: Datadog, Dynatrace, Grafana, AppDynamics, Splunk
Data/ETL/ops: Elasticsearch/OpenSearch, Kafka, AWS Glue, SageMaker
Databases: SQL; cloud databases such as MongoDB, Cosmos DB, PostgreSQL
Top 3 responsibilities
Design and lead comprehensive performance testing strategies, frameworks, and roadmaps; drive initiatives across QA, Dev, and Ops
Establish KPIs, benchmarks, dashboards, and real-time monitoring; analyze results to identify bottlenecks and drive improvements
Collaborate to integrate performance testing into CI/CD; troubleshoot complex issues in QA/Staging/Prod; mentor junior engineers
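As a purely illustrative sketch of the CI/CD integration mentioned above (the file names, pipeline schema, and `check_thresholds.py` gate script are all hypothetical; the JMeter flags assume its standard non-GUI mode):

```yaml
# Hypothetical pipeline step (generic YAML; adapt to your CI system).
performance-test:
  stage: test
  script:
    # Run JMeter headless: -n (non-GUI), -t (test plan), -l (results log),
    # -e -o (generate the HTML dashboard report).
    - jmeter -n -t tests/checkout.jmx -l results.jtl -e -o report/
    # check_thresholds.py is a hypothetical gate script that fails the
    # build when the p95 latency in results.jtl exceeds the budget.
    - python scripts/check_thresholds.py results.jtl --p95-ms 500
  artifacts:
    paths: [report/]
```

Gating the build on a latency budget is what turns a performance run into a regression check rather than a report.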
Must-have skills
12+ years in performance testing/engineering; leadership of performance teams
Expertise with JMeter, LoadRunner, or Gatling; strong programming (Python/JS/Java)
AWS experience and containerization (Docker/Kubernetes); solid CI/CD/DevOps understanding
Proficiency with monitoring/profiling tools (Datadog, Dynatrace, Grafana, AppDynamics, Splunk)
Experience with microservices, web/app architectures; SQL and cloud databases (MongoDB, Cosmos DB, PostgreSQL)
Ability to capture/analyze/interpret multi-layer performance metrics; strong communication
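The capture/analyze/interpret skill above can be illustrated with a small Python sketch (nearest-rank percentiles over raw latency samples; none of this comes from the posting itself):

```python
import statistics

def latency_kpis(samples_ms):
    """Compute common latency KPIs from a list of response times (ms)."""
    ordered = sorted(samples_ms)

    def pct(p):
        # Nearest-rank percentile: index of the p-th percentile sample.
        idx = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
        return ordered[idx]

    return {
        "mean": statistics.fmean(ordered),
        "p50": pct(50),
        "p95": pct(95),
        "p99": pct(99),
        "max": ordered[-1],
    }

samples = [12, 15, 14, 13, 200, 16, 15, 14, 13, 12]
print(latency_kpis(samples))
```

Reporting percentiles alongside the mean matters here: a single 200 ms outlier barely moves p50 but dominates p95/p99, which is exactly the bottleneck signal the role calls for.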
Nice-to-haves
AI/ML frameworks (e.g., TensorFlow, PyTorch)
Agile development experience
Computer networks and networking concepts
Overview
Design comprehensive performance testing strategies, lead initiatives, and collaborate with cross-functional teams to ensure system reliability, scalability, and responsiveness across applications.
Conduct thorough performance assessments, including load testing, stress testing, and capacity planning, to identify system bottlenecks and areas for improvement.
Work closely with development and operations teams to identify key performance indicators (KPIs) and establish benchmarks, monitoring solutions, and dashboards that provide real-time insight into system performance.
Work with the Performance Architect to implement scalable testing frameworks for performance and data validation, with a focus on AI and generative-AI applications.
Lead the troubleshooting and resolution of complex performance issues in QA, Staging, Pre-production, and/or Production environments.
Provide guidance and mentorship to junior QA engineers, fostering a culture of quality and continuous learning.
Use industry-standard performance testing tools (e.g., JMeter, LoadRunner, Gatling) to simulate real-world scenarios and measure system performance, staying current with emerging tools and technologies in the performance testing space.
Collaborate with development, QA, and operations teams to integrate performance testing into continuous integration and continuous deployment (CI/CD) processes, advising team members on performance testing best practices.
Analyze CPU utilization, memory usage, network usage, and garbage collection to verify application performance.
Generate performance graphs, session reports, and other documentation required for validation and analysis.
Create comprehensive performance test documentation, including test plans, test scripts, and performance analysis reports, and communicate results and recommendations to technical and non-technical stakeholders.
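The load-simulation and latency-capture work described above can be sketched minimally in Python. This is a hedged illustration, not the team's tooling: `send_request` is a stub standing in for a real HTTP call, and the virtual-user model is deliberately simplistic.

```python
import random
import threading
import time

def send_request():
    """Stand-in for a real HTTP call; sleeps to simulate service latency."""
    time.sleep(random.uniform(0.001, 0.005))

def run_load(num_users, requests_per_user):
    """Spawn concurrent 'virtual users' and record per-request latencies (ms)."""
    latencies = []
    lock = threading.Lock()

    def user():
        for _ in range(requests_per_user):
            start = time.perf_counter()
            send_request()
            elapsed_ms = (time.perf_counter() - start) * 1000
            with lock:  # guard the shared results list
                latencies.append(elapsed_ms)

    threads = [threading.Thread(target=user) for _ in range(num_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return latencies

lats = run_load(num_users=5, requests_per_user=10)
print(f"{len(lats)} requests, max latency {max(lats):.1f} ms")
```

Tools like JMeter or Gatling implement this same loop at scale, adding ramp-up schedules, protocol handling, and reporting on top.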
Qualifications
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
12+ years of experience in performance testing and engineering, with a strong understanding of performance testing methodologies and tools.
Experience managing performance teams.
Experience driving the performance roadmap and delivering outcomes to a high standard.
Proficiency in performance testing tools such as JMeter, LoadRunner, or Gatling.
Proficiency in programming languages such as Python, JavaScript, and Java.
Extensive experience with AWS cloud technologies and containerization (Docker/Kubernetes).
Strong understanding of web technologies and application architecture.
Experience with application monitoring and profiling tools such as Datadog, Dynatrace, Grafana, AppDynamics, and Splunk.
Strong experience with CI/CD pipelines and DevOps practices.
Experience with applications such as Elasticsearch, OpenSearch, Grafana, Kafka, AWS Glue, and SageMaker.
Hands-on experience with performance test simulations, analysis, tuning, and monitoring in a microservices environment.
Hands-on experience analyzing performance results: capture, analyze, and interpret performance metrics from the application, database, OS, and network.
Working knowledge of SQL and cloud databases such as MongoDB, Cosmos DB, and PostgreSQL.
Skills Required
Experience with AI/ML frameworks (e.g., TensorFlow, PyTorch) is a plus.
Strong understanding of data engineering and validation techniques and tools.
Demonstrated ability to analyze complex systems, identify performance bottlenecks, and provide actionable insights.
Good understanding of basic database tuning, application server tuning, and common performance and scalability issues.
Proven track record of leading performance testing teams and driving initiatives through effective collaboration with cross-functional teams.
Strong verbal and written communication skills, with the ability to convey complex technical concepts to non-technical stakeholders.
Good understanding of computer networks and networking concepts.
Agile development experience.