Cookies & analytics consent
We serve candidates globally, so we only activate Google Tag Manager and other analytics after you opt in. This keeps us aligned with GDPR/UK DPA, ePrivacy, LGPD, and similar rules. Essential features still run without analytics cookies.
Read how we use data in our Privacy Policy and Terms of Service.
🤖 15+ AI Agents working for you. Find jobs, score and update resumes, cover letter, interview questions, missing keywords, and lots more.

TechDoQuest • United States
Role & seniority
Stack/tools
Penetration testing: Burp Suite Pro, Netsparker, Checkmarx
AI security: TensorFlow, PyTorch, LLM APIs, LangChain
Areas: API, web, and mobile app testing; AI model testing; adversarial ML
Top 3 responsibilities
Execute AI-focused penetration tests (manual testing of AI/ML systems, AI-driven features, traditional and AI-centric surfaces)
Perform threat modeling, architecture reviews, and evaluate AI-related business logic; lead remediation discussions
Develop/improve AI-driven offensive security tools (discovery, exploitation, fuzzing, adversarial ML testing); deliver findings with live demos to technical and non-technical audiences
Must-have skills
3+ years in penetration testing (APIs, web, mobile)
Experience with AI red teaming, adversarial attacks, prompt engineering, model evasion
Proficiency with Burp Suite Pro, Netsparker, Checkmarx; familiarity with AI security frameworks (TensorFlow, PyTorch, LLM APIs, LangChain)
Strong communication/presentation skills for diverse stakeholders
Ethical hacking certifications (GWAPT, CREST, OSWE, OSWA) or AI security training
Bachelor’s degree or equivalent experience
Authorized to work in the United States without visa sponsorship now or in the future
Nice-to-haves
Direct experience with AI security/testing in production environments
Collaboration with Red Teams, SOC, and AI security researchers
Independent engagement planni
Execute AI-focused penetration testing engagements that include manual penetration testing of systems incorporating AI/ML, objective-based testing of AI-driven features, and coverage of both traditional and AI-centric attack surfaces.
Perform threat modeling for AI-powered software systems, evaluate AI-related business logic, and conduct architecture reviews. Focus on adversarial ML vectors, prompt-based vulnerabilities, and other AI-specific security risks.
Develop and improve AI-driven tools and methodologies for offensive security tasks such as discovery, exploitation, fuzzing, and adversarial ML testing, emphasizing web apps, APIs, and mobile clients.
Demonstrate AI penetration testing findings to technical and non-technical audiences, including live demos.
Collaborate with engineering, development, and security teams to communicate findings, lead remediation discussions, and advise on secure AI model development and deployment best practices.
Research emerging AI attack techniques and evaluate their potential impact, identify vulnerabilities, and provide actionable recommendations to strengthen AI defenses.
Collaborate with internal Red Teams, SOC analysts, and AI security researchers, sharing insights and data to enhance AI-driven offensive security methodologies. Refine existing AI red teaming approaches by integrating new adversarial ML techniques and proven exploitation tactics.
Act independently on AI penetration testing with minimal oversight, guiding engagements from planning through execution and reporting.
Qualifications: The skills, abilities, specific knowledge, education, and minimum experience necessary to perform this job.
Minimum three (3) years of recent penetration testing experience focused on APIs, web applications, and mobile applications. Experience with AI model testing or AI security is highly desirable.
Proven background in AI red teaming and adversarial attack development, including prompt engineering attacks, LLM-based vulnerability analysis, and model evasion techniques.
Proficiency with penetration testing tools (e.g., Burp Suite Pro, Netsparker, Checkmarx) and AI security frameworks (e.g., TensorFlow, PyTorch, LLM APIs, LangChain).
Strong communication and presentation skills to explain AI-related vulnerabilities to technical and non-technical stakeholders and drive remediation.
One or more major ethical hacking certifications (e.g., GWAPT, CREST, OSWE, OSWA) and certifications or training in AI security techniques.
Bachelor’s degree from an accredited college/university or equivalent industry experience.
Applicants must be currently authorized to work in the United States without the need for visa sponsorship now or in the future.
Show more Show less