Jobs via Dice • United States
Role & seniority: AI Penetration Tester, Mid-Senior level, Full-time
Location & work type: Remote; must be authorized to work in the United States without need for visa sponsorship, now or in the future
Stack / tools: Burp Suite Pro, Netsparker, Checkmarx; AI/security frameworks (TensorFlow, PyTorch, LLM APIs, LangChain); APIs, web apps, mobile testing
Execute AI-focused penetration testing (manual tests of AI/ML-enabled systems; assess traditional and AI-centric attack surfaces)
Perform threat modeling, evaluate AI-related business logic, and conduct architecture reviews; lead remediation discussions
Develop/improve AI-driven offensive security tools and methodologies; communicate findings to technical and non-technical audiences; collaborate with engineering/security teams; publish actionable recommendations
3+ years in penetration testing focused on APIs, web apps, and mobile apps
Experience with AI model testing or AI security; background in AI red teaming and adversarial attacks (prompt engineering, LLM vulnerabilities, model evasion)
Proficiency with penetration testing tools and AI security frameworks
Strong communication/presentation skills; relevant ethical hacking certifications (GWAPT, CREST, OSWE, OSWA) and AI security training
Bachelor’s degree or equivalent experience
Dice is the leading career destination for tech experts at every stage of their careers. Our client, AIT Global, Inc., is seeking the following. Apply via Dice today!
Job Title: AI Penetration Tester
Location: Remote
Execute AI-focused penetration testing engagements, including manual penetration testing of systems incorporating AI/ML, objective-based testing of AI-driven features, and coverage of both traditional and AI-centric attack surfaces.
Perform threat modeling for AI-powered software systems, evaluate AI-related business logic, and conduct architecture reviews, focusing on adversarial ML vectors, prompt-based vulnerabilities, and other AI-specific security risks.
Develop and improve AI-driven tools and methodologies for offensive security tasks such as discovery, exploitation, fuzzing, and adversarial ML testing, emphasizing web apps, APIs, and mobile clients.
Demonstrate AI penetration testing findings to technical and non-technical audiences, including live demos.
Collaborate with engineering, development, and security teams to communicate findings, lead remediation discussions, and advise on secure AI model development and deployment best practices.
Research emerging AI attack techniques, evaluate their potential impact, identify vulnerabilities, and provide actionable recommendations to strengthen AI defenses.
Collaborate with internal Red Teams, SOC analysts, and AI security researchers, sharing insights and data to enhance AI-driven offensive security methodologies.
Refine existing AI red teaming approaches by integrating new adversarial ML techniques and proven exploitation tactics.
Act independently on AI penetration testing with minimal oversight, guiding engagements from planning through execution and reporting.
Minimum three (3) years of recent penetration testing experience focused on APIs, web applications, and mobile applications.
Experience with AI model testing or AI security is highly desirable.
Proven background in AI red teaming and adversarial attack development, including prompt engineering attacks, LLM-based vulnerability analysis, and model evasion techniques.
Proficiency with penetration testing tools (e.g., Burp Suite Pro, Netsparker, Checkmarx) and AI security frameworks (e.g., TensorFlow, PyTorch, LLM APIs, LangChain).
Strong communication and presentation skills to explain AI-related vulnerabilities to technical and non-technical stakeholders and drive remediation.
One or more major ethical hacking certifications (e.g., GWAPT, CREST, OSWE, OSWA) and certifications or training in AI security techniques.
Bachelor's degree from an accredited college/university or equivalent industry experience.
Applicants must be currently authorized to work in the United States without the need for visa sponsorship now or in the future.
Seniority level: Mid-Senior level
Employment type: Full-time
Job function: Information Technology
Industries: Software Development