Cookies & analytics consent
We serve candidates globally, so we only activate Google Tag Manager and other analytics after you opt in. This keeps us aligned with GDPR/UK DPA, ePrivacy, LGPD, and similar rules. Essential features still run without analytics cookies.
Read how we use data in our Privacy Policy and Terms of Service.
🤖 15+ AI Agents working for you. Find jobs, score and update resumes, cover letter, interview questions, missing keywords, and lots more.

Georgia IT, Inc. • United States
Role & seniority: Senior AI Penetration Tester (contract role; 12+ months; 100% remote)
Stack / tools: Burp Suite Pro, Netsparker, Checkmarx; AI security frameworks (TensorFlow, PyTorch, LLM APIs, LangChain); APIs, web apps, mobile apps; offensive security tooling (discovery, fuzzing, exploitation)
Lead AI-focused penetration testing engagements (manual testing of AI/ML systems, objective-based AI feature testing; cover traditional and AI-centric surfaces)
Perform threat modeling, architecture reviews, and assess AI-related business logic; develop AI-driven offensive tooling
Communicate findings to technical and non-technical audiences; drive remediation discussions and advise on secure AI model development/deployment
3+ years in penetration testing (APIs, web, mobile)
Experience with AI model testing or AI security; AI red teaming/adversarial attack development; prompt engineering attacks, LLM vulnerabilities, model evasion
Proficiency with pentest tools (Burp Suite Pro, Netsparker, Checkmarx) and AI security frameworks
Strong communication/presentation skills for diverse stakeholders
Certifications (GWAPT, CREST, OSWE, OSWA) and/or AI security training; Bachelor’s degree or equivalent experience
US work authorization (no visa sponsorship needed now or in future)
AI Penetration Tester– 100% Remote Candidate Location – United States
Employment Type: 12 months plus contract Start date – DOE
Job Description
Position Summary / Purpose: Overview of the basic function and purpose of the job, and how it contributes to the successful achievement of department and organization objectives. Execute AI-focused penetration testing engagements that include manual penetration testing of systems incorporating AI/ML, objective-based testing of AI-driven features, and coverage of both traditional and AI-centric attack surfaces. Perform threat modeling for AI-powered software systems, evaluate AI-related business logic, and conduct architecture reviews. Focus on adversarial ML vectors, prompt-based vulnerabilities, and other AI-specific security risks. Develop and improve AI-driven tools and methodologies for offensive security tasks such as discovery, exploitation, fuzzing, and adversarial ML testing, emphasizing web apps, APIs, and mobile clients. Demonstrate AI penetration testing findings to technical and non-technical audiences, including live demos. Collaborate with engineering, development, and security teams to communicate findings, lead remediation discussions, and advise on secure AI model development and deployment best practices. Research emerging AI attack techniques and evaluate their potential impact, identify vulnerabilities, and provide actionable recommendations to strengthen AI defenses. Collaborate with internal Red Teams, SOC analysts, and AI security researchers, sharing insights and data to enhance AI-driven offensive security methodologies. Refine existing AI red teaming approaches by integrating new adversarial ML techniques and proven exploitation tactics. Act independently on AI penetration testing with minimal oversight, guiding engagements from planning through execution and reporting.
Qualifications: The skills, abilities, specific knowledge, education, and minimum experience necessary to perform this job. Minimum three (3) years of recent penetration testing experience focused on APIs, web applications, and mobile applications. Experience with AI model testing or AI security is highly desirable. Proven background in AI red teaming and adversarial attack development, including prompt engineering attacks, LLM-based vulnerability analysis, and model evasion techniques. Proficiency with penetration testing tools (e.g., Burp Suite Pro, Netsparker, Checkmarx) and AI security frameworks (e.g., TensorFlow, PyTorch, LLM APIs, LangChain). Strong communication and presentation skills to explain AI-related vulnerabilities to technical and non-technical stakeholders and drive remediation. One or more major ethical hacking certifications (e.g., GWAPT, CREST, OSWE, OSWA) and certifications or training in AI security techniques. Bachelor’s degree from an accredited college/university or equivalent industry experience. Applicants must be currently authorized to work in the United States without the need for visa sponsorship now or in the future.