Welcome Back

Google icon Sign in with Google
OR
I agree to abide by Pharmadaily Terms of Service and its Privacy Policy

Create Account

Google icon Sign up with Google
OR
By signing up, you agree to our Terms of Service and Privacy Policy
Instagram
youtube
Facebook

Ai Red-Teamer — Adversarial Ai Testing (Advanced); English & Hindi

Mercor
Mercor
2-3 years
$13.87 per hour
10 Jan. 12, 2026
Job Description
Job Type: Part Time Education: B.Sc/M.Sc/M.Pharma/B.Pharma/Life Sciences Skills: Causality Assessment, Clinical SAS Programming, Communication Skills, CPC Certified, GCP guidelines, ICD-10 CM Codes, CPT-Codes, HCPCS Codes, ICD-10 CM, CPT, HCPCS Coding, ICH guidelines, ICSR Case Processing, Interpersonal Skill, Labelling Assessment, MedDRA Coding, Medical Billing, Medical Coding, Medical Terminology, Narrative Writing, Research & Development, Technical Skill, Triage of ICSRs, WHO DD Coding

AI Red-Teamer – Adversarial AI Testing (Advanced) | Remote, India

Contract Type: Hourly / Full-time or Part-time
Location: Remote (India)
Compensation: $13.87 per hour
Languages Required: Native-level fluency in English and Hindi


Company Overview

Mercor is a leading AI research and human-data training organization collaborating with top AI labs and enterprises globally. We specialize in evaluating, testing, and enhancing AI systems through rigorous human-driven analysis. Our projects focus on adversarial AI testing, red-teaming, and AI safety, enabling enterprises to deploy robust, trustworthy AI models.


Role Overview

Mercor is seeking experienced AI Red-Teamers to join a high-impact adversarial AI testing initiative. In this advanced role, you will probe AI models, identify vulnerabilities, and generate high-quality red-team data to enhance AI safety for enterprise customers.

This fully remote role is restricted to applicants based in India. The position requires native-level fluency in English and Hindi, as the work involves detailed textual analysis and reporting.


Key Responsibilities

  • Conduct red-team testing of AI models and agents, including jailbreaks, prompt injections, misuse cases, bias exploitation, and multi-turn manipulation.

  • Generate actionable human-labeled data by annotating AI failures, classifying vulnerabilities, and flagging systemic risks.

  • Apply structured testing using established taxonomies, benchmarks, and playbooks to ensure reproducibility and consistency.

  • Document findings thoroughly, producing datasets, reports, and reproducible attack cases for customer action.


Required Qualifications and Experience

  • Prior experience in AI red teaming, adversarial ML, cybersecurity, or socio-technical probing.

  • Strong analytical and problem-solving skills, with the ability to push AI systems to their limits.

  • Ability to work with structured frameworks rather than ad-hoc methods.

  • Excellent communication skills for conveying risks clearly to technical and non-technical stakeholders.

  • Adaptable and capable of managing multiple projects and priorities.

Experience Required: Minimum 2–3 years in AI red teaming, adversarial testing, or related cybersecurity roles.


Preferred Specializations

  • Adversarial ML: Jailbreak datasets, prompt injection, RLHF/DPO attacks, model extraction.

  • Cybersecurity: Penetration testing, exploit development, reverse engineering.

  • Socio-technical Risk: Harassment/disinformation analysis, abuse detection, conversational AI testing.

  • Creative Probing: Psychology, acting, or writing to support unconventional adversarial thinking.


Success Metrics

  • Identify vulnerabilities missed by automated testing.

  • Deliver reproducible artifacts that enhance customer AI system safety.

  • Expand evaluation coverage and reduce production surprises.

  • Build trust with customers by preemptively probing AI systems like an adversary.


Contract Details

  • Engagement as an independent contractor with flexible, fully remote work.

  • Weekly payments via Stripe or Wise for services rendered.

  • Project timelines may be extended, shortened, or concluded depending on performance and organizational needs.

  • Work does not require access to confidential employer or client information.

  • Note: H1-B and STEM OPT candidates are not eligible.


Why Join Mercor

  • Gain hands-on experience in human data-driven AI red teaming at the frontier of AI safety.

  • Play a direct role in making AI systems more robust, safe, and trustworthy.

  • Collaborate with top AI researchers and enterprises globally.

  • Competitive contract rates commensurate with experience and project sensitivity.

  • Opportunities for referral bonuses: Earn $40 per successful referral without a limit on the number of referrals.