French & English AI Red-Teamer - Remote

--Powermax General Electrical Merchants Ltd--

Job Description

We believe the safest AI is the one that’s already been attacked — by us. We are assembling a red team for this project: human data experts who probe AI models with adversarial inputs, surface vulnerabilities, and generate the red-team data that makes AI safer for our customers.

This project involves reviewing AI outputs that touch on sensitive topics such as bias, misinformation, or harmful behaviors. All work is text-based, and participation in higher-sensitivity projects is optional and supported by clear guidelines and wellness resources. Topics will be clearly communicated to you before you are exposed to any content.

Job Industry

ICT / Computer, Data, Business Analysis and AI

Job Salary Currency

UGX

Job Salary Fixed

No

Key Deliverables

  • Red team conversational AI models and agents: jailbreaks, prompt injections, misuse cases, bias exploitation, multi-turn manipulation

  • Generate high-quality human data: annotate failures, classify vulnerabilities, and flag systemic risks

  • Apply structure: follow taxonomies, benchmarks, and playbooks to keep testing consistent

  • Document reproducibly: produce reports, datasets, and attack cases customers can act on

Professional Qualifications

Industry Qualification
ICT / Computer, Data, Business Analysis and AI

You bring prior red-teaming experience (AI adversarial work, cybersecurity, socio-technical probing).

  • You’re curious and adversarial: you instinctively push systems to breaking points

  • You’re structured: you use frameworks or benchmarks, not just random hacks

  • You’re communicative: you explain risks clearly to technical and non-technical stakeholders

  • You’re adaptable: you thrive on moving across projects and customers

Essential Qualities

  • Adversarial ML: jailbreak datasets, prompt injection, RLHF/DPO attacks, model extraction

  • Cybersecurity: penetration testing, exploit development, reverse engineering

  • Socio-technical risk: harassment/disinfo probing, abuse analysis, conversational AI testing

  • Creative probing: psychology, acting, writing for unconventional adversarial thinking


Application Process

Close Date

31/03/2026