Innodata aims to ensure the robustness and reliability of large language models (LLMs) by challenging them to think critically and act safely, identifying vulnerabilities, and improving the resilience of AI systems.
Requirements
- Strong understanding of grammar, syntax, and semantics – knowing the rules of 'proper' English, as well as when to violate them to better test AI responses
- Professional- or expert-level proficiency (C1/C2) in English
- Advanced degrees (Master’s or PhD) are strongly preferred
- A Bachelor’s or Associate’s degree with a minimum of one year of relevant industry experience
Responsibilities
- Complete extensive training on AI/ML, LLMs, Red Teaming, and jailbreaking, as well as specific project guidelines and requirements
- Craft clever and sneaky prompts that attempt to bypass the filters and guardrails of LLMs, targeting specific vulnerabilities defined by our clients
- Collaborate closely with language specialists, team leads, and QA leads to produce the best possible work
- Assist our data scientists in conducting automated model attacks
- Adapt to the dynamic needs of different projects and clients, navigating shifting guidelines and requirements
- Keep up with the evolving capabilities and vulnerabilities of LLMs and help your team’s methods evolve with them
- Hit productivity targets, including targets for the number of prompts written and average handling time per prompt
Other
- Must be able to work full-time (40 hours weekly) for 4 weeks
- Must be able to work fully remote within the U.S. (excluding Alaska, California, Colorado, Nevada and Puerto Rico)
- Must be willing to deal with material that is toxic or NSFW