Skip to content

Search the site

IBMjobsNews

IBM offensive security job advert hints at plans to poison LLMs and attack RAG models

"You’ll be responsible for inventing clever ways of utilising AI for breaching customer networks and bypassing security controls."

ChatGPT's depiction of what happens when attackers 'poison' an LLM
ChatGPT's depiction of what happens when attackers 'poison' an LLM

IBM has dropped some major clues about the potential future work of its secretive X-Force security team.

In a job advert looking for an Offensive AI Researcher & Tester in the X-Force Adversary Services team, IBM's recruiters said they were looking for people with deep expertise in attacking AI systems and using agents offensively.

Anyone wanting to apply for the position should have experience with model evasion, extraction, inversion, and poisoning attacks as well as LLM prompt injection.

These attacks can be used to, for instance, persuade models to hand over sensitive information, produce false outputs or even ruin their future performance by corrupting training data.

Anyone wishing to join IBM must also know how to attack RAG interfaces, have experience "evaluating AI models and creating test harnesses for offensive use" and have previously "creating offensive agents which support shifting towards a Human-on-the-Loop approach for offensive tasks that are good candidates for automation".

The advert is a clear sign that IBM is taking the Generative AI threat seriously and getting a team in place to address it.

We've already seen big banks open up jobs with titles like "Head of Generative AI Security", so it's not too much of a stretch to imagine that IBM's red team are already hard at work poisoning LLMs and attacking RAG models - or at least considering doing so.

READ MORE: Red RAG to a Bull? Generative AI, data security, and toolchain maturity. An ecosystem evolves...

IBM's X-Force team helps customers continuously assess their real-world security by "delivering an unrivalled attack experience, at scale."

"As an team you’ll both test AI systems as well as leverage AI with cutting-edge X-Force methodologies and sophisticated capabilities to keep X-Force on the bleeding edge of red teaming innovation," IBM wrote.

"You’ll be responsible for inventing clever ways of utilising AI for breaching customer networks and bypassing security controls while working side by side with our offensive engineers, researchers, and developers to drive those innovations throughout our toolset and across our customers."

Responsibilities of the roles include the development of methodologies for offensive AI design, implementation, and testing, as well as devising "prototype novel AI capabilities and techniques" or "solving problems that do not have known solutions".

Experience with enterprise data lakes, relational/vector databases, complex data structures and data analysis tools is also a must, as well as debugging proficiency and experience of binary analysis using a reverse engineering platform and knowledge of Continuous Integration/Continuous Deployment (CI/CD) pipelines.

You'll need 5+ years coding in two or more programming languages, with Python, C#, C/C++, Assembly, Rust named in the ad. And you'll get paid a minimum of $153,000 per year - or a maximum of $285,000.

If that sounds like you - apply here.

READ MORE: Sysadmins facing "decline": Bureau of Labor issues grim job outlook forecast

Latest