EXPERTISE

Adversarial work against production AI.

Offensive research, hands-on audits, and operator training — the work that takes findings out of slide decks and into shipping engineering practice.

WHAT WE DO

Security teams ship more than tickets — they ship trust.

Ryvane works alongside engineering and security teams on the specific surfaces that modern AI systems actually fail on: agent loops, retrieval pipelines, tool-use, memory, and the operator workflows around them. The output isn't a PDF — it's reproducible exploits, hardening patches, and people on your team who can do the work themselves.

01 — Research

Offensive AI security research

Original adversarial work on agents, retrieval systems, and AI-augmented production stacks — published, reproducible, and usable as the basis for hardening.

  • Sandbox escape and tool-use attacks
  • Memory poisoning in persistent agents
  • Retrieval pipeline boundary failures
  • MCP server abuse patterns
02 — Audits

Production AI security audits

Hands-on assessments of agent platforms, RAG pipelines, and tool-using LLM systems — delivered as exploit chains plus a concrete remediation plan, not a checklist.

  • Agent platform red-teaming
  • RAG and retrieval security review
  • MCP server hardening
  • Prompt injection and exfiltration paths
03 — Training

Operator training & Academy

Cohort-based and bespoke programs that turn engineers and security teams into operators who can run live exploitation labs and ship defenses against the offense.

  • Ryvane Academy on-demand courses
  • Bespoke team cohorts
  • Live exploitation labs
  • Onsite or remote engagement

Frequently Asked Questions

An agreed area of investigation — usually a class of failure on a specific system shape — and a fixed time-on-team to dig into it. We share progress weekly, and the deliverable is reproducible findings plus the artifacts (sandboxes, scripts, write-ups) needed to act on them.

For research on our own initiative, yes — that's how the field moves. For research done under engagement, the default is that findings stay with the client. We're happy to negotiate a publication path after the issue is patched, but only if you want one.

Where the gap between what's deployed and what's been tested is largest, where the impact is highest, and where the work would still be useful six months from now. We bias toward classes of failure that recur across many systems — agent loops, retrieval boundaries, memory — rather than one-off curiosities.

Audits are scoped to a specific system you've shipped or are about to ship — the goal is to make that system safer. Research is scoped to a class of problem, with no single target. Same methodology, different orientation.

We don't issue certifications. We produce findings and remediation guidance in a form your existing certifying body (or your auditors, or your insurers) can read directly. If you need a formal attestation, we'll work with the auditor of your choice.

Yes. The pair-programming model is what we'd recommend for most multi-week engagements — we work in your repo, your chat, your standup. Knowledge transfer happens as a side-effect of the work itself.

Common. Pre-launch reviews land in one to two weeks. Investor or acquirer due-diligence engagements are usually shorter — a focused look at the AI surface, not a full audit — and result in a memo rather than a test suite.

Structured modules covering the foundations and the offence — from how language models work, to how they get broken — paired with runnable exploitation labs. Each lab is a sandboxed system with a specific failure to find and patch.

On-demand for the main curriculum. Cohorts include scheduled live sessions for the harder material and Q&A, but the bulk of the work happens at your pace.

Yes — and we'd usually recommend it if more than three or four engineers are going through the same material. We tailor the labs to your stack so the practice carries directly into your day-to-day work.

Application security engineers, ML engineers shipping production agents, security operators in AI-heavy organisations, and red teamers expanding into AI. Solid software fundamentals are assumed; ML background is not.

Still have questions? Reach out and we'll walk through your specifics.

Get in touch
ENGAGING WITH US

Start with the actual problem.

Most engagements begin with a short scoping call: what's deployed, what's at risk, and what shipping a fix would look like. We size the work to that — a one-week deep audit, an embedded research engagement, or a training cohort — and write a short brief before either side commits.

Reach us at hello@ryvane.com.

Where AI security gets practiced.

Knowledge that translates innovation into outcomes.

LEARN MORE