1 December 2025 Preprint DAI-2513

Autonomous Red Team AI: LLM-Guided Adversarial Security Testing

Murad Farzulla

Abstract

This technical report describes an architecture for autonomous penetration testing using LLM-guided agents operating within Kubernetes-isolated environments. The system combines RAG knowledge bases with OODA-loop decision cycles, enabling systematic vulnerability discovery while maintaining strict NetworkPolicy isolation.
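The OODA-loop decision cycle described above can be sketched in miniature: an agent observes tool output, orients by consulting a knowledge base (standing in for the RAG component), decides on a next action, and acts. This is an illustrative assumption of how such a cycle might be structured, not the paper's actual implementation; every class and method name here is hypothetical.

```python
# Hypothetical OODA-loop sketch: observe -> orient -> decide -> act.
# The dict knowledge_base is a toy stand-in for the paper's RAG store.
from dataclasses import dataclass, field

@dataclass
class RedTeamAgent:
    knowledge_base: dict                         # signature -> candidate technique
    history: list = field(default_factory=list)  # actions already taken

    def observe(self, scan_output: str) -> str:
        # Normalize raw tool output for matching
        return scan_output.strip().lower()

    def orient(self, observation: str) -> str:
        # RAG stand-in: match the observation against known service signatures
        for signature, technique in self.knowledge_base.items():
            if signature in observation:
                return technique
        return "enumerate further"

    def decide(self, technique: str) -> str:
        # Avoid repeating an action already tried on this target
        return technique if technique not in self.history else "enumerate further"

    def act(self, action: str) -> str:
        self.history.append(action)
        return action

    def step(self, scan_output: str) -> str:
        # One full OODA cycle
        return self.act(self.decide(self.orient(self.observe(scan_output))))

agent = RedTeamAgent(knowledge_base={"ssh": "test default credentials",
                                     "http": "probe common paths"})
print(agent.step("22/tcp open SSH"))  # first pass: tries the matched technique
print(agent.step("22/tcp open SSH"))  # repeat observation: falls back to enumeration
```

The history check in `decide` illustrates why a stateful loop matters for systematic discovery: without it, the agent would retry the same technique indefinitely instead of broadening its search.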

Suggested Citation

Murad Farzulla (2025). Autonomous Red Team AI: LLM-Guided Adversarial Security Testing. Farzulla Research Working Paper DAI-2513. DOI: 10.5281/zenodo.17614726

BibTeX

@misc{farzulla2025autonomousredteam,
  author = {Farzulla, Murad},
  title = {Autonomous Red Team AI: LLM-Guided Adversarial Security Testing},
  year = {2025},
  howpublished = {Farzulla Research Working Paper DAI-2513},
  doi = {10.5281/zenodo.17614726},
  url = {https://farzulla.org/papers/autonomous-red-team}
}

Topics

AI Safety, Security Research