About
Myles is a final-year PhD student in the Security and Machine Learning Lab at Imperial College London, and a Research Scientist in Defence and Security at The Alan Turing Institute. His PhD research focuses on using Deep Reinforcement Learning to develop novel solutions to problems in Web Security; at the Turing Institute his research focuses on using agentic systems for Autonomous Cyber Defence. He has interned at IBM Research Dublin and been a Visiting Researcher at both the National Institute of Information and Communications Technology in Japan and The Alan Turing Institute. Prior to this, he received a Master of Engineering from University College London, earning the ‘Outstanding MEng Graduating Student’ prize from the Department of Electronic and Electrical Engineering.

Research Interests
- Web Security
- Agentic Systems
- Network Defence
- Reinforcement Learning
- Vulnerability Detection
- Adversarial Machine Learning
Education and Experience
Education
Doctor of Philosophy
2020 - 2025
Department of Computing, Imperial College London
Research focused on developing novel deep reinforcement learning models for web security testing, collectively finding 11 zero-day vulnerabilities and 17 bugs in production-grade software. Designed, engineered, and implemented frameworks to apply deep learning models to real-world problems involving complex and varied data. Developed bespoke feature extraction methods for real-world data (web APIs, network packets, SQL databases), using LLMs and Principal Component Analysis (PCA) among other techniques. This research has been presented at IEEE TrustCom (winning best paper), AsiaCCS (winning best poster), and CAMLIS.
Supervised and mentored eight MSc and MEng final year projects focused on using deep learning and reinforcement learning for cybersecurity applications.
Master of Engineering
2016 - 2020
Electronic Engineering with Computer Science, University College London
Designed and implemented a novel graph deep Q-network in TensorFlow. Project lead for a team on a computer vision task that developed an advanced driver assistance system to predict lane changes of vehicles; oversaw the data processing for this task, the implementation of the trajectory prediction algorithm, and model hyperparameter tuning.
Professional Experience
Research Scientist
September 2024 - Present
Alan Turing Institute
Technical lead in a cross-functional research team; ran international workshops. Leading a project to assess the capabilities of foundation models and agentic systems for automated cyber defence, and to evaluate AI cyber risk. Designed and implemented benchmarks and an end-to-end pipeline for using LLMs in attack investigation and threat hunting.
Visiting Researcher
October 2023 - February 2024
National Institute of Information and Communications Technology (NICT), Japan
Independently arranged this collaboration, winning a competitive fellowship from the Japan Society for the Promotion of Science, and fostered collaboration between NICT and Imperial. Coordinated two concurrent projects using language models and deep reinforcement learning to detect vulnerabilities and bugs in popular browser engines (V8, JavaScriptCore, etc.).
Security Research Intern
September 2022 - December 2022
IBM Research
Research on Large Language Models (LLMs) and fine-tuning attribution: investigating ways to detect stolen foundation models and attribute them correctly to their owners. Designed several detection methods ranging from heuristics to deep architectures, as well as model architectures, experiments, and evaluation benchmarks, fine-tuning 20 different LLMs. This work was used to win the second MLMAC Challenge and led to a publication at ACL.
Enrichment Student
November 2021 - October 2023
The Alan Turing Institute
Collaboration on Autonomous Cyber Defence, developing techniques to combat the large state space that arises when applying reinforcement learning. Our Hierarchical Reinforcement Learning model went on to win the First CAGE Challenge, an international challenge in which agents defend a small network against multiple classes of advanced persistent threat. Read about it in our paper, which won the best poster award at AsiaCCS '22, or on page 19 of the Alan Turing Institute Annual Report 2021-2022. We also won the Third CAGE Challenge.
Graduate Teaching Assistant
October 2020 - Present
Department of Computing, Imperial College London
- Network and Web Security (Spring 2021, Spring 2022, Spring 2023)
- Reinforcement Learning (Winter 2020, Winter 2021)
- Networks and Communications (Spring 2022, Spring 2023)
- Deep Learning (Spring 2023)
- Reasoning About Programs (Spring 2021)
Publications
APIRL: Deep Reinforcement Learning for REST API Fuzzing
M. Foley, S. Maffeis. APIRL: Deep Reinforcement Learning for REST API Fuzzing, Proceedings of the 39th AAAI Conference on Artificial Intelligence, 2025. [PDF][CODE]
SQIRL: Grey-Box Detection of SQL Injection Vulnerabilities Using Reinforcement Learning
S. Al Wahaibi, M. Foley, S. Maffeis. SQIRL: Grey-Box Detection of SQL Injection Vulnerabilities Using Reinforcement Learning, Proceedings of the 32nd USENIX Security Symposium, 9-11 August, 2023, Anaheim, USA. [PDF][CODE]
Canaries and Whistles: Resilient Drone Communication Networks with (or without) Deep Reinforcement Learning
C. Hicks, V. Mavroudis, M. Foley, T. Davies, K. Highnam, and T. Watson. Canaries and Whistles: Resilient Drone Communication Networks with (or without) Deep Reinforcement Learning, Proceedings of the 16th ACM Workshop on Artificial Intelligence and Security, 2023. [PDF]
Matching Pairs: Attributing Fine-Tuned Models to their Pre-Trained Large Language Models
M. Foley, A. Rawat, T. Lee, Y. Hou, G. Picco, G. Zizzo. Matching Pairs: Attributing Fine-Tuned Models to their Pre-Trained Large Language Models, Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics, 9-14 July, 2023, Toronto, Canada. [PDF][CODE]
Haxss: Hierarchical Reinforcement Learning for XSS Payload Generation
IEEE Best Student Paper
M. Foley, S. Maffeis. Haxss: Hierarchical Reinforcement Learning for XSS Payload Generation, 2022 IEEE 21st International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom '22), 9-11 December, 2022, Wuhan, China. [DOI][CODE]
Inroads in Autonomous Network Defence using Explained Reinforcement Learning.
M. Foley, M. Wang, Z. M., C. Hicks, V. Mavroudis. Inroads in Autonomous Network Defence using Explained Reinforcement Learning. In Proceedings of the Conference on Applied Machine Learning for Information Security, October 20-21, 2022, Arlington, VA. [SLIDES][VIDEO][CODE][PDF]
Autonomous Network Defence using Reinforcement Learning
Best Poster Award
M. Foley, C. Hicks, K. Highnam, and V. Mavroudis. 2022. POSTER: Autonomous Network Defence using Reinforcement Learning. In Proceedings of the 2022 ACM Asia Conference on Computer and Communications Security (ASIA CCS ’22), May 30–June 3, 2022, Nagasaki, Japan. ACM, New York, NY, USA, 3 pages. [DOI][PDF][CODE]
Patents
Fine-Tuned Model to Source Foundation Model Attribution
Myles Foley, Ambrish Rawat, Gabriele Picco, Giulio Zizzo, Taesung Lee, Yufang Hou, US, 2025. 18223134
Talks and Media
Arachnology: using AI to eat bugs on the web., CyberFirst Alumni Conference, October 14th 2024
RL, I Choose You: Using RL to learn mutation strategies in fuzzing, Machine Learning Cyber Security Symposium at Imperial College London, 14 June 2024
Who let the APIs out?: Designing practical RL fuzzing systems, Machine Learning Cyber Security Symposium at Imperial College London, 5 May 2023
Hierarchical Reinforcement Learning for Cyber Security [POSTER], Imperial Computing Conference, 1 July 2022
Hacking websites with Reinforcement Learning: an XSS story, Machine Learning Cyber Security Symposium at Imperial College London, 31 May 2022
Reinforcement Learning for Computer Security, Imperial College London Department of Computing PhD showcase, 12 February 2022
Contact
Send me an email; it's the best way to contact me!