Research
Publications & research lines
My research is on agentic AI systems for software engineering: pipelines
that combine LLMs with static-analysis tooling for vulnerability
detection, triage, and remediation. This work is conducted at
Simon Fraser University under Dr. Mohammad Tayebi.
Publications
Mohsen Iranmanesh, Sina Moradi Sabet, Sina Marefat, Ali Javidi Ghasr, Allison Wilson, Iman Sharafaldin, Mohammad A. Tayebi
A multi-stage LLM pipeline that takes raw static-analyzer alerts and triages them through contextual reasoning and structured evidence validation, reducing false positives without sacrificing recall. Evaluated across 10 LLMs from 6 model families on two benchmarks, the pipeline achieves best-in-class F1 on both synthetic and real-world CodeQL alerts.
First-author submission, currently under review.
arXiv
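The claim above, that cutting false positives without losing recall lifts F1, follows directly from the standard definitions. A minimal sketch (textbook formulas, not the paper's evaluation code):

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 from confusion counts: harmonic mean of precision and recall."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# With recall held fixed, removing false positives raises precision
# and therefore F1: e.g. 80 TP / 20 FN with 40 FP vs. 10 FP.
baseline = f1_score(tp=80, fp=40, fn=20)   # precision 0.667, recall 0.80
triaged = f1_score(tp=80, fp=10, fn=20)    # precision 0.889, recall 0.80
```

Here `baseline` ≈ 0.727 and `triaged` ≈ 0.842, so filtering 30 false positives at constant recall is a sizable F1 gain.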
Ongoing research lines
AutoSec — fully agentic vulnerability remediation
End-to-end multi-agent pipeline: a static-analyzer agent surfaces
candidate vulnerabilities, an LLM triage agent validates them against
code context, a patch-generation agent proposes fixes, and a
verification agent runs the patched code through the test suite and
re-analyzes for regressions. Current focus: multi-stage prioritization
to improve both precision and remediation coverage.
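The four-stage flow above can be sketched as a simple orchestration loop. All agent bodies below are placeholder stubs for illustration; the real pipeline backs each stage with an LLM or a static analyzer, and the function and rule names are hypothetical:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Finding:
    file: str
    rule: str
    patch: Optional[str] = None

def analyzer_agent(repo: str) -> List[Finding]:
    # Static-analyzer agent: surface candidate vulnerabilities (stubbed).
    return [Finding("app.py", "sql-injection"), Finding("util.py", "unused-import")]

def triage_agent(finding: Finding) -> bool:
    # LLM triage agent: validate the alert against code context (stubbed).
    return finding.rule == "sql-injection"

def patch_agent(finding: Finding) -> str:
    # Patch-generation agent: propose a candidate fix (stubbed).
    return f"parameterize query in {finding.file}"

def verification_agent(finding: Finding) -> bool:
    # Verification agent: run the test suite and re-analyze for regressions (stubbed).
    return finding.patch is not None

def run_pipeline(repo: str) -> List[Finding]:
    remediated = []
    for finding in analyzer_agent(repo):
        if not triage_agent(finding):
            continue  # filtered as a likely false positive
        finding.patch = patch_agent(finding)
        if verification_agent(finding):
            remediated.append(finding)
    return remediated
```

The loop structure makes the prioritization question concrete: triage ordering decides which findings reach the (expensive) patch and verification stages first.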
ThreatEZ — bottom-up threat modeling
A 6-phase static-analysis-grounded multi-agent pipeline that derives
system architecture and STRIDE threats directly from source code — no
manually authored data-flow diagram required. Maps findings to NIST
800-53 controls. Shipped as a VS Code extension; evaluated via an
LLM-as-Judge harness with semantic threat matching against
human-authored ground truth.
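The control-mapping step can be sketched with a fixed STRIDE-to-800-53 table. The pairings below are a common generic association, not ThreatEZ's actual mapping, which is derived per finding:

```python
from typing import Dict, List

# Illustrative STRIDE category -> NIST 800-53 controls (assumed pairing).
STRIDE_TO_CONTROLS: Dict[str, List[str]] = {
    "Spoofing": ["IA-2"],                         # Identification and Authentication
    "Tampering": ["SI-7"],                        # Software and Information Integrity
    "Repudiation": ["AU-2"],                      # Event Logging
    "Information Disclosure": ["SC-8", "SC-28"],  # Data in Transit / at Rest
    "Denial of Service": ["SC-5"],                # Denial-of-Service Protection
    "Elevation of Privilege": ["AC-6"],           # Least Privilege
}

def map_threats_to_controls(threats: List[str]) -> Dict[str, List[str]]:
    """Attach candidate 800-53 controls to each derived STRIDE threat."""
    return {threat: STRIDE_TO_CONTROLS.get(threat, []) for threat in threats}
```

Unknown threat labels map to an empty control list rather than raising, so partial pipeline output still produces a usable report.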
CVE-Bench — reproducible CVE exploitation & patching
LangGraph-orchestrated agentic pipeline that reproduces and patches
documented CVEs end-to-end. Curated dataset of 100+ CVEs with
dockerized vulnerable + patched builds and an automated
exploit-validation loop that verifies the PoC succeeds on the
vulnerable image and fails on the patched one.
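The exploit-validation invariant, PoC succeeds on the vulnerable build and fails on the patched one, reduces to a two-sided check. A minimal sketch with hypothetical names; `run_poc` stands in for the real dockerized exploit runner:

```python
from typing import Callable

def validate_cve_entry(run_poc: Callable[[str], bool],
                       vulnerable_image: str,
                       patched_image: str) -> bool:
    """A dataset entry is valid only if the PoC discriminates the two builds."""
    exploits_vulnerable = run_poc(vulnerable_image)  # must succeed
    exploits_patched = run_poc(patched_image)        # must fail
    return exploits_vulnerable and not exploits_patched

# Stub PoC runner for illustration: "exploits" any image tagged vulnerable.
def stub_poc(image: str) -> bool:
    return image.endswith(":vulnerable")
```

Requiring the PoC to fail on the patched image rules out entries whose exploit "succeeds" for reasons unrelated to the CVE, such as environment misconfiguration.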