About

I am a 3rd-year PhD student in Computer Science at UMass Amherst, advised by Amir Houmansadr and co-advised by Eugene Bagdasarian.

My research centers on trustworthy and responsible AI, AI agent security, and AI safety. I study AI agent vulnerabilities, LLM fairness, adversarial robustness, and unlearning. I also mentor undergraduates and support teaching in computer and network security.

My goal is to identify security and privacy risks in AI systems and design defenses that make them robust, transparent, and safe to trust. Just as early internet security shaped the web, we must now do the same for AI.


🔬 Research Interests

  • AI Agent Security – Understanding vulnerabilities in web-browsing AI agents and designing attacks and defenses
  • AI Safety & Trustworthiness – Robust, interpretable systems that remain reliable under adversarial conditions
  • Natural Language Processing – LLMs, bias detection and mitigation
  • Federated Learning & Unlearning – Privacy-preserving algorithms and selective knowledge removal in distributed systems


📢 Recent News

  • πŸ›‘οΈ May 2025 – Researching vulnerabilities in AI Web and Research Agents (WRAs)
  • 🧩 Mar 2025 – Developing an inference attack on unlearning samples
  • 📊 May 2024 – Analyzed bias similarity across 24 LLMs
  • 🎤 Mar 2024 – Presented research on federated unlearning at NESD 2024
  • 📄 Aug 2023 – Authored a survey paper on federated unlearning systems
  • 🎓 Aug 2023 – Started PhD in Computer Science at UMass Amherst
  • 🎓 Jun 2023 – Completed MS in Computer Science at Sungkyunkwan University