2026
An archive of posts from this year
-
Obfuscated Gradients Give a False Sense of Security
Adversarial Robustness paper seminar material
-
Adversarial Examples in the Physical World
Adversarial Robustness paper seminar material
-
Towards Deep Learning Models Resistant to Adversarial Attacks
Adversarial Robustness paper seminar material
-
Intriguing Properties of Neural Networks
Adversarial Robustness paper seminar material
-
Explaining and Harnessing Adversarial Examples
Adversarial Robustness paper seminar material
-
Prof. Hoki Kim's Research Team in the Department of Industrial Security at Chung-Ang University Publishes in a Top 5% SCI Journal
Introducing machine unlearning research results in industrial AI environments
-
Adversarial Retain-Free Unlearning for Bearing Prognostics and Health Management
Machine unlearning and a recent paper from our lab
-
BountyBench: Dollar Impact of AI Agent Attackers and Defenders on Real-World Cybersecurity Systems
LLM Cyber-Attack Bias Benchmark paper seminar material
-
CYBENCH: A Framework for Evaluating Cybersecurity Capabilities and Risks of Language Models
LLM Cyber-Attack Bias Benchmark paper seminar material