2026
an archive of posts from this year
-
Prof. Hoki Kim's Research Team (Department of Industrial Security, Chung-Ang University) Has Three Papers Accepted to ICML 2026
Three papers on AI safety, AI governance, and optimization stability accepted simultaneously to ICML 2026.
-
Prof. Hoki Kim's Research Team Has Three Papers Accepted to ICML 2026
Three papers spanning AI safety, AI governance, and optimization stability accepted simultaneously to ICML 2026.
-
Stability Analysis of Sharpness-Aware Minimization
Introducing our lab's paper: generalization in AI
-
Stability Analysis of Sharpness-Aware Minimization
Generalization and a recent research paper from our lab
-
Proactive Defense Benchmark Against Deepfake Generation
Introducing our lab's paper: proactive deepfake defense
-
Proactive Defense Benchmark Against Deepfake Generation
Proactive deepfake defense and a recent research paper from our lab
-
Position: Current Model Cards Are Insufficient for Downstream Governance of Open-Weight Foundation Models
Introducing our lab's paper: open-weight model governance
-
Position: Current Model Cards Are Insufficient for Downstream Governance of Open-Weight Foundation Models
Open-weight model governance and a recent research paper from our lab
-
Adversarial Training for Free!
Adversarial Robustness Paper Seminar Materials
-
Adversarial Training for Free!
Adversarial Robustness Paper Seminar Materials
-
Adversarial Examples Are Not Bugs, They Are Features
Adversarial Robustness Paper Seminar Materials
-
Adversarial Examples Are Not Bugs, They Are Features
Adversarial Robustness Paper Seminar Materials: reinterpreting adversarial examples as non-robust features learned from data.
-
Theoretically Principled Trade-off between Robustness and Accuracy
Adversarial Robustness Paper Seminar Materials
-
Theoretically Principled Trade-off between Robustness and Accuracy
Adversarial Robustness Paper Seminar Materials
-
Robustness May Be at Odds with Accuracy
Adversarial Robustness Paper Seminar Materials
-
Robustness May Be at Odds with Accuracy
Adversarial Robustness Paper Seminar Materials
-
Code Review: Adversarial Attacks and Defenses
Code review of adversarial attack and defense techniques based on the torchattacks and MAIR libraries
-
Code Review: Adversarial Attacks and Defenses
Line-by-line PyTorch walkthrough of torchattacks and MAIR implementations of adversarial attacks and defenses.
-
Towards Evaluating the Robustness of Neural Networks
Adversarial Robustness Paper Seminar Materials
-
Towards Evaluating the Robustness of Neural Networks
The C&W attacks show that defensive distillation succeeded only by exploiting weaknesses of existing attacks, redefining how adversarial robustness is evaluated.
-
Obfuscated Gradients Give a False Sense of Security
Adversarial Robustness Paper Seminar Materials
-
Obfuscated Gradients Give a False Sense of Security
Adversarial Robustness Paper Seminar Materials
-
Adversarial Examples in the Physical World
Adversarial Robustness Paper Seminar Materials
-
Adversarial Examples in the Physical World
Adversarial Robustness Paper Seminar Materials
-
Towards Deep Learning Models Resistant to Adversarial Attacks
Adversarial Robustness Paper Seminar Materials
-
Towards Deep Learning Models Resistant to Adversarial Attacks
Adversarial Robustness Paper Seminar Materials
-
Intriguing Properties of Neural Networks
Adversarial Robustness Paper Seminar Materials
-
Intriguing Properties of Neural Networks
Adversarial Robustness Paper Seminar Materials
-
Explaining and Harnessing Adversarial Examples
Adversarial Robustness Paper Seminar Materials
-
Explaining and Harnessing Adversarial Examples
Adversarial Robustness Paper Seminar Materials
-
Prof. Hoki Kim's Research Team (Department of Industrial Security, Chung-Ang University) Publishes in a Top 5% SCI Journal
Introducing our machine unlearning research in industrial AI environments
-
Prof. Hoki Kim's Research Team Publishes in Top 5% SCI Journal
Introduction to machine unlearning research in industrial AI environments
-
AI Threats and Trustworthiness
The two layers of AI threats our lab proposes (intrinsic and extrinsic), the key elements for establishing trustworthiness, and domestic and international regulatory trends
-
AI Threats and Trustworthiness
The two layers of AI threats our lab proposes (intrinsic and extrinsic), and the key elements and regulatory trends for making AI trustworthy
-
Adversarial Retain-Free Unlearning for Bearing Prognostics and Health Management
Machine unlearning and a recent research paper from our lab
-
Adversarial Retain-Free Unlearning for Bearing Prognostics and Health Management
Machine unlearning and a recent research paper from our lab
-
BountyBench: Dollar Impact of AI Agent Attackers and Defenders on Real-World Cybersecurity Systems
LLM Cyber-Attack Bias Benchmark Paper Seminar Materials
-
BountyBench: Dollar Impact of AI Agent Attackers and Defenders on Real-World Cybersecurity Systems
LLM Cyber-Attack Bias Benchmark Paper Seminar Materials
-
CYBENCH: A Framework for Evaluating Cybersecurity Capabilities and Risks of Language Models
LLM Cyber-Attack Bias Benchmark Paper Seminar Materials
-
CYBENCH: A Framework for Evaluating Cybersecurity Capabilities and Risks of Language Models
LLM Cyber-Attack Bias Benchmark Paper Seminar Materials