Article
-
ICML 2026
🎉Prof. Hoki Kim's Research Team in the Department of Industrial Security at Chung-Ang University Has Three Papers Accepted to ICML 2026
Three papers spanning AI safety, AI governance, and optimization stability accepted simultaneously to ICML 2026.
-
ICML 2026
🎉Stability Analysis of Sharpness-Aware Minimization
Generalization and a recent paper from our lab
-
ICML 2026
🎉Proactive Defense Benchmark Against Deepfake Generation
Proactive deepfake defense and a recent paper from our lab
-
ICML 2026
🎉Position: Current Model Cards Are Insufficient for Downstream Governance of Open-Weight Foundation Models
Open-weight model governance and a recent paper from our lab
-
📖Adversarial Training for Free!
Adversarial Robustness paper seminar materials
-
📖Adversarial Examples Are Not Bugs, They Are Features
Adversarial Robustness paper seminar materials, reinterpreting adversarial examples as non-robust features learned from the data.
-
📖Theoretically Principled Trade-off between Robustness and Accuracy
Adversarial Robustness paper seminar materials
-
📖Robustness May Be at Odds with Accuracy
Adversarial Robustness paper seminar materials
-
📖Code Review: Adversarial Attacks and Defenses
Line-by-line PyTorch walkthrough of torchattacks and MAIR implementations of adversarial attacks and defenses.
-
📖Towards Evaluating the Robustness of Neural Networks
C&W attacks show that defensive distillation merely masks model weaknesses rather than removing them, redefining how adversarial robustness is evaluated.
-
📖Obfuscated Gradients Give a False Sense of Security
Adversarial Robustness paper seminar materials
-
📖Adversarial Examples in the Physical World
Adversarial Robustness paper seminar materials
-
📖Towards Deep Learning Models Resistant to Adversarial Attacks
Adversarial Robustness paper seminar materials
-
📖Intriguing Properties of Neural Networks
Adversarial Robustness paper seminar materials
-
📖Explaining and Harnessing Adversarial Examples
Adversarial Robustness paper seminar materials
-
IEEE TII 2026 · Top 5%
🎉Prof. Hoki Kim's Research Team Publishes in Top 5% SCI Journal
Introduction to machine unlearning research in industrial AI environments
-
IEEE TII 2026 · Top 5%
🎉Adversarial Retain-Free Unlearning for Bearing Prognostics and Health Management
Machine unlearning and a recent paper from our lab
-
📖BountyBench: Dollar Impact of AI Agent Attackers and Defenders on Real-World Cybersecurity Systems
LLM Cyber-Attack Bias Benchmark paper seminar materials
-
📖CYBENCH: A Framework for Evaluating Cybersecurity Capabilities and Risks of Language Models
LLM Cyber-Attack Bias Benchmark paper seminar materials
-
🎉Grand Prize at the '2025 SW·AI Tech Fair' Outstanding Achievement Presentation
Competition award won by our lab
-
NeurIPS 2025
🎉Prof. Hoki Kim's Paper Accepted at the World's Top AI Conference
Introduction to machine unlearning and a recent paper from our lab
-
NeurIPS 2025
🎉Unlearning-Aware Minimization
Machine unlearning and a recent paper from our lab
-
📖Extracting Robust Models with Uncertain Examples
Model Stealing and Application paper seminar materials
-
📖Perturbing Inputs to Prevent Model Stealing
Model Stealing and Application paper seminar materials
-
📖Preventing Neural Network Weight Stealing via Network Obfuscation
Model Stealing and Application paper seminar materials
-
📖Practical Black-Box Attacks Against Machine Learning
Model Stealing and Application paper seminar materials
-
📖High Accuracy and High Fidelity Extraction of Neural Networks
Model Stealing and Application paper seminar materials
-
📖Hiding CNN Parameters with Guided Grad-CAM
Model Stealing and Application paper seminar materials
-
📖Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks
Model Stealing and Application paper seminar materials
-
📖Data-Free Model Extraction
Model Stealing and Application paper seminar materials
-
📖PRADA: Protecting Against DNN Model Stealing Attacks
Model Stealing and Application paper seminar materials
-
📖Towards Reverse-Engineering Black-Box Neural Networks
Model Stealing and Application paper seminar materials
-
📖Knockoff Nets: Stealing Functionality of Black-Box Models
Model Stealing and Application paper seminar materials
-
📖Stealing Hyperparameters in Machine Learning
Model Stealing and Application paper seminar materials
-
📖Stealing Machine Learning Models via Prediction APIs
Model Stealing and Application paper seminar materials
-
CVPR 2025 Workshop
🎉CaddieSet: A Golf Swing Dataset with Human Joint Features and Ball Information
Introduction to collaborative research from our lab
-
Applied Economics Letters 2025
🎉Explaining Determinants of Bank Failure Prediction via Neural Additive Model
Lab paper introduction: AI explainability
-
EAAI 2024 · Top 10%
🎉Evaluating Practical Adversarial Robustness of Fault Diagnosis Systems via Spectrogram-Aware Ensemble Method
Lab paper introduction: AI robustness
-
📖Unified Gradient-Based Machine Unlearning with Remain Geometry Enhancement
Machine Unlearning paper seminar materials
-
📖Towards Unbounded Machine Unlearning
Machine Unlearning paper seminar materials
-
📖Approximate Data Deletion from Machine Learning Models
Machine Unlearning paper seminar materials
-
📖SalUn: Empowering Machine Unlearning via Gradient-Based Weight Saliency in Both Image Classification and Generation
Machine Unlearning paper seminar materials
-
📖Defensive Unlearning with Adversarial Training for Robust Concept Erasure in Diffusion Models
Machine Unlearning paper seminar materials
-
📖Machine Unlearning of Features and Labels
Machine Unlearning paper seminar materials
-
📖Amnesiac Machine Learning
Machine Unlearning paper seminar materials
-
📖Evaluating Machine Unlearning via Epistemic Uncertainty
Machine Unlearning paper seminar materials
-
NeurIPS 2024
🎉Are Self-Attentions Effective for Time Series Forecasting?
Lab paper introduction: AI explainability
-
✏️Key Elements and Technical Challenges of Trustworthy AI
Concepts of trustworthy AI and an introduction to our lab's key technologies
-
✏️AI Regulations and Trustworthiness
International and domestic AI regulations and AI trustworthiness
-
NeurIPS 2023
🎉Fantastic Robustness Measures: The Secrets of Robust Generalization
Adversarial robustness and a recent paper from our lab