Chen Pin-Yu, Hsieh Cho-Jui. Adversarial Robustness for Machine Learning

  • File in ZIP format
  • 23.09 MB
  • Contains a document in EPUB format
Academic Press/Elsevier, 2023. — 300 p. — ISBN 978-0-12-824020-5.
Adversarial Robustness for Machine Learning summarizes the recent progress on this topic and introduces popular algorithms for adversarial attack, defense and verification. Sections cover adversarial attack, verification and defense, mainly focusing on image classification applications, which are the standard benchmark in the adversarial robustness community. Other sections discuss adversarial examples beyond image classification, threat models beyond testing-time attacks, and applications of adversarial robustness. For researchers, this book provides a thorough literature review that summarizes the latest progress in the area and can serve as a good reference for conducting future research.
The book can also be used as a textbook for graduate courses on adversarial robustness or trustworthy machine learning. While machine learning (ML) algorithms have achieved remarkable performance in many applications, recent studies have demonstrated their lack of robustness against adversarial disturbance. This lack of robustness raises security concerns when ML models are deployed in real applications such as self-driving cars, robotic control, and healthcare systems.
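To make the notion of adversarial disturbance concrete: a white-box attacker can nudge each input pixel in the direction that most increases the classifier's loss, as in the fast gradient sign method (FGSM) sketched below. This is a minimal illustration, not code from the book; the toy linear model, random input, and the eps = 8/255 budget are arbitrary placeholders.

```python
# Minimal FGSM sketch (illustrative only; toy model and random input, not from the book).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy classifier: a 3x32x32 "image" mapped to 10 class logits.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

x = torch.rand(1, 3, 32, 32)   # clean input with pixel values in [0, 1]
y = torch.tensor([3])          # assumed ground-truth label
eps = 8 / 255                  # L-infinity perturbation budget

# Compute the gradient of the loss with respect to the input, not the weights.
x.requires_grad_(True)
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()

# FGSM step: move every pixel by +/- eps in the direction that increases the loss.
x_adv = (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
print("max pixel change:      ", (x_adv - x).abs().max().item())
```

With a trained model, even such a small, visually imperceptible perturbation is often enough to flip the predicted class, which is exactly the fragility that the attack, verification and defense chapters address.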
  • Summarizes the whole field of adversarial robustness for machine learning models
  • Provides a clearly explained, self-contained reference
  • Introduces formulations, algorithms and intuitions
Preliminaries
  Background and motivation
Adversarial attack
  White-box adversarial attacks
  Black-box adversarial attacks
  Physical adversarial attacks
  Training-time adversarial attacks
  Adversarial attacks beyond image classification
Robustness verification
  Overview of neural network verification
  Incomplete neural network verification
  Complete neural network verification
  Verification against semantic perturbations
Adversarial defense
  Overview of adversarial defense
  Adversarial training
  Randomization-based defense
  Certified robustness training
  Adversary detection
  Adversarial robustness of beyond-neural-network models
  Adversarial robustness in meta-learning and contrastive learning
Applications beyond attack and defense
  Model reprogramming
  Contrastive explanations
  Model watermarking and fingerprinting
  Data augmentation for unsupervised machine learning