index.html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta http-equiv="X-UA-Compatible" content="ie=edge">
<link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.2.1/css/bootstrap.min.css" integrity="sha384-GJzZqFGwb1QTTN6wy59ffF1BuGJpLSa9DkKMp0DgiMDm4iYMj70gZWKYbI706tWS" crossorigin="anonymous">
<link rel="stylesheet" href="css/style.css">
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js" integrity="sha256-FgpCb/KJQlLNfOu91ta32o/NMZxltwRo8QtmkMRdAu8=" crossorigin="anonymous"></script>
<script src="https://stackpath.bootstrapcdn.com/bootstrap/4.1.1/js/bootstrap.min.js" integrity="sha384-smHYKdLADwkXOn1EmN1qk/HfnUcbVRZyYmZ4qpPea6sjB/pTJ0euyQp0Mk8ck+5T" crossorigin="anonymous"></script>
<script src="js/script.js"></script>
<title>Adversarial Robustness of Deep Learning</title>
</head>
<body data-spy="scroll" data-target="#main-nav" id="home">
<nav class="navbar navbar-expand-sm bg-dark navbar-dark fixed-top" id="main-nav">
<div>
<button class="navbar-toggler" data-toggle="collapse" data-target="#navbarCollapse">
<span class="navbar-toggler-icon "></span>
</button>
<div class="collapse navbar-collapse navbar-right" id="navbarCollapse">
<ul class="navbar-nav ml-auto">
<li class="nav-item">
<a href="#home" class="nav-link">HOME</a>
</li>
<li class="nav-item">
<a href="#overview" class="nav-link">OVERVIEW</a>
</li>
<li class="nav-item">
<a href="#program" class="nav-link">Program</a>
</li>
<li class="nav-item">
<a href="#presenters" class="nav-link">Presenters</a>
</li>
<li class="nav-item">
<a href="#references" class="nav-link">References</a>
</li>
<li class="nav-item">
<a href="#contact" class="nav-link">CONTACT</a>
</li>
</ul>
</div>
</div>
</nav>
<!-- Carousel -->
<section id="showcase">
<div id="myCarousel" class="carousel slide" data-ride="carousel">
<div class="carousel-inner pt-3">
<div class="carousel-item carousel-image-1 active">
<div class="carousel-caption d-block">
<h3>ICDM 2020 Tutorial</h3>
<h4 style="text-align: center;">Adversarial Robustness of Deep Learning: Theory, Algorithms, and Applications</h4>
<br />
<h4 style="text-align: center;">Tutorial in 20th IEEE International Conference on Data Mining (ICDM 2020) November 17-20, 2020, Sorrento, Italy</h4>
</div>
<div class="container">
<div class="cover-overlay">
</div>
</div>
</div>
</div>
</div>
</section>
<section id="overview" class="mt-5">
<div class="container">
<div class="row px-3">
<div class="col">
<h3 class="mx-auto" style="text-align: center;">TUTORIAL ON ADVERSARIAL ROBUSTNESS OF DEEP LEARNING</h3>
<h4>Overview</h4>
<br />
<h5>Abstract</h5>
<p style="text-align: justify;">This tutorial aims to introduce the fundamentals of adversarial robustness ofdeep learning, presenting a well-structured review of up-to-date techniques toassess the vulnerability of various types of deep learning models to adversarialexamples. This tutorial will particularly highlight state-of-the-art techniques inadversarial attacks and robustness verification of deep neural networks (DNNs).We will also introduce some effective countermeasures to improve robustness ofdeep learning models, with a particular focus on generalisable adversarial train-ing. We aim to provide a comprehensive overall picture about this emergingdirection and enable the community to be aware of the urgency and importanceof designing robust deep learning models in safety-critical data analytical ap-plications, ultimately enabling the end-users to trust deep learning classifiers.We will also summarize potential research directions concerning the adversarialrobustness of deep learning, and its potential benefits to enable accountable andtrustworthy deep learning-based data analytical systems and applications.</p>
<h5>Content</h5>
<p style="text-align: justify;">
The tutorial is organised as follows:
</p>
<ul>
<li>
Introduction to adversarial robustness: this part will introduce the concept of adversarial robustness through examples from computer vision, natural language processing, malware detection, and autonomous systems. Specifically, we will demonstrate the vulnerabilities of various types of deep learning models to different adversarial examples. We will also highlight how the research focus on adversarial robustness differs across communities, i.e., attack, defense and verification.
</li>
<br />
<li>
Adversarial attacks: this part will detail some well-known adversarial attack methods, with the aim of providing insight into why adversarial examples exist and how to generate adversarial perturbations effectively and efficiently. Specifically, we will present five well-established works: FGSM [1], C&W [2], DeepFool [3], JSMA [4], and the zeroth-order attack AutoZOOM [20]. At the end of this part, we will also briefly touch on some recently emerged attacks, including universal adversarial perturbations [21], spatially transformed attacks [7], adversarial patches [22], etc. (A minimal FGSM code sketch follows this list.)
</li>
<br />
<li>
Verification: this part will review state-of-the-art formal verification techniques for checking whether a deep learning model is robust. The techniques to be discussed include constraint-solving-based techniques (MILP, Reluplex [8]), approximation techniques (MaxSens [9], AI2 [10], DeepSymbol), and global-optimisation-based techniques (DLV [11], DeepGO [12], DeepGame [13]). (A minimal interval-propagation code sketch, in the spirit of the approximation techniques, follows this list.)
</li>
<br />
<li>
Adversarial defense: this part will present an overview of state-of-the-art robust optimisation techniques for adversarial defense, with an emphasis on generalisable adversarial training and regularisation methods. In particular, adversarial training with the Fast Gradient Method (FGM) [14], the Projected Gradient Method (PGM) [15], and Wasserstein Risk Minimization (WRM) [16] will be analysed with respect to generalisation guarantees; regularisation techniques that promote training stability and robustness against adversarial examples, such as spectral normalisation [17] and Lipschitz regularisation [23], will also be discussed. (A minimal PGD adversarial training sketch follows this list.)
</li>
</ul>
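<p style="text-align: justify;">To make the attack bullet concrete, the sketch below illustrates the single-step FGSM attack [1]. It is a minimal illustration, assuming a PyTorch classifier <code>model</code> and a batch <code>x</code>, <code>y</code> of inputs in [0, 1] with labels; all names are placeholders rather than tutorial code.</p>
<pre><code>import torch.nn.functional as F

# model, x, y are placeholders: a differentiable classifier and
# one batch of inputs (scaled to [0, 1]) with ground-truth labels.
def fgsm_attack(model, x, y, epsilon=0.03):
    """Perturb x by a single signed-gradient step of size epsilon."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step each input feature in the direction that increases the loss.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
</code></pre>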
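<p style="text-align: justify;">For the verification bullet, the sketch below shows plain interval bound propagation, a simple instance of the approximation style of analysis (it is not Reluplex, MaxSens or AI2 themselves). It assumes <code>weights</code> and <code>biases</code> are lists holding the layer parameters of a fully-connected ReLU network.</p>
<pre><code>import torch

# weights, biases are placeholders: lists of per-layer parameter
# tensors, W of shape (out, in) and b of shape (out,).
def interval_bounds(weights, biases, x, epsilon):
    """Propagate the box [x - epsilon, x + epsilon] through a ReLU
    network, returning sound lower/upper bounds on every output."""
    lower, upper = x - epsilon, x + epsilon
    for i, (W, b) in enumerate(zip(weights, biases)):
        # Positive weights preserve bound order; negative weights swap it.
        W_pos, W_neg = W.clamp(min=0), W.clamp(max=0)
        new_lower = lower @ W_pos.T + upper @ W_neg.T + b
        new_upper = upper @ W_pos.T + lower @ W_neg.T + b
        lower, upper = new_lower, new_upper
        if i + 1 != len(weights):  # ReLU on hidden layers only
            lower, upper = lower.clamp(min=0), upper.clamp(min=0)
    return lower, upper
</code></pre>
<p style="text-align: justify;">If the lower bound of the true class exceeds the upper bounds of all other classes, the network is certifiably robust on that input region; because the bounds over-approximate, the converse need not hold.</p>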
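<p style="text-align: justify;">Finally, for the defense bullet, a minimal sketch of adversarial training with the projected gradient method [15], again with placeholder names (<code>model</code>, <code>optimizer</code>, and a batch <code>x</code>, <code>y</code>).</p>
<pre><code>import torch
import torch.nn.functional as F

def pgd_perturb(model, x, y, epsilon=0.03, alpha=0.007, steps=10):
    """Projected gradient ascent on the loss inside an epsilon ball."""
    x_adv = x + torch.empty_like(x).uniform_(-epsilon, epsilon)
    for _ in range(steps):
        x_adv = x_adv.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        x_adv = x_adv + alpha * x_adv.grad.sign()
        # Project back onto the epsilon ball and the valid input range.
        x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon)
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    """One robust optimisation step: fit the worst-case perturbed batch."""
    x_adv = pgd_perturb(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
</code></pre>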
</div>
</div>
</div>
</section>
<hr class="divider div-transparent mt-1">
<section id="program" class="mt-3">
<div class="container">
<div class="row px-3">
<div class="col">
<h3> Program </h3>
<p>Detailed program and slides will be available soon.</p>
</div>
</div>
</div>
</section>
<hr class="divider div-transparent mt-1">
<section id="presenters" class="mt-3">
<div class="container" style="text-align: justify;">
<div class="row px-3">
<div class="col">
<h3>Presenters</h3>
</div>
</div>
<div class="row mt-3 px-3">
<div class="col-md">
<img src="img/wenjie.jpg" class="rounded-circle">
<p class="mx-auto mt-3 name"><a href="http://wenjieruan.com/">Wenjie Ruan</a></p>
</div>
<div class="col-md">
<img src="img/xinping.jpg" class="rounded-circle">
<p class="mx-auto mt-3 name"><a href="https://sites.google.com/site/xinpingyi00/">Xinping Yi</a></p>
</div>
<div class="col-md">
<img src="img/xiaowei.jpg" class="rounded-circle">
<p class="mx-auto mt-3 name"><a href="https://cgi.csc.liv.ac.uk/~xiaowei/">Xiaowei Huang</a></p>
</div>
</div>
</div>
</section>
<hr class="divider div-transparent mt-1">
<section id="references">
<div class="container">
<div class="row px-3">
<div class="col" style="text-align: justify;">
<h3 class="mx-auto mt-2 mb-3">Key References</h3>
<ul style="list-style-type: none; margin-left: -40px">
<li>
[1] Ian J. Goodfellow, Jonathon Shlens, Christian Szegedy. Explaining and Harnessing Adversarial Examples. ICLR 2015.
</li>
<li>
[2] Nicholas Carlini, David Wagner. Towards Evaluating the Robustness of Neural Networks. IEEE Symposium on Security and Privacy (S&P) 2017.
</li>
<li>
[3] Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Pascal Frossard. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks. CVPR 2016.
</li>
<li>
[4] Nicolas Papernot, et al. The Limitations of Deep Learning in Adversarial Settings. IEEE European Symposium on Security and Privacy (EuroS&P) 2016.
</li>
<li>
[5] Andrew Ilyas, et al. Black-box Adversarial Attacks with Limited Queries and Information. ICML 2018.
</li>
<li>
[6] Qinglong Wang, et al. Adversary Resistant Deep Neural Networks with an Application to Malware Detection. KDD 2017.
</li>
<li>
[7] Chaowei Xiao, Jun-Yan Zhu, Bo Li, Warren He, Mingyan Liu, Dawn Song. Spatially Transformed Adversarial Examples. ICLR 2018.
</li>
<li>
[8] Guy Katz, Clark W. Barrett, David L. Dill, Kyle Julian, Mykel J. Kochenderfer. Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks. CAV 2017.
</li>
<li>
[9] Weiming Xiang, Hoang-Dung Tran, Taylor T. Johnson. Output Reachable Set Estimation and Verification for Multilayer Neural Networks. IEEE Transactions on Neural Networks and Learning Systems, 29(11):5777-5783, 2018.
</li>
<li>
[10] Timon Gehr, Matthew Mirman, Dana Drachsler-Cohen, Petar Tsankov, Swarat Chaudhuri, Martin Vechev. AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation. IEEE Symposium on Security and Privacy (S&P) 2018.
</li>
<li>
[11] Xiaowei Huang, Marta Kwiatkowska, Sen Wang, Min Wu. Safety Verification of Deep Neural Networks. CAV 2017.
</li>
<li>
[12] Wenjie Ruan, Xiaowei Huang, Marta Kwiatkowska. Reachability Analysis of Deep Neural Networks with Provable Guarantees. IJCAI 2018.
</li>
<li>
[13] Min Wu, Matthew Wicker, Wenjie Ruan, Xiaowei Huang, Marta Kwiatkowska. A Game-Based Approximate Verification of Deep Neural Networks with Provable Guarantees. Theoretical Computer Science, 807:298-329, 2020.
</li>
<li>
[14] Alexey Kurakin, Ian Goodfellow, Samy Bengio. Adversarial Machine Learning at Scale. ICLR 2017.
</li>
<li>
[15] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu. Towards Deep Learning Models Resistant to Adversarial Attacks. ICLR 2018.
</li>
<li>
[16] Aman Sinha, Hongseok Namkoong, John Duchi. Certifiable Distributional Robustness with Principled Adversarial Training. ICLR 2018.
</li>
<li>
[17] Farzan Farnia, Jesse Zhang, David Tse. Generalizable Adversarial Training via Spectral Normalization. ICLR 2019.
</li>
<li>
[18] Moustapha Cisse, Piotr Bojanowski, Edouard Grave, Yann Dauphin, Nicolas Usunier. Parseval Networks: Improving Robustness to Adversarial Examples. ICML 2017.
</li>
<li>
[19] Xinping Yi. Asymptotic Singular Value Distribution of Linear Convolutional Layers. arXiv:2006.07117, 2020.
</li>
<li>
[20] Chun-Chen Tu, Paishun Ting, Pin-Yu Chen, Sijia Liu, Huan Zhang, Jinfeng Yi, Cho-Jui Hsieh, Shin-Ming Cheng. AutoZOOM: Autoencoder-Based Zeroth Order Optimization Method for Attacking Black-Box Neural Networks. AAAI 2019.
</li>
<li>
[21] Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, Pascal Frossard. Universal Adversarial Perturbations. CVPR 2017.
</li>
<li>
[22] Simen Thys, Wiebe Van Ranst, Toon Goedemé. Fooling Automated Surveillance Cameras: Adversarial Patches to Attack Person Detection. CVPR Workshops 2019.
</li>
<li>
[23] Aladin Virmaux, Kevin Scaman. Lipschitz Regularity of Deep Neural Networks: Analysis and Efficient Estimation. NeurIPS 2018.
</li>
</ul>
</div>
</div>
</div>
</section>
<hr class="divider div-transparent mt-1">
<section id="contact">
<div class="container">
<div class="row px-3 mt-2 mb-3">
<div class="col">
<h3 class="mb-4">Contact</h3>
<p>Web Builder and Tutorial Assistant:</p>
<p>Han Wu</p>
<p>Email: trust.ai.research@gmail.com</p>
<p>Department of Computer Science</p>
<p>University of Exeter, UK</p>
</div>
</div>
</div>
</section>
<!-- Footer -->
<footer id="main-footer" class="text-center pb-2">
<div class="container">
<div class="row">
<div class="col-md">
<img src="img/exeter.png" class="exeter">
</div>
<div class="col-md">
<img src="img/liverpool.png" class="liverpool">
</div>
</div>
</div>
</footer>
</body>
</html>