<!DOCTYPE html>
<html lang="en">
<head>
<!--Import Google Icon Font-->
<link href="https://fonts.googleapis.com/icon?family=Material+Icons" rel="stylesheet">
<link href="https://fonts.googleapis.com/css?family=Handlee" rel="stylesheet">
<!--Import materialize.css-->
<link type="text/css" rel="stylesheet" href="css/materialize.min.css" media="screen"/>
<link type="text/css" rel="stylesheet" href="css/style.css" media="screen"/>
<link type="text/css" rel="stylesheet" href="css/icon_fonts.css" media="screen"/>
<link rel="icon" type="image/png" href="pics/g922.png">
<!-- awesome icons -->
<!-- <link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.15.4/css/all.css" integrity="sha384-DyZ88mC6Up2uqS4h/KRgHuoeGwBcD4Ng9SiP4dIRy0EXTlnuz47vAwmeGwVChigm" crossorigin="anonymous"/> -->
<script src="https://kit.fontawesome.com/6f5a62b28e.js" crossorigin="anonymous"></script>
<!--Let browser know website is optimized for mobile-->
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
<meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1.0, user-scalable=no"/>
<title>Matteo Pirotta</title>
<!-- Global site tag (gtag.js) - Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id=UA-90538549-1"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', 'UA-90538549-1');
</script>
</head>
<body>
<div class=" blue darken-1">
<!-- photo -->
<div class="container">
<h1 class="header center-on-small-only special-font">Matteo Pirotta</h1>
<div class="row center">
<h4 class="header col s12 light center">Research Scientist at Meta</h4>
</div>
</div>
<!-- NAVIGATION BAR -->
<nav class="blue darken-3">
<div class="nav-wrapper container">
<!--<a id="logo-container" href="#" class="brand-logo white-text"><span class="pubs_me">Matteo Pirotta</span></a>-->
<ul class="right hide-on-med-and-down">
<li><a class="white-text" href="index.html">About Me</a></li>
<li><a class="white-text" href="index.html#news">News</a></li>
<li><a class="white-text" href="#pubs">Publications</a></li>
<li><a class="white-text" href="teaching.html">Teaching</a></li>
<!-- <li><a class="white-text" href="#foot">Links</a></li> -->
<!-- <li><a class="white-text" href="cv_mpirotta.pdf">Résumé</a></li> -->
</ul>
<ul id="nav-mobile" class="side-nav">
<li><a href="index.html#about">About Me</a></li>
<li><a href="index.html#news">News</a></li>
<li><a href="#pubs">Publications</a></li>
<li><a href="teaching.html">Teaching</a></li>
<!-- <li><a href="#foot">Links</a></li> -->
<!-- <li><a href="cv_mpirotta.pdf">Résumé</a></li> -->
</ul>
<a href="#" data-activates="nav-mobile" class="button-collapse"><i class="material-icons">menu</i></a>
</div>
</nav>
</div>
<!-- CONTAINER -->
<div class="container">
<div class="divider"></div>
<div class="section" id="pubs">
<!-- Publications Section -->
<div class="row light">
<div class="col s12 m12 offset-l1 l10">
<!-- <h4 class="center">Publications</h4> -->
<h5 class="center"><a href="https://scholar.google.com/citations?user=6qWcDTAAAAAJ&hl=en"><i class="fa-solid fa-graduation-cap"></i> Google Scholar</a></h5>
<h5>Preprints</h5>
<ul class="pubs_ul">
<li>
Andrea Tirinzoni, <span class="pubs_me">Matteo Pirotta</span>, Alessandro Lazaric:<br>
<span class="pubs_title">
A Fully Problem-Dependent Regret Lower Bound for Finite-Horizon MDPs.
</span> arXiv:2106.13013, 2021. [<a href="https://arxiv.org/abs/2106.13013">arXiv</a>]
</li>
<li>
Yonathan Efroni, Shie Mannor and <span class="pubs_me">Matteo Pirotta</span>:<br>
<span class="pubs_title">Exploration-Exploitation in Constrained MDPs.
</span> arXiv:2003.02189, 2020. [<a href="https://arxiv.org/abs/2003.02189">arXiv</a>]
</li>
</ul>
<h5>Conference Papers</h5>
<ul class="pubs_ul">
<li>
Yunchang Yang, Tianhao Wu, Han Zhong, Evrard Garcelon, <span class="pubs_me">Matteo Pirotta</span>, Alessandro Lazaric, Liwei Wang, Simon S. Du:<br>
<span class="pubs_title">
A Reduction-Based Framework for Conservative Bandits and Reinforcement Learning.
</span> ICLR 2022, Virtual. [<a href="https://openreview.net/forum?id=AcrlgZ9BKed">paper</a>], [<a href="https://arxiv.org/abs/2106.11692">arXiv</a>]
</li>
<li>
Jean Tarbouriech, Omar Darwiche Domingues, Pierre Menard, <span class="pubs_me">Matteo Pirotta</span>, Michal Valko, Alessandro Lazaric:<br>
<span class="pubs_title">
Adaptive Multi-Goal Exploration.
</span> AISTATS 2022, Virtual. [<a href="https://proceedings.mlr.press/v151/tarbouriech22a.html">paper</a>], [<a href="https://arxiv.org/abs/2111.12045">arXiv</a>]
</li>
<li>
Evrard Garcelon, Vashist Avadhanula, Alessandro Lazaric, <span class="pubs_me">Matteo Pirotta</span>:<br>
<span class="pubs_title">
Top K Ranking for Multi-Armed Bandit with Noisy Evaluations.
</span> AISTATS 2022, Virtual. [<a href="https://proceedings.mlr.press/v151/garcelon22b.html">paper</a>], [<a href="https://arxiv.org/abs/2112.06517">arXiv</a>]
</li>
<li>
Evrard Garcelon, Vianney Perchet, <span class="pubs_me">Matteo Pirotta</span>:<br>
<span class="pubs_title">
Homomorphically Encrypted Linear Contextual Bandit.
</span> AISTATS 2022, Virtual. [<a href="https://proceedings.mlr.press/v151/garcelon22a.html">paper</a>], [<a href="https://arxiv.org/abs/2103.09927">arXiv</a>]
</li>
<li>
Evrard Garcelon, Kamalika Chaudhuri, Vianney Perchet, <span class="pubs_me">Matteo Pirotta</span>:<br>
<span class="pubs_title">
Privacy Amplification via Shuffling for Linear Contextual Bandits.
</span> ALT 2022, Virtual. [<a href="https://proceedings.mlr.press/v167/garcelon22a.html">paper</a>], [<a href="https://arxiv.org/abs/2112.06008">arXiv</a>]
</li>
<li>
Matteo Papini, Andrea Tirinzoni, Aldo Pacchiano, Marcello Restelli, Alessandro Lazaric, <span class="pubs_me">Matteo Pirotta</span>:<br>
<span class="pubs_title">
Reinforcement Learning in Linear MDPs: Constant Regret and Representation Selection.
</span> NeurIPS 2021, Virtual. [<a href="https://papers.nips.cc/paper/2021/hash/8860e834a67da41edd6ffe8a1c58fa55-Abstract.html">paper</a>], [<a href="https://arxiv.org/abs/2110.14798">arXiv</a>]
</li>
<li>
Jean Tarbouriech, Runlong Zhou, Simon S. Du, <span class="pubs_me">Matteo Pirotta</span>, Michal Valko, Alessandro Lazaric:<br>
<span class="pubs_title">
Stochastic Shortest Path: Minimax, Parameter-Free and Towards Horizon-Free Regret.
</span> NeurIPS 2021, Virtual. [<a href="https://proceedings.neurips.cc/paper/2021/hash/367147f1755502d9bc6189f8e2c3005d-Abstract.html">paper</a>], [<a href="https://arxiv.org/abs/2104.11186">arXiv</a>]
</li>
<li>
Jean Tarbouriech, <span class="pubs_me">Matteo Pirotta</span>, Michal Valko and Alessandro Lazaric:<br>
<span class="pubs_title">
A Provably Efficient Sample Collection Strategy for Reinforcement Learning.
</span> NeurIPS 2021, Virtual. [<a href="https://proceedings.neurips.cc/paper/2021/hash/3e98410c45ea98addec555019bbae8eb-Abstract.html">paper</a>], [<a href="https://arxiv.org/abs/2007.06437">arXiv</a>]
</li>
<li>
Evrard Garcelon, Vianney Perchet, Ciara Pike-Burke and <span class="pubs_me">Matteo Pirotta</span>:<br>
<span class="pubs_title">
Local Differentially Private Regret Minimization in Reinforcement Learning.
</span> NeurIPS 2021, Virtual. [<a href="https://proceedings.neurips.cc/paper/2021/hash/580760fb5def6e2ca8eaf601236d5b08-Abstract.html">paper</a>], [<a href="https://arxiv.org/abs/2010.07778">arXiv</a>]
</li>
<li>
Matteo Papini, Andrea Tirinzoni, Marcello Restelli, Alessandro Lazaric, <span class="pubs_me">Matteo Pirotta</span>:<br>
<span class="pubs_title">
Leveraging Good Representations in Linear Contextual Bandits.
</span> ICML 2021, Virtual. [<a href="https://proceedings.mlr.press/v139/papini21a.html">paper</a>], [<a href="https://arxiv.org/abs/2104.03781">arXiv</a>]
</li>
<li>
Omar Darwiche Domingues, Pierre Menard, <span class="pubs_me">Matteo Pirotta</span>, Emilie Kaufmann and Michal Valko:<br>
<span class="pubs_title">
Kernel-Based Reinforcement Learning: A Finite-Time Analysis.
</span> ICML 2021, Virtual. [<a href="https://arxiv.org/abs/2004.05599">arXiv</a>]
</li>
<li>
Jean Tarbouriech, <span class="pubs_me">Matteo Pirotta</span>, Michal Valko and Alessandro Lazaric:<br>
<span class="pubs_title">
Sample Complexity Bounds for Stochastic Shortest Path with a Generative Model.
</span> ALT 2021, Virtual. [<a href="http://proceedings.mlr.press/v132/tarbouriech21a.html">paper</a>]
</li>
<li>
Omar Darwiche Domingues, Pierre Menard, <span class="pubs_me">Matteo Pirotta</span>, Emilie Kaufmann and Michal Valko:<br>
<span class="pubs_title">
A Kernel-Based Approach to Non-Stationary Reinforcement Learning in Metric Spaces.
</span> AISTATS 2021, Virtual. [<a href="https://arxiv.org/abs/2007.05078">arXiv</a>]
</li>
<li>
Andrea Tirinzoni, <span class="pubs_me">Matteo Pirotta</span>, Marcello Restelli and Alessandro Lazaric:<br>
<span class="pubs_title">
An Asymptotically Optimal Primal-Dual Incremental Algorithm for Linear Contextual Bandits.
</span> NeurIPS 2020, Virtual. [<a href="https://arxiv.org/abs/2010.12247">arXiv</a>]
</li>
<li>
Jean Tarbouriech, <span class="pubs_me">Matteo Pirotta</span>, Michal Valko and Alessandro Lazaric:<br>
<span class="pubs_title">
Improved Sample Complexity for Incremental Autonomous Exploration in MDPs.
</span> NeurIPS 2020, Virtual. [<a href="https://arxiv.org/abs/2012.14755">arXiv</a>]
</li>
<li>
Evrard Garcelon, Baptiste Roziere, Laurent Meunier, Jean Tarbouriech, Olivier Teytaud, Alessandro Lazaric and <span class="pubs_me">Matteo Pirotta</span>:<br>
<span class="pubs_title">
Adversarial Attacks on Linear Contextual Bandits.
</span> NeurIPS 2020, Virtual. [<a href="https://arxiv.org/abs/2002.03839">arXiv</a>]
</li>
<li>
Jean Tarbouriech, Shubhanshu Shekhar, <span class="pubs_me">Matteo Pirotta</span>, Mohammad Ghavamzadeh, Alessandro Lazaric:<br>
<span class="pubs_title">
Active Model Estimation in Markov Decision Processes.
</span>
UAI 2020, Virtual. [<a href="https://arxiv.org/abs/2003.03297">arXiv</a>], [<a href="http://proceedings.mlr.press/v124/tarbouriech20a.html">paper</a>]
</li>
<li>
Evrard Garcelon, Mohammad Ghavamzadeh, Alessandro Lazaric and <span class="pubs_me">Matteo Pirotta</span>:<br>
<span class="pubs_title">
Conservative Exploration in Reinforcement Learning.
</span>
AISTATS 2020, Palermo, Italy. [<a href="https://arxiv.org/abs/2002.03218">arXiv</a>]
</li>
<li>
Andrea Zanette, David Brandfonbrener, Emma Brunskill, <span class="pubs_me">Matteo Pirotta</span> and Alessandro Lazaric:<br>
<span class="pubs_title">
Frequentist Regret Bounds for Randomized Least-Squares Value Iteration.
</span>
AISTATS 2020, Palermo, Italy. [<a href="https://arxiv.org/abs/1911.00567">arXiv</a>]
</li>
<li>
Evrard Garcelon, Mohammad Ghavamzadeh, Alessandro Lazaric and <span class="pubs_me">Matteo Pirotta</span>:<br>
<span class="pubs_title">
Improved Algorithms for Conservative Exploration in Bandits.
</span>
AAAI 2020, New York, USA. [<a href="https://arxiv.org/abs/2002.03221">arXiv</a>]
</li>
<li>
Ronald Ortner, <span class="pubs_me">Matteo Pirotta</span>, Alessandro Lazaric, Ronan Fruit and Odalric-Ambrym Maillard:<br>
<span class="pubs_title">
Regret Bounds for Learning State Representations in Reinforcement Learning.
</span>
NeurIPS 2019, Vancouver, Canada.
</li>
<li>
Jian Qian, Ronan Fruit, <span class="pubs_me">Matteo Pirotta</span> and Alessandro Lazaric:<br>
<span class="pubs_title">
Exploration Bonus for Regret Minimization in Discrete and Continuous Average Reward MDPs.
</span>
NeurIPS 2019, Vancouver, Canada.
[<a href="https://arxiv.org/abs/1812.04363" target="_blank">arXiv</a>] [<a href="https://papers.nips.cc/paper/8735-exploration-bonus-for-regret-minimization-in-discrete-and-continuous-average-reward-mdps">Paper</a>]
</li>
<li>
Ronan Fruit, <span class="pubs_me">Matteo Pirotta</span> and Alessandro Lazaric:<br>
<span class="pubs_title">Near Optimal Exploration-Exploitation in Non-Communicating Markov Decision Processes.
</span>
NeurIPS 2018, Montréal, Canada.
[<a href="https://arxiv.org/abs/1807.02373" target="_blank">arXiv</a>] [<a href="http://papers.nips.cc/paper/7563-near-optimal-exploration-exploitation-in-non-communicating-markov-decision-processes">Paper</a>]
</li>
<li>
Ronan Fruit, <span class="pubs_me">Matteo Pirotta</span>, Alessandro Lazaric and Ronald Ortner:<br>
<span class="pubs_title">Efficient Bias-Span-Constrained Exploration-Exploitation in Reinforcement Learning.
</span> ICML 2018, Stockholm, Sweden. [<a href="https://arxiv.org/abs/1802.04020">arXiv</a>]
</li>
<li>
Matteo Papini, Damiano Binaghi, Giuseppe Canonaco, <span class="pubs_me">Matteo Pirotta</span> and Marcello Restelli:<br>
<span class="pubs_title">Stochastic Variance-Reduced Policy Gradient.
</span> ICML 2018, Stockholm, Sweden. [<a href="https://arxiv.org/abs/1806.05618">arXiv</a>] [<a href="http://proceedings.mlr.press/v80/papini18a.html">Paper</a>]
</li>
<li>
Andrea Tirinzoni, Andrea Sessa, <span class="pubs_me">Matteo Pirotta</span> and Marcello Restelli:<br>
<span class="pubs_title">Importance Weighted Transfer of Samples in Reinforcement Learning.
</span> ICML 2018, Stockholm, Sweden. [<a href="https://arxiv.org/abs/1805.10886">arXiv</a>] [<a href="http://proceedings.mlr.press/v80/tirinzoni18a.html">Paper</a>]
</li>
<li>
Davide Di Febbo, Emilia Ambrosini, <span class="pubs_me">Matteo Pirotta</span>, Eric Rojas, Marcello Restelli, Alessandra Pedrocchi and Simona Ferrante:<br>
<span class="pubs_title">Does Reinforcement Learning Outperform PID in the Control of FES Induced Elbow Flex-Extension?</span> MeMeA 2018, Rome, Italy.
</li>
<li>Ronan Fruit, <span class="pubs_me">Matteo Pirotta</span>, Alessandro Lazaric, and Emma Brunskill:<br>
<span class="pubs_title">Regret Minimization in MDPs with Options without Prior Knowledge.
</span>
NIPS 2017, Long Beach, California, USA.
[<a href="http://fruit.nom.fr/WordPress3/wp-content/uploads/2016/12/poster_options.pdf" target="_blank">Poster</a>]
[<a href="https://hal.inria.fr/hal-01649082/" target="_blank">Full Paper</a>]
</li>
<li>Alberto Metelli, <span class="pubs_me">Matteo Pirotta</span>, and Marcello Restelli:<br>
<span class="pubs_title">Compatible Reward Inverse Reinforcement Learning.
</span>
NIPS 2017, Long Beach, California, USA.
[<a href="https://albertometelli.github.io/download/poster_nips2017.pdf" target="_blank">Poster</a>]
[<a href="https://papers.nips.cc/paper/6800-compatible-reward-inverse-reinforcement-learning" target="_blank">Paper</a>]
</li>
<li>Matteo Papini, <span class="pubs_me">Matteo Pirotta</span>, and Marcello Restelli:<br>
<span class="pubs_title">Adaptive Batch Size for Safe Policy Gradients.
</span>
NIPS 2017, Long Beach, California, USA.
[<a href="https://t3p.github.io/download/poster_NIPS17.pdf" target="_blank">Poster</a>]
[<a href="https://papers.nips.cc/paper/6950-adaptive-batch-size-for-safe-policy-gradients" target="_blank">Paper</a>]
</li>
<li>Davide Tateo, <span class="pubs_me">Matteo Pirotta</span>, Andrea Bonarini and Marcello Restelli:<br>
<span class="pubs_title">Gradient-Based Minimization for Multi-Expert Inverse Reinforcement Learning.
</span>
IEEE SSCI 2017, Hawaii, USA.
</li>
<li>Samuele Tosatto, <span class="pubs_me">Matteo Pirotta</span>, Carlo D'Eramo, and Marcello Restelli:<br>
<span class="pubs_title">Boosted Fitted Q-Iteration.
</span>
ICML 2017, Sydney, New South Wales, Australia.
</li>
<li>Carlo D'Eramo, Alessandro Nuara, <span class="pubs_me">Matteo Pirotta</span>, and Marcello Restelli:<br>
<span class="pubs_title">Estimating the Maximum Expected Value in Continuous Reinforcement Learning Problems.
</span>
AAAI 2017, San Francisco, California, USA.
</li>
<li><span class="pubs_me">Matteo Pirotta</span>, and Marcello Restelli:<br>
<span class="pubs_title">Inverse Reinforcement Learning through Policy Gradient Minimization.
</span>
AAAI 2016, Phoenix, Arizona, USA.
</li>
<li><span class="pubs_me">Matteo Pirotta</span>, Simone Parisi, and Marcello Restelli:<br>
<span class="pubs_title">Multi-Objective Reinforcement Learning with Continuous Pareto Frontier Approximation.
</span>
AAAI 2015, Austin, Texas, USA.
</li>
<li>Danilo Caporale, Luca Deori, Roberto Mura, Alessandro Falsone, Riccardo Vignali, Luca Giulioni,
<span class="pubs_me">Matteo Pirotta</span> and Giorgio Manganini:<br>
<span class="pubs_title">Optimal Control to Reduce Emissions in Gasoline
Engines: An Iterative Learning Control Approach for ECU Calibration Maps Improvement.
</span>
ECC 2015, Linz, Austria.
</li>
<li>Giorgio Manganini, <span class="pubs_me">Matteo Pirotta</span>, Marcello Restelli, Luca Bascetta:<br>
<span class="pubs_title">Following Newton
Direction in Policy Gradient with Parameter Exploration.
</span>
IJCNN 2015, Killarney, Ireland.
</li>
<li>Simone Parisi, <span class="pubs_me">Matteo Pirotta</span>, Nicola Smacchia, Luca Bascetta, Marcello Restelli:<br>
<span class="pubs_title">Policy
Gradient Approaches for Multi-Objective Sequential Decision Making: A Comparison.
</span>
ADPRL 2014, Orlando, Florida, USA.
</li>
<li>Simone Parisi, <span class="pubs_me">Matteo Pirotta</span>, Nicola Smacchia, Luca Bascetta and Marcello Restelli:<br>
<span class="pubs_title">Policy
Gradient Approaches for Multi-Objective Sequential Decision Making.
</span>
IJCNN 2014, Beijing, China.
</li>
<li><span class="pubs_me">Matteo Pirotta</span>, Giorgio Manganini, Luigi Piroddi, Maria Prandini and Marcello Restelli:<br>
<span class="pubs_title">A particle-based policy for the optimal control of Markov decision processes.</span>
IFAC 2014, Cape Town, South Africa.
</li>
<li><span class="pubs_me">Matteo Pirotta</span>, Marcello Restelli, Luca Bascetta:<br>
<span class="pubs_title">Adaptive Step-Size for Policy Gradient Methods.
</span>
NIPS 2013, Lake Tahoe, Nevada, USA.
</li>
<li><span class="pubs_me">Matteo Pirotta</span>, Marcello Restelli, Alessio Pecorino, and Daniele Calandriello:<br>
<span class="pubs_title">Safe policy iteration.</span>
ICML 2013, Atlanta, Georgia, USA. [<a href="http://proceedings.mlr.press/v28/pirotta13.html" target="_blank">Paper</a>]
</li>
<li>Martino Migliavacca, Alessio Pecorino, <span class="pubs_me">Matteo Pirotta</span>,
Marcello Restelli, and Andrea Bonarini:<br>
<span class="pubs_title">
Fitted Policy Search.
</span>
ADPRL 2011, Paris, France.
</li>
<li>Martino Migliavacca, Alessio Pecorino, <span class="pubs_me">Matteo Pirotta</span>, Marcello Restelli, and Andrea Bonarini:<br>
<span class="pubs_title">
Fitted Policy Search: Direct Policy Search using a Batch Reinforcement Learning Approach.
</span>
ERLARS 2010, Lisbon, Portugal.
</li>
</ul>
<h5>Journal Papers</h5>
<ul class="pubs_ul">
<li>
Alberto Maria Metelli, <span class="pubs_me">Matteo Pirotta</span>, Daniele Calandriello, Marcello Restelli:<br>
<span class="pubs_title">
Safe Policy Iteration: A Monotonically Improving Approximate Policy Iteration Approach.
</span>
JMLR 22(97), 2021. [<a href="https://www.jmlr.org/papers/v22/19-707.html">Paper</a>]
</li>
<li>
Simone Parisi, <span class="pubs_me">Matteo Pirotta</span> and Jan Peters:<br>
<span class="pubs_title">Manifold-based Multi-objective Policy Search with Sample Reuse.
</span>
Neurocomputing 263, 2017. [<a href="https://www.sciencedirect.com/science/article/pii/S0925231217310986">Paper</a>]
</li>
<li>
Giorgio Manganini, <span class="pubs_me">Matteo Pirotta</span>, Marcello Restelli, Luigi Piroddi, and
Maria Prandini:<br>
<span class="pubs_title">Policy search for the optimal control of Markov decision processes:
a novel particle-based iterative scheme.
</span>
IEEE Transactions on Cybernetics 46, 2016. [<a href="https://ieeexplore.ieee.org/document/7303937/">Paper</a>]
</li>
<li>
Simone Parisi, <span class="pubs_me">Matteo Pirotta</span> and Marcello Restelli:<br>
<span class="pubs_title">Multi-objective Reinforcement Learning through Continuous
Pareto Manifold Approximation.
</span>
Journal of Artificial Intelligence Research 57, 2016. [<a href="https://jair.org/index.php/jair/article/view/11026">Paper</a>]
</li>
<li>
<span class="pubs_me">Matteo Pirotta</span>, Marcello Restelli and Luca Bascetta:<br>
<span class="pubs_title">Policy Gradient in Lipschitz Markov Decision Processes.</span>
Machine Learning 100, 2015. [<a href="https://link.springer.com/article/10.1007/s10994-015-5484-1">Paper</a>]
</li>
</ul>
<!--
<h5>Workshops Papers</h5>
<ul class="pubs_ul">
<li>Ronan Fruit, <span class="pubs_me">Matteo Pirotta</span>, Alessandro Lazaric and Emma Brunskill:<br>
<span class="pubs_title">
Regret Minimization in MDPs with Options without Prior Knowledge.
</span>
<a href="http://rlabstraction2016.wixsite.com/icml-2017">Lifelong Learning: A Reinforcement Learning Approach</a>, ICML 2017 Workshop, Sydney, Australia.
[<a href="https://drive.google.com/file/d/0B9dqzboiV5u-Z2h0cHVWa2JwWUk/view">PDF</a>, <a href="./files/2017/ICML_LLL_options.pdf">PDF2</a>]
</li>
<li><span class="pubs_me">Matteo Pirotta</span>, and Marcello Restelli:<br>
<span class="pubs_title">
Cost-Sensitive Approach for Batch Size Optimization.
</span>
<a href="http://probabilistic-numerics.org/meetings/NIPS2016/">Optimizing the optimizers</a>, NIPS 2016 Workshop, Barcelona, Spain.
[<a href="http://probabilistic-numerics.org/assets/pdf/NIPS2016/Pirotta_Restelli.pdf">PDF</a>]
</li>
</ul>-->
<h5>Technical Reports</h5>
<ul class="pubs_ul">
<li>
Pierre-Alexandre Kamienny, <span class="pubs_me">Matteo Pirotta</span>, Alessandro Lazaric, Thibault Lavril, Nicolas Usunier, Ludovic Denoyer:<br>
<span class="pubs_title">
Learning Adaptive Exploration Strategies in Dynamic Environments Through Informed Policy Regularization.
</span> arXiv:2005.02934, 2020. [<a href="https://arxiv.org/abs/2005.02934">arXiv</a>]
</li>
<li>
Ronan Fruit, <span class="pubs_me">Matteo Pirotta</span> and Alessandro Lazaric:<br>
<span class="pubs_title">Improved Analysis of UCRL2 with empirical Bernstein bounds.
</span> ALT Tutorial, 2019. [<a href="https://rlgammazero.github.io/docs/ucrl2b_improved.pdf">PDF</a>]
</li>
<li>
Jian Qian, Ronan Fruit, <span class="pubs_me">Matteo Pirotta</span> and Alessandro Lazaric:<br>
<span class="pubs_title">Concentration Inequalities for Multinoulli Random Variables.
</span> ALT Tutorial, 2019. [<a href="https://arxiv.org/abs/2001.11595">arXiv</a>]
</li>
<li>
<span class="pubs_me">Matteo Pirotta</span> and Marcello Restelli:<br>
<span class="pubs_title">Cost-Sensitive Approach to Batch Size Adaptation for Gradient Descent.
</span> <a href="http://probabilistic-numerics.org/meetings/NIPS2016/">Optimizing the optimizers</a>, NIPS 2016 Workshop, Barcelona, Spain. [<a href="https://arxiv.org/abs/1712.03428">arXiv</a>]
</li>
</ul>
</div>
</div>
</div><!-- section pubs -->
</div><!-- container -->
<footer class="page-footer blue darken-3" id="foot">
<div class="container">
<div class="row">
<div class="col offset-l1 l6 s8">
<h5 class="white-text">Office</h5>
<p class="grey-text text-lighten-4">
Meta<br>
Paris, France
</p>
</div>
<div class="col offset-l2 l3 s4">
<h5 class="white-text">Links</h5>
<ul>
<li><a class="white-text" target="_blank" href="https://it.linkedin.com/in/matteo-pirotta-4593a994">Linkedin</a></li>
<li><a class="white-text" target="_blank" href="https://scholar.google.com/citations?user=6qWcDTAAAAAJ&hl=en">Google Scholar</a></li>
<li><a class="white-text" target="_blank" href="http://arxiv.org/find/grp_cs/1/au:+pirotta_matteo/0/1/0/all/0/1">Arxiv</a></li>
<li><a class="white-text" target="_blank" href="https://dblp.org/pid/137/3249.html">dblp</a></li>
</ul>
</div>
</div>
</div>
<div class="footer-copyright">
<div class="container">
<div class="row">
<div class="col offset-l1 l5 s6">
<p class="left">
Powered by <a class="grey-text text-lighten-3" href="http://materializecss.com">Materialize</a>
</p>
</div>
<div class="col l5 s6">
<p class="right">
Hosted on <a class="grey-text text-lighten-4" href="https://pages.github.com/">GitHub Pages</a>
</p>
</div>
</div>
</div>
</div>
</footer>
<!--Import jQuery before materialize.js-->
<script src="https://code.jquery.com/jquery-2.1.1.min.js"></script>
<script src="js/materialize.min.js"></script>
<script src="js/init.js"></script>
<script async src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/MathJax.js?config=TeX-MML-AM_CHTML"></script>
</body>
</html>