<!DOCTYPE HTML>
<!--
Mexican Conference on Pattern Recognition 2021
-->
<html>
<head>
<title>MCPR | Keynote Speakers</title>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<link rel="stylesheet" href="assets/css/main.css" />
<link rel="icon" type="image/png" href="/assets/images/favicon/img-inaoe-192.png" sizes="192x192">
<link rel="icon" type="image/png" href="/assets/images/favicon/img-inaoe-110.png" sizes="110x110">
<link rel="icon" type="image/png" href="/assets/images/favicon/img-inaoe-96.png" sizes="96x96">
<link rel="icon" type="image/png" href="/assets/images/favicon/img-inaoe-32.png" sizes="32x32">
<link rel="icon" type="image/png" href="/assets/images/favicon/img-inaoe-16.png" sizes="16x16">
<style>
.bio h2 {color: black; margin-top: 23px; margin-bottom: 23px;}
.name-tag {text-align: center; font-size: 34px;}
.title-tag {text-align: left; font-size: 30px;}
.bio a {color: black; display: inline; width: auto; height: auto; background-color: rgba(0,0,0,0); border-radius: 0px;
font-size: inherit; margin-bottom: 0px; margin-top: 0px; padding: 0px; text-decoration: underline;}
.bio a:hover {color:white;}
.left-img {float: left; margin-right: 23px;}
.right-img {float: right; margin-left: 23px;}
.bio p {text-align: justify;}
hr {clear:both; width: 100%; margin-left: auto; margin-right: auto; background-color:#424242;
height: 2px; margin-top: 23px; margin-bottom: 23px;}
@media only screen and (max-width: 600px) {
.potrait {float: none; margin-bottom: 23px; width: 80%; height: 80%; margin-left: auto; margin-right: auto;}
}
</style>
</head>
<body>
<div id="page-wrapper">
<!--Header-->
<div class="header" id="testsub">
</div>
<div class="header-text">
<h1>Mexican Conference on Pattern Recognition</h1>
<br>
<h2>Mexico City, Mexico<br>June 23 - 26, 2021</h2>
</div>
<!--Body-->
<div class="container">
<div class="logos">
<img src="assets/images/logos/img-ITAM-logo.png" alt="ITAM">
<img src="assets/images/logos/img-INAOE-logo.png" alt="INAOE">
<img src="assets/images/logos/img-LNCS-logo.jpg" alt="LNCS">
</div>
<div class="text-area">
<h1>Keynote Speakers</h1>
<hr>
<div class="bio">
<h2 class="name-tag">Lei Zhang</h2>
<img src="assets/images/keynote/img-leiZhang.jpeg" width="324" height="324" alt="Lei Zhang" class="left-img potrait">
<p>Lei Zhang is a Principal Researcher and Research Manager of the computer vision research
group in Microsoft Cloud &amp; AI, leading a team working on large-scale visual recognition.
The team has made a significant impact on Microsoft Cognitive Services, including image
tagging, object detection, entity recognition, and image captioning. Prior to this, he worked
at Microsoft Research Asia for 12 years as a Senior Researcher and later at Bing
Multimedia Search for 2 years as a Principal Engineering Manager. He is an IEEE Fellow,
has published 150+ papers, and holds 50+ U.S. patents for his innovations in related fields.</p>
<h2 class="title-tag">Lecture: Aligned representation learning for vision-language understanding</h2>
<p>Vision-language pre-training (VLP) has proved effective for a wide range of vision-language
tasks thanks to its great capability of learning transferable representations from large-scale
multimodal data. The fundamental problem in this space is learning multimodality-aligned
representations from large-scale but noisy data. In this talk, I will introduce a series of
research works towards this goal, including how to set up the pre-training tasks, how to use
object detection results as anchor points to bridge the modality gap, how to leverage unpaired
data to learn a visual vocabulary for better generalization, and how to improve visual representation
by object detection pre-training. I will present extensive image captioning examples and analysis
to provide insights on the effectiveness of the learned VL-aligned representation.</p>
</div>
<hr>
<div class="bio">
<h2 class="name-tag">Alessandro Vinciarelli</h2>
<img src="assets/images/keynote/img-alessandroVinciarelli.jpg" width="324" height="324" alt="Alessandro Vinciarelli" class="right-img potrait">
<p><a href="http://vinciarelli.net" target="_blank">Alessandro Vinciarelli</a> is with the University of
Glasgow, where he is a Full Professor at the School of Computing Science and an Associate Academic at the
Institute of Neuroscience and Psychology. His main research interest is Social Signal Processing, the
domain aimed at the modelling, analysis, and synthesis of nonverbal behaviour in social interactions.
Overall, Alessandro has published more than 150 works, including one authored book and 43 journal
papers. He is currently Director and Principal Investigator of the UKRI Centre for Doctoral Training
in <a href="http://socialcdt.org" target="_blank">Socially Intelligent Artificial Agents</a> and, in the past, was
Coordinator of the Network of Excellence in Social Signal Processing. Furthermore, he is or has been
PI or co-PI of more than 15 national and international research projects. Alessandro has been general
chair of two conferences (the IEEE International Conference on Social Computing in 2012 and the ACM International
Conference on Multimodal Interaction in 2017) and of more than 25 international events. Last, but not
least, Alessandro is co-founder of <a href="https://www.klewel.com" target="_blank">Klewel</a>, a knowledge management company recognized
with national and international awards.</p>
<h2 class="title-tag">Lecture: An introduction to Social AI</h2>
<p>Over the last 20 years, the AI community has made major efforts aimed at making machines socially
intelligent, i.e., capable of interacting with people as people do with one another. The key idea
underlying these efforts, encompassed by the collective name Social AI, is that social and
psychological phenomena leave physical, machine-detectable traces in people’s behaviour, at both
the verbal (what they say) and nonverbal (how they say it) levels. This talk is an introduction to
the main principles and problems that Social AI researchers address when developing technologies
for the automatic analysis of social and psychological phenomena such as emotions, personality,
conflict, mental health issues, etc. In particular, the talk will try to highlight the differences
between using AI to address such problems and using it to address other problems.</p>
</div>
<hr>
<div class="bio">
<h2 class="name-tag">Julian Fierrez</h2>
<img src="assets/images/keynote/img-julianFierrez.jpg" width="324" height="480" alt="Julian Fierrez" class="left-img potrait">
<p>Prof. Julian Fierrez, Universidad Autonoma de Madrid, Spain. Julian Fierrez received the MSc and
PhD degrees in telecommunications engineering from Universidad Politecnica de Madrid, Spain,
in 2001 and 2006, respectively. Since 2004 he has been with Universidad Autonoma de Madrid, where
he has been an Associate Professor since 2010. From 2007 to 2009 he was a visiting researcher at Michigan
State University in the USA under a Marie Curie fellowship. His research covers signal and image
processing, AI fundamentals and applications, HCI, forensics, and biometrics for security and human
behavior analysis. He is actively involved in large EU projects on these topics (e.g., BIOSECURE,
TABULA RASA, and BEAT in the past; now IDEA-FAST, PRIMA, and TRESPASS-ETN). Since 2016 he has been
an Associate Editor of Elsevier's Information Fusion and IEEE Trans. on Information Forensics and
Security, and since 2018 also of IEEE Trans. on Image Processing. He has been General Chair of the
IAPR Iberoamerican Congress on Pattern Recognition (CIARP 2018) and the Iberian Conference on
Pattern Recognition and Image Analysis (IbPRIA 2019). Since 2020 he has been a member of the ELLIS Society.
Prof. Fierrez has received best paper awards at AVBPA, ICB, IJCB, ICPR, ICPRS, and Pattern Recognition
Letters. He is also the recipient of several world-class research distinctions, including the EBF European
Biometric Industry Award 2006, the EURASIP Best PhD Award 2012, a Medal in the Young Researcher Awards 2015
from the Spanish Royal Academy of Engineering, and the Miguel Catalan Award to the Best Researcher under
40 in the Community of Madrid in the general area of Science and Technology. In 2017 he was also awarded
the IAPR Young Biometrics Investigator Award, given every two years to a single researcher worldwide
under the age of 40 whose research work has had a major impact in <a href="http://biometrics.eps.uam.es" target="_blank">biometrics</a>.</p>
<h2 class="title-tag">Lecture: Biases in Machine Learning and Responsible Artificial Intelligence</h2>
<p>In the last few years, we have witnessed growing interest in the Artificial Intelligence research
community in studying bias effects when machine learning methods are applied to large amounts of data.
These bias effects can stem from the data itself or from the learning process, which nowadays is
clearly dominated by deep learning methods that are, most of the time, quite opaque. When those learning
processes are related to AI applications dealing with personal information, or whose application
affects people's lives, biases can result in unfair AI-based automated decision-making processes
that are very harmful in terms of undesired discrimination among population groups. This keynote will
discuss the current state of the topic, with special emphasis on AI applications involving face
biometrics. Recent methods and approaches to reduce undesired discrimination towards fair biometrics
will also be discussed.</p>
</div>
<hr>
</div>
</div>
<!--Nav-->
<div class="nav">
<div class="dropdown">
<button class="dropbtn">Main</button>
<div class="dropdown-content">
<a href="index.html">Home</a>
<a href="cfp.html">Call for Papers</a>
<a href="keynote.html">Keynote Speakers</a>
<a href="scientificCommittee.html">Scientific Committee</a>
<a href="organization.html">Organization</a>
</div>
</div><div class="dropdown">
<button class="dropbtn">Authors</button>
<div class="dropdown-content">
<a href="submission.html">Submission guidelines</a>
<a href="proceedings.html">Proceedings</a>
<a href="importantDates.html">Important dates</a>
<a href="program.html">Conference Program</a>
</div>
</div><div class="dropdown">
<button class="dropbtn">Attendance</button>
<div class="dropdown-content">
<a href="venue.html">Conference venue</a>
<a href="registration.html">Registration</a>
<a href="contact.html">Contact</a>
</div>
</div><div class="dropdown">
<button class="dropbtn">Previous MCPRs</button>
<div class="dropdown-content">
<a href="https://link.springer.com/conference/mcpr2" target="_blank">Previous MCPR proceedings</a>
<a href="https://ccc.inaoep.mx/~mcpr2020/" target="_blank">MCPR 2020</a>
<a href="https://ccc.inaoep.mx/~mcpr2019/" target="_blank">MCPR 2019</a>
<a href="https://ccc.inaoep.mx/~mcpr2018/" target="_blank">MCPR 2018</a>
<a href="https://ccc.inaoep.mx/~mcpr2017/" target="_blank">MCPR 2017</a>
<a href="https://ccc.inaoep.mx/~mcpr2016/" target="_blank">MCPR 2016</a>
<a href="https://ccc.inaoep.mx/~mcpr2015/" target="_blank">MCPR 2015</a>
<a href="https://ccc.inaoep.mx/~mcpr2014/" target="_blank">MCPR 2014</a>
<a href="https://ccc.inaoep.mx/~mcpr2013/" target="_blank">MCPR 2013</a>
<a href="https://ccc.inaoep.mx/~mcpr2012/" target="_blank">MCPR 2012</a>
<a href="https://ccc.inaoep.mx/~mcpr2011/" target="_blank">MCPR 2011</a>
<a href="https://ccc.inaoep.mx/~mcpr2010/" target="_blank">MCPR 2010</a>
<a href="https://ccc.inaoep.mx/~mwpr2009/" target="_blank">MCPR 2009</a>
</div>
</div>
</div>
<!--Footer-->
<div class="footer">
</div>
</div>
<script src="assets/js/Slider.js"></script>
<script src="assets/js/Behavior.js"></script>
</body>
</html>