On July 04, 2017, Prof. Yann LeCun from Facebook AI Research & New York University came to Shanghai Jiao Tong University to give a talk on “Deep Learning and AI: Past, Present, and Future”. We were also very honored to invite him to visit the Speech Lab before the talk, where Prof. Kai Yu showed Yann LeCun around the lab and gave an introduction to its research. Prof. Yann LeCun also had discussions and interactions with lab members.

Yann LeCun is Head of AI Research at Facebook, and Silver Professor at New York University, affiliated with the Center for Data Science, the Courant Institute of Mathematical Sciences, the Center for Neural Science, and the Electrical and Computer Engineering Department. He received the Electrical Engineer Diploma from ESIEE, Paris (1983), and a PhD in CS from Université P&M Curie (1987). After a postdoc at the University of Toronto, he joined AT&T Bell Laboratories in 1988, later becoming the head of the Image Processing Research Department at AT&T Labs-Research in 1996. He joined NYU as a professor in 2003, following a brief period at the NEC Research Institute in Princeton. In 2012, he became the founding director of the NYU Center for Data Science. In late 2013, he was named Head of AI Research at Facebook, remaining on the NYU faculty part-time. He held a visiting professor chair at Collège de France in 2015-2016. His current interests include AI, machine learning, computer perception, robotics, and computational neuroscience. He is best known for his contributions to deep learning and neural networks, particularly the convolutional network model, which is very widely used in computer vision and speech recognition applications; for this reason he is often called the founding father of convolutional nets. He has published over 190 papers on these topics as well as on handwriting recognition, image compression, and dedicated hardware for AI. LeCun is founder and general co-chair of ICLR and has served on several editorial boards and conference organizing committees. He is co-chair of the Learning in Machines and Brains program of the Canadian Institute for Advanced Research. He is on the boards of IPAM and ICERM. He has advised many companies and co-founded the startups Elements Inc. and Museami. He is in the New Jersey Inventor Hall of Fame. He is the recipient of the 2014 IEEE Neural Network Pioneer Award, the 2015 IEEE PAMI Distinguished Researcher Award, the 2016 Lovie Lifetime Achievement Award, and an Honorary Doctorate from IPN, Mexico.

This talk is about deep learning, which is at the root of revolutionary progress in visual and auditory perception by computers and is pushing the state of the art in natural language understanding, dialog systems, and language translation. Deep learning systems are deployed everywhere, from self-driving cars to content filtering, search, and medical image analysis. Yet almost all real-world applications of deep learning use supervised learning, in which the machine is trained with human-annotated data. Humans and animals, by contrast, learn vast amounts of knowledge about the world by observation, with very little feedback from intelligent teachers. Humans construct complex predictive models of the world that allow them to interpret percepts, predict future events, and plan a course of action. Enabling machines to learn such predictive models of the world remains a major obstacle on the path to significant progress in AI. I will describe a number of promising approaches towards unsupervised and predictive learning, particularly variations of adversarial training.
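As a rough illustration of the adversarial training idea mentioned in the abstract, here is a minimal GAN-style sketch in PyTorch. It is not code from the talk; the toy 1-D data, network sizes, and hyperparameters are assumptions chosen only to keep the example short and runnable. A generator learns to produce samples that a discriminator cannot tell apart from real data.

```python
# Minimal, illustrative sketch of adversarial training (GAN-style).
# NOT from the talk: data, model sizes, and hyperparameters are assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

# "Real" data: samples from a 1-D Gaussian the generator must imitate.
def real_batch(n=64):
    return torch.randn(n, 1) * 0.5 + 2.0

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))  # generator
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))  # discriminator (outputs logits)

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # Discriminator step: distinguish real samples (label 1) from fakes (label 0).
    real = real_batch()
    fake = G(torch.randn(64, 8)).detach()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to fool the discriminator into labeling fakes as real.
    fake = G(torch.randn(64, 8))
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    if step % 500 == 0:
        print(f"step {step}: generated mean = {fake.mean().item():.2f} (target ~2.0)")
```

The two players are trained in alternation: the discriminator provides the training signal, so the generator learns without any human-annotated labels, which is the sense in which adversarial training points toward unsupervised and predictive learning.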
