Prof. Hari Mohan Srivastava
Department of Mathematics and Statistics
University of Victoria
Biography: H. M. Srivastava (Hari Mohan Srivastava) has held the position of Professor Emeritus in the Department of Mathematics and Statistics at the University of Victoria in Canada since 2006, having joined the faculty there in 1969, first as an Associate Professor (1969–1974) and then as a Full Professor (1974–2006). He began his university-level teaching career right after receiving his M.Sc. degree in 1959, at the age of 19, from the University of Allahabad in India.
Speech Title: An Elementary and Introductory Approach to Fractional Calculus and Its Applications
Abstract: The subject of fractional calculus (that is, the calculus of integrals and derivatives of arbitrary real or complex order) has gained considerable popularity and importance over the past four decades, due mainly to its demonstrated applications in numerous seemingly diverse and widespread fields of science and engineering. It provides several potentially useful tools for solving differential and integral equations, as well as various other problems involving the special functions of mathematical physics and their extensions and generalizations in one or more variables. The main object of this lecture is to present a brief, elementary and introductory approach to the theory of fractional calculus and its applications, especially in developing solutions of certain interesting families of ordinary and partial fractional "differintegral" equations. This general talk will be presented as simply as possible, keeping a non-specialist audience in mind.
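To make the notion of a derivative of non-integer order concrete, the Grünwald–Letnikov construction gives a direct numerical route to it. The sketch below is a minimal illustration (not taken from the talk): it approximates the half-derivative of f(x) = x, whose exact value is 2·sqrt(x/π).

```python
import math

def gl_fractional_derivative(f, alpha, x, n=1000):
    """Grunwald-Letnikov approximation of the order-alpha derivative
    of f at x (lower terminal 0), using n grid steps of size h = x/n."""
    h = x / n
    # Build the signed binomial weights (-1)^j * C(alpha, j) recursively.
    w, total = 1.0, f(x)
    for j in range(1, n + 1):
        w *= (j - 1 - alpha) / j
        total += w * f(x - j * h)
    return total / h ** alpha

# Half-derivative (alpha = 1/2) of f(x) = x at x = 1.
approx = gl_fractional_derivative(lambda t: t, 0.5, 1.0)
exact = 2.0 * math.sqrt(1.0 / math.pi)  # known closed form: 2*sqrt(x/pi)
```

The same routine handles any order 0 < alpha < 1, which is what makes "differintegral" equations numerically tractable in the first place.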
Prof. Shun-Feng Su
Department of Electrical Engineering
National Taiwan University of Science and Technology
Biography: Shun-Feng Su received the B.S. degree in electrical engineering from National Taiwan University, Taiwan, in 1983, and the M.S. and Ph.D. degrees in electrical engineering from Purdue University, West Lafayette, IN, in 1989 and 1991, respectively.
He is now a Chair Professor of the Department of Electrical Engineering, National Taiwan University of Science and Technology, Taiwan, R.O.C. He is an IEEE Fellow and CACS fellow. He has published more than 190 refereed journal and conference papers in the areas of robotics, intelligent control, fuzzy systems, neural networks, and non-derivative optimization. His current research interests include computational intelligence, machine learning, virtual reality simulation, intelligent transportation systems, smart home, robotics, and intelligent control.
Dr. Su is very active in various international and domestic professional societies. He is the President of the International Fuzzy Systems Association, and he serves on the Boards of Governors of several academic societies, including the IEEE Systems, Man, and Cybernetics Society. He has also acted as Program Chair, Program Co-Chair, or program committee member for various international and domestic conferences. Dr. Su currently serves as an Associate Editor of IEEE Transactions on Cybernetics, IEEE Transactions on Fuzzy Systems, and IEEE Access. He is the Editor-in-Chief of the International Journal of Fuzzy Systems and a subject editor of the Journal of the Chinese Institute of Engineers.
Speech Title: Learning Control: Ideas and Problems in Adaptive Fuzzy Control
Abstract: Intelligent control has become a promising approach to control design in recent decades. Intelligent control design usually requires some knowledge of the system under consideration, but such knowledge is often unavailable. Learning therefore becomes an important mechanism for acquiring it, and learning control is an appealing idea for designing controllers for unknown or uncertain systems. Learning controllers directly, however, is somewhat like a dream: learning must learn from something, and when no good controller exists, what is there to learn from? Nevertheless, there exist approaches, such as adaptive fuzzy control, that can realize this idea through performance-based learning (as in reinforcement learning and Lyapunov-stability-based design). This talk discusses the fundamental ideas and problems in one such learning controller, adaptive fuzzy control, and examines some deficits of the approach. The idea is simple and can be extended to various learning mechanisms; indeed, it can also be employed in various other learning control schemes. Anyone intending to use this kind of approach should take these issues into account.
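The "performance-based learning" idea can be sketched with a minimal, hypothetical example of a direct adaptive fuzzy regulator for a first-order plant x' = f(x) + u, where f is unknown to the controller: the consequent parameters theta are adjusted by a Lyapunov-derived law theta' = gamma * e * phi(x), so the fuzzy term gradually cancels the unknown dynamics. The plant, membership functions, and gains below are illustrative choices, not taken from the talk.

```python
import math

def gaussian_mf(x, c, sigma=0.8):
    """Gaussian membership function centered at c."""
    return math.exp(-((x - c) / sigma) ** 2)

def basis(x, centers):
    """Normalized fuzzy basis vector phi(x)."""
    mu = [gaussian_mf(x, c) for c in centers]
    s = sum(mu)
    return [m / s for m in mu]

# Plant: x' = f(x) + u, with f unknown to the controller (toy choice).
f = lambda x: 2.0 * math.sin(x)

centers = [-2.0, -1.0, 0.0, 1.0, 2.0]
theta = [0.0] * len(centers)      # adjustable consequent parameters
k, gamma, dt = 3.0, 5.0, 0.01     # feedback gain, adaptation rate, Euler step

x = 1.0                           # regulation target is x_d = 0, so e = x
for _ in range(2000):             # 20 seconds of simulated time
    phi = basis(x, centers)
    # Control: fuzzy compensation term plus stabilizing feedback.
    u = -sum(t * p for t, p in zip(theta, phi)) - k * x
    # Lyapunov-based adaptation law: theta' = gamma * e * phi(x).
    theta = [t + gamma * x * p * dt for t, p in zip(theta, phi)]
    x += (f(x) + u) * dt          # Euler step of the closed loop
```

The adaptation law comes from making the derivative of the Lyapunov candidate V = e^2/2 + |theta_error|^2/(2*gamma) negative semidefinite; this is exactly the "performance-based" route, since no example controller is ever learned from, only the tracking error.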
Prof. Yizhou Yu
Department of Computer Science
University of Hong Kong
Biography: Yizhou Yu received the PhD degree from the Computer Vision Group at the University of California, Berkeley. He is currently a full professor at the University of Hong Kong. He was first a tenure-track and then a tenured professor at the University of Illinois at Urbana-Champaign for more than 10 years, and a visiting researcher at Microsoft Research Asia between 2001 and 2008. Prof. Yu has made important contributions to artificial intelligence (AI), deep learning, image recognition, machine vision, and VR/AR. He has over 100 peer-reviewed publications in international conferences and journals. He is a recipient of a US National Science Foundation CAREER Award and the NNSF China Overseas Distinguished Young Investigator Award, as well as multiple best paper awards. He has served on the editorial boards of IET Computer Vision, IEEE Transactions on Visualization and Computer Graphics, The Visual Computer, and the International Journal of Software and Informatics. He has also served on the program committees of many leading international conferences, including SIGGRAPH and the International Conference on Computer Vision. His current research is focused on deep learning methods for visual computing, video analytics, and biomedical data analysis.
Speech Title: Deep Transfer Learning through Selective Joint Fine-Tuning
Abstract: Deep learning is a powerful machine learning paradigm that involves deep neural network architectures and is capable of extracting high-level representations from multi-dimensional sensory data. Such high-level representations are essential for many intelligence-related tasks, including visual recognition, speech perception, and language understanding. Deep neural networks require large amounts of labeled training data during supervised learning. However, collecting and labeling so much data may be infeasible in many cases. In this talk, I introduce a deep transfer learning scheme, called selective joint fine-tuning, for improving the performance of deep learning tasks with insufficient training data. In this scheme, a target learning task with insufficient training data is carried out simultaneously with another source learning task with abundant training data. Our core idea is to identify and retrieve a useful subset of training images from the original source learning task, and to jointly fine-tune shared convolutional layers for both tasks. Experiments demonstrate that our deep transfer learning scheme based on selective joint fine-tuning achieves state-of-the-art performance on multiple visual classification tasks with insufficient training data for deep learning. Such tasks include Caltech 256, MIT Indoor 67, and fine-grained classification problems (Oxford Flowers 102 and Stanford Dogs 120).
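The retrieval step of the scheme, selecting a useful subset of source images for each target sample, can be sketched as a nearest-neighbour search in a feature space. The actual method ranks source images by low-level convolutional filter responses; the toy sketch below substitutes plain Euclidean distance over hypothetical feature vectors, and the function name is invented for illustration.

```python
def retrieve_source_subset(target_feats, source_feats, k=2):
    """For each target sample, find the k nearest source samples in a
    low-level feature space; return the union of their indices.
    (Stand-in for ranking by convolutional filter-bank responses.)"""
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))

    picked = set()
    for t in target_feats:
        ranked = sorted(range(len(source_feats)),
                        key=lambda i: dist2(t, source_feats[i]))
        picked.update(ranked[:k])
    return picked

# Hypothetical 2-D descriptors standing in for filter-response histograms.
targets = [[0.0, 0.0]]
sources = [[0.0, 1.0], [5.0, 5.0], [0.5, 0.0]]
subset = retrieve_source_subset(targets, sources, k=2)
```

Only the retrieved subset then participates, together with the target data, in jointly fine-tuning the shared convolutional layers, which is what keeps the source task from overwhelming the data-poor target task.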