Prof. Ying Tan
School of Electronics Engineering and Computer Science
Peking University
Biography: Ying Tan is a full professor and PhD advisor at the School of Electronics Engineering and Computer Science of Peking University, and director of the Computational Intelligence Laboratory at Peking University. He invented the Fireworks Algorithm (FWA) in 2010. He received his B.Eng., M.S., and PhD from Southeast University in 1985, 1988, and 1997, respectively. He then worked at the University of Science and Technology of China (USTC) as a postdoctoral fellow and an associate professor, and became a professor there under the 100 Talent Program of the Chinese Academy of Sciences (CAS) in 2005. He also worked at the Chinese University of Hong Kong in 1999 and in 2004-05. As a visiting or guest professor, he visited the University of California, Columbia University, Kyushu University, Auckland University of Technology, and City University of Hong Kong. He has undertaken more than 40 projects supported by the National Natural Science Foundation of China, the Ministry of Science and Technology of China, and other agencies. He won the 2nd-Class Natural Science Award of China in 2009, the 3rd-Class Wu Wenjun Innovation Award of CAAI in 2016, and a number of other academic prizes in his fields. He holds 4 invention patents in China. He has published five monographs with world-famous publishers, including Springer, M-K (Elsevier), Wiley-IEEE, Taylor & Francis/CRC Press, IET Press, IGI Global, and Science Press in China. He serves as the Editor-in-Chief of the International Journal of Computational Intelligence and Pattern Recognition (IJCIPR) and as an Associate Editor of IEEE Transactions on Cybernetics (CYB), IEEE Transactions on Neural Networks and Learning Systems (NNLS), the International Journal of Swarm Intelligence Research (IJSIR), the International Journal of Artificial Intelligence (IJAI), and others.
He also served as an editor of more than 20 volumes of Springer's Lecture Notes in Computer Science (LNCS), and as a guest editor of several refereed journals, including IEEE/ACM Transactions on Computational Biology and Bioinformatics, Information Sciences, Soft Computing, Neurocomputing, Natural Computing, IJSIR, IJAI, B&B, CJ, etc. He is an IEEE senior member and has been a member of the Emergent Technologies Technical Committee (ETTC) of the IEEE Computational Intelligence Society since 2010. He is the founding general chair of the ICSI international conference series, serving as general chair of the International Conference on Swarm Intelligence (ICSI 2010-2017); he was also one of the joint general chairs of the 1st and 2nd BRICS Congress on Computational Intelligence (CCI) and the program committee co-chair of the IEEE World Congress on Computational Intelligence (WCCI 2014). His research interests include computational intelligence, swarm intelligence, swarm robotics, data mining, pattern recognition, and intelligent information processing for information security. He has published more than 260 papers in refereed journals and conferences in these areas, and has authored or co-authored 10 books and 12 book chapters.
Speech Title: Latest Research Progress of Fireworks Algorithm and Its Applications in Big Data
Abstract: Inspired by the collective behaviors of swarm-based creatures in nature and by social phenomena, swarm intelligence (SI) has recently received extensive attention and study, and has gradually become a class of efficient intelligent optimization methods. Inspired by fireworks exploding in the night sky, the fireworks algorithm (FWA) was developed in 2010. Since then, numerous improvements and applications have been proposed to increase the efficiency of FWA. In this talk, the fireworks algorithm is first described in detail and reviewed, and then several effective improved fireworks algorithms are highlighted individually. By changing the way the numbers and amplitudes of sparks in the fireworks' explosions are calculated, the improved FWA variants become more reasonable and explainable. In addition, the multi-objective fireworks algorithm and the graphics processing unit (GPU) based FWA are briefly presented; in particular, the GPU-based FWA speeds up the optimization process considerably and is well suited to big-data applications. Extensive experiments on IEEE CEC benchmark functions demonstrate that the improved fireworks algorithms significantly increase the accuracy of the solutions found while dramatically decreasing the running time. Finally, several typical applications of FWA are briefly described, and its shortcomings and future research directions are identified in the conclusion.
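The spark-allocation rule the abstract alludes to can be illustrated with a short sketch. This is a minimal reading of the original 2010 FWA heuristic for a minimization problem (the function and parameter names here are illustrative, not taken from the talk): a firework with better (lower) fitness is allotted more sparks but a smaller explosion amplitude, so good regions are searched densely and locally while poor regions are probed sparsely but widely.

```python
def sparks_and_amplitudes(fitness, m=50, a_hat=40.0, eps=1e-12):
    """For each firework (minimization assumed), allot a share of the
    total spark budget m and an explosion amplitude bounded by a_hat:
    better (lower) fitness -> more sparks, smaller amplitude."""
    y_max, y_min = max(fitness), min(fitness)
    sum_s = sum(y_max - f for f in fitness) + eps  # eps avoids division by zero
    sum_a = sum(f - y_min for f in fitness) + eps
    sparks = [m * (y_max - f + eps) / sum_s for f in fitness]
    amps = [a_hat * (f - y_min + eps) / sum_a for f in fitness]
    return sparks, amps
```

With fitness values [1.0, 2.0, 4.0], the best firework receives the most sparks and the smallest amplitude, and the spark counts sum (up to rounding) to the budget m.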
Prof. Yizhou Yu
Department of Computer Science
University of Hong Kong
Biography: Yizhou Yu received the PhD degree from the Computer Vision Group at the University of California, Berkeley. He is currently a full professor at the University of Hong Kong. He was first a tenure-track and then a tenured professor at the University of Illinois at Urbana-Champaign for more than 10 years, and a visiting researcher at Microsoft Research Asia between 2001 and 2008. Prof. Yu has made important contributions to artificial intelligence (AI), deep learning, image recognition, machine vision, and VR/AR. He has over 100 peer-reviewed publications in international conferences and journals. He is a recipient of the US National Science Foundation CAREER Award and the NNSF China Overseas Distinguished Young Investigator Award, as well as multiple best paper awards. He has served on the editorial boards of IET Computer Vision, IEEE Transactions on Visualization and Computer Graphics, The Visual Computer, and the International Journal of Software and Informatics. He has also served on the program committees of many leading international conferences, including SIGGRAPH and the International Conference on Computer Vision. His current research focuses on deep learning methods for visual computing, video analytics, and biomedical data analysis.
Speech Title: Deep Transfer Learning through Selective Joint Fine-Tuning
Abstract: Deep learning is a powerful machine learning paradigm that involves deep neural network architectures, and is capable of extracting high-level representations from multi-dimensional sensory data. Such high-level representations are essential for many intelligence related tasks, including visual recognition, speech perception, and language understanding. Deep neural networks require large amounts of labeled training data during supervised learning. However, collecting and labeling so much data might be infeasible in many cases. In this talk, I introduce a deep transfer learning scheme, called selective joint fine-tuning, for improving the performance of deep learning tasks with insufficient training data. In this scheme, a target learning task with insufficient training data is carried out simultaneously with another source learning task with abundant training data. Our core idea is to identify and retrieve a useful subset of training images from the original source learning task, and jointly fine-tune shared convolutional layers for both tasks. Experiments demonstrate that our deep transfer learning scheme based on selective joint fine-tuning achieves state-of-the-art performance on multiple visual classification tasks with insufficient training data for deep learning. Such tasks include Caltech 256, MIT Indoor 67, and fine-grained classification problems (Oxford Flowers 102 and Stanford Dogs 120).
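The retrieval step described above — identifying source-task examples that resemble the target data — can be sketched as a nearest-neighbour search in a shared feature space. This is a deliberately simplified stand-in for the actual selection criterion used in selective joint fine-tuning (the function names and the cosine-similarity scoring are illustrative assumptions; feature vectors are assumed to be extracted already):

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def select_source_subset(target_feats, source_feats, k):
    """Score each source example by its best similarity to any target
    example, then keep the indices of the top-k scorers; the selected
    subset would be used for joint fine-tuning with the target task."""
    scores = [max(cosine(s, t) for t in target_feats) for s in source_feats]
    return sorted(range(len(source_feats)), key=lambda i: scores[i], reverse=True)[:k]
```

For example, with a single target feature [1, 0], the source examples [1, 0] and [0.9, 0.1] outrank the dissimilar [0, 1], so the retrieved subset is their indices [0, 2].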
Prof. Shu-Cherng Fang
North Carolina State University
Biography: BS (National Tsing Hua University); MA (Johns Hopkins University); PhD (Northwestern University)
Shu-Cherng Fang holds the Walter Clark Chair Professorship in Industrial Engineering and the Alumni Distinguished Graduate Professorship at North Carolina State University, USA. He is also Chair Professor and Team Leader of Chair Professors in Mathematical Sciences and Industrial Engineering at Tsinghua University, University Chair at Fudan University, Honorary University Professor of Northeast University, Honorary University Professor of Shanghai University, Graduate University Advisory Professor of the Chinese Academy of Sciences, Honorary University Chair Professor of National Chiao Tung University, and Honorary IEEM Chair Professor of National Tsing Hua University. Before joining NC State, Professor Fang was a Senior Member of Research Staff at the Western Electric Engineering Research Center, a Supervisor at AT&T Bell Labs, and a Department Manager at the Corporate Headquarters of AT&T Technologies. Professor Fang has published over two hundred refereed journal articles. He authored the books Linear Optimization and Extensions: Theory and Algorithms (Prentice Hall, 1993, with S. C. Puthenpura), Entropy Optimization and Mathematical Programming (Kluwer Academic, 1997, with J. R. Rajasekera and H.-S. Tsao), and Linear Conic Optimization (Science Press, 2013, with Wenxun Xing). He currently serves on the editorial boards of 24 scientific journals, including Optimization, Journal of Global Optimization, Optimization Letters, Pacific Journal of Optimization, Journal of Industrial and Management Optimization, Journal of Operations and Logistics, International Journal of Operations Research, OR Transactions, Journal of Uncertainties, International Journal of Fuzzy Systems, Iranian Journal of Fuzzy Systems, Journal of the Chinese Institute of Industrial Engineers, and Journal of the Operations Research Society of China. He is also the Editor-in-Chief of Fuzzy Optimization and Decision Making. Professor Fang has won many awards and has been listed in several major biographic references.
His research interests include Nonlinear Programming, Fuzzy Optimization and Decision Making, Soft Computing, and Logistics and Supply Chain Management.
Speech Title: Intuitionistic Fuzzy Set-based Semi-supervised Support Vector Machine for Binary Classification with Mislabeled Information
Abstract: Robust and fuzzy support vector machine models are commonly used to handle binary classification problems with noise and outliers. These models often suffer from the adverse effects of mislabeled training points and disregard the valuable position information carried by such points. In this talk, we present a novel approach to address this issue. An intuitionistic fuzzy set is first adopted to detect suspected mislabeled training points. Then a semi-supervised support vector machine model is constructed that omits the suspect labels but utilizes the points' position information. This model can be reformulated as a non-convex quadratically constrained quadratic programming problem. We specifically design a branch-and-bound algorithm with a new lower-bound estimator to improve the accuracy and efficiency of the computations. Numerical tests are conducted to benchmark the performance of the proposed method against other support vector machine models. The results strongly support the superiority of the proposed approach.
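The detection idea can be made concrete with a toy sketch. This is not the authors' model — only an illustration of the intuitionistic principle that each training point carries both a membership degree mu and an independent non-membership degree nu, and that a point whose non-membership dominates is treated as a suspected mislabel. The particular choices below (mu from distance to the point's own-class centroid, nu from the opposite-label fraction among its nearest neighbours) are simplified assumptions:

```python
import math

def _dist(u, v):
    """Euclidean distance between two points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def flag_suspect_labels(points, labels, k=3):
    """Assign each training point an intuitionistic pair (mu, nu):
    mu = membership, decaying with distance to its own class's centroid;
    nu = non-membership, the fraction of opposite labels among its
    k nearest neighbours. Flag the point when nu > mu."""
    suspects = []
    for i, (x, y) in enumerate(zip(points, labels)):
        same = [p for p, l in zip(points, labels) if l == y]
        centroid = [sum(c) / len(same) for c in zip(*same)]
        mean_d = sum(_dist(p, centroid) for p in same) / len(same)
        mu = 1.0 / (1.0 + _dist(x, centroid) / (mean_d + 1e-12))
        order = sorted(range(len(points)), key=lambda j: _dist(points[j], x))
        neigh = [j for j in order if j != i][:k]
        nu = sum(labels[j] != y for j in neigh) / k
        if nu > mu:
            suspects.append(i)
    return suspects
```

On two well-separated clusters with one point labeled against its cluster, only that point ends up with nu exceeding mu and is flagged; a full method would then drop the flag's label but keep the point's position, as the talk describes.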
Prof. Xizhao Wang
Big Data Institute
College of Computer Science and Software Engineering
Shenzhen University
Shenzhen 518060, China
Biography: BS and MA (Hebei University); PhD (Harbin Institute of Technology)
Xizhao Wang received his PhD in computer science from Harbin Institute of Technology in September 1998. From 2000 to 2012, Dr. Wang served at Hebei University as a professor and the dean of the School of Mathematics and Computer Science. Since 2013, he has been a professor in the Big Data Institute of Shenzhen University. Prof. Wang's major research interests include uncertainty modeling and machine learning for big data. He has edited 6+ special issues and published 3 monographs, 2 textbooks, and 150+ peer-reviewed research papers. According to Google Scholar, his work has been cited over 3,000 times, with over 200 citations for his most-cited single paper; his h-index was 25 as of March 2015. Prof. Wang is on Elsevier's 2015 list of most cited Chinese authors. As a Principal Investigator (PI) or co-PI, he has completed 30+ research projects. Prof. Wang is an IEEE Fellow, a previous Board of Governors member of the IEEE SMC Society, the chair of the IEEE SMC Technical Committee on Computational Intelligence, and the Editor-in-Chief of the International Journal of Machine Learning and Cybernetics.
Speech Title: Learning from Uncertainty for Big Data
Abstract: Big data refers to datasets so large that conventional database management and data analysis tools are insufficient to work with them. Big data has become a bigger-than-ever problem with the rapid development of data collection and storage technologies. Model simplification is one of the most popular approaches to big data processing. After a brief tutorial on existing techniques for processing big data, this talk will present some key issues in learning from big data with uncertainty, focusing on the impact of handling uncertainty on model simplification. It shows that the representation, measurement, and handling of uncertainty have a significant influence on the performance of learning from big data. Some new advances in our Big Data Institute regarding research on big data analysis and its applications to different domains are briefly introduced.
Prof. Hari Mohan Srivastava
Department of Mathematics and Statistics
University of Victoria
Biography: H. M. Srivastava (Hari Mohan Srivastava) has held the position of Professor Emeritus in the Department of Mathematics and Statistics at the University of Victoria in Canada since 2006, having joined the faculty there in 1969, first as an Associate Professor (1969-1974) and then as a Full Professor (1974-2006). He began his university-level teaching career right after receiving his M.Sc. degree in 1959, at the age of 19, from the University of Allahabad in India.
Speech Title: An Elementary and Introductory Approach to Fractional Calculus and Its Applications
Abstract: The subject of fractional calculus (that is, the calculus of integrals and derivatives of arbitrary real or complex order) has gained considerable popularity and importance over the past four decades, due mainly to its demonstrated applications in numerous seemingly diverse and widespread fields of science and engineering. It does indeed provide several potentially useful tools for solving differential and integral equations, and various other problems involving the special functions of mathematical physics as well as their extensions and generalizations in one and more variables. The main object of this lecture is to present a brief, elementary, and introductory approach to the theory of fractional calculus and its applications, especially in developing solutions of certain interesting families of ordinary and partial fractional "differintegral" equations. This general talk will be presented as simply as possible, keeping in mind the likelihood of a non-specialist audience.
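For orientation, the phrase "integrals and derivatives of arbitrary order" is usually made precise through the classical Riemann-Liouville operators (a standard textbook definition, stated here as background rather than taken from the talk):

```latex
% Riemann--Liouville fractional integral of order \mu > 0:
\left(I^{\mu} f\right)(t)
  = \frac{1}{\Gamma(\mu)} \int_{0}^{t} (t-\tau)^{\mu-1}\, f(\tau)\, d\tau ,
% and the fractional derivative of order \alpha, with n-1 < \alpha \le n:
\left(D^{\alpha} f\right)(t)
  = \frac{d^{n}}{dt^{n}} \left(I^{\,n-\alpha} f\right)(t) .
```

For integer order (mu = 1 the first formula is the ordinary integral; alpha = n reduces the second to ordinary n-fold differentiation), which is why the two operators are often unified under the single name "differintegral" used in the abstract.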