1st Keynote Speaker

Prof. Sun-Yuan Kung

Princeton University, USA

Title: From Deep Learning to Internal and Explainable Learning Applicable to XAI

Abstract: Deep Learning (NN/AI 2.0) depends solely on back-propagation (BP), a now-classic learning paradigm in which supervision is accessed exclusively via the external interfacing nodes (i.e., the input/output neurons). Hampered by BP's external learning paradigm, Deep Learning has been limited to training the parameters of neural networks (NNs), while the task of optimizing the net structure is left to trial and error. The next generation of NN technology should fully address the issue of simultaneously training both the parameters and the structure of NNs. In addition, it should support the explainability of internal neurons, championed by DARPA's Explainable AI (XAI), or AI 3.0. For both purposes, we propose an internal learning paradigm to facilitate a notion of structural gradient, critical for structural learning models. In order to effectively rank the trained neurons (i.e., the hidden nodes), we propose an Explainable Neural Network (Xnet) comprising (1) internal teacher labels (ITL) and (2) internal optimization metrics (IOM). We then develop a joint parameter/structure training paradigm for Deep Learning networks by combining both external and internal learning. Xnet can simultaneously compress the net and raise the net's accuracy. In our simulation studies, it appears to outperform existing pruning/compression methods. Furthermore, Xnet opens up promising research fronts on (1) explainable learning models for XAI and (2) machine-to-machine mutual learning in the coming 5G era.
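
Since the abstract describes Xnet only at a high level, here is a minimal illustrative sketch in Python, not Prof. Kung's actual algorithm: it ranks the hidden neurons of one layer by a Fisher-style discriminability score (a stand-in for an internal optimization metric, IOM) computed against class labels reused as internal teacher labels (ITL), then keeps only the highest-ranked neurons. All names and numbers are invented for illustration.

```python
import numpy as np

def fisher_score(h, y):
    """Per-neuron Fisher-style score: between-class variance of the
    neuron's activation divided by its within-class variance.
    (An illustrative stand-in for an internal optimization metric.)"""
    mu = h.mean(axis=0)
    between = sum(np.mean(y == c) * (h[y == c].mean(axis=0) - mu) ** 2
                  for c in np.unique(y))
    within = sum(np.mean(y == c) * h[y == c].var(axis=0)
                 for c in np.unique(y))
    return between / (within + 1e-12)

# h: activations of one hidden layer on a labeled batch (samples x neurons);
# y: class labels, reused here as "internal teacher labels" for that layer.
rng = np.random.default_rng(0)
h = rng.normal(size=(256, 64))
y = rng.integers(0, 10, size=256)

scores = fisher_score(h, y)        # one score per hidden neuron
keep = np.argsort(scores)[-32:]    # retain the 32 highest-ranked neurons
print("kept neurons:", np.sort(keep))
```

In a full joint parameter/structure scheme of the kind the abstract describes, such internal ranking and pruning would alternate with conventional BP fine-tuning of the surviving weights.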


Bio: S.Y. Kung, Life Fellow of IEEE, is a Professor in the Department of Electrical Engineering at Princeton University. His research areas include machine learning, data mining, systematic design of (deep-learning) neural networks, statistical estimation, VLSI array processors, signal and multimedia information processing, and, most recently, compressive privacy. He was a founding member of several Technical Committees (TCs) of the IEEE Signal Processing Society. He was elected an IEEE Fellow in 1988 and served as a Member of the Board of Governors of the IEEE Signal Processing Society (1989-1991). He received the IEEE Signal Processing Society's Technical Achievement Award for his contributions to "parallel processing and neural network algorithms for signal processing" (1992); was a Distinguished Lecturer of the IEEE Signal Processing Society (1994); received the IEEE Signal Processing Society's Best Paper Award for his publication on principal component neural networks (1996); and received the IEEE Third Millennium Medal (2000). Since 1990, he has been the Editor-in-Chief of the Journal of VLSI Signal Processing Systems. He served as the first Associate Editor in the VLSI area (1984) and the first Associate Editor in neural networks (1991) for the IEEE Transactions on Signal Processing. He has authored and co-authored more than 500 technical publications and numerous textbooks, including "VLSI Array Processors", Prentice-Hall (1988); "Digital Neural Networks", Prentice-Hall (1993); "Principal Component Neural Networks", John Wiley (1996); "Biometric Authentication: A Machine Learning Approach", Prentice-Hall (2004); and "Kernel Methods and Machine Learning", Cambridge University Press (2014).

2nd Keynote Speaker

Prof. H. J. Siegel

Colorado State University, USA

Title: Measuring the Robustness of Computing Systems

Abstract: Throughout all fields of science and engineering, it is important that resources are allocated so that systems are robust against uncertainty. The robustness analysis approach presented here can be adapted to a variety of computing and communication environments. What does it mean for a system to be “robust”? How can the performance of a system be robust against uncertainty? How can robustness be described? How does one determine if a claim of robustness is true? How can one measure robustness to decide which of two systems is more robust? We explore these general questions in the context of parallel and distributed computing systems. Such computing systems are often heterogeneous mixtures of machines, used to execute collections of tasks with diverse computational requirements. A critical research problem is how to allocate heterogeneous resources to tasks to optimize some performance objective. However, systems frequently have degraded performance due to uncertainties, such as inaccurate estimates of actual workload parameters. To reduce this degradation, we present a model for deriving the robustness of a resource allocation. The robustness of a resource allocation is quantified as the probability that a user-specified level of system performance can be met. We show how to use historical data to build a probabilistic model to evaluate the robustness of resource assignments and to design resource management techniques that produce robust allocations.
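
The quantitative core of the abstract, robustness as the probability that a user-specified performance level is met, lends itself to a small Monte Carlo sketch. Everything below (the gamma noise model, the machine and task counts, the makespan bound) is an invented toy setup, not the speaker's model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy historical data: observed execution times (seconds) of 6 task types
# on 3 heterogeneous machines (rows: machines, cols: task types).
history = rng.gamma(shape=4.0, scale=[[2, 3, 1, 5, 2, 4],
                                      [3, 2, 2, 4, 3, 3],
                                      [1, 4, 3, 3, 5, 2]], size=(3, 6))

def robustness(assignment, history, makespan_bound, trials=10_000):
    """Estimate P(makespan <= bound) for a task->machine assignment by
    resampling execution times from a gamma model whose means match the
    historical observations (illustrative only)."""
    n_machines, _ = history.shape
    met = 0
    for _ in range(trials):
        times = rng.gamma(shape=4.0, scale=history / 4.0)  # stochastic draw
        loads = np.zeros(n_machines)
        for task, machine in enumerate(assignment):
            loads[machine] += times[machine, task]
        met += int(loads.max() <= makespan_bound)  # makespan = busiest machine
    return met / trials

assignment = [0, 1, 2, 0, 1, 2]    # task i runs on machine assignment[i]
print("robustness:", robustness(assignment, history, makespan_bound=25.0))
```

Under this definition, comparing two resource allocations reduces to comparing their estimated probabilities: the allocation more likely to meet the bound is the more robust one.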


Bio: H. J. Siegel is a Professor Emeritus and Senior Research Scientist/Scholar at Colorado State University (CSU). From 2001 to 2017, he was the George T. Abell Endowed Chair Distinguished Professor of Electrical and Computer Engineering at CSU, where he was also a Professor of Computer Science. From 2002 to 2013, he was the first Director of the CSU Information Science and Technology Center (ISTeC), a university-wide organization for enhancing CSU’s activities pertaining to the design and innovative application of computer, communication, and information systems. He was a professor at Purdue University from 1976 to 2001. He received two B.S. degrees from the Massachusetts Institute of Technology (MIT), and the M.A., M.S.E., and Ph.D. degrees from Princeton University. He is a Life Fellow of the IEEE and a Fellow of the ACM. Prof. Siegel has co-authored over 460 published technical papers in the areas of parallel and distributed computing and communications, which have been cited over 18,000 times. As Principal Investigator (PI) or Co-PI, he has received over $20 million in research grants and contracts. He was a Coeditor-in-Chief of the Journal of Parallel and Distributed Computing, and was on the Editorial Boards of the IEEE Transactions on Parallel and Distributed Systems and the IEEE Transactions on Computers. Prof. Siegel has served as an “IEEE Computer Society Distinguished Visitor” and an “ACM Distinguished Lecturer.” For more information, please see www.engr.colostate.edu/~hj.

3rd Keynote Speaker

Prof. Bhavani Thuraisingham

The University of Texas at Dallas, USA

Title: SecAI: Integrating Cyber Security and Artificial Intelligence

Abstract: Artificial Intelligence (AI) emerged as a field of study in Computer Science in the late 1950s. Researchers were interested in designing and developing systems that could behave like humans. This interest resulted in substantial developments in areas such as expert systems, machine learning, planning systems, reasoning systems, and robotics. However, it is only recently that these AI systems have been used in practical applications in fields such as medicine, finance, marketing, defense, and manufacturing. The main reason behind the success of these AI systems is the developments in data science and high-performance computing: it is now possible to collect, store, manipulate, analyze, and retain massive amounts of data, and AI systems are therefore able to learn patterns from this data and make useful predictions.


While AI has been evolving as a field over the past sixty years, the developments in computing systems and data management systems have raised serious security and privacy considerations. Various regulations are being proposed to handle big data so that the privacy of individuals is not violated. For example, even if personally identifiable information is removed from the data, an individual can still be identified when that data is combined with other data. Furthermore, computing systems are being attacked by malware, with disastrous consequences. In other words, as progress is made with technology, the security of these technologies is in serious question due to malicious attacks.


Over the past decade, AI and security have been integrated. For example, machine learning techniques are being applied to solve security problems such as malware analysis, intrusion detection, and insider threat detection. However, there is also a major concern that the machine learning techniques themselves could be attacked; machine learning techniques are therefore being adapted to handle adversarial attacks, an area known as adversarial machine learning. Furthermore, while collecting massive amounts of data causes security and privacy concerns, big data analytics applications in cyber security are exploding. For example, an organization can outsource activities such as identity management, intrusion detection, and malware analysis to the cloud. While AI techniques are being applied to solve cyber security problems, the AI systems themselves have to be protected. For example, how can machine learning systems be protected from attacks? What are the threats to planning systems? How can expert systems carry out their functions in the midst of malware attacks? What are the appropriate access control models for AI systems? How can we develop appropriate security policies for AI systems? These are questions that researchers are beginning to answer.
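
To make the adversarial machine learning concern concrete, the sketch below applies the fast gradient sign method (FGSM), a standard attack from the literature that the abstract mentions only generically. The toy linear "malware detector" and its weights are invented for illustration:

```python
import numpy as np

# Toy linear malware classifier: p(malicious) = sigmoid(w @ x + b).
# Weights are made up for illustration; a real detector would be trained.
w = np.array([1.5, -2.0, 0.8, 3.1])
b = -0.5
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

x = np.array([0.2, -0.1, 0.4, 0.9])   # feature vector of one sample
y = 1.0                               # true label: malicious

# For logistic regression, the cross-entropy loss gradient w.r.t. the
# INPUT is (p - y) * w.
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

# FGSM: step of size eps along the sign of the input gradient, the
# direction that increases the loss fastest under an L-infinity budget.
eps = 0.1
x_adv = x + eps * np.sign(grad_x)

print(f"clean score:       {sigmoid(w @ x + b):.3f}")
print(f"adversarial score: {sigmoid(w @ x_adv + b):.3f}")  # pushed toward benign
```

Adversarial training, i.e., retraining on such perturbed samples, is one way machine learning techniques are adapted to handle these attacks.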


To assess the developments on the integration of AI and security over the past decade and to determine future directions, the presentation will first focus on three major questions: (i) how can the developments in AI techniques be used to solve security problems, (ii) how can we ensure that AI systems are secure, and (iii) what are the security and privacy considerations for AI systems? Second, it will describe the application of AI, including machine learning, to cyber security applications such as insider threat detection. Third, it will discuss trends in areas such as adversarial machine learning, which takes the attacker's behavior into consideration when developing machine learning techniques. Fourth, it will discuss some emerging trends in trustworthy AI, so that AI techniques can be secured against malicious attacks. Fifth, it will focus on the privacy threats posed by the collection of massive amounts of data, and on potential solutions. Finally, it will discuss the next steps.


Bio: Dr. Bhavani Thuraisingham is the Founders Chair Professor of Computer Science and the Executive Director of the Cyber Security Research and Education Institute at the University of Texas at Dallas. She is also a visiting Senior Research Fellow at King's College, University of London, and a Fellow of the ACM, the IEEE, the AAAS, the NAI, and the BCS. She has received several awards, including the IEEE CS 1997 Technical Achievement Award, the ACM SIGSAC 2010 Outstanding Contributions Award, and the ACM SACMAT 10-Year Test of Time Awards for 2018 and 2019. She co-chaired the Women in Cyber Security Conference (WiCyS) in 2016, delivered the featured address at the 2018 Women in Data Science (WiDS) conference at Stanford University, and has chaired several conferences for ACM and IEEE. Her 39-year career has included industry (Honeywell), a federal laboratory (MITRE), the US government (NSF), and US academia. Her work has resulted in 130+ journal articles, 300+ conference papers, 140+ keynote and featured addresses, six US patents, and fifteen books, as well as technology transfer of the research to commercial and operational systems. She received her PhD from the University of Wales, Swansea, UK, and the prestigious earned higher doctorate (D.Eng.) from the University of Bristol, UK.
