The 4th IEEE International Conference on Edge Computing and Scalable Cloud
(IEEE EdgeCom 2018)
June 22-24, 2018, Shanghai, China.

Keynote Speakers


Sun-Yuan Kung
IEEE Life Fellow,
Princeton University, USA

Bio: Professor S.Y. Kung received his Ph.D. degree in Electrical Engineering from Stanford University in 1977. In 1974, he was an Associate Engineer at Amdahl Corporation, Sunnyvale, CA. From 1977 to 1987, he was a Professor of Electrical Engineering-Systems at the University of Southern California, Los Angeles. Since 1987, he has been a Professor of Electrical Engineering at Princeton University. In addition, he held Visiting Professorships at Stanford University (1984) and the Delft University of Technology (1984); a Toshiba Chair Professorship at Waseda University, Japan (1984); an Honorary Professorship at the Central China University of Science and Technology (1994); and a Distinguished Chair Professorship at the Hong Kong Polytechnic University since 2001. His research interests include VLSI array processors, system modelling and identification, neural networks, wireless communication, sensor array processing, multimedia signal processing, bioinformatic data mining, and biometric authentication. Professor Kung has co-authored more than 400 technical publications and numerous textbooks, including "VLSI and Modern Signal Processing" (with Russian translation), Prentice-Hall (1985); "VLSI Array Processors" (with Russian and Chinese translations), Prentice-Hall (1988); "Digital Neural Networks", Prentice-Hall (1993); "Principal Component Neural Networks", John Wiley (1996); and "Biometric Authentication: A Machine Learning Approach", Prentice-Hall (2004). Professor Kung has been a Fellow of the IEEE since 1988. He served as a Member of the Board of Governors of the IEEE Signal Processing Society (1989-1991).
He was a founding member of several Technical Committees (TCs) of the IEEE Signal Processing Society, including the VLSI Signal Processing TC (1984), the Neural Networks for Signal Processing TC (1991), and the Multimedia Signal Processing TC (1998), and was appointed the first Associate Editor in the VLSI area (1984) and later the first Associate Editor in the neural networks area (1991) for the IEEE Transactions on Signal Processing. He presently serves on the Technical Committee on Multimedia Signal Processing. Since 1990, he has been the Editor-in-Chief of the Journal of VLSI Signal Processing Systems. Professor Kung received the IEEE Signal Processing Society's Technical Achievement Award for his contributions to "parallel processing and neural network algorithms for signal processing" (1992); was a Distinguished Lecturer of the IEEE Signal Processing Society (1994); received the IEEE Signal Processing Society's Best Paper Award for his publication on principal component neural networks (1996); and received the IEEE Third Millennium Medal (2000).

Topic: MINDnet: a methodical and cost-effective learning paradigm for training deep neural networks

Time: June 23rd, 2018, 8:45 AM.

Abstract: We shall first introduce two basic machine learning subsystems: (1) Feature Engineering (FE), e.g. CNNs for image/speech feature extraction, and (2) Label Engineering (LE), e.g. the Multi-layer Perceptron (MLP). It is also important that we stress both the strengths and weaknesses of deep learning. As to the former, the success of deep neural networks (DNNs) hinges upon the rich nonlinear space embedded in their nonlinear hidden neuron layers. As to the weaknesses, the prevalent concerns over deep learning lie on two major fronts: one analytical and one structural.

From the analytical perspective, the ad hoc nature of deep learning renders its success at the mercy of trial and error. To rectify this problem, we advocate a methodical learning paradigm, MINDnet, which is computationally efficient in training the networks and yet mathematically feasible to analyze. MINDnet hinges upon the use of an effective optimization metric, called Discriminant Information (DI), which serves as a surrogate for popular metrics such as 0-1 loss or prediction accuracy. Mathematically, DI is equivalent or closely related to Gauss' LSE, Fisher's FDR, and Shannon's Mutual Information. We shall explain why higher DI means higher linear separability, i.e. why the data become more discriminable. In fact, it can be shown both theoretically and empirically that a high DI score usually implies a high prediction accuracy.
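As a rough illustration of the separability idea above (this is an independent sketch, not the speaker's DI implementation), Fisher's discriminant ratio (FDR), one of the metrics the abstract names as closely related to DI, can be computed on two one-dimensional classes: the larger the between-class mean gap relative to the within-class variance, the more linearly separable the data.

```python
# Hedged sketch: 1-D Fisher's discriminant ratio as a stand-in for a
# separability score. Well-separated classes score higher than
# heavily overlapping ones.
import numpy as np

def fisher_discriminant_ratio(x_pos, x_neg):
    """FDR for two 1-D samples: (mu1 - mu2)^2 / (var1 + var2)."""
    mu1, mu2 = np.mean(x_pos), np.mean(x_neg)
    v1, v2 = np.var(x_pos), np.var(x_neg)
    return (mu1 - mu2) ** 2 / (v1 + v2)

rng = np.random.default_rng(0)
# Class means 5 apart vs. 0.5 apart, unit variance in both cases.
far = fisher_discriminant_ratio(rng.normal(0, 1, 1000), rng.normal(5, 1, 1000))
near = fisher_discriminant_ratio(rng.normal(0, 1, 1000), rng.normal(0.5, 1, 1000))
print(far > near)  # the separable data scores the higher ratio
```

The same intuition carries to the multi-dimensional case, where the ratio becomes a trace of between-class scatter against within-class scatter.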

On the structural front, the curse of depth is widely recognized as a cause of serious concern. Fortunately, many solutions have been proposed to effectively combat or alleviate this curse. Likewise, MINDnet offers yet another cost-effective solution by circumventing the depth problem altogether via a new notion (or trick) of omni-present supervision, i.e. teachers hidden in a "Trojan horse" being transported (along with the training data) from the input to each of the hidden layers. Opening up the Trojan horse at any hidden layer, we have direct access to the teacher's information for free, in the sense that no back-propagation (BP) is incurred. In short, it amounts to learning with no-propagation (NP). By harnessing the teacher information, we can construct a new and slender "inheritance layer" to summarize all the discriminant information amassed by the previous layer. Moreover, by horizontally augmenting the inheritance layer with additional randomized nodes and applying BP learning, the discriminant power of the newly augmented network can be further enhanced.

In our experiments, MINDnet was applied to several real-world datasets, including the CIFAR-10 dataset reported below. As baselines for comparison, the highest prediction accuracies published in recent years are: 93.57% (ResNet, 2015) < 96.01% (DenseNet, 2016) < 97.35% (NAS-Net, 2018). For fairness, we applied both MINDnet and an MLP (with ReLU/dropout) to the same 64-dimensional feature vectors extracted by ResNet. Our results show that MINDnet delivers a substantial margin of improvement: up by nearly 5% over the original baseline of 93.57%. In short, MINDnet has the highest performance so far: 98.26% (MINDnet, 2018).

In summary, MINDnet advocates a new learning paradigm to Monotonically INcrease the Discriminative power (quantified by DI) of the classifying networks. It offers a new LE learning model to efficiently tackle both the aforementioned analytical and structural concerns over deep learning networks.




Prof. Edwin Sha
Chang-Jiang Honorary Chair Professorship,
China Thousand-Talent Chair Professorship, Distinguished Professor,
East China Normal University, Shanghai, China

Bio: Edwin Hsing-Mean Sha received his BS degree from National Taiwan University in 1986, and his Ph.D. degree from the Department of Computer Science, Princeton University, USA, in 1992. From August 1992 to August 2000, he was with the Department of Computer Science and Engineering at the University of Notre Dame, USA. Since 2000, he has been a tenured full professor at the University of Texas at Dallas. From 2012 to 2017, he served as the Dean of the College of Computer Science at Chongqing University, China. He is currently a tenured distinguished professor at East China Normal University, Shanghai, China. He has published more than 400 research papers in refereed international conferences and premier journals, including over 60 IEEE/ACM Transactions articles. He has served as a program committee member and chair of numerous international conferences, and as an editor of many journals. He has received many awards, including a Teaching Award, the Microsoft Trustworthy Computing Curriculum Award, the NSF CAREER Award, the NSFC Overseas Distinguished Young Scholar Award, the Chang-Jiang Honorary Chair Professorship, and the China Thousand-Talent Chair Professorship. He received the ACM TODAES Best Paper Award from ACM Transactions on Design Automation of Electronic Systems, the 2016 Editor's Pick of the Year from IEEE Transactions on Computers for his work on SIMFS, and many other best paper awards.

Topic: Towards the Design of Efficient In-Memory Storage Systems

Time: June 23rd, 2018, 10:00 AM.

Abstract: This talk will present, from the perspective of system software, how to design highly efficient in-memory storage systems, including file systems, database systems, etc. As emerging persistent-memory technologies such as PCM and MRAM provide opportunities for preserving data in memory, traditional storage system structures may need to be re-studied and re-designed. The talk will first present a framework based on a new concept in which each file has its own "Virtual Address Space." A file system called SIMFS was then designed and fully implemented. SIMFS outperforms other in-memory file systems such as Intel's PMFS. We believe that this concept has a great impact on the design of many in-memory storage systems. Based on the concept, we have conducted work on the design and implementation of hybrid file systems, user-level file systems, distributed in-memory file systems, etc. This lecture will also present our new efficient and effective index structure, different from B+ trees and the like, for NVM-based relational databases, and present some of our work on NVM-based key-value databases. All of these results are the best known in the literature.
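The flavor of the "each file has its own virtual address space" idea can be sketched with standard memory mapping (this is an ordinary `mmap` demonstration, not SIMFS code): once a file is mapped into the process's address space, reads and writes become plain memory loads and stores rather than per-access read()/write() system calls.

```python
# Hedged sketch: accessing a file through an address-space view.
# After mmap, file I/O is ordinary memory access into the mapping.
import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)              # pre-size the file to one page

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as view:   # the file's mapped "address space"
        view[0:5] = b"hello"                  # a write is a memory store
        data = bytes(view[0:5])               # a read is a memory load

print(data)
```

On persistent memory, such mappings can bypass the page cache entirely, which is part of what makes address-space-based file access attractive for in-memory file systems.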




Dr. Xiaodong Wang
IEEE Fellow
Columbia University, USA

Bio: Xiaodong Wang received the Ph.D. degree in Electrical Engineering from Princeton University. He is a Professor of Electrical Engineering at Columbia University in New York. Dr. Wang's research interests fall in the general areas of signal processing and communications, and he has published extensively in these areas. Among his publications is a book entitled "Wireless Communication Systems: Advanced Techniques for Signal Reception", published by Prentice Hall in 2003. His current research interests include wireless communications, statistical signal processing, and genomic signal processing. Dr. Wang received the 1999 NSF CAREER Award, the 2001 IEEE Communications Society and Information Theory Society Joint Paper Award, and the 2011 IEEE Communications Society Award for Outstanding Paper on New Communication Topics. He has served as an Associate Editor for the IEEE Transactions on Communications, the IEEE Transactions on Wireless Communications, the IEEE Transactions on Signal Processing, and the IEEE Transactions on Information Theory. He is a Fellow of the IEEE and listed as an ISI Highly Cited Author.

Topic: Tensor Completion – Fundamental Limits, Efficient Algorithms, and Privacy

Time: June 23rd, 2018, 3:35 PM.

Abstract: The availability of numerous affordable and deployable sensors of various types has enabled the collection of massive sensing data on the same object or phenomenon from multiple perspectives. Tensors are natural multi-dimensional generalizations of matrices and have attracted tremendous interest in recent years. Low-rank tensor completion finds applications in many fields. A completion is a tensor whose entries agree with the observed entries and whose rank matches the given rank. We analyze the manifold structure corresponding to the tensors with the given rank and define a set of polynomials based on the sampling pattern and tensor decomposition. Then, we show that finite completability of the sampled tensor is equivalent to having a certain number of algebraically independent polynomials among the defined polynomials. Our proposed approach characterizes the maximum number of algebraically independent polynomials in terms of a simple geometric structure of the sampling pattern, and therefore we obtain a deterministic necessary and sufficient condition on the sampling pattern for finite completability of the sampled tensor. Moreover, assuming that the entries of the tensor are sampled independently with probability p and using the mentioned deterministic analysis, we propose a combinatorial method to derive a lower bound on the sampling probability p, or equivalently, on the number of sampled entries, that guarantees finite completability with high probability.

Moreover, we present a new approach to low-rank tensor completion when the number of samples is only slightly more than the dimension of the corresponding manifold, by solving a set of polynomial equations using Newton's method. In many applications, sampled data are sent to a central cloud server to complete the tensor completion task. However, revealing data to the server raises privacy concerns. To that end, we propose a novel framework for privacy-preserving tensor completion, called homomorphic tensor completion, which is relatively easy to implement in practice.
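The idea of completion as solving polynomial equations can be made concrete on a toy case (an independent sketch, not the speaker's algorithm): a rank-1 2x2 matrix M = u v^T with three observed entries. Each observed entry imposes a polynomial equation u_i v_j = M_ij; fixing the scaling gauge u[0] = 1 leaves three equations in three unknowns, which Newton's method solves, and the solution then determines the unobserved entry.

```python
# Hedged sketch: rank-1 completion by Newton's method on the
# polynomial system imposed by the observed entries.
# Hidden ground truth: u = [1, 2], v = [3, 4]  ->  M = [[3, 4], [6, 8]].
import numpy as np

m00, m01, m10 = 3.0, 4.0, 6.0        # observed entries; M[1,1] is missing

def residual(x):
    u1, v0, v1 = x                    # unknowns after fixing u0 = 1
    return np.array([v0 - m00, v1 - m01, u1 * v0 - m10])

def jacobian(x):
    u1, v0, v1 = x
    return np.array([[0.0, 1.0, 0.0],
                     [0.0, 0.0, 1.0],
                     [v0,  u1,  0.0]])

x = np.array([1.0, 1.0, 1.0])        # initial guess
for _ in range(20):                   # Newton iteration: x <- x - J^{-1} r
    x = x - np.linalg.solve(jacobian(x), residual(x))

u1, v0, v1 = x
completed = u1 * v1                   # the recovered entry M[1,1]
print(round(completed, 6))
```

For genuine tensors the equations come from a tensor decomposition and there are many more unknowns, but the sample-count question is the same: enough algebraically independent equations must be available to pin the solution set down to finitely many points.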




Professor Geyong Min
University of Exeter, U.K.

Bio: Professor Geyong Min is a Chair in High Performance Computing and Networking and the academic lead of Computer Science in the College of Engineering, Mathematics and Physical Sciences at the University of Exeter, UK. His recent research has been supported by the European FP6/FP7 programmes, UK EPSRC, the Royal Academy of Engineering, the Royal Society, and industrial partners including Motorola, IBM, Huawei Technologies, INMARSAT, and InforSense Ltd. Prof. Min is the Co-ordinator of two recently funded FP7 projects: 1) Quality-of-Experience Improvement for Mobile Multimedia across Heterogeneous Wireless Networks; and 2) Cross-Layer Investigation and Integration of Computing and Networking Aspects of Mobile Social Networks. As a key team member and participant, he has made significant contributions to several EU-funded research projects on the Future Generation Internet. He has published more than 200 research papers in leading international journals, including IEEE/ACM Transactions on Networking, IEEE Journal on Selected Areas in Communications, IEEE Transactions on Communications, IEEE Transactions on Wireless Communications, IEEE Transactions on Multimedia, IEEE Transactions on Computers, and IEEE Transactions on Parallel and Distributed Systems, and at reputable international conferences, such as SIGCOMM-IMC, ICDCS, IPDPS, GLOBECOM, and ICC. He is an Associate Editor of several international journals, e.g., IEEE Transactions on Computers. He has served as the General Chair/Program Chair of a number of international conferences in the area of Information and Communications Technologies.

Topic: Distributed Network Big Data Processing Platform

Time: June 23rd, 2018, 11:00 AM.

Abstract: With the ever-increasing migration of business services to the Cloud, the past years have witnessed an explosive growth in the volume of network data, driven by the popularization of smart mobile devices and pervasive content-rich multimedia applications, creating a critical issue of Internet traffic flooding. How to handle the ever-increasing network traffic has become a pressing challenge. This talk will present a distributed processing platform we have recently developed to support data acquisition from different network domains and achieve effective representation and efficient analysis of heterogeneous network big data. This big data processing platform has the potential to discover valuable insights and knowledge hidden in rich network big data for improving the design, operation, and management of the future Internet. The talk offers the theoretical underpinning for efficient analysis of network big data, as well as insights into the implementation of a distributed data processing platform for online anomaly prediction and detection in the future Internet.




Dr. Shui Yu
School of Information Technology,
Deakin University, Australia

Bio: Shui Yu is currently an Associate Professor in the School of Information Technology, Deakin University, Australia. Dr Yu's research interests include Security and Privacy, Networking, Big Data, and Mathematical Modelling. He has published two monographs, edited two books, and authored more than 200 technical papers, including in top journals and top conferences such as IEEE TPDS, TC, TIFS, TMC, TKDE, TETC, ToN, and INFOCOM. Dr Yu initiated the research field of networking for big data in 2013. His h-index is 29. Dr Yu actively serves his research communities in various roles. He is currently serving on the editorial boards of IEEE Communications Surveys and Tutorials, IEEE Communications Magazine, IEEE Internet of Things Journal, IEEE Communications Letters, IEEE Access, and IEEE Transactions on Computational Social Systems. He has served more than 70 international conferences as a member of the organizing committee, including as publication chair for IEEE GLOBECOM 2015 and IEEE INFOCOM 2016 and 2017, and as TPC chair for IEEE BigDataService 2015 and ACSW 2017. He is a Senior Member of the IEEE, a member of AAAS and ACM, the Vice Chair of the Technical Committee on Big Data of the IEEE Communications Society, and a Distinguished Lecturer of the IEEE Communications Society.

Topic: Cybersecurity and Privacy: State of the Art, Challenges, and Opportunities

Time: June 24th, 2018, 10:00 AM.

Abstract: Cybersecurity and privacy are two hot topics in our society. However, both remain largely uncharted territory, and we have far more questions than answers, from applications all the way to theory. In this talk, we review the state of the art of both fields through two research topics, distributed denial of service and big data privacy, aiming to give the audience an overview of the current battleground. We also discuss the problems and challenges we are facing, and explore promising directions in these fields.





