The 17th IEEE International Conference On Trust, Security And Privacy In Computing And Communications (IEEE TrustCom-18)
July 31st - August 3rd, 2018, New York, USA.


Keynote Speakers


Prof. Witold Pedrycz
Canada Research Chair,
IEEE Fellow,
Professional Engineer,
Department of Electrical and Computer Engineering,
University of Alberta

Bio: Witold Pedrycz (IEEE Fellow, 1998) is Professor and Canada Research Chair (CRC) in Computational Intelligence in the Department of Electrical and Computer Engineering, University of Alberta, Edmonton, Canada. He is also with the Systems Research Institute of the Polish Academy of Sciences, Warsaw, Poland. In 2009 Dr. Pedrycz was elected a foreign member of the Polish Academy of Sciences, and in 2012 he was elected a Fellow of the Royal Society of Canada. He has been a member of numerous program committees of IEEE conferences in the areas of fuzzy sets and neurocomputing. In 2007 he received the prestigious Norbert Wiener Award from the IEEE Systems, Man, and Cybernetics Society. He is a recipient of the IEEE Canada Computer Engineering Medal, the Cajastur Prize for Soft Computing from the European Centre for Soft Computing, a Killam Prize, and the Fuzzy Pioneer Award from the IEEE Computational Intelligence Society.

His main research directions involve Computational Intelligence, fuzzy modeling and Granular Computing, knowledge discovery and data science, fuzzy control, pattern recognition, knowledge-based neural networks, relational computing, and Software Engineering. He has published numerous papers in these areas and is the author of 16 research monographs and edited volumes covering various aspects of Computational Intelligence, data mining, and Software Engineering.

Dr. Pedrycz is vigorously involved in editorial activities. He is Editor-in-Chief of Information Sciences, Editor-in-Chief of WIREs Data Mining and Knowledge Discovery (Wiley), and Editor-in-Chief of the Int. J. of Granular Computing (Springer). He serves on the Advisory Board of IEEE Transactions on Fuzzy Systems and is a member of the editorial boards of a number of international journals.

Topic: User-Centricity in Big Data Problems

Time: August 2nd, 2018, 9:00 AM.

Abstract: Big Data technology offers enormous potential and has become a necessity in the era of omnipresent data. To unleash this potential, along with new paradigms, some existing principles need to be thoroughly revisited. As never seen so vividly before, the user assumes a central position in the pursuit of big data, formulating the initial direction of the overall analysis and subsequently evaluating the value and actionability of the obtained findings. This entails that, among the well-known list of Vs present in big data, the properties of value and veracity assume a pivotal role. The feature of user-centricity deserves a thorough discussion, especially in terms of defining the concept itself and identifying its multifaceted nature embracing transparency, interpretability, comprehension, and scalability.

The notions of abstraction and levels of abstraction, which are inherently involved in data analytics, can be conveniently realized in the form of information granules. The facet of abstraction (information granularity) makes problems more manageable by positioning various constructs and processes at the level of a limited number of information granules. The abstraction mechanism is applied in the data space as well as in the feature (attribute) space, resulting in granular data and granular features. Information granules can be viewed as the outcome of a generalized sampling mechanism.

The talk discusses the main ways of building information granules, along with pertinent mechanisms for characterizing their quality and their ability to represent the original data (reconstruction aspects). The tradeoffs among the specificity of information granules, their ability to describe the original data, and the related computing overhead are identified and quantified. Building a variety of models (predictors, classifiers, linkage analyzers, etc.) in the presence of information granules (granular data) instead of the original data raises intriguing questions about the relevance of findings discovered at this particular level of abstraction, their comprehension, and their stability (robustness).
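
To make the granulation and reconstruction ideas concrete, here is a minimal sketch assuming one common realization of information granules as cluster prototypes; this is an illustrative choice, not necessarily the construction used in the talk:

```python
import numpy as np
from sklearn.cluster import KMeans

def granulate(X, n_granules=10, seed=0):
    """Build information granules as k-means prototypes and measure
    how well the granular description reconstructs the original data."""
    km = KMeans(n_clusters=n_granules, n_init=10, random_state=seed).fit(X)
    prototypes = km.cluster_centers_          # the information granules
    reconstruction = prototypes[km.labels_]   # each point -> its granule
    error = np.mean(np.sum((X - reconstruction) ** 2, axis=1))
    return prototypes, error

# Tradeoff named in the abstract: more granules means higher specificity
# and lower reconstruction error, but more computing overhead.
X = np.random.randn(1000, 5)
for k in (2, 10, 50):
    _, err = granulate(X, n_granules=k)
    print(f"{k:3d} granules: reconstruction error {err:.3f}")
```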





Prof. Sun-Yuan Kung

IEEE Fellow,
Princeton University, USA


Bio: S.Y. Kung, Life Fellow of IEEE, is a Professor in the Department of Electrical Engineering at Princeton University. His research areas include machine learning, data mining, systematic design of (deep-learning) neural networks, statistical estimation, VLSI array processors, signal and multimedia information processing, and, most recently, compressive privacy. He was a founding member of several Technical Committees (TC) of the IEEE Signal Processing Society. He was elected an IEEE Fellow in 1988 and served as a Member of the Board of Governors of the IEEE Signal Processing Society (1989-1991). He was a recipient of the IEEE Signal Processing Society's Technical Achievement Award for contributions on "parallel processing and neural network algorithms for signal processing" (1992); a Distinguished Lecturer of the IEEE Signal Processing Society (1994); a recipient of the IEEE Signal Processing Society's Best Paper Award for his publication on principal component neural networks (1996); and a recipient of the IEEE Third Millennium Medal (2000). Since 1990, he has been the Editor-in-Chief of the Journal of VLSI Signal Processing Systems. He served as the first Associate Editor in the VLSI area (1984) and the first Associate Editor in neural networks (1991) for the IEEE Transactions on Signal Processing. He has authored and co-authored more than 500 technical publications and numerous textbooks, including "VLSI Array Processors", Prentice-Hall (1988); "Digital Neural Networks", Prentice-Hall (1993); "Principal Component Neural Networks", John Wiley (1996); "Biometric Authentication: A Machine Learning Approach", Prentice-Hall (2004); and "Kernel Methods and Machine Learning", Cambridge University Press (2014).

Topic: MINDnet: A Methodical and Cost-Effective Learning Paradigm for Training Deep Neural Networks

Time: August 1st, 2018, 2:30 PM.

Abstract: We shall first introduce two basic machine learning subsystems: (1) Feature Engineering (FE), e.g. CNNs for image/speech feature extraction, and (2) Label Engineering (LE), e.g. the Multi-layer Perceptron (MLP). It is also important to stress both the strengths and weaknesses of deep learning. On the former, the success of deep neural networks (DNNs) hinges upon the rich nonlinear space embedded in their nonlinear hidden neuron layers. As to the weaknesses, the prevalent concerns over deep learning fall on two major fronts: one analytical and one structural.

From the analytical perspective, the ad hoc nature of deep learning renders its success at the mercy of trial and error. To rectify this problem, we advocate a methodical learning paradigm, MINDnet, which is computationally efficient in training the networks and yet mathematically feasible to analyze. MINDnet hinges upon the use of an effective optimization metric, called Discriminant Information (DI), which is used as a surrogate for popular metrics such as 0-1 loss or prediction accuracy. Mathematically, DI is equivalent or closely related to Gauss' LSE, Fisher's FDR, and Shannon's Mutual Information. We shall explain why higher DI means higher linear separability, i.e. higher DI means that the data are more discriminable. In fact, it can be shown, both theoretically and empirically, that a high DI score usually implies a high prediction accuracy.
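
As a rough illustration of such a metric, the sketch below computes a Fisher-style discriminant ratio from labeled data. The exact DI definition used in MINDnet may differ; the formula, the ridge term rho, and the function name here are assumptions for illustration only:

```python
import numpy as np

def discriminant_information(X, y, rho=1e-3):
    """Fisher-style discriminant ratio, a stand-in for the DI metric
    named in the abstract (simplified form; an assumption, not the
    speaker's exact definition). Larger values indicate greater
    linear separability, i.e. more discriminable data."""
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    S_W = np.zeros((d, d))  # within-class scatter
    S_B = np.zeros((d, d))  # between-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        S_W += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean_all).reshape(-1, 1)
        S_B += len(Xc) * (diff @ diff.T)
    # ridge term rho*I keeps the matrix inverse well conditioned
    return np.trace(np.linalg.solve(S_W + rho * np.eye(d), S_B))
```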

In the structural front, the curse of depth it is widely recognized as a cause of serious concern. Fortunately, many solutions have been proposed to effectively combat or alleviate such a curse. Likewise, in our case, MINDnet offers yet another cost-effective solution by circumventing the depth problem altogether via a new notion (or trick) of omni-present supervision, i.e. teachers hidden a “Trojan-horse” being transported (along with the training data) from the input to each of the hidden layers. Opening up the Trojan-horse at any hidden-layer, we can have direct access to the teacher’s information for free, in the sense that no BP is incurred. In short, it amount to learning with no-propagation (NP). By harnessing the teacher information, we will be able to construct a new and slender “inheritance layer” to summarize all the discriminant information amassed by the previous layer. Moreover, by horizontally augmenting the inheritance layer with additional randomized nodes and applying back-propagation (BP) learning, the discriminant power of to the newly augmented network will be further enhanced.
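
The abstract does not spell out the construction, so the following toy sketch is only our reading of the omni-present supervision idea: at each depth, randomized hidden nodes are added, and a slender inheritance layer is then fit in closed form against the teacher labels, with no back-propagation. All names, sizes, and the ridge solver are illustrative assumptions, not the actual MINDnet implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def ridge(H, Y, lam=1e-2):
    # closed-form least squares: the "no-propagation" teacher access
    return np.linalg.solve(H.T @ H + lam * np.eye(H.shape[1]), H.T @ Y)

# Toy data: two classes, one-hot teacher labels
X = rng.standard_normal((500, 20))
y = (X[:, 0] * X[:, 1] > 0).astype(int)
Y = np.eye(2)[y]

H = X
for layer in range(3):
    # Horizontally augment with randomized hidden nodes (ReLU) ...
    R = np.maximum(H @ rng.standard_normal((H.shape[1], 64)), 0)
    H_aug = np.hstack([H, R])
    # ... then open the "Trojan horse": the teacher labels are available
    # at this hidden layer, so a slender inheritance layer summarizing
    # the discriminant information is fit in closed form, without BP.
    W = ridge(H_aug, Y)
    H = H_aug @ W  # 2-unit inheritance layer feeds the next depth
    acc = (H.argmax(1) == y).mean()
    print(f"layer {layer + 1}: training accuracy {acc:.3f}")
```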

In our experiments, MINDnet was applied to several real-world datasets, including the CIFAR-10 dataset reported below. As the baseline of comparison, the highest prediction accuracies published in recent years are: 93.57% (ResNet, 2015) < 96.01% (DenseNet, 2016) < 97.35% (NAS-Net, 2018). For fairness, we applied both MINDnet and an MLP (with ReLU/dropout) to the same 64-dimensional feature vectors extracted by ResNet. Our results show that MINDnet can deliver a substantial margin of improvement, up by nearly 5% over the original baseline of 93.57%. In short, MINDnet has the highest performance so far: 98.26% (MINDnet, 2018).

In summary, MINDnet advocates a new learning paradigm to Monotonically INcrease the Discriminative power (quantified by DI) of the classifying networks. It offers a new LE learning model to efficiently tackle both the aforementioned analytical and structural concerns over deep learning networks.




Prof. Bhavani Thuraisingham

Louis A. Beecherl, Jr. Distinguished Professor,
Department of Computer Science
Executive Director of the Cyber Security Research and Education Institute
Erik Jonsson School of Engineering and Computer Science
The University of Texas at Dallas, USA.

Bio: Dr. Bhavani Thuraisingham is the Louis A. Beecherl, Jr. Distinguished Professor in the Erik Jonsson School of Engineering and Computer Science at the University of Texas at Dallas (UTD) and the Executive Director of UTD's Cyber Security Research and Education Institute. She is also a visiting Senior Research Fellow at King's College, University of London, and a 2017-2018 Cyber Security Policy Fellow at the New America Foundation. Her research is on integrating Data Science and Cyber Security. Prior to joining UTD she worked at the MITRE Corporation for 16 years, including a three-year stint as a Program Director at the NSF, where she managed the Information Management and Analytics area and was part of the Cyber Trust theme. She was also a Department Head in Information and Data Management at MITRE. Prior to MITRE, she worked in commercial industry for six years, including at Honeywell. She is the recipient of numerous awards, including the IEEE CS 1997 Technical Achievement Award, the ACM SIGSAC 2010 Outstanding Contributions Award, the 2013 IBM Faculty Award, the 2017 ACM CODASPY Research Award, the 2017 IEEE CS Services Computing Technical Committee Research Innovation Award, and the 2018 ACM SACMAT Best Paper Test of Time Award. She is a 2003 Fellow of the IEEE and the AAAS and a 2005 Fellow of the British Computer Society. She has published over 120 journal articles, 250 conference papers, and 15 books, has delivered over 130 keynote addresses, and is the inventor of six patents in data analytics and secure data management. She co-chaired the Women in Cyber Security conference (WiCyS) in 2016 and is serving as Co-Program Chair of the 2018 IEEE Conference on Data Mining.

Topic: Secure Data Science: Integrating Cyber Security and Data Science

Time: August 1st, 2018, 9:00 AM.

Abstract: The collection, storage, manipulation, analysis, and retention of massive amounts of data have resulted in serious security and privacy considerations. Various regulations are being proposed to handle big data so that the privacy of individuals is not violated. For example, even if personally identifiable information is removed from the data, an individual can still be identified when the data is combined with other data. While collecting massive amounts of data causes security and privacy concerns, the application of big data analytics to cyber security is exploding. For example, an organization can outsource activities such as identity management, intrusion detection, and malware analysis to the cloud. The question is, how can the developments in data science techniques be used to solve security problems? Furthermore, how can we ensure that such techniques are secure and adapt to adversarial attacks? This presentation will first describe our research in big data security and privacy as well as our development of a privacy-aware data management framework. Second, it will discuss stream data analytics and novel class detection and describe their applications to insider threat detection. Third, it will discuss the emerging research area of adversarial machine learning. Finally, it will discuss applications to assured information sharing.




Prof. Jie Wu

IEEE Fellow,
Director of International Affairs,
College of Science and Technology,
Director of Center for Networked Computing (CNC),
Laura H. Carnell Professor, Department of Computer and Information Sciences,
Temple University

Bio: Jie Wu is a Chinese computer scientist. He is the Associate Vice Provost for International Affairs and the Director of the Center for Networked Computing at Temple University. He also serves as the Laura H. Carnell Professor in the Department of Computer and Information Sciences. He served as Program Director of Networking Technology and Systems (NeTS) at the National Science Foundation from 2006 to 2008. Jie Wu is noted for his research in routing for wired and wireless networks. His main technical contributions include fault-tolerant routing in hypercube-based multiprocessors, local construction of connected dominating sets and their applications in mobile ad hoc networks, and efficient routing in delay-tolerant networks, including social contact networks.

He served as the General Chair of IEEE ICDCS 2013, IEEE IPDPS 2008, and IEEE MASS 2006 and as the Program Chair of CCF CNCC 2013, IEEE INFOCOM 2011, and IEEE MASS 2004. He is a Fellow of the IEEE and serves on the editorial boards of a number of journals, including IEEE Transactions on Computers (TC), IEEE Transactions on Services Computing (TSC), and the Journal of Parallel and Distributed Computing (JPDC). He received the 2011 China Computer Federation (CCF) Overseas Outstanding Achievements Award. He was a Fulbright Senior Specialist, was an IEEE Distinguished Visitor and an ACM Distinguished Speaker, and is currently a CCF Distinguished Speaker.

Topic: On Authenticated Query Processing via Untrusted Cloud Service Providers

Time: August 1st, 2018, 10:15 AM.

Abstract: In data publishing, the owner usually delegates the role of query processing to a third-party publisher, such as a cloud service provider (CSP). CSPs are untrusted, as they can fabricate query results, provide incomplete ones, or do both. We therefore need sound yet efficient mechanisms to ensure the completeness and authenticity of query results returned by a CSP. Validation can be done in one of two ways: a small number of digests distributed periodically from the owner to the user, or verification objects stored at the CSP, together with the data, by the owner and passed to the user as part of the query results. We consider a set of special queries which return a partition of the data based on the notion of logical or physical vicinity. These queries include range, top-k, skyline, and kNN (k-nearest-neighbor) queries. Verification objects, built from digital signatures and hash functions, authenticate and compress all partitioned data through chains and trees. The design of verification objects also depends on the query type and the structure of the data, which may be multi-dimensional. This talk discusses several efficient designs of verification objects, focusing on a special verification object based on composite linear certified chains. Such chains can be efficiently applied to multi-dimensional data applications where the data change relatively frequently, so that the certified chains can be quickly updated as well.
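
As a minimal sketch of the certified-chain idea for one-dimensional range queries (a toy setup: a real deployment would have the owner sign each digest with a private key, and the composite chains in the talk cover multi-dimensional data), consider hash-linking each record to its successor in sorted order. Boundary sentinels let the user verify that no records just outside the range were silently dropped:

```python
import hashlib

def digest(left, right):
    # In practice this digest would be digitally signed by the data owner.
    return hashlib.sha256(f"{left}|{right}".encode()).hexdigest()

def build_chain(sorted_values):
    # Link each record to its successor; -inf/+inf sentinels anchor the ends.
    vals = ["-inf"] + [str(v) for v in sorted_values] + ["+inf"]
    return {vals[i]: digest(vals[i], vals[i + 1]) for i in range(len(vals) - 1)}

# Owner publishes the (signed) chain; the CSP answers a range query and
# must also return the two boundary records. The user recomputes every
# adjacent digest: an omitted record or an injected fake breaks the chain.
data = [5, 12, 19, 33, 47]
chain = build_chain(data)
result = [12, 19, 33]  # CSP's answer to the range query [10, 40]
links = ["5"] + [str(v) for v in result] + ["47"]  # boundary records included
ok = all(chain[links[i]] == digest(links[i], links[i + 1])
         for i in range(len(links) - 1))
print("completeness and authenticity check passed:", ok)
```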




Prof. Zhiyun Qian
Computer Science & Engineering Department

University of California Riverside, USA

Bio: Dr. Zhiyun Qian is an associate professor at the University of California, Riverside. His research interests are in system and network security, including vulnerability discovery, Internet security (e.g., TCP/IP), Android security, and side channels. He has published more than a dozen papers at top security conferences, including IEEE Security & Privacy, ACM CCS, USENIX Security, and NDSS. His work has resulted in real-world impact, with security patches applied in the Linux kernel, Android, and firewall products. His work on TCP side channel attacks won the Most Creative Idea Award at GeekPwn 2016 and a Winner Award at GeekPwn 2017. His work is currently supported by 8 NSF grants (including the NSF CAREER Award) and two industrial gifts.

Topic: Network side channel attacks: An Oversight Yesterday, A Lingering Threat Today

Time: August 1st, 2018, 1:30 PM.

Abstract: In this talk, I will discuss the history of attacks against one of the most widely used protocols --- TCP. As side channels were never really considered carefully when designing network protocols, I will demonstrate how a blind off-path attacker can use side channels to hijack a remote TCP connection. Very recently, we showed that a pure off-path attack can be carried out against Linux hosts without running any malicious code on either the client or the server. Essentially, the attacker can infer whether any two arbitrary hosts on the Internet are communicating over a TCP connection. Further, if the connection is present, such an off-path attacker can also infer the TCP sequence numbers in use, from both sides of the connection; this in turn allows the attacker to cause connection termination and perform data injection attacks. I will conclude with insights on how to systematically discover and fix such problems.
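
To show the flavor of such an inference, here is a conceptual simulation (an assumption for illustration, not exploit code) of the challenge-ACK rate-limit side channel from the USENIX Security 2016 work this talk draws on: pre-patch Linux capped challenge ACKs at a global 100 per second, shared across all connections, so an off-path attacker can learn whether a guessed 4-tuple matches a live connection by watching how much of that budget remains:

```python
RATE_LIMIT = 100  # global per-second challenge-ACK budget (pre-patch Linux)

class SimulatedServer:
    """Stand-in for a vulnerable server; real attacks use raw packets."""
    def __init__(self, live_connections):
        self.live = set(live_connections)  # active (ip, port) client tuples
        self.budget = RATE_LIMIT           # refilled every second in reality

    def receive_spoofed_rst(self, src):
        # A wrong-sequence packet on a live connection triggers a challenge
        # ACK (sent to the real client), consuming one unit of the budget.
        if src in self.live and self.budget > 0:
            self.budget -= 1

    def elicit_challenge_acks(self, attempts):
        # Challenge ACKs the attacker can elicit on its own connection.
        sent = min(attempts, self.budget)
        self.budget -= sent
        return sent

def off_path_probe(server, guess):
    server.receive_spoofed_rst(guess)                # 1: spoofed probe
    seen = server.elicit_challenge_acks(RATE_LIMIT)  # 2: drain the budget
    return seen < RATE_LIMIT                         # 3: one leaked bit

srv = SimulatedServer(live_connections={("10.0.0.7", 51734)})
print(off_path_probe(srv, ("10.0.0.7", 51734)))  # True  -> connection exists
srv.budget = RATE_LIMIT                          # next 1-second window
print(off_path_probe(srv, ("10.0.0.9", 40000)))  # False -> no such connection
```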




Prof. Ruqian Lu

Academician of the Chinese Academy of Sciences,
Academy of Mathematics and Systems Science,
Chinese Academy of Sciences, China.

Bio: Ruqian Lu is a professor of computer science at the Institute of Mathematics, Academy of Mathematics and Systems Science, and at the same time an adjunct professor at the Institute of Computing Technology, Chinese Academy of Sciences, and at Peking University. He is also an academician of the Chinese Academy of Sciences. His research interests include artificial intelligence, knowledge engineering, knowledge-based software engineering, formal semantics of programming languages, and quantum information processing. He has published more than 180 papers and 10 books. He has won two first-class awards from the Chinese Academy of Sciences and a national second-class prize from the Ministry of Science and Technology. He has also won the 2003 Hua Loo-keng Mathematics Prize from the Chinese Mathematical Society and the 2014 Lifetime Achievement Award from the China Computer Federation.

Topic: Next to Big Data is Big Knowledge

Time: August 2nd, 2018, 1:30 PM.

Abstract: Recently, the topic of mining big data to obtain knowledge (called big data knowledge engineering) has attracted intense interest from researchers, and the concept of big knowledge was coined in this process. The new challenge is to mine big knowledge (not just knowledge) from big data. While researchers have explored the basic characteristics of big data, it seems that very few, if any, have tried to define or summarize the basic characteristics of big knowledge. This talk will first provide a retrospective view of research on big data knowledge engineering and then formally introduce the concept of big knowledge with five major characteristics, both qualitatively and quantitatively. Using these characteristics we investigate six large-scale knowledge engineering projects: the Shanghai project of the fifth comprehensive investigation of the city's traffic, the Xia-Shang-Zhou chronology project, the Troy city and Trojan War excavation, the international human genome project, the Wiki-world project, and the currently very active research on knowledge graphs. We show that some of them are big-knowledge projects while some are not. Based on these discussions, the concept of a big-knowledge system is introduced with five additional characteristics. Big-knowledge engineering concepts and their lifecycle models are also introduced and discussed. Finally, a group of future research problems on big knowledge is proposed.