Keynote Speakers
Dec. 10th-12th, 2018, Waseda University, Tokyo, Japan

Prof. Ruqian Lu

Academician of the Chinese Academy of Sciences

Academy of Mathematics and Systems Science, Chinese Academy of Sciences, China.

  • Bio
    Ruqian Lu is a professor of computer science at the Institute of Mathematics, Academy of Mathematics and Systems Science, and concurrently an adjunct professor at the Institute of Computing Technology, Chinese Academy of Sciences, and at Peking University. He is also a fellow (academician) of the Chinese Academy of Sciences. His research interests include artificial intelligence, knowledge engineering, knowledge-based software engineering, formal semantics of programming languages, and quantum information processing. He has published more than 180 papers and 10 books. He has won two first-class awards from the Chinese Academy of Sciences and a national second-class prize from the Ministry of Science and Technology. He also won the 2003 Hua Loo-keng Mathematics Prize from the Chinese Mathematical Society and the 2014 Lifetime Achievement Award from the China Computer Federation.
  • Topic
    Next to Big Data is Big Knowledge
  • Time
    December 11, 2018, 9:00 AM - 10:00 AM
  • Abstract
    Recently, the topic of mining big data to obtain knowledge (called big data knowledge engineering) has attracted intense interest from researchers, and the concept of big knowledge was coined in the process. The new challenge is to mine big knowledge (not just knowledge) from big data. While researchers have explored the basic characteristics of big data in the past, few if any have tried to define or summarize the basic characteristics of big knowledge. This talk will first provide a retrospective view of research on big data knowledge engineering and then formally introduce the big knowledge concept with five major characteristics, both qualitatively and quantitatively. Using these characteristics, we investigate six large-scale knowledge engineering projects: the Shanghai project of the fifth comprehensive investigation of the city's traffic, the Xia-Shang-Zhou chronology project, the Troy city and Trojan War excavation project, the international Human Genome Project, the Wiki-world project, and the currently very active research on knowledge graphs. We show that some of them are big-knowledge projects while others are not. Based on these discussions, the concept of a big-knowledge system will be introduced with five additional characteristics. Big-knowledge engineering concepts and their lifecycle models are also introduced and discussed. Finally, a group of open research problems on big knowledge is proposed.

Prof. H. J. Siegel

Department of Electrical and Computer Engineering

Department of Computer Science

Colorado State University

Fort Collins, Colorado, USA

  • Bio
    H. J. Siegel is a Professor Emeritus and Senior Research Scientist/Scholar at Colorado State University (CSU). From 2001 to 2017, he was the George T. Abell Endowed Chair Distinguished Professor of Electrical and Computer Engineering at CSU, where he was also a Professor of Computer Science. He was a professor at Purdue University from 1976 to 2001. He received two B.S. degrees from the Massachusetts Institute of Technology (MIT), and the M.A., M.S.E., and Ph.D. degrees from Princeton University. He is a Fellow of the IEEE and a Fellow of the ACM. Prof. Siegel has co-authored over 450 published technical papers in the areas of parallel and distributed computing and communications, which have been cited over 18,000 times. He was a Co-Editor-in-Chief of the Journal of Parallel and Distributed Computing, and was on the Editorial Boards of the IEEE Transactions on Parallel and Distributed Systems and the IEEE Transactions on Computers. For more information, please see www.engr.colostate.edu/~hj.
  • Topic
    Measuring Robustness in Computing Systems
  • Time
    December 11, 2018, 10:15 AM - 11:15 AM
  • Abstract
    Throughout all fields of science and engineering, it is important that resources are allocated so that systems are robust against uncertainty. The robustness analysis approach presented here can be adapted to a variety of computing and communication environments.
    What does it mean for a system to be "robust"? How can the performance of a system be robust against uncertainty? How can robustness be described? How does one determine if a claim of robustness is true? How can one measure robustness to decide which of two systems is more robust?
    We explore these general questions in the context of parallel and distributed computing systems. Such computing systems are often heterogeneous mixtures of machines, used to execute collections of tasks with diverse computational requirements. A critical research problem is how to allocate heterogeneous resources to tasks to optimize some performance objective. However, systems frequently have degraded performance due to uncertainties, such as inaccurate estimates of actual workload parameters. To reduce this degradation, we present a model for deriving the robustness of a resource allocation. The robustness of a resource allocation is quantified as the probability that a user-specified level of system performance can be met. We show how to use historical data to build a probabilistic model to evaluate the robustness of resource assignments and to design resource management techniques that produce robust allocations, as sketched in the example below.
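    A minimal sketch of the idea described in the abstract, not code from the talk: per-task execution times on each machine are modeled empirically from historical samples, and the robustness of an allocation is estimated by Monte Carlo sampling as the fraction of trials in which the makespan stays within a user-specified limit. All names and numbers below (robustness, history, machines m1/m2, tasks t1-t3, and the timing data) are hypothetical, for illustration only.

    import random

    def robustness(allocation, history, makespan_limit, trials=10_000):
        """Estimate P(makespan <= makespan_limit) for one allocation.

        allocation: dict mapping machine -> list of tasks assigned to it
        history:    dict mapping (task, machine) -> list of observed
                    execution times (the empirical uncertainty model)
        """
        met = 0
        for _ in range(trials):
            # Sample one plausible execution time per task from its
            # history; the slowest machine's total is the makespan.
            makespan = max(
                sum(random.choice(history[(task, machine)]) for task in tasks)
                for machine, tasks in allocation.items()
            )
            if makespan <= makespan_limit:
                met += 1
        return met / trials

    # Hypothetical historical execution times (seconds) and allocation.
    history = {
        ("t1", "m1"): [4.0, 4.5, 5.1],
        ("t2", "m1"): [2.0, 2.2, 2.6],
        ("t3", "m2"): [6.0, 6.8, 7.5],
    }
    alloc = {"m1": ["t1", "t2"], "m2": ["t3"]}
    print(robustness(alloc, history, makespan_limit=8.0))

    Given two candidate allocations, the one with the higher estimated probability of meeting the performance limit is, in this sense, the more robust.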
