Geoffrey Charles Fox received a Ph.D. in Theoretical Physics from Cambridge University, where he was Senior Wrangler. He is now a distinguished professor of Engineering, Computing, and Physics at Indiana University, where he is the director of the Digital Science Center. He previously held positions at Caltech, Syracuse University, and Florida State University after postdoctoral appointments at the Institute for Advanced Study in Princeton, Lawrence Berkeley Laboratory, and Peterhouse, Cambridge. He has supervised the Ph.D. theses of 73 students and published around 1300 papers (over 510 with at least ten citations) in physics and computing, with an h-index of 78 and over 35000 citations. He is a Fellow of the APS (Physics) and the ACM (Computing) and works on the interdisciplinary interface between computing and applications. His current work is in Biology, Pathology, Sensor Clouds and Ice-sheet Science, Image Processing, Deep Learning, and Particle Physics. His architecture work is built around high-performance computing enhanced software-defined big data systems on clouds and clusters. The analytics focuses on scalable parallel machine learning. He is an expert on streaming data and robot-cloud interactions. He is involved in several projects to enhance the capabilities of Minority Serving Institutions. He has experience in online education and its use in MOOCs for areas like Data and Computational Science.
Title: HPC, Big Data, and Machine Learning Convergence
Abstract: We describe how High Performance Computing (HPC) can be used to enhance Big Data and Machine Learning (ML) systems (HPCforML), and also how machine learning can be used to enhance system execution (MLforHPC). We discuss how these communities, together with cloud and IoT, make up today's academic and industry activities. We note the differences between HPC software architectures and Big Data systems such as Hadoop, Spark, and TensorFlow, which make it difficult to integrate HPC and Big Data. Further, we identify eight distinct ways MLforHPC can be used, give examples, and describe computer science challenges in implementing these different ways that HPC and ML interact. We describe our big data framework Twister2 and explain where it can offer improved capabilities over current systems in both MLforHPC and HPCforML.
Ümit V. Çatalyürek is currently professor and associate chair of the School of Computational Science and Engineering in the College of Computing at the Georgia Institute of Technology. He received his PhD, MS and BS in Computer Engineering and Information Science from Bilkent University, in 2000, 1994 and 1992, respectively. Professor Çatalyürek is a Fellow of the IEEE, a member of the Association for Computing Machinery (ACM) and the Society for Industrial and Applied Mathematics, and the elected chair of the IEEE Technical Committee on Parallel Processing for 2016-2019. He is also vice-chair of the ACM Special Interest Group on Bioinformatics, Computational Biology and Biomedical Informatics for 2015-2021. He currently serves as the editor-in-chief of Parallel Computing, and on the program and organizing committees of numerous international conferences. His main research areas are parallel computing, combinatorial scientific computing and biomedical informatics. He has co-authored more than 200 peer-reviewed articles, invited book chapters and papers.
Title: Seeking Performance Portability on Graph Analytics
Abstract: Graphs have become the de facto standard for modeling complex relations and networks in computing. With the increase in the size of graphs and the complexity of the analyses performed on them, many software systems have been designed to leverage modern high performance computing platforms. Some of them provide a very productive programming environment for graph analysis; however, they cannot come even close to single-threaded performance. In this talk, we will briefly present some important graph analytics problems and techniques, such as centrality, pattern search, and alignment, and talk about how we achieve high performance on modern computer architectures.
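To make the abstract's comparison concrete, here is a minimal single-threaded reference for one of the kernels mentioned, closeness centrality on an unweighted graph; the function name `closeness` and the adjacency-list representation are choices made here for illustration, not taken from any particular system discussed in the talk. Closeness of a vertex is (n-1) divided by the sum of its BFS distances to all other vertices.

```cpp
#include <queue>
#include <vector>

// Sketch: closeness centrality via one BFS per source vertex.
// adj[u] lists the neighbors of vertex u (unweighted, undirected graph).
// closeness(v) = (n - 1) / sum of shortest-path distances from v.
std::vector<double> closeness(const std::vector<std::vector<int>>& adj) {
    int n = (int)adj.size();
    std::vector<double> c(n, 0.0);
    for (int s = 0; s < n; ++s) {
        std::vector<int> dist(n, -1);   // -1 marks unvisited vertices
        std::queue<int> q;
        dist[s] = 0;
        q.push(s);
        long total = 0;                 // running sum of BFS distances
        while (!q.empty()) {
            int u = q.front(); q.pop();
            total += dist[u];
            for (int v : adj[u])
                if (dist[v] < 0) { dist[v] = dist[u] + 1; q.push(v); }
        }
        c[s] = total > 0 ? double(n - 1) / total : 0.0;
    }
    return c;
}
```

Running n breadth-first searches costs O(n(n+m)) time; it is exactly this kind of memory-bound, irregular-access kernel that makes graph analytics hard to accelerate on parallel hardware.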
Jin Song Dong is a professor at the School of Computing at the National University of Singapore (NUS) and a research professor at Griffith University (part-time). From 2017 to 2018, Jin Song was the Director of the Institute for Integrated Intelligent Systems (IIIS) at Griffith University and managed to double IIIS external funding in two years. His research is in the areas of formal methods, safety and security systems, probabilistic reasoning and trusted machine learning. He co-founded the PAT verification system, which has attracted 4000+ registered users from 1000+ organizations in 150 countries and won the 20-Year ICFEM Most Influential System Award in 2018. Jin Song is on the editorial boards of ACM Transactions on Software Engineering and Methodology, Formal Aspects of Computing, and Innovations in Systems and Software Engineering, a NASA journal. He has successfully supervised 26 PhD students, many of whom have become tenured faculty members at leading universities around the world. He is a Fellow of the Institution of Engineers Australia.
Title: Trusted Decision Making
Abstract: Model checking was invented for formally verifying concurrent and parallel systems. This talk focuses on applying model checking to event planning, goal reasoning, prediction, strategy analysis and decision making based on the Process Analysis Toolkit (PAT). PAT integrates the expressiveness of state, event, time, and probability-based languages with the power of model checking. PAT currently supports various modeling languages with many application domains and has attracted thousands of registered users from hundreds of organizations. In this talk, we will also present some ongoing research projects, e.g., “goal analytics for autonomous systems” and “Silas: trusted machine learning” (in collaboration with Dependable Intelligence, www.depintel.com).
Koji Nakano received the BE, ME and Ph.D. degrees from the Department of Computer Science, Osaka University, Japan in 1987, 1989, and 1992, respectively. From 1992 to 1995, he was a Research Scientist at the Advanced Research Laboratory, Hitachi Ltd. In 1995, he joined the Department of Electrical and Computer Engineering, Nagoya Institute of Technology. In 2001, he moved to the School of Information Science, Japan Advanced Institute of Science and Technology, where he was an associate professor. He has been a full professor at the School of Engineering, Hiroshima University since 2003. He has published extensively in journals, conference proceedings, and book chapters. He has served on the editorial boards of journals including IEEE Transactions on Parallel and Distributed Systems, IEICE Transactions on Information and Systems, and International Journal of Foundations of Computer Science. His research interests include machine learning, quantum computing, GPU-based computing, FPGA-based reconfigurable computing, parallel computing, algorithms and architectures.
Title: Single Kernel Soft Synchronization Technique to Maximize the Hardware Resource Usage of the GPU
Abstract: Usually, CUDA programs running on the GPU perform multiple separate kernel calls to achieve barrier synchronization of all threads, because a deadlock may occur when data requests are made to waiting CUDA blocks and there is no direct way to synchronize CUDA blocks within a kernel call. The number of kernel calls should be as small as possible to reduce the kernel call overhead. In this talk, the Single Kernel Soft Synchronization (SKSS) technique, which performs only one kernel call, is introduced. The idea of SKSS is to use a global counter to assign sequential IDs to invoked CUDA blocks, so that blocks with smaller IDs work on tasks that must start earlier. We can guarantee that deadlock never occurs if all data requests are destined for CUDA blocks with smaller IDs. We will show that several problems, such as the prefix-sums, the summed area table (SAT), the 0-1 knapsack problem, and error diffusion, can be solved very efficiently on the GPU using the SKSS technique.
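The ID-assignment idea behind SKSS can be sketched off-GPU. The following is a minimal host-side illustration, assuming plain C++ threads stand in for CUDA blocks and using a function name (`skss_prefix_sums`) invented here: a global atomic counter hands out sequential IDs to workers as they start (the counterpart of an in-kernel atomicAdd), and each worker's only data request, the running prefix, is directed to the worker with the next-smaller ID, so by the argument in the abstract no deadlock can occur.

```cpp
#include <atomic>
#include <thread>
#include <vector>

// Host-side sketch of the SKSS idea for the prefix-sums problem.
// Threads play the role of CUDA blocks; a global counter assigns each
// one a sequential ID when it starts, so earlier tasks always go to
// smaller IDs regardless of how the scheduler launches the workers.
std::vector<long> skss_prefix_sums(const std::vector<long>& data, int nblocks) {
    std::atomic<int> next_id{0};              // global counter, as in SKSS
    std::vector<long> carry(nblocks + 1, 0);  // carry[i] = sum of tiles 0..i-1
    std::vector<std::atomic<bool>> ready(nblocks + 1);
    for (auto& r : ready) r.store(false);
    ready[0].store(true);                     // the block with ID 0 may start

    std::vector<long> out(data.size());
    auto worker = [&]() {
        int id = next_id.fetch_add(1);        // grab the next sequential ID
        size_t lo = data.size() * id / nblocks;
        size_t hi = data.size() * (id + 1) / nblocks;
        // The only wait is on the smaller ID's flag: deadlock-free.
        while (!ready[id].load()) std::this_thread::yield();
        long acc = carry[id];                 // prefix published by ID id-1
        for (size_t k = lo; k < hi; ++k) out[k] = (acc += data[k]);
        carry[id + 1] = acc;
        ready[id + 1].store(true);            // release the next block
    };
    std::vector<std::thread> pool;
    for (int b = 0; b < nblocks; ++b) pool.emplace_back(worker);
    for (auto& t : pool) t.join();
    return out;
}
```

On a real GPU the same structure runs inside a single kernel with `atomicAdd` on a counter in global memory; the sketch above only conveys why dynamic ID assignment plus "request only from smaller IDs" yields a safe single "kernel" launch.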