Keynote Speakers

Distinguished Keynote Speaker:

Why is it so Hard to Make Self-driving Cars?
(Trustworthy Autonomous Systems)

JOSEPH SIFAKIS (Verimag Laboratory, Grenoble, France)

Why is self-driving so hard? Despite the enthusiastic involvement of big technology companies and the massive investment of many billions of dollars, all the optimistic predictions about self-driving cars being “just around the corner” have proven utterly wrong.

I argue that these difficulties emblematically illustrate the challenges raised by the vision for trustworthy autonomous systems. These are critical systems intended to replace human operators in complex organizations, very different from other intelligent systems such as game-playing robots or intelligent personal assistants.

I discuss the complexity limitations inherent not only in autonomic behavior but also in integration into complex cyber-physical and human environments. I argue that existing critical systems engineering techniques fall short of meeting the complexity challenge. I also argue that the emerging end-to-end AI-enabled solutions currently developed by industry fail to provide the required strong trustworthiness guarantees.

I advocate a hybrid design approach that combines model-based and data-based techniques and seeks tradeoffs between performance and trustworthiness. I also discuss the validation problem, emphasizing the need for rigorous simulation and testing techniques that allow technically sound safety evaluation.

I conclude that building trustworthy autonomous systems goes far beyond the current AI vision. To reach this vision, we need a new scientific foundation enriching and extending traditional systems engineering with data-based techniques.

Prof. Joseph Sifakis is Emeritus Research Director at the Verimag laboratory. His current area of interest is the design of trustworthy autonomous systems, with a focus on self-driving cars. In 2007, he received the Turing Award for his contribution to the theory and application of model checking. He is a member of the French Academy of Sciences, the French National Academy of Engineering, Academia Europaea, the American Academy of Arts and Sciences, the National Academy of Engineering, and the Chinese Academy of Sciences. He is a Grand Officer of the French National Order of Merit and a Commander of the French Legion of Honor. He received the Leonardo da Vinci Medal in 2012.

Keynote Speakers:

Easy Development and Execution of Workflows with eFlows4HPC

ROSA M. BADIA (Barcelona Supercomputing Center, Spain)

Distributed computing infrastructures are evolving from traditional models to environments that involve sensors, edge devices, and instruments, as well as high-end computing systems such as clouds and HPC clusters. A key aspect is how to describe and develop the applications to be executed on such platforms. Moreover, data analytics and artificial intelligence in general are in high demand in current HPC applications, yet the methodologies for developing workflows that combine HPC simulations with data analytics are not well integrated. The eFlows4HPC project aims to provide a workflow software stack and an additional set of services that enable the integration of HPC simulation and modeling with big data analytics and machine learning in scientific and industrial applications. The project is also developing the HPC Workflows as a Service (HPCWaaS) methodology, which aims to provide tools that simplify the development, deployment, execution, and reuse of workflows. The project will demonstrate its advances through three application pillars of high industrial and social relevance: manufacturing, climate, and urgent computing for natural hazards. These applications will help to show how forthcoming efficient HPC and data-centric applications can be developed with new workflow technologies. The talk will present the motivation, challenges, and project workplan.
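
As an illustration of the task-based workflow style the project builds on, the following is a minimal sketch in the spirit of PyCOMPSs. The task names, data, and workflow structure are hypothetical; only the @task decorator and compss_wait_on call follow the public PyCOMPSs API, and this is not code from eFlows4HPC itself.

    # Illustrative sketch only: a toy task-based workflow in the PyCOMPSs style.
    # The task names and data are invented; @task and compss_wait_on follow the
    # public PyCOMPSs API, but this is not code from the eFlows4HPC project.
    from pycompss.api.task import task
    from pycompss.api.api import compss_wait_on

    @task(returns=1)
    def run_simulation(step):
        # Placeholder for an HPC simulation stage (e.g., one solver invocation).
        return {"step": step, "field": [0.1 * step] * 4}

    @task(returns=1)
    def analyse(snapshot):
        # Placeholder for a data-analytics / machine-learning stage.
        return sum(snapshot["field"]) / len(snapshot["field"])

    if __name__ == "__main__":
        # Tasks are submitted asynchronously; the runtime builds the dependency
        # graph and schedules tasks on the available distributed resources.
        partial = [analyse(run_simulation(s)) for s in range(8)]
        results = compss_wait_on(partial)   # synchronise and fetch the results
        print(results)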

Rosa M. Badia holds a Ph.D. in Computer Science (1994) from the Technical University of Catalonia (UPC). She is the manager of the Workflows and Distributed Computing research group at the Barcelona Supercomputing Center (BSC). She is considered one of the key researchers in parallel programming models for multicore and distributed computing thanks to her contributions to task-based programming models over the last 15 years. The research group focuses on PyCOMPSs/COMPSs, a task-based parallel programming model for distributed computing, and its application to the development of large heterogeneous workflows that combine HPC, Big Data, and Machine Learning. The group also does research on dislib, a parallel machine learning library built on top of PyCOMPSs. Dr. Badia has published nearly 200 papers in international conferences and journals on the topics of her research. She has been very active in projects funded by the European Commission and in contracts with industry. She has been actively contributing to the BDEC international initiative and is a member of the HiPEAC Network of Excellence. She received the Euro-Par Achievement Award 2019 for her contributions to parallel processing and the DonaTIC award, category Academia/Researcher, in 2019. Rosa Badia is the principal investigator of eFlows4HPC.

Simulating Our Universe: Leveraging Present and Future Supercomputing Systems

JOACHIM STADEL (University of Zurich, Switzerland)

We have seen enormous progress in the field of cosmology, to the point that we now refer to the science as “precision cosmology”. This has been brought about by the discovery and quantification of temperature (and hence density) fluctuations in the cosmic microwave background radiation (CMB). The satellites COBE, WMAP, and PLANCK have allowed us to measure the total amounts of matter, dark matter, and dark energy in the Universe, despite the fact that the last two of these remain mysterious. The effect of weak lensing allows us to “map out” the dark matter between us and distant galaxies, which will be observed over a very large fraction of the sky by the ESA Euclid mission.

Modelling what Euclid will see requires very large simulations to achieve even the minimum requirements, both to quantify systematics in the measurements and to connect the observations with the fundamental physics parameters. To perform simulations with trillions of particles we need to use the O(N) Fast Multipole Method and to exploit all the performance that modern supercomputer architectures can offer. I will discuss some of these details, including vector and GPU computing, hybrid communication, load balancing, and memory “crunching”. These methods have been used in the N-body gravity code PKDGRAV3, which has performed the world’s largest N-body simulation for Euclid, following 4 trillion particles evolving under their own self-gravity.

Looking to the future, we can currently simulate the baryons along with the dark matter, forming very realistic galaxies. However, these simulations are very challenging and their precision is still debatable. Extending such simulations to very large portions of the observable Universe will remain impossible for a long time yet, but replacing parts of the simulation with machine learning, trained on a wide range of galaxy-formation environments, may be a way to jump to larger scales. This would be the frontier of the field of computational cosmology.
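
To make the scaling argument concrete, here is a minimal direct-summation gravity sketch in Python: the naive all-pairs force calculation costs O(N²), which is exactly what makes an O(N) Fast Multipole Method indispensable at trillions of particles. This is not PKDGRAV3 code; the units, softening length, and particle data are invented for illustration.

    # Illustrative sketch only: direct-summation gravity, the O(N^2) baseline
    # that the O(N) Fast Multipole Method avoids. Not PKDGRAV3 code; units,
    # softening and particle data are invented.
    import numpy as np

    G = 1.0       # gravitational constant in code units (assumption)
    EPS = 1e-3    # softening length to avoid singular forces (assumption)

    def direct_accelerations(pos, mass):
        """All-pairs accelerations: every particle interacts with every other,
        so the cost grows as N^2 -- hopeless for N ~ 4e12 particles."""
        n = len(pos)
        acc = np.zeros_like(pos)
        for i in range(n):
            d = pos - pos[i]                        # vectors to all particles
            r2 = (d * d).sum(axis=1) + EPS ** 2     # softened squared distances
            r2[i] = np.inf                          # exclude self-interaction
            acc[i] = G * (mass[:, None] * d / r2[:, None] ** 1.5).sum(axis=0)
        return acc

    # Even 1,000 particles already require ~10^6 pair interactions per step.
    rng = np.random.default_rng(0)
    pos = rng.random((1000, 3))
    mass = np.full(1000, 1.0 / 1000)
    acc = direct_accelerations(pos, mass)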

Joachim Stadel is Professor of Computational Cosmology at the University of Zurich, in the Institute of Computational Science. He holds a Ph.D. in Astronomy from the University of Washington, Seattle. His research interests include dark matter structure formation in the Universe, rocky planet formation via collisional growth and gas accretion, hydrodynamical galaxy and planet formation simulations, and the long-term stability of planetary systems.

He is a member of the ESA Euclid Consortium Science Working Group for Cosmological Simulations, one of the project leaders of the SNF NCCR project PlanetS, and the author of the HPC gravity simulation codes PKDGRAV and GENGA.

The Road to a Universal Internet Machine

RACHID GUERRAOUI (École Polytechnique Fédérale de Lausanne, Switzerland)

This talk will discuss what it would mean to build the abstraction of a widely distributed universal computer. In the process, the talk will revisit (a) cryptocurrency payments and (b) machine learning problems through the lens of first principles of distributed computing, introducing solutions that are simpler and more robust than those considered nowadays.

Rachid Guerraoui is a professor of computer science at EPFL, where he leads the Distributed Computing Laboratory. He previously worked at HP Labs in Palo Alto and at MIT. He has been elected an ACM Fellow and Professor at the Collège de France, and was awarded a Senior ERC Grant and a Google Focused Award.

Green vs. Exascale HPC: Carbon-neutral site operations, energy efficiency, and overall sustainability

SPONSORED KEYNOTE
This event is promoted by HPE (ISPDC Platinum Sponsor)

UTZ-UWE HAUS, Head of the HPE HPC/AI EMEA Research Lab (ERL)

Sustainability of HPC systems is a multifaceted topic: it encompasses not only operational aspects, such as energy consumption and the resource lifecycle of the components, but also the integration of HPC, both as a facility and as a tool, into a circular economy. Now that exascale has been achieved, and with a perspective towards the post-exascale computing era, all of these aspects come with different mathematical, and thus algorithmic, requirements, offering both new challenges and new opportunities for the HPC community and vendors alike.

Utz-Uwe Haus is the head of the HPE HPC/AI EMEA Research Lab (ERL). He studied mathematics and computer science at the Technical University of Berlin (TU Berlin). After obtaining a doctorate in mathematics at the University of Magdeburg (Germany), he worked on nonstandard applications of mathematical optimization in chemical engineering, material science, and systems biology. In 2015 he co-founded CERL, the Cray EMEA Research Lab, which is now ERL. His research interests focus on parallel programming and data-aware scheduling problems, data analytics in the context of semantic databases, and novel compute architectures and their relation to mathematical optimization and operations research, as well as GreenHPC, i.e., making data centers flexible and efficient energy-network participants in a decentralized European energy landscape.

From Timestamping Process to Blockchains and More (cancelled)

JEAN-JACQUES QUISQUATER (Université Catholique de Louvain, Belgium)

Timestamping is the (secure) process of adding time, with some precision, to events, data, transactions, and so on, using some trusted clock and some format. In fact, the exact timing is often less important than the exact flow of events: we want to know precisely what happened before and after a given event, and in some cases which events happened in parallel. This flow is captured by chaining the events or, better, chaining their identities, so chaining is the central process. To be trusted by everybody, each timestamp should be published, today on the web, for everybody. For efficiency, timestamps are published in blocks of several timestamps, organized in some way (Merkle trees, Verkle trees, …). Chaining and blocks together give the blockchain. The next question is who manages the registration of the flow, that is, the chain. To be trusted, we want to involve everybody in a credible way: how can decentralization best be used? My talk will explain this, taking into account the (very old) history, the first implementations, the surprise of Bitcoin, and how to handle it in a parallel and distributed way in the future.
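
As a concrete illustration of the chaining-plus-blocks idea described above, here is a minimal Python sketch of a hash-chained timestamp log in which each block batches several event identities into a Merkle root and links to the previous block. The field names, block size, and events are invented for illustration; this is not any of the historical implementations mentioned in the talk.

    # Illustrative sketch only: a toy timestamping chain. Each block batches
    # event identities into a Merkle root and links to the previous block's
    # hash; field names, block size and events are invented for illustration.
    import hashlib, json, time

    def h(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    def merkle_root(events):
        """Fold a list of event identities pairwise up to a single root hash."""
        level = [h(e.encode()) for e in events]
        while len(level) > 1:
            if len(level) % 2:                      # duplicate last hash if odd
                level.append(level[-1])
            level = [h((level[i] + level[i + 1]).encode())
                     for i in range(0, len(level), 2)]
        return level[0]

    def make_block(prev_hash, events):
        block = {
            "time": time.time(),         # value taken from the trusted clock
            "prev": prev_hash,           # chaining: link to the previous block
            "root": merkle_root(events), # the published batch of timestamps
        }
        block["hash"] = h(json.dumps(block, sort_keys=True).encode())
        return block

    # Chain two blocks of four events each; altering any earlier event would
    # change every later hash, which makes the published flow verifiable.
    genesis = make_block("0" * 64, ["ev1", "ev2", "ev3", "ev4"])
    second = make_block(genesis["hash"], ["ev5", "ev6", "ev7", "ev8"])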

Jean-Jacques Quisquater holds a Ph.D. in Computer Science (1987) from the Laboratoire de Recherche en Informatique (LRI), Orsay. He is now professor emeritus of cryptography, multimedia security, and secure circuits at the Ecole Polytechnique de Louvain, Catholic University of Louvain (UCL), Louvain-la-Neuve, Belgium, where he was responsible for many projects ranging from smart cards to secure protocols for communications, digital signatures, pay-TV, protection of copyrights, and security tools for electronic commerce. He headed the well-known UCL Crypto Group and is now the group’s scientific advisor. He has been a research affiliate at MIT since 2004. The TIMESEC project (1996-1998) produced the first blockchain on the web, and it was cited by Satoshi Nakamoto in his Bitcoin white paper.

Dr. Quisquater was a scientist at the Philips Research Laboratory, Brussels, from 1970 to 1991, where he headed the cryptology research group, which produced its first strong cryptographic algorithms between 1985 and 1988. He has been consulting for Math RiZK, SRL since 1991. Dr. Quisquater has published more than 250 papers and holds 20 patents. He is a full member of the Royal Belgian Academy, has been a fellow of the International Association for Cryptologic Research since 2010, and won the RSA Excellence Award (Mathematics) in 2013.