BEGIN:VCALENDAR
PRODID:-//Microsoft Corporation//Outlook MIMEDIR//EN
VERSION:2.0
BEGIN:VEVENT
DTSTART:20151115T143000Z
DTEND:20151115T180000Z
LOCATION:18C
DESCRIPTION;ENCODING=QUOTED-PRINTABLE:ABSTRACT: High-performance computing (HPC) environments have traditionally been designed to meet the compute demands of scientific applications; data has been only a second-order concern. With science moving toward data-driven discoveries that rely on correlations and patterns in data to form scientific hypotheses, the limitations of HPC approaches become apparent: low-level abstractions and architectural paradigms, such as the separation of storage and compute, are not optimal for data-intensive applications. While powerful computational kernels and libraries are available for traditional HPC, its analytical libraries lack functional completeness. In contrast, the Apache Hadoop ecosystem has grown rich with analytical libraries, e.g., Spark MLlib. Bringing the richness of the Hadoop ecosystem to traditional HPC environments will help address some of these gaps.=0A=0AIn this tutorial, we explore a lightweight and extensible way to provide the best of both worlds: we utilize the Pilot-Abstraction to execute a diverse set of data-intensive and analytics workloads using Hadoop MapReduce and Spark, as well as traditional HPC workloads. The audience will learn how to use Spark and Hadoop efficiently on HPC systems to carry out advanced analytics tasks, e.g., KMeans clustering and graph analytics, and will understand the deployment and performance trade-offs of these tools on HPC.
SUMMARY:Data-Intensive Applications on HPC Using Hadoop, Spark and RADICAL-Cybertools
PRIORITY:3
END:VEVENT
END:VCALENDAR