Sponsored by ACM and IEEE
The International Conference for High Performance Computing, Networking, Storage and Analysis

SCHEDULE: NOV 15-20, 2015

Data-Intensive Applications on HPC Using Hadoop, Spark and RADICAL-Cybertools

EVENT TYPE: Tutorials

EVENT TAG(S): Applications, Data-Intensive Computing, Clouds and Distributed Computing

TIME: 8:30AM - 12:00PM

Presenter(s): Shantenu Jha, Andre Luckow



High performance computing (HPC) environments have traditionally been designed to meet the compute demands of scientific applications; data has been only a second-order concern. As science moves toward data-driven discovery, relying on correlations and patterns in data to form scientific hypotheses, the limitations of this approach become apparent: low-level abstractions and architectural paradigms, such as the separation of storage and compute, are not optimal for data-intensive applications. While powerful computational kernels and libraries are available for traditional HPC, its analytical libraries lack functional completeness. In contrast, the Apache Hadoop ecosystem has grown rich with analytical libraries, e.g., Spark MLlib. Bringing the richness of the Hadoop ecosystem to traditional HPC environments will help close some of these gaps.

In this tutorial, we explore a lightweight and extensible way to provide the best of both worlds: we use the Pilot-Abstraction to execute a diverse set of data-intensive and analytics workloads using Hadoop MapReduce and Spark, as well as traditional HPC workloads. The audience will learn how to use Spark and Hadoop efficiently on HPC resources to carry out advanced analytics tasks, e.g., KMeans clustering and graph analytics, and will understand the deployment and performance trade-offs of these tools on HPC.
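For readers unfamiliar with the KMeans task named above, the sketch below shows Lloyd's algorithm in plain Python. This is an illustrative toy only, not the tutorial's actual material; Spark MLlib's `KMeans` parallelizes the same assignment and update steps across a cluster of executors.

```python
# Minimal Lloyd's-algorithm KMeans in pure Python (illustrative sketch only).
# Spark MLlib distributes these same assignment/update steps across a cluster.
import random

def kmeans(points, k, iterations=20, seed=0):
    """Cluster 2-D points into k groups; returns (centroids, labels)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # initialize centroids from the data
    labels = [0] * len(points)
    for _ in range(iterations):
        # Assignment step: attach each point to its nearest centroid.
        for i, (x, y) in enumerate(points):
            labels[i] = min(
                range(k),
                key=lambda c: (x - centroids[c][0]) ** 2
                            + (y - centroids[c][1]) ** 2,
            )
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:  # leave an empty cluster's centroid in place
                centroids[c] = (
                    sum(x for x, _ in members) / len(members),
                    sum(y for _, y in members) / len(members),
                )
    return centroids, labels

# Two well-separated blobs should land in two different clusters.
data = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1),
        (9.0, 9.0), (9.1, 9.2), (9.2, 9.1)]
centroids, labels = kmeans(data, k=2)
```

On HPC resources, the value of Spark MLlib is that this loop runs over partitioned data in parallel rather than over a single in-memory list.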

Chair/Presenter Details:

Shantenu Jha - Rutgers University

Andre Luckow - Clemson University
