NRE Demos 2015
Each year the SCinet Network Research Exhibition (NRE) showcases a number of interesting network-based experiments during SC. The goal of the NRE is to showcase technologies that will impact HPC in general and SCinet in particular.
Topics for SC15's Network Research Exhibition demos and experiments range from software-defined networking (SDN) to security/encryption and resilience. The titles and booths for the demos are listed before each description. Please stop by and check out these new network technologies and innovations!
Title: Network Security @ 100 Gbps
Booth: CIENA #933
Description: This demonstration will showcase SARNET (Security Autonomous Response with programmable NETworks), which uses Network Function Virtualization and cloud techniques to enable networks to defend themselves against DDoS attacks. SARNET demonstrates the results of research into ICT systems that model their own state and, through observation and reasoning, detect exceptional situations such as a developing attack. Based on this analysis, SARNET determines the associated risks and responds by calculating the effect of countermeasures on states and risks, then selecting and implementing the best response. The core platform used is ExoGENI, a national-scale testbed with international extensions on 100 Gbps paths and part of the National Science Foundation’s Global Environment for Network Innovations (GENI) distributed testbed, supporting empirical network science research. ExoGENI integrates the GENI environment with open cloud computing (OpenStack) and dynamic circuit fabrics. ExoGENI orchestrates a federation of cloud sites across the US and international sites, using R&E networks and native IaaS APIs to integrate GENI resources.
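The observe-assess-respond cycle described above can be illustrated with a minimal sketch. This is not the SARNET implementation; the metrics, thresholds, and countermeasure catalog below are illustrative assumptions only.

```python
# Hypothetical sketch of an observe/assess/respond loop in the spirit of
# SARNET.  Metric names, thresholds, and countermeasures are illustrative
# assumptions, not the actual SARNET software.

COUNTERMEASURES = {
    "rate_limit":      {"cost": 1, "expected_risk_reduction": 0.4},
    "filter_source":   {"cost": 2, "expected_risk_reduction": 0.7},
    "reroute_traffic": {"cost": 3, "expected_risk_reduction": 0.9},
}

def assess_risk(observations):
    """Derive a 0..1 risk score from simple traffic observations."""
    syn_ratio = observations["syn_packets"] / max(observations["total_packets"], 1)
    load = observations["link_utilization"]          # fraction of link capacity
    return min(1.0, 0.6 * syn_ratio + 0.4 * load)

def select_response(risk):
    """Pick the cheapest countermeasure whose predicted residual risk is acceptable."""
    for name, props in sorted(COUNTERMEASURES.items(), key=lambda kv: kv[1]["cost"]):
        residual = risk * (1 - props["expected_risk_reduction"])
        if residual < 0.2:
            return name
    return None

observations = {"syn_packets": 90_000, "total_packets": 100_000, "link_utilization": 0.85}
risk = assess_risk(observations)
if risk > 0.5:
    print("risk", round(risk, 2), "-> applying", select_response(risk))
```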
Title: Secure Science DMZ With Event-Driven SDN
Booth: Cisco #588
Description: "Science DMZ" is a method of architecting an organization's WAN edge to reduce performance impediments to wide-area transfers while limiting risks to the corporate network. Many current solutions either use unsupported code, narrowly focused solutions, or leave large security exceptions that compromise the organizations overall security posture. This novel demonstration uses Splunk as an application on top of Cisco’s Open SDN Controller to provide a supportable, SDN-enabled mechanism that leverages existing components and skill sets. We will demonstrate the application of Event-Based SDN to the use case of Science DMZ and provide a secure solution for wide-area transfers over R&E networks.
Title: Network Functions Virtualisation
Booth: SURF #2321
Description: This demonstration shows Network Functions Virtualisation (NFV) based on the programmable capabilities of the network and the functions available within virtual machines. Typical NFV functionality includes firewalls, intrusion detection and load balancing; for this demonstration, however, we replace these functions with the manipulation of a live video stream. Examples of our NFV functions are mirroring, color filtering, the addition of logos, etc. A 4K video stream is sent from SC15 in Austin to NetherLight in Amsterdam, where a 100G programmable network will steer the flows, based on user input, through various NFV functions - made available in collaboration with SURFsara, CloudSigma, Okeanos, TeleCity, Microsoft and GÉANT - that manipulate the video content in real time based upon the SC user’s request via a control channel. SC visitors standing in front of the 4K camera will be able to switch different video effects on and off, sending the traffic stream through the proper NFV functions. The result can be seen in near real time on a 4K screen at the SURF booth.
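The steering logic amounts to chaining a set of enabled functions and passing the stream through them in order. Below is a toy sketch of that chaining idea, with a frame modeled as a 2D list of pixels; the real demo manipulates a live 4K stream inside the network, not in local Python.

```python
# Illustrative sketch of NFV service chaining: user toggles select which
# functions a video frame passes through.  Frames and functions here are
# toy stand-ins for the in-network 4K processing described above.

def mirror(frame):
    return [list(reversed(row)) for row in frame]

def red_filter(frame):
    return [[(r, 0, 0) for (r, g, b) in row] for row in frame]

NFV_FUNCTIONS = {"mirror": mirror, "red_filter": red_filter}

def steer(frame, enabled):
    """Pass the frame through every enabled NFV function, in order."""
    for name in enabled:
        frame = NFV_FUNCTIONS[name](frame)
    return frame

frame = [[(10, 20, 30), (40, 50, 60)],
         [(70, 80, 90), (100, 110, 120)]]
print(steer(frame, ["mirror", "red_filter"]))
```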
Title: Automated GOLE
Booths: SURF #2321, AIST #1725, NICT #2621
Description: The Global Lambda Integrated Facility (GLIF) Automated GOLE is a collaboration of GLIF Open Lightpath Exchanges (GOLEs) and networks to deliver dynamic circuits end-to-end. To date this is achieved using VLANs and QoS, although the underlying standard, the OGF's Network Service Interface (NSI), is technology agnostic. This demonstration shows the creation and use of dynamic circuits through multiple domains. To monitor the framework, the AutoGOLE Dashboard has been developed, which will be shown as well.
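For readers unfamiliar with NSI, a multi-domain circuit request essentially names two service termination points, a capacity, and a time window, and then moves through reserve/commit/provision phases. The sketch below only illustrates that shape of request; the real protocol is SOAP/XML with a defined state machine, and the STP names here are made up.

```python
# Conceptual sketch of the information a requester agent supplies for an
# NSI-style multi-domain circuit reservation.  This is not the NSI wire
# format; STP names and fields are illustrative.
import uuid
from datetime import datetime, timedelta, timezone

def build_reservation(src_stp, dst_stp, bandwidth_mbps, hours):
    start = datetime.now(timezone.utc)
    return {
        "connection_id": str(uuid.uuid4()),
        "source_stp": src_stp,            # Service Termination Point in domain A
        "dest_stp": dst_stp,              # Service Termination Point in domain B
        "capacity_mbps": bandwidth_mbps,
        "start_time": start.isoformat(),
        "end_time": (start + timedelta(hours=hours)).isoformat(),
        "lifecycle": ["reserve", "commit", "provision"],  # NSI-style phases
    }

request = build_reservation(
    "urn:ogf:network:example-domain-a:sc15-booth",
    "urn:ogf:network:example-domain-b:testbed",
    bandwidth_mbps=1000,
    hours=2,
)
print(request["connection_id"], "->", request["lifecycle"])
```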
Title: LHCONE Point2point Service with Data Transfer Nodes
Booths: California Institute of Technology #1248, Univ of Michigan #2103, Vanderbilt University #271
Description: LHCONE (LHC Open Network Environment) is a globally distributed, specialized environment in which the large volumes of LHC data are transferred among different international LHC Tier (data center and analysis) sites. To date these transfers are conducted over the LHC Optical Private Network (LHCOPN, dedicated high-capacity circuits between LHC Tier 1 data centers) and via LHCONE, currently based on L2+VRF services. The LHCONE Point2Point Service aims to future-proof networking for the LHC, e.g., by providing support for OpenFlow. This demonstration will show how this goal can be accomplished, using dynamic path provisioning and at least one Data Transfer Node (DTN) in the US connected to at least one DTN in Europe, transferring LHC data between tiers. The demonstration will show a network services model that matches the requirements of LHC high energy physics research with emerging capabilities for programmable networking, integrating techniques such as the Network Service Interface (NSI), a protocol defined within the Open Grid Forum standards organization. Multiple LHC sites will transfer LHC data through DTNs. DTNs are edge nodes designed specifically to optimize high-performance data transport; they have no other function. The DTNs will be connected by layer 2 circuits, created through dynamic requests via NSI.
Title: Apply NSI in SDX Topology Exchange
Booths: NCHC #351, iCAIR #749
Description: In our previous work, we designed and implemented inter-domain SDN topology discovery and a real-time flow viewer, which work well in pure SDN scenarios. However, SDN networks are likely to coexist with legacy networks in an exchange point, especially an SDN Exchange Point (SDX), which needs a new mechanism to reveal the overall topology. In our newly proposed system, the Network Service Interface (NSI) message exchange mechanism is incorporated to exchange topology information between SDN and legacy networks, with the aim of enabling automatic hybrid network topology discovery in an SDX. The main goal of our demonstration is to utilize the NSI protocol as an SDN controller east-west interface. We will demonstrate the capability of SDX topology discovery across multiple domains.
Title: Dynamic Remote I/O
Booth: LAC/OCC #749
Description: This demonstration shows large-scale remote data access between distant operating locations, leveraging a dynamic pipelined distributed processing framework and software-defined networking. It features live, production-quality 4K video workflows across a nationally distributed set of storage and computing resources, relevant to emerging data processing challenges. The Dynamic Remote I/O strategy allows data processing to begin as soon as data begins to arrive at the compute location rather than waiting for bulk transfers to complete. NRL plans to source multiple full-quality 4K video streams, including live 4K x 60fps, to fill 100G links. Live and stored video data streams will be provided from SC15 in Austin, TX, from NERSC in Oakland, CA, and from NRL in Washington, DC. The 100G network for this demonstration will leverage resources from ESnet’s 100G network testbed, the StarLight Software Defined eXchange (SDX) in Chicago, and DREN/CenturyLink networks, as well as switches provided by multiple vendors. Video processing will be accomplished in Oakland, Chicago, Washington, DC, and Austin, TX, showing the ability to process against remote data "on the fly" without first doing bulk data transfers. The “Pipelines” processing framework application will dynamically switch video sources and redistribute the processing.
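The key idea, processing each chunk as it arrives instead of after a bulk transfer completes, can be shown with a small producer/consumer sketch. The network receive is simulated here; the NRL Pipelines framework itself is not shown.

```python
# Toy sketch of "process while data arrives": incoming chunks are handed to
# a processing stage as they land rather than after the whole transfer
# finishes.  The receiver thread stands in for the network side.
import queue, threading, time

chunks = queue.Queue()

def receiver(n_chunks):
    """Simulated network side: chunks trickle in over time."""
    for i in range(n_chunks):
        time.sleep(0.1)                 # simulated arrival delay
        chunks.put(f"video-chunk-{i}")
    chunks.put(None)                    # end-of-stream marker

def processor():
    """Start work on each chunk immediately, overlapping with the transfer."""
    while True:
        chunk = chunks.get()
        if chunk is None:
            break
        print("processing", chunk)

t = threading.Thread(target=receiver, args=(5,))
t.start()
processor()
t.join()
```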
Title: Demonstrations of 100 Gbps Disk-to-Disk WAN File Transfer Performance via SDX and 100G FW
Booth: LAC/OCC #749
Description: NASA requires the processing and exchange of ever-increasing, vast amounts of scientific data, so NASA networks must scale up to ever-increasing speeds, with 100 gigabit per second (Gbps) networks being the current challenge. However, it is not sufficient to simply have 100 Gbps network pipes, since normal data transfer rates would not even fill a 1 Gbps pipe. The NASA Goddard High End Computer Networking (HECN) team will demonstrate systems and techniques to achieve near 100G line-rate disk-to-disk data transfers between a single pair of high-performance RAID servers across a national wide-area 100G network. In addition, the HECN team will meet the security challenge of passing the 100G data transfer through an affordable 100G firewall (FW) built by the team, and will demonstrate the ability of Software Defined Networking (SDN) Exchanges (SDXs) to dynamically establish the 100G layer 2 network path needed for the data transfer.
Title: Virtualized Science DMZ as a service
Booth: RENCI #181
Description: Many campuses are installing Science DMZs to support efficient large-scale scientific data transfers, and there is a need to create custom configurations of Science DMZs for different groups on campus. Network function virtualization (NFV) combined with compute and storage virtualization enables a multi-tenant approach to deploying virtual Science DMZs. It makes it possible for campus IT or NREN organizations to quickly deploy well-tuned Science DMZ instances targeted at a particular collaboration or project. This demo shows a prototype implementation of Science-DMZ-as-a-Service using ExoGENI racks (ExoGENI is part of the NSF GENI federation of testbeds) deployed at the StarLight facility in Chicago and at NERSC. The virtual Science DMZs deployed on demand in these racks connect to a data source at Argonne National Lab and a compute cluster at NERSC to provide seamless end-to-end high-speed transfers of data acquired from Argonne’s Advanced Photon Source (APS) to be processed at NERSC. The ExoGENI racks dynamically instantiate the necessary virtual compute resources for Science DMZ functions and connect to each other on demand using ESnet’s OSCARS and Internet2’s AL2S system.
Title: Active Measurement in a Dynamic Network Sliced/Virtual World
Booths: Indiana School of Informatics and Computer Science #542, The University of Utah #259
Time: Thursday, November 19, 10am-11am
Description: The demo “Active Measurement in a Dynamic Network Sliced/Virtual World” will show the use of active measurement points executing in a very granular “virtual network” comprising one or more “flows” across an infrastructure. The demo defines “flows” as traffic matching a particular set of labels, such as VLAN, source/destination IP, source/destination port, physical port, MPLS label, etc. This demo (a) validates an existing virtual topology, (b) recognizes an introduced failure, (c) isolates the failure, (d) signals a new path, and (e) validates the new data path. It also demonstrates a visualization of the measurement and topology changes. Key insights of this demo are the abilities to: (a) execute active measurement at a granular “virtual network” level, (b) use the active measurement data to isolate, troubleshoot, and automate path selection around failed paths, and (c) utilize active topology information for troubleshooting and visualization.
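The validate/detect/isolate/re-path sequence can be summarized in a few lines. In this simplified sketch the "probe" is a stand-in for an active measurement (e.g. a latency or loss test), and the path names and injected failure are illustrative.

```python
# Simplified sketch of the validate -> detect -> isolate -> re-path -> validate
# loop described in this demo.  Paths, links, and the failure injection are
# illustrative; a real system would probe live virtual-network flows.

PATHS = {"primary": ["sw1", "sw2", "sw3"], "backup": ["sw1", "sw4", "sw3"]}
FAILED_LINKS = set()

def probe(path):
    """Active measurement stand-in: False if any hop-to-hop link has failed."""
    hops = PATHS[path]
    return all((a, b) not in FAILED_LINKS for a, b in zip(hops, hops[1:]))

active = "primary"
print("validated", active, probe(active))        # (a) validate existing topology

FAILED_LINKS.add(("sw2", "sw3"))                 # (b) introduce a failure
if not probe(active):                            # (c) isolate the failure
    print("failure detected on", active)
    active = "backup"                            # (d) signal a new path
    print("switched to", active, "valid:", probe(active))  # (e) validate new path
```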
Title: InfiniCortex
Booths: A*STAR Computational Resource Centre #306, Obsidian Strategics #287, and GÉANT #3218
Description: A*STAR Computational Resource Centre (A*CRC) presents InfiniCortex, an InfiniBand fabric spanning four continents and six nations, overlaid on high-performance (100/30/10 Gbps) national and international network connectivity provided by National Research and Education Networks (NRENs) and commercial carriers. Using Obsidian Strategics range extenders, A*CRC created a ring around the world and is showcasing, for the first time, InfiniBand routing on a global scale across six separate subnets. Connecting the supercomputing facilities of six nations around the globe to build a Galaxy of Supercomputers, the project showcases a wide range of significant HPC applications run live over intercontinental distances, utilizing RDMA, high network bandwidth, high I/O and high-performance storage to enable the next level of performance scaling and collaborative global science and research. The final goal is to break the boundaries of any single nation’s supercomputing capability and to establish new frontiers on the path to exascale computing.
Title: The MDTM Project
Booth: Department of Energy (DOE) #502
Times: Wednesday, November 18, 11am-12pm, 3pm-4pm
Description: Multicore and manycore architectures have become the norm in high-performance computing. These new architectures provide advanced features that can be exploited to design and implement a new generation of high-performance data movement tools. The DOE ASCR program is funding FNAL and BNL to collaboratively work on the Multicore-Aware Data Transfer Middleware (MDTM) project. MDTM aims to accelerate data movement toolkits on multicore systems. In this demo, we use MDTM data transfer tools to demonstrate bulk data movement over wide area networks. We will compare MDTM with existing data transfer tools such as GridFTP and BBCP. Our purpose is to show the advantages of MDTM in fully utilizing multicore system resources, particularly on NUMA architectures.
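One ingredient of multicore/NUMA awareness is keeping transfer workers on cores local to the NIC so memory and I/O stay on the same node. The sketch below shows only that pinning idea; it is not MDTM code, the NIC-to-node and core maps are hard-coded assumptions, and it is Linux-only.

```python
# Minimal sketch of NUMA-aware core pinning for a data transfer worker.
# The topology maps below are assumptions; a real tool would read them
# from sysfs or hwloc.  Requires Linux (os.sched_setaffinity).
import os

NIC_NUMA_NODE = {"eth100g0": 1}                    # assumed NIC placement
NODE_CORES = {0: {0, 1, 2, 3}, 1: {4, 5, 6, 7}}    # assumed core layout

def pin_to_nic_node(nic):
    """Restrict this process to cores on the NUMA node nearest the NIC."""
    cores = NODE_CORES[NIC_NUMA_NODE[nic]]
    os.sched_setaffinity(0, cores)
    return cores

if hasattr(os, "sched_setaffinity"):
    try:
        print("pinned to cores", pin_to_nic_node("eth100g0"))
    except OSError:
        print("assumed core set not available on this machine")
```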
Title: 100 Gbps over IP with Aspera FASP
Booth: Aspera #286
Description: Aspera’s next-generation FASP is a distance-neutral, secure way to transfer data disk-to-disk and node-to-node over regular IP packets at speeds up to 100 Gbps. Next-generation FASP is a new architecture built upon Intel technologies, developed in collaboration with BioTeam, and designed for petascale computing. Please come by booth #286 for a demo in which we show transfers from the show floor to the NCSA National Petascale Computing Facility at the University of Illinois at Urbana-Champaign.
Title: Scientific Data Management using Named Data Networking
Booths: NCAR #359 and California Institute of Technology / CACR #1248
Times: Tuesday, November 17, 10am-11am, 2pm-3pm
Description: Data management is an evolving challenge for scientific communities such as HEP and climate science. Named Data Networking (NDN), a next-generation Internet architecture, simplifies applications and reduces data management complexity by providing data-oriented rather than host-oriented network services. We will demonstrate an NDN-based distributed application that provides secure data publication, multiple data discovery mechanisms, and intelligent retrieval functionality for CMIP5 data. A number of synchronized, federated catalogs hold NDN names derived from CMIP5 files. Users can query any catalog using a web UI and discover NDN names for the desired CMIP5 datasets. Once the names are known, users retrieve data by simply asking the network for those names. We will show visualizations of data transfers that demonstrate failover and intelligent forwarding strategies.
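The workflow above boils down to: query a catalog for names, then fetch by name. The sketch below illustrates that name-based pattern with in-memory dicts standing in for the federated catalogs and the NDN forwarding/caching layer; the catalog entry and name layout are illustrative, not the project's actual naming scheme.

```python
# Illustrative sketch of name-based discovery and retrieval: a catalog maps
# CMIP5-style metadata to hierarchical names, and data is then requested
# purely by name.  Dicts stand in for catalogs and the NDN network.

CATALOG = {
    # name -> metadata derived from a CMIP5 file (illustrative entry)
    "/cmip5/output1/NCAR/CCSM4/historical/mon/atmos/tas": {
        "variable": "tas", "model": "CCSM4", "experiment": "historical"},
}
CONTENT_STORE = {name: f"<dataset bytes for {name}>" for name in CATALOG}

def discover(**query):
    """Return names whose metadata matches every query field."""
    return [name for name, meta in CATALOG.items()
            if all(meta.get(k) == v for k, v in query.items())]

def fetch(name):
    """Ask the 'network' for a name; any node holding the data can answer."""
    return CONTENT_STORE[name]

for name in discover(variable="tas", experiment="historical"):
    print(name, "->", fetch(name))
```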
Title: Deep Network Visibility Using R-Scope and Ensign Technologies by Reservoir Labs
Booth: Email Reservoir Labs if you are interested in this demonstration: https://www.reservoir.com/company/contact/
Description: Reservoir Labs will demonstrate the smart fusion of two of its cutting-edge technologies: 1) R-Scope, a high-performance cyber security appliance enabling deep network visibility, advanced situational awareness, and real-time security event detection by extracting cyber-relevant data from network traffic, and 2) ENSIGN, a high-performance Big Data analysis tool that provides fast, scalable tensor analysis routines to reveal interesting patterns and discover subtle undercurrents and deep cross-dimensional correlations in data.
Title: Resource Aware Data-centric collaboration Infrastructure (RADII)
Booth: RENCI #181
Times: Tuesday, November 17, 10:30am and Wednesday, November 18, 1:30pm
Description: Data-centric collaborations have become the engines of scientific research. However, these collaborations can be difficult to realize because the appropriate infrastructure, including dedicated network infrastructure needed to transfer large data sets, is often unavailable and few mechanisms exist for controlling data access. Solutions that bridge the gap between infrastructure and data management technologies are needed to make data-centric collaborations feasible. This demonstration presents a novel cloud-based platform, called Resource Aware Data-centrIc collaboratIon Infrastructure (RADII), that addresses these challenges. RADII integrates the Open Resource Control Architecture (ORCA) and integrated Rule Oriented Data System (iRODS) to allow scientists to create and manage collaborations. The research team will show how scientists can use RADII to create data-centric collaborations using data-flow diagram formalisms. RADII provides a user-friendly graphical interface that scientists can use to determine their infrastructure requirements and data access policies. The policies are then automatically mapped to the infrastructure and data management system by the RADII software. The demonstration will also show how RADII allows scientists to manage their collaborations throughout the lifecycle of a project. The team has deployed RADII on ExoGENI to support collaborations over a worldwide federated environment of resources and infrastructure.
Title: Petatrans 100Gbps Data Transfer Node (DTN)
Booth: iCAIR #749
Description: This demonstration will showcase PetaTrans, a 100 Gbps Data Transfer Node (DTN) for Wide Area Networks (WANs), especially trans-oceanic WANs, to support high-performance transport for petascale science. This DTN is being designed by iCAIR specifically to optimize capabilities for supporting large-scale, high-capacity, high-performance, reliable, high-quality, sustained individual data streams for science research. As a component of a National Science Foundation (NSF) project, PetaTrans is being designed, created and implemented as a prototype, and it is being used for experiments with edge servers configured with 100 Gbps NICs. The DTN has been optimized to support high-capacity individual data streams on a data plane (forwarding plane) for science research over many thousands of miles, to ensure high performance for those streams, and to support highly reliable services for long-duration data flows. Resolving these issues requires addressing and optimizing multiple components in an end-to-end path. This prototype DTN design is being integrated with a prototype SDX that will address many of these issues.
Title: Adaptive QOS for Wide-area Data Flows
Booth: Department of Energy (DOE) #502
Times: Tuesday, November 17, 10am-11am, 2pm-3pm
Wednesday, November 18, 11am-12pm, 3pm-4pm
Thursday, November 19, 10am-11am
Description: Software-defined networking (SDN) adoption in HPC environments, though slow, is gaining traction on wide area networks (WANs) and local area networks (LANs), but not on storage area networks (SANs). In the most common current HPC deployments, DTNs and compute nodes mount a globally shared parallel storage system that is connected through a SAN. In such scenarios, it is not possible to ensure contention-free access to the storage system, and the data transfer nodes do not support scheduling of CPU resources. The end result is that some portion of the reserved bandwidth on the WAN and LAN goes unused when circumstances conspire to throttle the throughput of the hosts' connection, and under traditional QoS schemes this bandwidth is effectively lost. We demonstrate a new traffic management algorithm that takes advantage of the SDN controller's ability to monitor flow statistics and push flow control logic down to the network fabric in real time. The algorithm leverages existing OpenFlow switch queues and the ability to dynamically meter both individual and aggregate flows to exert fine-grained control over how other flows may expand into unused portions of bandwidth that would otherwise go to waste with traditional queue-based bandwidth slicing approaches.
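A rough sketch of the adaptive idea: poll per-flow counters, estimate how much of each reservation is actually used, and raise meter caps so other flows can borrow the unused portion. The values and borrowing policy below are simplified assumptions, not the demonstrated algorithm; a real controller would read the counters from the switch (e.g. via OpenFlow flow statistics).

```python
# Simplified sketch of adaptive bandwidth borrowing driven by flow statistics.
# Static sample values stand in for counters polled from the switch.

LINK_CAPACITY = 100_000                              # Mbps
RESERVATIONS = {"flowA": 60_000, "flowB": 40_000}    # Mbps guaranteed
measured_rate = {"flowA": 15_000, "flowB": 40_000}   # Mbps actually used

def recompute_meters(reservations, measured):
    """Let each flow keep its guarantee and expand into the unused pool."""
    unused = sum(max(0, reservations[f] - measured[f]) for f in reservations)
    # Simplification: every flow may borrow the whole unused pool; contention
    # among borrowers would be resolved by the switch queues.
    return {f: reservations[f] + unused for f in reservations}

for flow, cap in recompute_meters(RESERVATIONS, measured_rate).items():
    print(flow, "meter set to", cap, "Mbps")
```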
Title: Data Commons and Data Peering at 100Gbps
Booth: iCAIR #749
Times: Tuesday, November 17, 10am-12pm
Wednesday, November 18, 12:30pm-2:30pm
Description: This demonstration will showcase the data science capabilities of the OCC Data Commons and the OCC Open Science Data Cloud (OSDC). The demonstration will stream and peer data from OCC Data Commons, the OSDC and SC15 over 100 Gbps paths, based on a scientific Software Defined Network Exchange (SDX).
Title: Network Optimization with OpenFlow for Science DMZ: Testing to the Pacific Research Platform and Sun Corridor Networks
Booths: Arizona Research Computing #399 and Stanford #2009
Description: There are often different political pockets of networking and technology camps across the average research university; research has a way of not caring about them and just needing to get work done. By SC15, many universities will have established mature 100G connections but probably have not stress-tested them or thought much about security optimization. Our demonstration will highlight how SDN can participate in the future of national research efforts by providing flow detection, path optimization, and baseline security analysis, allowing classes of traffic (flows) to consume resources in a manner more sustainable than legacy networking implementations. This testing will be done both on the floor of SC15 and via remote connections from the floor back to in situ equipment at ASU and Stanford. Further testing will take place between Pacific Research Platform universities and Sun Corridor R&E Network participants.
Title: PetaTransfer Experiment on the Pacific Research Platform (PRP)
Booth: SDSC #823
Description: You can’t rsync a petabyte, but data centers are dealing with the need to move and replicate petabyte-scale file systems on a regular basis. New supercomputers, large disk-based parallel file systems, a lack of sustainable archives, and maintenance requirements all drive the need to take a massive, non-uniform distribution of files and folders and put them somewhere else. The PetaTransfer Experiment will demonstrate an efficient parallel recursive algorithm for file system replication from the San Diego Supercomputer Center (SDSC) to our booth at SC15 in Austin over a 100Gbps link provided via the NSF-funded Pacific Research Platform (PRP). The connection via the PRP makes the SC15 show floor another campus on the PRP, and the SDSC booth represents a Science DMZ on the SC15 campus. The storage at SDSC is a petabyte-sized randomized clone of a real Lustre file system, with all the small files and deep directory trees that come with it, which will be pushed to flash storage arrays using ZFS in the booth. Thank you to Arista Networks for providing 100 Gigabit Ethernet networking, to SanDisk for providing the storage equipment, and to Calit2's Qualcomm Institute for providing the servers and system integration. More information at: http://ucsdnews.ucsd.edu/pressrelease/nsf_gives_green_light_to_pacific_research_platform.
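The general shape of a parallel recursive replication (not SDSC's actual tool) is to walk the directory tree once and hand individual file copies to a pool of workers, so many small-file transfers proceed concurrently instead of serially. A minimal sketch, with placeholder paths:

```python
# Sketch of parallel recursive file-system replication: walk the source tree,
# mirror the directory layout, and copy files concurrently.  Paths are
# placeholders; a real tool would also handle checksums, retries, and links.
import os, shutil
from concurrent.futures import ThreadPoolExecutor

def replicate(src_root, dst_root, workers=16):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for dirpath, dirnames, filenames in os.walk(src_root):
            rel = os.path.relpath(dirpath, src_root)
            dst_dir = os.path.join(dst_root, rel)
            os.makedirs(dst_dir, exist_ok=True)           # mirror directory layout
            for name in filenames:
                pool.submit(shutil.copy2,                 # each file copied in parallel
                            os.path.join(dirpath, name),
                            os.path.join(dst_dir, name))

# Example usage (placeholder paths):
# replicate("/lustre/project", "/flash/replica")
```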
Title: International Multidomain Open Architecture E2E 100Gbps Services
Booth: iCAIR #749
Description: 100 Gbps WAN paths are proliferating, as are 100 Gbps LAN paths in data centers. Currently, opportunities exist to explore the potential for 100 Gbps end-to-end multi-domain paths across both WANs and LANs, including directly to edge servers. This demonstration will showcase an example of multi-domain capability based on 1) 100 G NIC servers at the LAN edge, 2) integration with multipath support for 100 G across the Atlantic using Multipath TCP, 3) open-architecture edge 100 Gbps switches with 3 Terabit-per-second non-blocking backplanes, 4) high-performance, high-capacity, high-quality individual WAN flows across the Atlantic, and 5) advanced SDX services.
Title: High Performance Science Networking @ 100Gbps
Booth: iCAIR #749
Description: This demonstration will showcase highly programmable networking techniques for controlling individual high-capacity streams over global 100 Gbps network paths. In addition, it will showcase SDN techniques for managing dynamic multi-domain global 100 Gbps WAN paths, integrated with a Network Service Interface 2.0 (NSI)-based technique for direct edge path provisioning of a 100 Gbps high-performance optical switch and specially configured Ciena ExoGENI racks that can output more than 170 Gbps of data, controlled by the Open Resource Control Architecture (ORCA), which allows specifying the LAN speed at which each device interacts at the data layer. ExoGENI is a national-scale testbed with international extensions, part of the National Science Foundation's Global Environment for Network Innovations (GENI) distributed testbed, supporting empirical network science research. ExoGENI integrates the GENI environment with open cloud computing (OpenStack), dynamic circuit fabrics, and dynamically allocated storage and compute devices. ExoGENI orchestrates a federation of cloud sites across the world, using R&E networks and native IaaS APIs, while exposing a variety of user-facing APIs, including the GENI AM API.
Title: Bioinformatics SDX for Precision Medicine
Booth: iCAIR #749
Times: Monday, November 16, 7pm-9pm
Wednesday, November 18, 10am-12pm
Description: This demonstration will showcase a prototype Bioinformatics Software Defined Networking Exchange (SDX) designed to support the specific requirements of bioinformatics workflows in highly distributed research environments. This approach demonstrates how precision medicine is enabled by precision networks. Genomic data and associated patient metadata, including treatments and outcomes, are managed through an advanced informatics framework built on a technology foundation of integrated high-performance clouds, networks and storage systems. This enables new approaches to precision medicine, which will be demonstrated with real-time genomic data analysis for cancer diagnosis and treatment guidance.
Title: SDN Optimized High-Performance Data Transfer Systems for Exascale Science
Booths: California Institute of Technology / CACR #1248, University of Michigan #2103, Stanford University #2009, OCC #749, Vanderbilt #271, Dell #1009, and Echostreams #582
Description: The next generation of major science programs faces unprecedented challenges in harnessing the wealth of knowledge hidden in exabytes of globally distributed scientific data. Researchers from Caltech, FIU, Stanford, Univ of Michigan, Vanderbilt, UCSD, and UNESP, along with other partner teams, have come together to meet these challenges by leveraging recent major advances in software-defined and Terabit/sec networks, workflow optimization methodologies, and state-of-the-art long-distance data transfer methods. This demonstration focuses on network path-building and flow optimizations using SDN and intelligent traffic engineering techniques, built on top of a 100G OpenFlow ring at the show site and connected to remote sites including the Pacific Research Platform (PRP). Remote sites will be interconnected using dedicated WAN paths provisioned using NSI dynamic circuits. The demonstrations include (1) the use of Open vSwitch (OVS) to extend the wide-area dynamic circuits storage-to-storage, with stable shaped flows at any level up to wire speed, (2) a pair of data transfer nodes (DTNs) designed for an aggregate 400Gbps flow through 100GE switches from Dell, Inventec and Mellanox, and (3) the use of Named Data Networking (NDN) to distribute and cache large high energy physics and climate science datasets.
Title: Interconnecting Heterogeneous Supercomputers with FPGA-Accelerated Key Value Store (KVS)
Booth: Algo-Logic Systems Inc. #3012
Description: Algo-Logic's Key-Value Store (KVS) systems associate values with keys so that data can be easily shared between heterogeneous computing systems. Most existing KVS systems run in software and scale out to improve throughput, but suffer from poor latency. We show how an alternate approach using gateware implements a KVS with sub-microsecond latency and performs 150 million KVS transactions using a single Field Programmable Gate Array (FPGA) logic device. As with a software-based KVS, lookup transactions are sent over Ethernet to the machine that stores the value associated with that key. With the implementation in logic, however, this KVS scales up to provide billions of IOPS with lower latency and power consumption than any software option. Stop by the Algo-Logic booth to see a live demonstration that shows an ultra-low latency KVS implemented in FPGA hardware. We will show that the KVS in FPGA is 88 times faster while using 21x less power than socket I/O running on a traditional PC-based server. Even when using software optimized with kernel bypass, we will show that the KVS in FPGA processes messages 14x faster while using 13x less energy than the next best implementation of KVS using a kernel-bypass approach.
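The lookup path described above, hashing a key to the node that stores it and sending the request over the network, can be sketched conceptually as follows. The in-memory dicts stand in for the FPGA (or software) KVS nodes; the host list and key-to-node mapping are illustrative, not Algo-Logic's wire protocol.

```python
# Conceptual sketch of a distributed KVS lookup path: a key is hashed to the
# node responsible for it, and the GET/PUT would travel over Ethernet to
# that node.  Dicts stand in for the per-node stores.
import hashlib

HOSTS = ["kvs-node-0", "kvs-node-1", "kvs-node-2"]
STORES = {h: {} for h in HOSTS}          # per-node key/value storage

def node_for(key):
    """Deterministically map a key to the node that stores it."""
    digest = hashlib.sha256(key.encode()).digest()
    return HOSTS[digest[0] % len(HOSTS)]

def put(key, value):
    STORES[node_for(key)][key] = value   # in the demo this is an Ethernet message

def get(key):
    return STORES[node_for(key)].get(key)

put("job:42:status", "running")
print(node_for("job:42:status"), "->", get("job:42:status"))
```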