|XSEDE National Integration Toolkit (XNIT) Vendor Software XSEDE|
Suppose you already have a cluster that you are happy with, and you want to add software tools that let users run open-source software like that available on XSEDE, or other particular pieces of software that you think are important, without rebuilding your cluster to add that capability. XNIT is for you. You can add all of the basic software that is in SCBC, as relocatable RPMs (RPM Package Manager packages), via a YUM (Yellowdog Updater, Modified) repository. The RPMs in XNIT let you expand the functionality of your cluster in ways that mimic the setup on an XSEDE cluster. XNIT packages include specific scientific, mathematical, and visualization applications that have proven useful on XSEDE systems. Systems administrators may pick and choose what they want to add to their local cluster; updates may be configured to run automatically or manually. Currently the XNIT repository is available for x86_64 systems running CentOS 6 or 7. Consult the XSEDE Knowledge Base for more information.
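As a minimal sketch of the setup described above: on CentOS, a YUM repository is enabled by dropping a `.repo` definition into `/etc/yum.repos.d/`, after which packages can be installed individually. The repository name, path, and `baseurl` below are illustrative placeholders, not the real XNIT values (consult the XSEDE Knowledge Base for those).

```shell
# Write a repo definition; the baseurl here is an assumption, not the real XNIT URL.
# On a live system this file would be placed in /etc/yum.repos.d/xnit.repo.
cat > xnit.repo <<'EOF'
[xnit]
name=XSEDE National Integration Toolkit (hypothetical definition)
baseurl=https://example.org/xnit/centos7/x86_64/
enabled=1
gpgcheck=0
EOF

# With the repo in place, an administrator would pick and choose packages, e.g.:
#   sudo yum install <some-xnit-package>
# and could disable automatic updates by setting enabled=0 above.
grep '^\[xnit\]' xnit.repo
```

Because the RPMs are relocatable, they can also be installed under a non-default prefix with `rpm --relocate` if the XSEDE-style paths conflict with local conventions.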
|XSEDE Resource Allocation Service (XRAS) Online Service|
The XSEDE Resource Allocation Service (XRAS) is the Web and database service that supports the XSEDE allocation process. It includes a database for storing information about allocation opportunities, allocation proposals, proposal reviews, and allocation process results, as well as Web interfaces for administering allocation processes and reviewing allocation proposals. It uses the XSEDE User Portal (XUP) as the Web interface for entering allocation proposals.
|XSEDE Globus Connect Server XSEDE Expanse Online Service|
|XSEDE Globus Connect Server Installation Guide Packaged Software XSEDE|
XSEDE Globus Connect Server Installation Guide
|XSEDE Globus Connect Server XSEDE PSC bridges Online Service|
Regular Shared Memory (RSM) nodes each consist of two Intel Xeon EP-series CPUs and 128GB of 2133 MHz DDR4 RAM configured as 8 DIMMs with 16GB per DIMM. A subset of RSM nodes contain NVIDIA Tesla GPUs: 16 nodes contain two K80 GPUs each. We anticipate adding 32 RSM nodes with two Pascal GPUs each in late 2016. Bridges contains many hundreds of RSM nodes for capacity and flexibility.
|XSEDE Globus Connect Server XSEDE TACC stampede2 Online Service|
The new Stampede2 Dell/Intel Knights Landing (KNL) system is configured with 4,204 Dell KNL compute nodes, each with a new stand-alone, bootable Intel Xeon Phi Knights Landing processor. Each KNL node includes 68 cores, 16GB MCDRAM, 96GB DDR4 memory, and a 200GB SSD drive. Stampede2 will deliver an estimated 13PF of peak performance. Compute nodes have access to dedicated Lustre parallel file systems totaling 28PB raw, provided by Seagate. An Intel Omni-Path Architecture switch fabric connects the nodes and storage through a fat-tree topology with a point-to-point bandwidth of 100 Gb/s (unidirectional). 16 additional login and management servers complete the system. Later in 2017, Stampede2 Phase 2, consisting of next-generation Xeon servers and additional management nodes, will be deployed.
|XSEDE Globus Connect Server XSEDE UD DARWIN Online Service|
Collection for XSEDE users to access data on DARWIN
|XSEDE Globus Connect Server hpcdev-pub04 Online Service|
|XSEDE Confluence Wiki Online Service|
|XSEDE Central Database Online Service|
XSEDE central resource accounting and user database.
|XSEDE GitHub Repository Online Service|
|XSEDE Resource Identity Packaged Software XSEDE|
XSEDE Resource Identity
|XSEDE Globus Connect Server XSEDE Mason Online Service|
Mason at Indiana University is a large memory computer cluster configured to support data-intensive, high-performance computing tasks. This endpoint can be used to access data stored on the Mason file system.
|XSEDE Globus Connect Server XSEDE Ranch Online Service|
Ranch is a tape archival system with a storage capacity of 160 PB. This endpoint can be used to access data stored on the Ranch file system.
|XSEDE Globus Connect Server XSEDE Data Supercell Online Service|
The Data Supercell is a complex disk-based storage system with a capacity of 4 Petabytes. This endpoint can be used to access data stored on the Data Supercell file system.
|XSEDE Usage (xdusage) Packaged Software XSEDE|
XSEDE Usage (xdusage)
|XSEDE Globus Connect Server XSEDE SDSC comet-gpu Online Service|
The Comet GPU resource features 36 K80 GPU nodes and 36 P100 GPU nodes, and supports many commercial and community-developed applications. Each K80 GPU node also features 2 Intel Haswell processors of the same design and performance as the standard compute nodes (described separately under the Comet resource). Each P100 GPU node also features 2 Intel Broadwell processors with 14 cores/socket (28 cores per node). The GPU nodes are available through the Slurm scheduler for either dedicated or shared node jobs (i.e., a user can run on 1 or more GPUs per node and will be charged accordingly). Like the Comet standard compute nodes, the GPU nodes feature a local SSD which can be specified as a scratch resource during job execution; in many cases using SSDs can alleviate I/O bottlenecks associated with using the shared Lustre parallel file system.
|XSEDE Globus Connect Server XSEDE XCI Metrics Online Service|
For storing XSEDE XCI metrics data
|XSEDE Moodle Courses Online Service|
|XSEDE Subversion Repository Online Service|
|XSEDE Globus Connect Server CU Boulder Research Computing XSEDE Online Service|
Provides access to/from all CU-Boulder Research Computing data storage resources via XSEDE authentication
|XSEDE Globus Connect Server XSEDE OSG Virtual Cluster Online Service|
The OSG Virtual Cluster is a Condor pool overlay on top of OSG resources. This endpoint can be used to access data stored on the OSG file system.
|XSEDE Globus Connect Server xsede-test-7-2 Online Service|
|XSEDE Globus Connect Server CU Boulder Research Computing (XSEDE, alternate) Online Service|
|XSEDE Software Copyright and Licensing Guidance and Template Vendor Software XSEDE|
XSEDE Software Copyright and Licensing Guidance and Template Files