Name | Type | Description
XSEDE User Portal (XUP) | Online Service | The XSEDE User Portal (XUP) gives XSEDE users, collaborators, and staff who have XSEDE accounts access to their "My XSEDE" profile and to information about resources, documentation, allocations, training, and more.
XSEDE User Portal (XUP) Mobile | Online Service | XSEDE User Portal (XUP) for mobile devices.
Research Software Portal (RSP) | Vendor Software (XSEDE) | A portal designed to help research software users (researchers, educators, students, application developers), research software developers, and research computing administrators work together efficiently by sharing requirements, plans, activity status, and information about software available in any form.
Research Software Portal (RSP) | Online Service | A portal designed to help research software users (researchers, educators, students, application developers), research software developers, and research computing administrators work together efficiently by sharing requirements, plans, activity status, and information about available software.
XSEDE Globus Connect Server XSEDE Mason | Online Service | Mason at Indiana University is a large memory computer cluster configured to support data-intensive, high-performance computing tasks. This endpoint can be used to access data stored on the Mason file system.
XSEDE Globus Connect Server XSEDE Ranch | Online Service | Ranch is a tape archival system with a storage capacity of 160 PB. This endpoint can be used to access data stored on the Ranch file system (a transfer sketch follows this listing).
XSEDE Globus Connect Server XSEDE Expanse | Online Service |
XSEDE Globus Connect Server XSEDE PSC bridges | Online Service | Regular Shared Memory (RSM) nodes each consist of two Intel Xeon EP-series CPUs and 128GB of 2133 MHz DDR4 RAM configured as 8 DIMMs with 16GB per DIMM. A subset of RSM nodes contain NVIDIA Tesla GPUs: 16 nodes will contain two K80 GPUs each. We anticipate adding 32 RSM nodes with two Pascal GPUs each in late 2016. Bridges contains many hundreds of RSM nodes for capacity and flexibility.
XSEDE Globus Connect Server XSEDE TACC stampede2 | Online Service | The new Stampede2 Dell/Intel Knights Landing (KNL) system is configured with 4204 Dell KNL compute nodes, each with a new stand-alone Intel Xeon Phi Knights Landing bootable processor. Each KNL node will include 68 cores, 16GB MCDRAM, 96GB DDR4 memory, and a 200GB SSD drive. Stampede2 will deliver an estimated 13 PF of peak performance. Compute nodes have access to dedicated Lustre parallel file systems totaling 28 PB raw, provided by Seagate. An Intel Omni-Path Architecture switch fabric connects the nodes and storage through a fat-tree topology with a point-to-point bandwidth of 100 Gb/s (unidirectional). 16 additional login and management servers complete the system. Later in 2017, Stampede2 Phase 2, consisting of next-generation Xeon servers and additional management nodes, will be deployed.
XSEDE Globus Connect Server XSEDE UD DARWIN | Online Service | Collection for XSEDE users to access data on DARWIN.
XSEDE Globus Connect Server XSEDE XCI Metrics | Online Service | For storing XSEDE XCI metrics data.
XSEDE Globus Connect Server XSEDE Data Supercell | Online Service | The Data Supercell is a complex disk-based storage system with a capacity of 4 PB. This endpoint can be used to access data stored on the Data Supercell file system.
XSEDE Confluence Wiki | Online Service | XSEDE Confluence wiki, primarily for staff use.
XSEDE Central Database | Online Service | XSEDE central resource accounting and user database.
XSEDE GitHub Repository | Online Service | XSEDE's official GitHub project and repositories.
XSEDE Resource Identity | Packaged Software (XSEDE) | XSEDE Resource Identity.
XSEDE Subversion Repository | Online Service | XSEDE's installation of a Subversion repository, operated and maintained by XSEDE personnel.
XSEDE Moodle Courses | Online Service | XSEDE Moodle web site.
XSEDE Usage (xdusage) | Packaged Software (XSEDE) | XSEDE Usage (xdusage).
XSEDE Globus Connect Server XSEDE OSG Virtual Cluster | Online Service | The OSG Virtual Cluster is a Condor pool overlay on top of OSG resources. This endpoint can be used to access data stored on the OSG file system.
XSEDE Globus Connect Server xsede-test-7-2 | Online Service |
XSEDE Globus Connect Server XSEDE SDSC comet-gpu | Online Service | The Comet GPU resource features 36 K80 GPU nodes and 36 P100 GPU nodes, and supports many commercial and community-developed applications. Each K80 GPU node also features 2 Intel Haswell processors of the same design and performance as the standard compute nodes (described separately under the Comet resource). Each P100 GPU node also features 2 Intel Broadwell processors with 14 cores/socket (28 cores on the node). The GPU nodes are available through the Slurm scheduler for either dedicated or shared node jobs (i.e., a user can run on 1 or more GPUs per node and will be charged accordingly). Like the Comet standard compute nodes, the GPU nodes feature a local SSD which can be specified as a scratch resource during job execution; in many cases, using SSDs can alleviate I/O bottlenecks associated with using the shared Lustre parallel file system (a job-script sketch follows this listing).
XSEDE Globus Connect Server XSEDE Metrics Data for Analysis | Online Service | Access to XSEDE metrics data for people who will be performing analysis on it.
XSEDE Globus Connect Server CU Boulder Research Computing XSEDE | Online Service | Provides access to/from all CU-Boulder Research Computing data storage resources via XSEDE authentication.
XSEDE Globus Connect Server XSEDE Beacon | Online Service |
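
Several of the Globus Connect Server entries above note that the endpoint "can be used to access data" on the underlying file system. As a rough illustration of what that access looks like in practice, here is a minimal sketch using the Globus Python SDK (globus-sdk) to submit a transfer between two endpoints. The client ID, endpoint UUIDs, and paths are placeholders rather than values taken from this listing, and the exact login flow and scopes depend on how a given endpoint or collection is configured.

```python
# Hedged sketch: submitting a Globus transfer to/from an XSEDE endpoint with
# the Globus Python SDK. All IDs and paths below are placeholders.
import globus_sdk

CLIENT_ID = "YOUR-NATIVE-APP-CLIENT-ID"      # hypothetical Globus app registration
SOURCE_ENDPOINT = "source-endpoint-uuid"     # e.g. a local Globus Connect Personal endpoint
DEST_ENDPOINT = "destination-endpoint-uuid"  # e.g. the UUID of an XSEDE archival endpoint

# Interactive native-app login to obtain a Transfer API token.
auth_client = globus_sdk.NativeAppAuthClient(CLIENT_ID)
auth_client.oauth2_start_flow(refresh_tokens=False)
print("Log in at:", auth_client.oauth2_get_authorize_url())
auth_code = input("Paste the authorization code here: ").strip()
tokens = auth_client.oauth2_exchange_code_for_tokens(auth_code)
transfer_token = tokens.by_resource_server["transfer.api.globus.org"]["access_token"]

tc = globus_sdk.TransferClient(
    authorizer=globus_sdk.AccessTokenAuthorizer(transfer_token)
)

# Build and submit a one-item transfer task.
task = globus_sdk.TransferData(
    source_endpoint=SOURCE_ENDPOINT,
    destination_endpoint=DEST_ENDPOINT,
    label="example archive transfer",
)
task.add_item("/local/path/results.tar", "/remote/path/results.tar")
result = tc.submit_transfer(task)
print("Submitted transfer task:", result["task_id"])
```

The transfer runs asynchronously on the Globus service, so the script only needs to stay alive long enough to submit the task; progress can be checked later from the Globus web interface or with the same SDK.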
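The Comet GPU entry above also notes that GPU nodes are scheduled through Slurm for dedicated or shared use and that the node-local SSD can serve as scratch space. The sketch below writes and submits a single-GPU, shared-node batch script from Python under stated assumptions: the partition name, gres string, scratch path, and application name (my_gpu_app) are illustrative only and should be replaced with values from the resource's own documentation.

```python
# Hedged sketch: submitting a single-GPU, shared-node Slurm job from Python.
# Partition name, gres request, scratch location, and application are
# illustrative assumptions, not values confirmed by the listing above.
import subprocess
import tempfile

job_script = """#!/bin/bash
#SBATCH --job-name=gpu-example
#SBATCH --partition=gpu-shared      # hypothetical shared-GPU partition name
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --gres=gpu:1                # request one GPU on a shared node
#SBATCH --time=00:30:00

# Stage input to node-local SSD scratch to avoid shared-filesystem I/O bottlenecks.
SCRATCH=/scratch/$USER/$SLURM_JOB_ID   # hypothetical local-SSD scratch location
mkdir -p "$SCRATCH"
cp "$HOME/input.dat" "$SCRATCH/"

# Run the GPU application (hypothetical) against the local copy.
my_gpu_app --input "$SCRATCH/input.dat" --output "$SCRATCH/output.dat"

# Copy results back to persistent storage before the job ends.
cp "$SCRATCH/output.dat" "$HOME/"
"""

# Write the script to a temporary file and hand it to sbatch.
with tempfile.NamedTemporaryFile("w", suffix=".sbatch", delete=False) as f:
    f.write(job_script)
    script_path = f.name

result = subprocess.run(["sbatch", script_path], capture_output=True, text=True, check=True)
print(result.stdout.strip())
```

Because shared-node GPU jobs are charged per GPU rather than per node, requesting only the GPUs actually used keeps the allocation charge proportional to the work, which is the scheduling model the Comet description refers to.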