|XSEDE User Portal (XUP) Online Service|
The XSEDE User Portal (XUP) provides XSEDE users, collaborators, and staff who have XSEDE accounts with access to their "My XSEDE" profile and to information about resources, documentation, allocations, training, and more.
|XSEDE User Portal (XUP) Mobile Online Service|
|Research Software Portal (RSP) Online Service|
A portal designed to help research software users (researchers, educators, students, application developers), research software developers, and research computing administrators work together efficiently by sharing requirements, plans, activity status, and information about available software.
|XSEDE Globus Connect Server XSEDE Beacon Online Service|
|XSEDE Globus Connect Server XSEDE Expanse Online Service|
|XSEDE Globus Connect Server XSEDE Karnak Service Online Service|
|XSEDE Globus Connect Server XSEDE NCAR GLADE Online Service|
The Globally Accessible Data Environment (GLADE) is a centralized file service that gives users a common view of their data across the HPC, analysis, and visualization resources managed by CISL. This endpoint can be used to access data stored on the GLADE file spaces.
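As with the other Globus Connect Server endpoints listed here, transfers to or from this endpoint can be driven programmatically. The following is a minimal sketch using the Globus Python SDK (globus-sdk); the client ID, endpoint UUIDs, and file paths are placeholders, not values published in this catalog.

    import globus_sdk

    CLIENT_ID = "YOUR-NATIVE-APP-CLIENT-ID"   # placeholder: a Globus-registered native app
    SRC_ENDPOINT = "GLADE-ENDPOINT-UUID"      # placeholder: look up the endpoint UUID in Globus
    DST_ENDPOINT = "DESTINATION-ENDPOINT-UUID"

    # Interactive native-app login to obtain a Transfer API token.
    auth = globus_sdk.NativeAppAuthClient(CLIENT_ID)
    auth.oauth2_start_flow(requested_scopes=globus_sdk.scopes.TransferScopes.all)
    print("Log in at:", auth.oauth2_get_authorize_url())
    tokens = auth.oauth2_exchange_code_for_tokens(input("Paste authorization code: "))
    transfer_token = tokens.by_resource_server["transfer.api.globus.org"]["access_token"]

    # Submit a single-file transfer between the two endpoints.
    tc = globus_sdk.TransferClient(authorizer=globus_sdk.AccessTokenAuthorizer(transfer_token))
    task_data = globus_sdk.TransferData(tc, SRC_ENDPOINT, DST_ENDPOINT, label="GLADE example")
    task_data.add_item("/glade/u/home/username/results.nc", "/~/results.nc")  # placeholder paths
    print("Submitted task:", tc.submit_transfer(task_data)["task_id"])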
|XSEDE Globus Connect Server XSEDE Data Supercell Online Service|
The Data Supercell is a complex disk-based storage system with a capacity of 4 Petabytes. This endpoint can be used to access data stored on the Data Supercell file system.
|XSEDE Globus Connect Server XSEDE LSU CCT supermic Online Service|
SuperMIC is a 925 TFLOPS (peak) Xeon Phi-accelerated cluster. SuperMIC has 360 nodes, each with 20 Intel Ivy Bridge 2.8 GHz cores, 64 GB of RAM, and two Intel Xeon Phi 7120P co-processors. There are 20 nodes that have NVIDIA K20X GPUs. The cluster is 40% allocated to the XSEDE user community and 60% dedicated to authorized users of the LSU community. Access is restricted to those who meet the criteria stated on the LSU website.
|XSEDE Globus Connect Server XSEDE SDSC comet-gpu Online Service|
The Comet GPU resource features 36 K80 GPU nodes and 36 P100 GPU nodes and supports many commercial and community-developed applications. Each K80 GPU node also features two Intel Haswell processors of the same design and performance as the standard compute nodes (described separately under the Comet resource). Each P100 GPU node also features two Intel Broadwell processors with 14 cores per socket (28 cores on the node). The GPU nodes are available through the Slurm scheduler for either dedicated or shared node jobs (i.e., a user can run on one or more GPUs per node and is charged accordingly). Like the Comet standard compute nodes, the GPU nodes feature a local SSD that can be specified as a scratch resource during job execution; in many cases, using SSDs can alleviate I/O bottlenecks associated with the shared Lustre parallel file system.
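For illustration only, here is a rough sketch of submitting a shared-GPU job of the kind described above, written as a small Python wrapper around sbatch. The partition name (gpu-shared), the --gres=gpu:1 request, the core count, and the node-local SSD path are assumptions drawn from typical Comet usage rather than from this entry.

    #!/usr/bin/env python3
    # Hypothetical shared-GPU job submission from a Comet login node.
    # Assumptions: Slurm's sbatch is on PATH, the shared-GPU partition is
    # named "gpu-shared", and the node-local SSD is at /scratch/$USER/$SLURM_JOB_ID.
    import subprocess
    import textwrap

    job_script = textwrap.dedent("""\
        #!/bin/bash
        #SBATCH --job-name=gpu-example
        #SBATCH --partition=gpu-shared   # assumed shared-GPU partition name
        #SBATCH --nodes=1
        #SBATCH --ntasks-per-node=6
        #SBATCH --gres=gpu:1             # run on one of the node's GPUs; charged accordingly
        #SBATCH --time=00:30:00

        # Stage input onto the node-local SSD (assumed path) to avoid Lustre I/O bottlenecks.
        LOCAL_SCRATCH=/scratch/$USER/$SLURM_JOB_ID
        cp my_input.dat "$LOCAL_SCRATCH/"
        ./my_gpu_app "$LOCAL_SCRATCH/my_input.dat"
        """)

    with open("gpu_example.sbatch", "w") as f:
        f.write(job_script)
    # sbatch prints the new job ID on success.
    print(subprocess.run(["sbatch", "gpu_example.sbatch"],
                         capture_output=True, text=True).stdout)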
|XSEDE Globus Connect Server CU Boulder Research Computing XSEDE Online Service|
Provides access to/from all CU-Boulder Research Computing data storage resources via XSEDE authentication
|XSEDE Metrics on Demand (XDMod) Online Service|
The XDMoD (XD Metrics on Demand) tool gives HPC center personnel and senior leadership detailed operational metrics for HPC systems, coupled with extensive analytical capability to optimize performance at the system and job level, ensure quality of service, and provide accurate data to guide system upgrades and acquisitions.
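As a rough sketch of programmatic access, recent XDMoD releases expose an analytics API that can be queried from Python with the open-source xdmod-data package; the portal URL, date range, and metric name below are placeholders, and an API token is assumed to be exported in the XDMOD_API_TOKEN environment variable.

    # Sketch only: requires "pip install xdmod-data" and an XDMoD API token in
    # XDMOD_API_TOKEN. The URL and metric name are placeholders, not XSEDE-specific values.
    from xdmod_data.warehouse import DataWarehouse

    with DataWarehouse("https://xdmod.example.org") as dw:
        cpu_hours = dw.get_data(
            duration=("2017-01-01", "2017-12-31"),
            realm="Jobs",
            metric="CPU Hours: Total",
        )
    print(cpu_hours.head())  # a pandas time series of the requested metric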
|XSEDE Digital Object Repository (XDOR) Online Service|
Digital object repository for the Extreme Science and Engineering Discovery Environment (XSEDE) project.
|XSEDE InCommon Identity Provider (IdP) Online Service|
XSEDE's InCommon Identity Provider service enables XSEDE users to authenticate to InCommon-enabled services using their XSEDE identity.
Note: Leverages https://xsede-xdcdb-api.xsede.org.
|XSEDE JIRA Online Service|
|XSEDE MyProxy Online Service|
MyProxy is open source software for managing X.509 Public Key Infrastructure (PKI) security credentials (certificates and private keys).
Note: A cron job that runs at NCSA does an XCDB query to generate the grid-mapfile needed by myproxy.xsede.org. XSEDE Allocations, Accounting & Account Management CI (A3M) staff at NCSA are responsible for that cron job.
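A minimal sketch of retrieving a short-lived credential from this service using the third-party MyProxyClient Python package; the package choice, the username/password placeholders, and the output file name are assumptions, not part of the XSEDE documentation quoted here. The hostname comes from the note above.

    # Sketch only: requires "pip install MyProxyClient".
    from myproxy.client import MyProxyClient

    client = MyProxyClient(hostname="myproxy.xsede.org")
    # logon() returns PEM-encoded blobs: end-entity certificate first, private key next.
    credentials = client.logon("my_xsede_username", "my_password", bootstrap=True)
    with open("usercred.pem", "wb") as f:
        for blob in credentials:
            f.write(blob if isinstance(blob, bytes) else blob.encode())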
|XSEDE Website Online Service|
|XSEDE Globus Connect Server XSEDE Ranch Online Service|
Ranch is a tape archival system with a storage capacity of 160 PB. This endpoint can be used to access data stored on the Ranch file system.
|XSEDE Globus Connect Server XSEDE Mason Online Service|
Mason at Indiana University is a large memory computer cluster configured to support data-intensive, high-performance computing tasks. This endpoint can be used to access data stored on the Mason file system.
|XSEDE Globus Connect Server XSEDE Comet Online Service|
Comet is a dedicated XSEDE cluster. This endpoint can be used to access data stored on the Comet file system.
|XSEDE Globus Connect Server XSEDE Kyric Online Service|
Kentucky Research Informatics Cloud (KyRIC) Large Memory Nodes
|XSEDE Globus Connect Server XSEDE XCI Metrics Online Service|
For storing XSEDE XCI metrics data
|XSEDE Moodle Courses Online Service|
|XSEDE Subversion Repository Online Service|
|XSEDE Globus Connect Server XSEDE TACC stampede2 Online Service|
The new Stampede2 Dell/Intel Knights Landing (KNL) system is configured with 4204 Dell KNL compute nodes, each with a stand-alone, bootable Intel Xeon Phi Knights Landing processor. Each KNL node includes 68 cores, 16 GB of MCDRAM, 96 GB of DDR4 memory, and a 200 GB SSD. Stampede2 will deliver an estimated 13 PF of peak performance. Compute nodes have access to dedicated Lustre parallel file systems totaling 28 PB raw, provided by Seagate. An Intel Omni-Path Architecture switch fabric connects the nodes and storage in a fat-tree topology with a point-to-point bandwidth of 100 Gb/s (unidirectional). Sixteen additional login and management servers complete the system. Later in 2017, Stampede2 Phase 2, consisting of next-generation Xeon servers and additional management nodes, will be deployed.