|XSEDE JIRA Online Service|
|XSEDE MyProxy Online Service|
MyProxy is open source software for managing X.509 Public Key Infrastructure (PKI) security credentials (certificates and private keys).
Note: A cron job at NCSA queries the XSEDE Central Database (XCDB) to generate the grid-mapfile required by myproxy.xsede.org. XSEDE Allocations, Accounting & Account Management (A3M) CI staff at NCSA are responsible for that cron job.
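For illustration, the sketch below shows the general shape of such a job: query the database, then write the grid-mapfile, which maps one quoted certificate DN to a local username per line. It assumes XCDB is reachable as a PostgreSQL database; the host, table, and column names are hypothetical, not the actual A3M schema.

    # Minimal grid-mapfile generator, suitable for invocation from cron.
    # Connection details, table, and column names are illustrative only.
    import psycopg2  # assumes XCDB is a PostgreSQL database

    QUERY = "SELECT dn, username FROM user_certs ORDER BY username"  # hypothetical table

    def write_grid_mapfile(path="/etc/grid-security/grid-mapfile"):
        conn = psycopg2.connect(host="xcdb.example.org", dbname="xcdb")  # placeholder
        try:
            with conn.cursor() as cur:
                cur.execute(QUERY)
                rows = cur.fetchall()
        finally:
            conn.close()
        # grid-mapfile format: "<certificate DN>" <local username>
        with open(path, "w") as f:
            for dn, username in rows:
                f.write(f'"{dn}" {username}\n')

    if __name__ == "__main__":
        write_grid_mapfile()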
|XSEDE Website Online Service|
|XSEDE Metrics on Demand (XDMoD) Online Service|
The XDMoD (XD Metrics on Demand) tool gives HPC center personnel and senior leadership easy access to detailed operational metrics for HPC systems, coupled with extensive analytical capabilities. These can be used to optimize performance at the system and job level, ensure quality of service, and provide accurate data to guide system upgrades and acquisitions.
|XSEDE Digital Object Repository (XDOR) Online Service|
Digital object repository for the Extreme Science and Engineering Discovery Environment (XSEDE) project.
|XSEDE Globus Connect Server XSEDE PSC bridges Online Service|
Regular Shared Memory (RSM) nodes each consist of two Intel Xeon EP-series CPUs and 128GB of 2133 MHz DDR4 RAM, configured as eight 16GB DIMMs. A subset of RSM nodes contain NVIDIA Tesla GPUs: 16 nodes contain two K80 GPUs each, and we anticipate adding 32 RSM nodes with two Pascal GPUs each in late 2016. Bridges contains many hundreds of RSM nodes for capacity and flexibility.
|XSEDE Globus Connect Server XSEDE TACC stampede2 Online Service|
The new Stampede2 Dell/Intel Knights Landing (KNL) system is configured with 4,204 Dell KNL compute nodes, each with a stand-alone, bootable Intel Xeon Phi Knights Landing processor. Each KNL node includes 68 cores, 16GB of MCDRAM, 96GB of DDR4 memory, and a 200GB SSD. Stampede2 will deliver an estimated 13PF of peak performance. Compute nodes have access to dedicated Lustre parallel file systems totaling 28PB raw, provided by Seagate. An Intel Omni-Path Architecture switch fabric connects the nodes and storage in a fat-tree topology with a point-to-point bandwidth of 100 Gb/s (unidirectional). Sixteen additional login and management servers complete the system. Later in 2017, Stampede2 Phase 2, consisting of next-generation Xeon servers and additional management nodes, will be deployed.
|XSEDE Globus Connect Server XSEDE UD DARWIN Online Service|
Collection for XSEDE users to access data on DARWIN
|XSEDE Globus Connect Server XSEDE Comet Online Service|
Comet is a dedicated XSEDE cluster. This endpoint can be used to access data stored on the Comet file system.
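As an illustration, data on this collection can be moved with the Globus SDK for Python. This is a minimal sketch only: the client ID, endpoint UUIDs, and paths below are placeholders that must be replaced with real values, and it assumes a native app has already been registered with Globus Auth.

    # Sketch of a Globus transfer from the Comet endpoint to another endpoint.
    # All IDs and paths are placeholders, not real identifiers.
    import globus_sdk

    CLIENT_ID = "YOUR-NATIVE-APP-CLIENT-ID"      # hypothetical registered app
    COMET_ENDPOINT = "COMET-ENDPOINT-UUID"       # placeholder UUID
    DEST_ENDPOINT = "DESTINATION-ENDPOINT-UUID"  # placeholder UUID
    TRANSFER_SCOPE = "urn:globus:auth:scope:transfer.api.globus.org:all"

    # Interactive native-app login to obtain a transfer token
    auth_client = globus_sdk.NativeAppAuthClient(CLIENT_ID)
    auth_client.oauth2_start_flow(requested_scopes=TRANSFER_SCOPE)
    print("Log in at:", auth_client.oauth2_get_authorize_url())
    code = input("Enter the authorization code: ").strip()
    tokens = auth_client.oauth2_exchange_code_for_tokens(code)
    transfer_token = tokens.by_resource_server["transfer.api.globus.org"]["access_token"]

    tc = globus_sdk.TransferClient(
        authorizer=globus_sdk.AccessTokenAuthorizer(transfer_token)
    )

    # Describe and submit the transfer; paths are examples only
    task = globus_sdk.TransferData(tc, COMET_ENDPOINT, DEST_ENDPOINT, label="Comet copy")
    task.add_item("/path/on/comet/results.dat", "/~/results.dat")
    print("Task ID:", tc.submit_transfer(task)["task_id"])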
|XSEDE Globus Connect Server XSEDE Kyric Online Service|
Kentucky Research Informatics Cloud (KyRIC) Large Memory Nodes
|XSEDE GitHub Repository Online Service|
|XSEDE Confluence Wiki Online Service|
|XSEDE Central Database Online Service|
XSEDE central resource accounting and user database.
|XSEDE Resource Identity Packaged Software XSEDE|
XSEDE Resource Identity
|XSEDE Globus Connect Server Installation Guide Packaged Software XSEDE|
XSEDE Globus Connect Server Installation Guide
|XSEDE Ticket System Online Service|
XSEDE Ticketing System
|XSEDE Globus Connect Server XSEDE Expanse Online Service|
|XSEDE Globus Connect Server XSEDE Data Supercell Online Service|
The Data Supercell is a complex disk-based storage system with a capacity of 4 Petabytes. This endpoint can be used to access data stored on the Data Supercell file system.
|XSEDE Globus Connect Server CU Boulder Research Computing (XSEDE, alternate) Online Service|
|XSEDE Usage (xdusage) Packaged Software XSEDE|
XSEDE Usage (xdusage)
|XSEDE Data Transfer Logging Vendor Software XSEDE|
XSEDE data transfer logging configuration.
|XSEDE Resource Identity (xdresourceid) Vendor Software XSEDE|
XSEDE local resource identifier tool
|XSEDE CA Certificate Installer Packaged Software XSEDE|
XSEDE CA Certificate Installer
|XSEDE Globus Connect Server XSEDE SDSC comet-gpu Online Service|
The Comet GPU resource features 36 K80 GPU nodes and 36 P100 GPU nodes, and supports many commercial and community-developed applications. Each K80 GPU node also features two Intel Haswell processors of the same design and performance as the standard compute nodes (described separately under the Comet resource). Each P100 GPU node also features two Intel Broadwell processors with 14 cores per socket (28 cores per node). The GPU nodes are available through the Slurm scheduler for either dedicated or shared node jobs (i.e., a user can run on one or more GPUs per node and will be charged accordingly). Like the Comet standard compute nodes, the GPU nodes feature a local SSD that can be specified as a scratch resource during job execution; in many cases, using SSDs can alleviate I/O bottlenecks associated with the shared Lustre parallel file system.
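As a rough illustration of that shared-node charging model, the sketch below charges a job in proportion to the fraction of a node's GPUs it occupies. The per-node-hour rate, the GPU count per node, and the formula itself are assumptions for illustration, not the published Comet accounting policy.

    # Hypothetical proportional charging for shared GPU node jobs.
    # Rate, GPU count, and formula are illustrative assumptions only.
    GPUS_PER_NODE = 4  # assumed GPU devices per node for this example

    def estimated_charge(gpus_requested: int, hours: float,
                         node_hour_rate: float = 1.0) -> float:
        """Charge a shared job in proportion to the GPUs it occupies."""
        if not 1 <= gpus_requested <= GPUS_PER_NODE:
            raise ValueError(f"request between 1 and {GPUS_PER_NODE} GPUs")
        return node_hour_rate * hours * gpus_requested / GPUS_PER_NODE

    # One of four GPUs for 10 hours costs a quarter of 10 node-hours.
    print(estimated_charge(gpus_requested=1, hours=10.0))  # -> 2.5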
|XSEDE Allocation Usage Lookup (xdusage) Vendor Software XSEDE|
XSEDE allocation usage lookup command line client