XSEDE: Customizing Your Computing Environment | Training | CI-Tutor | A default computing environment for each of your allocated resources is automatically set up for you when your XSEDE account is created. It provides access to the default compilers, directories, and software you will need to use XSEDE resources efficiently. Eventually, you may find that the default environment doesn't meet your specific usage requirements. When that happens, you may customize your environment by configuring various settings and creating shortcuts for accomplishing tasks you perform regularly. In this tutorial, you will learn how to customize your XSEDE computing environment using UNIX commands and the Modules package. Note: This course was previously offered on CI-Tutor.
Introduction to Visualization | Training | CI-Tutor | This tutorial provides information about the evolution of scientific visualization, its uses in computational science, and the creative process involved. Relevant concerns, such as visual perception, representation, audience communication, and information design, are discussed throughout the course and are referenced for further investigation. This course does not teach how to do scientific visualization but is a valuable resource for learning the basic concepts and how they originated. Note: This course was previously offered on CI-Tutor.
Multilevel Parallel Programming | Training | CI-Tutor | The Multilevel Parallel Programming (MLP) approach is a mixture of message passing via MPI and either compiler directives or explicit threading. The MLP approach can use one of several combinations referred to as MPI+X, where X can be OpenMP, CUDA, OpenACC, etc. In this tutorial, you will learn about MPI+OpenMP. Both are widely used for scientific applications and are supported on virtually every parallel system architecture currently in production. Prerequisites: A basic understanding of MPI and OpenMP. Note: This course was previously offered on CI-Tutor.
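To make the MPI+X idea concrete, here is a minimal, illustrative MPI+OpenMP sketch in C (not taken from the course materials): each MPI rank launches an OpenMP thread team, and every thread reports which rank and thread it is. The build and launch commands shown in the comments are typical but implementation-dependent.

```c
/* Minimal MPI+OpenMP "hybrid hello" sketch (illustrative, not from the course).
 * Build (typical, implementation-dependent): mpicc -fopenmp hybrid_hello.c -o hybrid_hello
 * Run example:                               mpirun -np 2 ./hybrid_hello
 */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank, nranks;

    /* Request FUNNELED support: only the master thread will make MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    /* Each rank spawns an OpenMP team; threads share that rank's memory. */
    #pragma omp parallel
    printf("rank %d of %d, thread %d of %d\n",
           rank, nranks, omp_get_thread_num(), omp_get_num_threads());

    MPI_Finalize();
    return 0;
}
```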
Debugging Serial and Parallel Codes | Training | CI-Tutor | No matter how experienced you may be at debugging code, it can be a frustrating task, especially if all you have at your disposal is the ability to insert print statements at strategic locations with the hope of finding where the problem lies. This tutorial introduces you to a better method for debugging your code: using debugger software. We start by giving an overview of debugger software capabilities and then describe several different types of errors that can be made and how to debug them. The lessons are divided into debugging serial and parallel codes since the methods for debugging them are slightly different, as are the types of bugs you encounter. To help you understand each lesson's concepts, a sample program is given that has a particular bug, and the process for debugging it is described. To further your understanding, exercises are provided so that you can debug a program on your own. This course's unique feature is the explanation of why a certain debugger command is used during a debugging session. There are many step-by-step manuals for debuggers that list what to type when. However, these manuals do not explain why you should type it. In each debugging session shown in this course, we explain how the debugger is being used and emphasize the overall debugging strategy. Prerequisites: General programming knowledge. Course examples use Fortran and C/C++. Note: This course was previously offered on CI-Tutor.
Running Matlab On XSEDE Systems | Training | CI-Tutor | This is a short series of lessons on using parallel Matlab for high-performance computing on the San Diego Supercomputer Center's Comet supercomputer, one of XSEDE's allocated resources. Although the lessons focus on Comet, it is possible to transfer what you learn to other XSEDE systems. A brief review of running parallel Matlab on the Pittsburgh Supercomputing Center's Bridges and TACC's Stampede2 supercomputers is included. You may complete optional assessments to earn an XSEDE badge.
Using the XSEDE User Portal | Training | CI-Tutor | The XSEDE User Portal, or XUP, is a web-based single point-of-entry to information about XSEDE services. It is a valuable resource for learning about XSEDE and supporting your XSEDE project activities. In this tutorial, you will learn how to get started using the XUP and how to use some of its key features. Target Audience: New users of the XSEDE User Portal. Note: This course was previously offered on CI-Tutor.
Developing Effective Training Webinars | Training | CI-Tutor | In this course, you will learn an approach to developing effective webinar training. This approach is designed to help you produce instructional materials using webinar best practices, interactive tools for engaging learners, and instructional design principles. Target Audience: Subject Matter Experts with little to no instructional design expertise.
Introduction to Multi-core Performance | Training | CI-Tutor | Multi-core is a term used to describe a computer architecture where two or more processors, or cores, are integrated onto a single chip package. This architecture is used to run multiple instructions simultaneously, leading to increased performance for parallel applications. Multi-core processors are ubiquitous in today's computing devices. They are found not only in high-performance computing (HPC) systems but also in desktop and laptop computers, tablets, and smartphones. While these multi-core systems can provide automatic performance improvements, individual applications must be modified to take full advantage of them. This tutorial introduces the general concepts of multi-core systems and the methods used to improve HPC application performance on them. Target Audience: HPC application users and developers who run applications on a multi-core system. Note: This course was previously offered on CI-Tutor.
Introduction to Performance Tools | Training | CI-Tutor | Performance tools are software used to measure application performance, usually with respect to the execution time of all or portions of a code. These tools collect data from a running application that is later analyzed to determine if and where there are performance bottlenecks. This tutorial introduces the capabilities of three relatively easy-to-use performance tools: strace, gprof, and TAU. Many more tools for analyzing code performance exist that are not covered here. This tutorial aims to get you started using performance tools and to help you later when you explore other tools' capabilities. Target Audience: Scientific application developers with basic parallel programming experience who are new to using performance tools. Note: This course was previously offered on CI-Tutor.
Cybersecurity for End Users | Training | CI-Tutor | Cybersecurity is a set of principles and practices designed to safeguard computing and online data resources against unauthorized access. In this tutorial, you will learn about the key issues affecting your online security and how you can keep your computing resources safe. There are ten short lessons covering account security, email safety, workstation security, web browser safety, Wi-Fi security, mobile device security, data protection, travel precautions, and social network precautions. After completing this tutorial, you will recognize the importance of your role in safeguarding your computing resources, understand the main types of cybersecurity risks, and know the actions you can take to ensure your safety.
Using the Lustre File System | Training | CI-Tutor | This tutorial covers the fundamentals of how to use the Lustre File System in a scientific application to achieve efficient I/O performance. Lustre is an object-based, parallel distributed file system used for large-scale, high-performance computing clusters. It enables scaling to a large number of nodes (tens of thousands), petabytes (PB) of storage, and high aggregate throughput (hundreds of gigabytes per second), making it an advantageous data storage solution for many scientific computing applications. Many of the top supercomputers in the world, as well as small workgroup clusters and large-scale, multi-site clusters, use Lustre. Target Audience: HPC application users and developers who run applications on a Lustre file system. Note: This course was previously offered on CI-Tutor.
Introduction to MPI | Training | CI-Tutor | The Message Passing Interface, or MPI, is a standard library of subroutines (Fortran) or function calls (C) that can be used to implement a message-passing program. MPI allows for the coordination of a program running as multiple processes in a distributed memory environment, yet is flexible enough to be used in a shared memory environment. MPI is the de facto standard for message passing, and as such, MPI programs should compile and run on any platform supporting it. This provides ease of use and source code portability. It also allows efficient implementations across a range of architectures, offers a great deal of functionality, includes different communication types and special routines for common collective operations, handles user-defined data types and topologies, and supports heterogeneous parallel architectures. This tutorial provides an introduction to MPI so you can begin using it to develop message-passing programs in Fortran or C. Target Audience: Programmers and researchers interested in using or writing parallel programs to solve complex problems. Prerequisites: No prior experience with MPI or parallel programming is required to take this course. However, an understanding of computer programming is necessary. Note: This course was previously offered on CI-Tutor.
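As a taste of what the tutorial covers, the following minimal C sketch (illustrative, not part of the course materials) passes a single integer from rank 0 to rank 1 with `MPI_Send` and `MPI_Recv`; it needs at least two processes to do anything useful.

```c
/* Minimal point-to-point message-passing sketch (illustrative, not from the course).
 * Build: mpicc send_recv.c -o send_recv    Run: mpirun -np 2 ./send_recv
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;                               /* data to send */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d from rank 0\n", value);
    }

    MPI_Finalize();
    return 0;
}
```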
Introduction to OpenMP | Training | CI-Tutor | OpenMP is a standardized API for parallelizing Fortran, C, and C++ programs on shared-memory architectures. This tutorial provides an introduction to OpenMP in a concise, progressive fashion, so you can begin to apply OpenMP to your codes in a minimum amount of time. Some general information on parallel processing is also included to the extent necessary to explain various points about OpenMP. Examples are presented in both Fortran and C. Prerequisites: Knowledge of basic programming in Fortran, C, or C++. Note: This course was previously offered on CI-Tutor.
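The flavor of OpenMP is easy to show: in the minimal C sketch below (illustrative, not part of the course materials), one `parallel for` directive lets the runtime split a loop's iterations across the available threads.

```c
/* Minimal OpenMP sketch (illustrative, not from the course): parallelize a loop
 * that fills a vector. Build example: gcc -fopenmp scale.c -o scale
 */
#include <omp.h>
#include <stdio.h>

#define N 1000000

static double x[N];

int main(void)
{
    /* The runtime divides the iterations among the threads in the team. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++) {
        x[i] = 2.0 * i;
    }

    printf("filled %d elements using up to %d threads\n", N, omp_get_max_threads());
    return 0;
}
```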
Parallel Computing on High-Performance Systems | Training | CI-Tutor | This tutorial provides an introduction to parallel computing on high-performance systems. Core concepts covered include terminology, programming models, system architecture, data and task parallelism, and performance measurement. Hands-on exercises using OpenMP will explore how to build new parallel applications and transform serial applications into parallel ones incrementally in a shared memory environment. (OpenMP is a standardized API for parallelizing Fortran, C, and C++ programs on shared-memory architectures.) After completing this tutorial, you will be prepared for more advanced or different parallel computing tools and techniques that build on these core concepts.
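As an illustration of the incremental approach the exercises take (a sketch, not part of the course materials), the C program below turns a serial summation loop into a parallel one by adding a single OpenMP directive with a `reduction` clause; nothing else in the program changes.

```c
/* Incremental parallelization sketch (illustrative, not from the course):
 * the serial loop gains one directive, and the reduction clause combines each
 * thread's private partial sum safely. Build example: gcc -fopenmp sum.c -o sum
 */
#include <stdio.h>

#define N 10000000

int main(void)
{
    double sum = 0.0;

    /* Serial form: for (int i = 0; i < N; i++) sum += 1.0 / (i + 1); */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++) {
        sum += 1.0 / (i + 1);   /* partial sums of the harmonic series */
    }

    printf("sum = %f\n", sum);
    return 0;
}
```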