Courses offered by SCITAS
- Intro courses
- Master course: Parallel and High Performance Computing
- Doctoral school course: Parallel programming
- Specific courses
These courses are organized twice a year, before the semesters start. Each of the four courses lasts half a day. They can also be organized on demand for a group of four or more people and adapted to the audience. If you wish to have one or more introductory courses for your lab or class, please contact us.
For all the courses, the participants are asked to bring their own laptops.
The goal of this course is to give you the basics of using a Linux system, so that you can use the general purpose clusters and feel at ease, like a penguin in cold water.
- Overview of Linux
- Connecting to a remote machine (from Linux, Windows or Mac)
- Using a Linux/Unix system with only the command line
- Basics on file organization
- Common shell commands
- Writing a shell script
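As a taste of the last topic, a shell script of the kind written in the course might look like the sketch below (the file names and contents are purely illustrative):

```shell
#!/bin/bash
# Minimal shell-script example: create two sample files in a temporary
# directory, then loop over them and count their lines.
set -euo pipefail

workdir=$(mktemp -d)
printf 'one\ntwo\n' > "$workdir/a.txt"
printf 'three\n'    > "$workdir/b.txt"

total=0
for f in "$workdir"/*.txt; do
    lines=$(wc -l < "$f")            # count the lines of one file
    echo "$(basename "$f"): $lines lines"
    total=$((total + lines))
done
echo "total: $total lines"
```

The loop, variable expansion, and command substitution used here are exactly the shell constructs covered in the course.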
Having access to a computer with a Linux environment.
This course is for new users of the central HPC resources. You will familiarize yourself with the SCITAS HPC clusters and their software environment, and learn to create, launch, and manage your simulations.
- What is a compute cluster
- Introduction to the SLURM batch system
- Disk space management
- How to use modules
- How to create job-scripts
- Job submission exercises
- Querying jobs
- Debugging jobs
- Tools & tips
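A minimal SLURM job script might look like the sketch below; the module name, resource requests, and executable are placeholders and must be adapted to the cluster you use:

```shell
#!/bin/bash
#SBATCH --job-name=my_simulation
#SBATCH --time=01:00:00          # wall-clock limit (HH:MM:SS)
#SBATCH --ntasks=1               # number of tasks (MPI ranks)
#SBATCH --cpus-per-task=1
#SBATCH --mem=4G                 # requested memory

module purge
module load gcc                  # load the software environment via modules

srun ./my_simulation input.dat   # placeholder executable and input file
```

Such a script is submitted with `sbatch job.sh`, and the resulting job can then be inspected with `squeue`.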
A minimal knowledge of Linux/Unix environments is required. Our course “Introduction to Linux” can be followed prior to this course.
Simulation data vary widely in size and nature, e.g. source files, input files, and output files. For each data type there is an optimal transfer and storage method. In this workshop the students will learn how to handle their data when using remote machines for computations.
A minimal knowledge of Linux/Unix environments is required. The course “Introduction to Linux” can be followed prior to this one.
The goal of this course is to teach you the fundamentals of code compilation and using MPI programs on HPC clusters. You will learn how to compile your code (serial and MPI) and link it to external libraries. A preview of two build automation tools will also be presented.
- Basics of compilation
- Linking and libraries
- Compiling and running MPI codes
- Introduction to GNU Make and CMake
Being comfortable in a cluster environment is required. The course “Using the general purpose clusters” can be followed prior to this course.
The MATH-454 Parallel and High Performance Computing course is offered in the spring semester and covers the following topics:
- The essentials
- Using the facilities
- Understanding HPC concepts
- Writing efficient code
- Parallelization methods
- Advanced topics
- Hybrid computing
- Proposal writing
It can be followed by master students and PhD students. The exam consists of an individual project, on a topic chosen from those suggested by the instructors. PhD students may, however, propose their own topic, as long as the project can be included in their PhD work.
The PHYS-743 Parallel programming course is given as a 1-week intensive course followed by a 1-week individual project. The contents of the course are:
- Optimization of a sequential code
- Parallelization on a shared memory node
- Parallelization on a distributed memory cluster: basic concepts
- Parallelization on a distributed memory cluster: advanced concepts
- Hybrid programming (OpenMP + MPI)
This one-day course is organized on demand.
In this course you will learn how to profile your code in order to measure its performance. Several tools will be presented. The second part of the workshop covers software optimization techniques.
Introduction to profiling and software optimization
- Software optimization techniques
- Test cases
The course “Introduction to the central HPC facilities” should have been followed prior to this course or users should have equivalent experience.
The course is organized as a three-day, intensive, full-time course. It puts the emphasis on practical implementation and includes examples and exercises performed on a dedicated PC Linux cluster. After an introduction to various parallelization models, the course focuses on the Message Passing Interface (MPI) standard and the shared-memory programming paradigm OpenMP. After the three-day course, the attendees will be able to understand, modify, or write from scratch applications in most scientific and engineering fields using the functions of the MPI-1 standard and the OpenMP 3.0 specification. The topics covered include:
- Parallelization using MPI (2 days)
- Overview of parallel programming models
- Point-to-point communications
- Collective communications
- Parallelization using OpenMP (1 day)
- Overview of the OpenMP 3.0 standard
- Fine grain / coarse grain approaches
- Control, data and synchronization constructs, scheduling
- Traps with OpenMP
- “OpenMP-ization” methodology (good programming practices)
No prior experience of parallel programming is required but a working knowledge of either Fortran or C/C++ as well as a basic knowledge of Unix/Linux is mandatory.
The course is organized as a two-day, intensive, full-time course. It is mainly intended for participants who have completed the introductory course “An Introduction to Parallel Programming” or who have equivalent working experience of MPI programming in Fortran and C/C++. Programmers who already have solid experience with MPI are of course welcome.
Goal of the course
To tackle advanced MPI functionalities, including MPI-OpenMP hybrid programming.
GPUs are increasingly popular and are nowadays widely available. This course gives an introduction to numerical methods on GPUs.
- GPU architecture
- Parallel algorithms
- Optimizing GPU programs
- Parallel computing patterns