PHYS-743 / 3 credits

Teacher(s): Keller Vincent, Richart Nicolas

Language: English

Remark: Next time: Fall (Block course)


Frequency

Every year

Summary

Learn the concepts, tools and APIs needed to debug, test, optimize and parallelize a scientific application on a cluster, starting either from an existing code or from scratch. Both the OpenMP (shared-memory) and MPI (distributed-memory) paradigms are presented and practiced hands-on.

Content

Keywords

OpenMP, MPI, HPC, Parallel programming

Learning Prerequisites

Required courses

Strong knowledge of C, C++ or Fortran 90

Basic knowledge of Linux and bash scripting

Resources

Notes/Handbook

By the end of the course, the student must be able to:

  • Optimize sequential and parallel codes
  • Implement algorithms in parallel with OpenMP and MPI
  • Investigate the performance of parallel codes

Moodle Link

In the programs

  • Number of places: 8
  • Subject examined: Parallel programming
  • Lecture: 20 Hour(s)
  • Exercises: 20 Hour(s)
  • Practical work: 8 Hour(s)

Reference week