PHYS-743 / 3 credits

Teacher(s): Lanti Emmanuel, Richart Nicolas

Language: English

Remark: Next offered in Fall (block course)


Every year


Learn the concepts, tools and APIs needed to debug, test, optimize and parallelize a scientific application on a cluster, starting either from an existing code or from scratch. Both the OpenMP (shared-memory) and MPI (distributed-memory) paradigms are presented and practiced hands-on.



Keywords

OpenMP, MPI, HPC, Parallel programming

Learning Prerequisites

Required courses

Strong knowledge of C, C++ or Fortran 90

Basic knowledge of Linux and bash scripting



By the end of the course, the student must be able to:

  • Optimize sequential and parallel codes
  • Implement algorithms in parallel with OpenMP and MPI
  • Investigate the performance of parallel codes

Moodle Link

In the programs

  • Number of places: 8
  • Subject examined: Parallel programming
  • Lecture: 20 Hour(s)
  • Exercises: 20 Hour(s)
  • Practical work: 16 Hour(s)

Reference week