Parallelism was born of a performance objective: to speed up the resolution of problems that cannot be solved in a purely sequential setting, or only at an unacceptable cost. Parallel architectures, which allow several threads of computation to run at the same time, were built with this objective in mind. Yet, while the computing power at our disposal is today virtually unlimited, fully leveraging this power and programming parallel architectures efficiently remains a major challenge for today's programmers.
This class aims to develop attendees' ability to solve problems efficiently in parallel environments. Firstly, this requires designing parallel algorithms and precisely capturing their complexity. Secondly, it requires mastering tools for the parallelization of sequential code, such as OpenMP for shared-memory architectures or MPI for distributed-memory architectures. Finally, it requires understanding how to exploit accelerators/GPUs through programming platforms such as CUDA.
Parallel algorithms, parallelization of sequential code with OpenMP, distributed programming with MPI, GPGPU programming with CUDA.
Cédric Tedeschi (cedric [dot] tedeschi [at] irisa [dot] fr)