Course detail

Parallel System Architecture and Programming

FIT-ARC, Acad. year: 2018/2019

The course covers the architecture and programming of parallel systems with functional and data parallelism. First, parallel system theory and program parallelization are discussed. Programming for shared-memory systems in OpenMP follows, and then the most widespread multi-core multiprocessors (SMP) and the advanced DSM NUMA systems are described. The course continues with message-passing programming in the standardized MPI interface. Interconnection networks are treated separately, and then their role in clusters, many-core chips, and the most powerful systems is examined.

Language of instruction

Czech

Number of ECTS credits

5

Mode of study

Not applicable.

Learning outcomes of the course unit

Overview of the principles of parallel system design and of interconnection networks, communication techniques, and algorithms. Survey of parallelization techniques for fundamental scientific problems, knowledge of parallel programming in MPI and OpenMP. Practical experience with work on the Anselm and Salomon supercomputers.

Knowledge of the capabilities and limitations of parallel processing and the ability to estimate the performance of parallel applications. Knowledge of language constructs for process/thread communication and synchronization. Competence in hardware-software platforms for high-performance computing and simulations.

Prerequisites

Von Neumann computer architecture, computer memory hierarchy, cache memories and their organization, programming in assembly and in C/C++.

Co-requisites

Not applicable.

Planned learning activities and teaching methods

Not applicable.

Assessment methods and criteria linked to learning outcomes

Assessment of two projects (13 hours in total), computer laboratories, and a midterm examination.
Exam prerequisites:
To obtain at least 20 out of the 40 points awarded for the projects and the midterm examination.

Course curriculum

Not applicable.

Work placements

Not applicable.

Aims

To gain an orientation in the parallel systems available on the market, to be able to assess the communication and computing capabilities of a particular architecture, and to predict the performance of parallel applications. To get acquainted with the most important parallel programming tools (MPI, OpenMP) and to learn their practical use in solving problems in parallel.

Specification of controlled education, way of implementation and compensation for absences

  • Missed labs can be made up in alternative sessions (Monday or Friday).
  • A make-up slot for missed labs will also be available in the last week of the semester.

Recommended optional programme components

Not applicable.

Prerequisites and corequisites

Not applicable.

Basic literature

Not applicable.

Recommended reading

Current PPT slides for the lectures
http://dolores.sp.cs.cmu.edu/spring2013/
http://www.cs.kent.edu/~jbaker/ParallelProg-Sp11/
Pacheco, P.: An Introduction to Parallel Programming. Morgan Kaufmann Publishers, 2011, 392 p., ISBN: 9780123742605
Hennessy, J. L., Patterson, D. A.: Computer Architecture: A Quantitative Approach. 5th ed., Morgan Kaufmann Publishers, 2012, 856 p., ISBN: 9780123838728

Classification of course in study plans

  • Programme IT-MGR-2 Master's

    branch MBI , any year of study, summer semester, compulsory-optional
    branch MGM , any year of study, summer semester, compulsory-optional
    branch MIS , any year of study, summer semester, elective
    branch MBS , any year of study, summer semester, elective
    branch MIN , any year of study, summer semester, elective
    branch MMM , any year of study, summer semester, elective
    branch MPV , 1st year of study, summer semester, compulsory
    branch MSK , 1st year of study, summer semester, compulsory

Type of course unit


Lecture

26 hours, optional

Teacher / Lecturer

Syllabus

  1. Introduction to parallel processing.
  2. Patterns for parallel programming.
  3. Shared memory programming - Introduction to OpenMP (see the sketch after this list).
  4. Synchronization and performance awareness in OpenMP.
  5. Shared memory and cache coherency.
  6. Components of symmetrical multiprocessors.
  7. CC NUMA DSM architectures.
  8. Message passing interface.
  9. Collective communications, communicators, and disk operations.
  10. Hybrid OpenMP/MPI programming.
  11. Interconnection networks: topology and routing algorithms.
  12. Interconnection networks: switching, flow control, message processing and performance.
  13. Message-passing architectures, current supercomputer systems. Distributed file systems.
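
To give a flavour of the OpenMP lectures (topics 3 and 4 above), a minimal illustrative sketch of a work-sharing loop with a reduction clause and a critical section might look as follows; the problem size, output, and compile command are assumptions for illustration only, not part of the official course material:

  /* Illustrative sketch only: a parallel sum using a reduction clause and a
   * critical section guarding shared output; compile e.g. with gcc -fopenmp. */
  #include <omp.h>
  #include <stdio.h>

  int main(void)
  {
      const int n = 1000000;
      double sum = 0.0;

      /* Work-sharing loop; the reduction clause avoids a data race on sum. */
      #pragma omp parallel for reduction(+:sum)
      for (int i = 0; i < n; i++)
          sum += 1.0 / (i + 1);

      /* Only one thread at a time may enter the critical section. */
      #pragma omp parallel
      {
          #pragma omp critical
          printf("thread %d sees sum = %f\n", omp_get_thread_num(), sum);
      }
      return 0;
  }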

Exercise in computer lab

12 hours, compulsory

Teacher / Lecturer

Syllabus

  1. Introduction to the Anselm and Salomon supercomputers
  2. OpenMP: Loops and sections
  3. OpenMP: Tasks and synchronization
  4. MPI: Point-to-point communications (see the sketch after this list)
  5. MPI: Collective communications
  6. MPI: I/O, debuggers, profilers and traces
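
As noted at lab 4 above, a minimal illustrative sketch of the kind of MPI code the labs work with, a point-to-point exchange followed by a collective reduction, might look as follows; the message contents, tag, process count, and run command are assumptions for illustration only:

  /* Illustrative sketch only: MPI point-to-point send/receive and a collective
   * reduction; compile with mpicc and run e.g. with mpirun -np 4 ./a.out. */
  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      int rank, size, token = 42, sum = 0;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      /* Point-to-point: rank 0 sends a token to rank 1 (if it exists). */
      if (rank == 0 && size > 1)
          MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
      else if (rank == 1)
          MPI_Recv(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

      /* Collective: sum the ranks of all processes onto rank 0. */
      MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
      if (rank == 0)
          printf("sum of ranks = %d\n", sum);

      MPI_Finalize();
      return 0;
  }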

Project

14 hours, compulsory

Teacher / Lecturer

Syllabus

  • Development of a shared-memory (SMP) application in OpenMP on a NUMA node.
  • A parallel program in MPI on the supercomputer.