Complex scientific computing, large-scale data processing, and image analysis have intensified the demand for high-performance computing knowledge in engineering applications. Popular computer architectures for high-performance computing differ from those of a few decades ago; in particular, shared-memory architectures and GPU architectures have become widely used for medium-scale problems in practical applications. This course focuses in detail on OpenMP for shared-memory architectures, with C++ implementations and coding exercises on related scientific problems, and also introduces the Message Passing Interface (MPI) for distributed-memory architectures and CUDA for GPU architectures. Example problems include heat transfer, image processing and analysis, and matrix multiplication.
Current progress:
W01: History and impacts of high-performance computing
W02: Introduction to OpenMP and exercises (I)
W03: Introduction to OpenMP and exercises (II)
W04: Introduction to heat problems, from partial differential equations to computer solutions
W05: Introduction to dynamic arrays and OpenCV for matrix operations and visualization
W06: Heat problem and programming
W07: OpenMP parallelization exercises
W08: 2D visualization and image manipulation
W09: Midterm
W10: Image searching and matching with parallelization
W11: Complexity analysis of summation on grid mesh architectures
W12: Message Passing Interface (MPI) and matrix multiplication exercise
W13: GPU/CUDA and matrix multiplication
W14: Midterm presentations
W15: Graph partitioning
W16: Graph partitioning
W17: Final presentations
W18: Technical report writing
Grading:
Midterm (programming): 30% +/- 5%
Proposal presentation: 15% +/- 5%
Final presentation and report: 40% +/- 5%
Misc.: 15% +/- 5%
Textbook: Grama, A., Gupta, A., Karypis, G., & Kumar, V. (2003). Introduction to Parallel Computing (2nd ed.). Pearson Education.