School of Computer Science
Computer Science 308-767: Parallel Programming
In attempting to explain the natural world, it is standard practice for a scientist to develop a theory and then subject this theory to experimental verification. Similarly, engineers build prototypes to investigate their designs. This paradigm is becoming increasingly difficult to pursue because of the sheer size and complexity of the systems we are trying to design and the phenomena we are trying to understand. Subjecting theories on the evolution of the galaxy or on climate change to experimental verification, or building a new aircraft on the basis of a paper design, are not realistic approaches. The traditional approach is largely being replaced by a new paradigm, a computational science approach, in which high performance computer systems are used to simulate the phenomena we wish to explain and the systems we wish to design. Examples of this approach in science include global climate modeling, computational chemistry and brain modeling. Familiar examples in engineering include computational fluid dynamics for aircraft design, semiconductor design and nuclear weapons design. In the business world, economic forecasting models drive both business decisions and stock trading programs.
One inescapable conclusion is that the computing platforms on which we implement these numerical models (or simulations) must be powerful. They are also parallel machines. The reason they are parallel is that there are limits to the number of transistors which can be packed onto a chip, and we are rapidly approaching those limits in spite of all the efforts made by VLSI designers. Moore's law (enunciated in 1965) predicted that the number of transistors which could be etched on a chip would double once every 18 months. In 1975 that meant about 4,000. In 2006 it means about a billion!! In that time period the clock rate of a microprocessor went from about 1 MHz to 1 GHz. This came at a cost, however. Moore's second law saw the cost of a semiconductor foundry increase from about $10,000,000 to $10,000,000,000 (constant 1995 dollars). There is also an inevitable physical consequence: there is a limit to how far a transistor can shrink, and sustaining this growth would eventually require transistors the size of atoms. In short, the future is parallel. For those of little faith, you need look no further than the January 2006 announcement by Apple that the iMac and the MacBook Pro are now dual core machines.
Here is an outline of the topics which we are going to cover.
These topics are subject to time limitations and the instructor's whims...
How the course will be run: I am going to cover material from the course text and other places in the form of lectures. You will build parallel programs based on the material which we cover in the course. This is a seriously hands-on course. You have the option of replacing one of the assignments which I give you with a project of your very own. It can be from your research. Needless to say, I have to be happy with it. I will expect a detailed description of the algorithms you choose to implement, of related algorithms, and of the application you choose, along with your motivation for choosing the problem.
On-line references/course materials
Slides for the course can be found at Course slides. The slides are in PowerPoint; if you don't have it on your machine, you can download OpenOffice.