I just finished listening to an ACM webcast entitled "Changing How Programmers Think about Parallel Programming". It was presented by William Gropp from the University of Illinois at Urbana-Champaign and it was very informative!
He discussed two different types of parallel programming. Coarse-grained parallelism is where you divide a task into chunks and each process performs the same sequence of operations on its assigned chunk. For example, say you have to mail some letters and have multiple people at your disposal to help you. With coarse-grained parallelism, each person would be given some number of letters to mail and would be responsible for every task involved in mailing them, such as folding the paper, placing it in the envelope, and applying a stamp.
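To make the idea concrete, here's a minimal sketch of coarse-grained parallelism in Go, using the letter-mailing analogy. The worker count, letter count, and the fold/stuff/stamp helper are my own illustrative choices, not anything from the webcast; the point is just that each worker handles every step for its own chunk of letters.

```go
package main

import (
	"fmt"
	"sync"
)

// prepareLetter performs the full sequence of steps for one letter:
// folding, stuffing the envelope, and stamping it.
func prepareLetter(id int) string {
	return fmt.Sprintf("letter %d: folded, stuffed, stamped", id)
}

func main() {
	const numLetters = 12
	const numWorkers = 3

	results := make([]string, numLetters)
	var wg sync.WaitGroup

	// Divide the letters into contiguous chunks; each worker handles every
	// step for every letter in its own chunk.
	chunk := (numLetters + numWorkers - 1) / numWorkers
	for w := 0; w < numWorkers; w++ {
		start := w * chunk
		end := start + chunk
		if end > numLetters {
			end = numLetters
		}
		wg.Add(1)
		go func(start, end int) {
			defer wg.Done()
			for i := start; i < end; i++ {
				results[i] = prepareLetter(i)
			}
		}(start, end)
	}

	wg.Wait()
	for _, r := range results {
		fmt.Println(r)
	}
}
```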
Fine-grained parallelism, by contrast, is where you divide the task up by operation and assign each process a specific operation. Using the letter example, this would mean that one person is responsible for folding the paper, another for placing it in the envelope, and so on.
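And here's the fine-grained version of the same analogy, again as a rough Go sketch of my own: one goroutine per operation, with channels passing each letter from one stage to the next.

```go
package main

import "fmt"

func main() {
	const numLetters = 6

	toFold := make(chan int)
	toStuff := make(chan string)
	toStamp := make(chan string)

	// Stage 1: one worker folds every letter.
	go func() {
		for id := range toFold {
			toStuff <- fmt.Sprintf("letter %d: folded", id)
		}
		close(toStuff)
	}()

	// Stage 2: another worker stuffs every folded letter into an envelope.
	go func() {
		for letter := range toStuff {
			toStamp <- letter + ", stuffed"
		}
		close(toStamp)
	}()

	// Feed the letters into the pipeline.
	go func() {
		for id := 0; id < numLetters; id++ {
			toFold <- id
		}
		close(toFold)
	}()

	// Stage 3: the main goroutine applies the stamp and prints the result.
	for letter := range toStamp {
		fmt.Println(letter + ", stamped")
	}
}
```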
He also discussed how processes must often share data with each other. One technique for this is to have each process copy, before execution begins, some of the data assigned to other processes. Finally, he mentioned that parallel programs are harder to debug than traditional, single-threaded programs, which is one reason why programming in a parallel fashion is so difficult!
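That "copy your neighbors' data up front" idea reminded me of the ghost (halo) cell pattern used in many parallel codes. Here's a rough Go sketch of that pattern; the array, the chunking, and the 3-point averaging stencil are all my own illustrative assumptions, not details from the talk.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	data := []float64{1, 2, 3, 4, 5, 6, 7, 8}
	const numWorkers = 2
	chunk := len(data) / numWorkers

	out := make([]float64, len(data))
	var wg sync.WaitGroup

	for w := 0; w < numWorkers; w++ {
		start := w * chunk
		end := start + chunk

		// Before any computation, copy this worker's chunk plus one "ghost"
		// value on each side, taken from the neighboring workers' data.
		local := make([]float64, 0, chunk+2)
		if start > 0 {
			local = append(local, data[start-1])
		} else {
			local = append(local, data[start]) // no left neighbor: repeat edge
		}
		local = append(local, data[start:end]...)
		if end < len(data) {
			local = append(local, data[end])
		} else {
			local = append(local, data[end-1]) // no right neighbor: repeat edge
		}

		wg.Add(1)
		go func(start int, local []float64) {
			defer wg.Done()
			// 3-point average over the worker's own elements, using only its
			// local copy (including the ghost values), not the shared array.
			for i := 1; i < len(local)-1; i++ {
				out[start+i-1] = (local[i-1] + local[i] + local[i+1]) / 3
			}
		}(start, local)
	}

	wg.Wait()
	fmt.Println(out)
}
```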