Parallelism

Computation in which many calculations are carried out simultaneously.

A concurrent program needs to perform several, possibly unrelated, tasks at the same time. In contrast, a parallel program solves a single problem, using several processors at once to reach the answer sooner.

In practice, a concurrent program deals continuously with networking protocols, databases, and the like. A typical parallel program is more focused: it streams data in, crunches it for a while, then streams data back out.

There are many aspects to the parallel execution of a program: threads are created, execute on a processor, transfer data to and from remote processors, and synchronise with other threads. Managing all of these aspects on top of constructing a correct and efficient algorithm is what makes parallel programming so hard.

Events happen in both time and space. It is possible for two events to occur in the same place, one after the other in time (i.e. sequentially), and equally possible for events to occur in different places at the same time (i.e. concurrently, or in parallel).

Divide & Conquer

To solve a large instance of a problem, break it into smaller instances of the same problem, and use the solutions of these to solve the original problem. The branching factor of a divide-and-conquer algorithm is the number of subproblems into which a problem is divided. A divide-and-conquer algorithm is balanced if it divides the initial problem into equally-sized subproblems.
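A minimal sketch in Go of a balanced divide & conquer with a branching factor of two: it sums a slice by splitting it into two equally-sized halves and solving them in parallel. The cutoff of four elements is an illustrative choice.

package main

import "fmt"

func sum(xs []int) int {
    if len(xs) <= 4 { // small instance: solve it directly
        s := 0
        for _, x := range xs {
            s += x
        }
        return s
    }
    mid := len(xs) / 2 // balanced: two equally-sized subproblems
    left := make(chan int)
    go func() { left <- sum(xs[:mid]) }() // solve one half in parallel
    right := sum(xs[mid:])                // solve the other half here
    return <-left + right                 // combine the sub-solutions
}

func main() {
    fmt.Println(sum([]int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10})) // 55
}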

Communicating Sequential Processes

CSP is a process algebra which is used to describe parallel programs. In this world, a program is a network of processes, which are connected using channels. A channel is a point-to-point, uni-directional, synchronous unbuffered comms link. Processes only need to be aware of the channels connecting them to other processes, and how to communicate on those channels (generally using the same protocol as the process on the other end).
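Go's goroutines and channels were modelled on CSP, so a minimal sketch is direct: an unbuffered Go channel is exactly such a synchronous, point-to-point, uni-directional link, and each process below knows only the channels it holds, not who sits on the other end.

package main

import "fmt"

func producer(out chan<- int) {
    for i := 1; i <= 3; i++ {
        out <- i // blocks until the other process is ready to receive
    }
    close(out)
}

func doubler(in <-chan int, out chan<- int) {
    for v := range in {
        out <- v * 2
    }
    close(out)
}

func main() {
    a := make(chan int) // unbuffered: every send is a rendezvous
    b := make(chan int)
    go producer(a)
    go doubler(a, b)
    for v := range b { // the main process acts as the consumer
        fmt.Println(v) // 2, 4, 6
    }
}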

Mutex

In its simplest form, a "binary semaphore" is a flag associated with a resource. Two operations act on semaphores: WAIT and SIGNAL. WAIT checks to see if the resource is available. If so, it is marked "unavailable"; if not, the CPU is released to other tasks until the resource becomes available. SIGNAL just marks the resource "available." A true mutex has a more specific use-case and definition, in that only the task that locked the mutex is supposed to unlock it.

Every task, before using a shared resource, does a WAIT on its semaphore, and after using the resource, does a SIGNAL. This is sufficient to ensure that only one task can use that resource at any time -- and yet even if one task is blocked, the other tasks can run normally.
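A minimal sketch in Go, building the binary semaphore on a buffered channel of capacity one: Wait receives the token, blocking until the resource is available, and Signal puts the token back. A blocked task simply sleeps on the channel while the others keep running.

package main

import (
    "fmt"
    "sync"
)

type Semaphore chan struct{}

func (s Semaphore) Wait()   { <-s }             // block until available
func (s Semaphore) Signal() { s <- struct{}{} } // mark available

func main() {
    sem := make(Semaphore, 1)
    sem.Signal() // the resource starts out available

    counter := 0 // the shared resource
    var wg sync.WaitGroup
    for i := 0; i < 4; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            sem.Wait()   // WAIT before using the resource
            counter++    // at most one task runs this at a time
            sem.Signal() // SIGNAL after use
        }()
    }
    wg.Wait()
    fmt.Println(counter) // always 4
}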

Lock-free

Lock-free data structures are thread- and interrupt-safe without relying on mutual exclusion mechanisms. They are most useful for inter-process communication, but since they are also efficient, they can safely be used in single-threaded code as well, which makes them suitable for general-purpose use.
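A minimal sketch in Go of one classic lock-free structure, a single-producer single-consumer ring buffer: neither side ever blocks the other, and the only synchronisation is a pair of atomic counters. The buffer size and element type are illustrative, and Go 1.19+ atomics are assumed.

package main

import (
    "fmt"
    "sync/atomic"
)

const size = 8 // must be a power of two

type Queue struct {
    buf  [size]int
    head atomic.Uint64 // next slot to read (owned by the consumer)
    tail atomic.Uint64 // next slot to write (owned by the producer)
}

// Put returns false when the buffer is full, never blocking.
func (q *Queue) Put(v int) bool {
    t := q.tail.Load()
    if t-q.head.Load() == size {
        return false
    }
    q.buf[t%size] = v
    q.tail.Store(t + 1) // publish the write to the consumer
    return true
}

// Take returns false when the buffer is empty, never blocking.
func (q *Queue) Take() (int, bool) {
    h := q.head.Load()
    if h == q.tail.Load() {
        return 0, false
    }
    v := q.buf[h%size]
    q.head.Store(h + 1) // hand the slot back to the producer
    return v, true
}

func main() {
    var q Queue
    done := make(chan struct{})
    go func() { // consumer
        for n := 0; n < 4; {
            if v, ok := q.Take(); ok {
                fmt.Println(v)
                n++
            }
        }
        close(done)
    }()
    for i := 1; i <= 4; i++ { // producer
        for !q.Put(i) {
        }
    }
    <-done
}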

However, it is worthwhile to reflect on the contrast between the concurrent nature of the world and the sequential nature of the digital computer. Since the main purpose of the computer is to model the world, there would seem to be a serious mismatch.

Multitasking

Round-robin means that each task takes its turn at the CPU, one at a time, in a fixed sequence, like a big loop of tasks. Cooperative means that each task keeps the CPU for as long as it wants, and releases it only when it is ready to do so.
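A minimal sketch of a cooperative round-robin scheduler in Go: each task is a closure that performs one step and then voluntarily hands back control, and the scheduler gives every live task one turn per pass. The task names and step counts are illustrative.

package main

import "fmt"

type Task func() bool // run one step; false means finished

func counter(name string, n int) Task {
    i := 0
    return func() bool {
        i++
        fmt.Println(name, i)
        return i < n // cooperative: yield after every step
    }
}

func main() {
    tasks := []Task{counter("a", 3), counter("b", 2)}
    for len(tasks) > 0 { // round-robin: one turn per task, per pass
        var next []Task
        for _, t := range tasks {
            if t() {
                next = append(next, t)
            }
        }
        tasks = next
    }
}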

A tuple space is a coordination model for communication in parallel computing environments.

A tuple space is a place where processes can put, read, and take tuples, which are in turn just sequences of values. For example, ("job", 12, 1.23) is a tuple made up of a string, an integer, and a floating-point number; a tuple space can contain zero or more copies of that tuple, or of tuples containing other types of values, simple or complex.

A process puts something into tuple space with put(a, b, c, ...), takes something out with take(a, b, c, ...), or copies something, leaving the original in tuple space, with copy(a, b, c, ...). The arguments to take and copy are either actual values or variables with specific types; values match themselves, while types match things of that type. For example:
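A minimal sketch in Go, assuming a tuple is a slice of values and a pattern mixes literal values with reflect.Type wildcards. This take returns immediately, like the try_take described below; a blocking take would need extra machinery, and a copy would simply return the tuple without removing it.

package main

import (
    "fmt"
    "reflect"
)

type Tuple = []any

type Space struct{ tuples []Tuple }

func (s *Space) Put(t Tuple) { s.tuples = append(s.tuples, t) }

// matches reports whether a tuple fits a pattern: values match
// themselves, types match any value of that type. Assumes the
// tuple fields are comparable values.
func matches(pat, t Tuple) bool {
    if len(pat) != len(t) {
        return false
    }
    for i, f := range pat {
        if typ, ok := f.(reflect.Type); ok {
            if reflect.TypeOf(t[i]) != typ {
                return false
            }
        } else if f != t[i] {
            return false
        }
    }
    return true
}

// Take removes and returns the first matching tuple, if any.
func (s *Space) Take(pat Tuple) (Tuple, bool) {
    for i, t := range s.tuples {
        if matches(pat, t) {
            s.tuples = append(s.tuples[:i], s.tuples[i+1:]...)
            return t, true
        }
    }
    return nil, false
}

func main() {
    var s Space
    s.Put(Tuple{"job", 12, 1.23})
    // "job" matches itself; the wildcards match any int and float64.
    pat := Tuple{"job", reflect.TypeOf(0), reflect.TypeOf(0.0)}
    if t, ok := s.Take(pat); ok {
        fmt.Println(t) // [job 12 1.23]
    }
}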

There are non-blocking versions of take and copy called try_take and try_copy that either match right away and return true, assigning values to variables in their patterns, or fail to match, don't do any assignment, and return false.
