CO538 Anonymous Questions and Answers Keyword Index

This page provides a keyword index to questions and answers. Clicking on a keyword will take you to a page containing all questions and answers for that keyword, grouped by year.

To submit a question, use the anonymous questions page. You may find the keyword index and/or top-level index useful for locating past questions and answers.

Keyword reference for non-determinism

2011

Question 9 (2011):

Submission reference: IN2039

Regarding the integrate process, I was confused about how the network is initiated. Is the following explanation correct?

Thinking sequentially, from the network diagram, it appears to begin with a number read in by the plus component. plus requires two inputs, so my thought was that the network would deadlock since only a single number could possibly be read in.

However, since all the components are run in parallel, the network is in fact initiated by the *prefix* component. prefix outputs a 0, which is read in by plus (along with the input number) and passed on to delta, with prefix copying the stream from delta from then on.

Answer 9:

Thank you for this question. Apologies for this lengthy reply ...

Thinking sequentially about a parallel network is not a good idea. The network does not begin with, nor is it initiated by, any of its parallel components. All its components start independently and in whatever order they happen to get going.

The platform on which the components run does not matter. They could be physically separate pieces of silicon real-estate, part of a larger chip, in which case the order in which they power up is indeterminate (though separated by nanoseconds). They could be on different computers in the cloud (in separate parts of the planet), in which case the order in which they start is also indeterminate (and may be hours apart). They could be software scheduled on a single core (as is the case with the Transterpreter) – in which case the order they are scheduled is still indeterminate! On some magic platform of the future, they might even start up simultaneously (though the Special Theory of Relativity pours scorn on the idea of simultaneity).

The point is that PAR components start in any order and that order does not matter! So, stop worrying about which component physically starts first.

In the case of the integrate process, its components' first actions are to communicate, either with each other or with devices external to integrate. Indeed, apart from the addition performed by the plus component, there is nothing else that they do!
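
For reference, the wiring looks something like this in occam-pi. This is only a sketch: the component names plus, delta and prefix are the ones used in the course, but the exact signatures and parameter orders shown here should be treated as assumptions.

    PROC integrate (CHAN INT in?, out!)
      CHAN INT a, b, c:
      PAR
        plus (in?, c?, a!)      -- adds each external input to the fed-back running sum
        delta (a?, out!, b!)    -- copies each sum to out and round the feedback loop
        prefix (0, b?, c!)      -- outputs an initial 0, thereafter copies b to c
    :

None of those three processes is `first': each simply waits at its initial communication until its partner is ready.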

Other components (of other PARs) may get on with lots of computation (e.g. calculate pi to a billion decimal places, invert some very large matrices) before any communication. If those components are placed on separate computational engines (e.g. separate cores on a microprocessor or separate computers in the cloud), then their computations can proceed in parallel (i.e. at the same time). Clearly, the order in which they actually start is not relevant.

Back with integrate: because its components communicate with each other, we do need to consider the patterns (i.e. sequences) of communication that are possible. In general (and in this case), there can be several.

The last paragraph in your question pretty much says it right. Spelling the start-up out as a sequence of communications:

  - the first (and only) thing prefix can do is output its 0 (to plus);
  - the first thing plus can do is input that 0 from prefix and the first number from outside integrate (in either order, or at the same time);
  - plus then outputs the sum of those two numbers to delta;
  - delta then outputs that sum both to the outside world and back round the feedback loop to prefix (again, in either order);
  - prefix copies it on to plus, plus also takes the next external number, and the cycle repeats.

The above bullets, however, do not describe all the possible initial patterns of communication!

For example, in the fourth bullet above, the second input value could be taken (if it is being offered by some external device) by plus (actually, by one of its freshly spawned sub-processes). So, that second external input could happen interleaved anywhere between the two outputs from delta described in that bullet.

Do not be alarmed by all these different possible sequences in which the communications may happen. Apart from getting a feel for the rules of synchronised communication (fundamental to the occam-pi/CSP concurrency model), we don't need to think at this low level about what is actually happening! A crucial property of the occam-pi/CSP concurrency algebra is that the function performed by a network is deterministic (i.e. independent of the order in which events actually happen). For integrate, its outputs are the running sums of its inputs, regardless of the order of its internal actions.
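
To pin down what that function is, here is a purely sequential process that produces the same running sums. This is just an illustrative sketch (the name integrate.seq is made up), but the stream of values it outputs is the same as that produced by the parallel network.

    PROC integrate.seq (CHAN INT in?, out!)
      INT total:
      SEQ
        total := 0
        WHILE TRUE
          INT x:
          SEQ
            in ? x                 -- take the next input
            total := total + x     -- accumulate the running sum
            out ! total            -- output it
    :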

At least, that's the story so far ...

In lecture 6, we will introduce Choice mechanisms that let us design processes whose functions are determined by the order in which events happen. However, the nice thing about occam-pi is that we only get non-determinism by explicitly programming it. For almost all other concurrency models, non-determinism is the default mode of operation (which means that the understanding we had about serial coding no longer applies) and we have to become skillful in eliminating it ... and that turns out to be very hard ...

With occam-pi/CSP concurrency, our intuition about serial programming remains trustworthy. For example, a variable only changes its value if the code working on it changes its value. There's no guarantee of that with, for example, threads-and-locks concurrency – i.e. what you see is not what you get.
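
occam-pi enforces that guarantee at compile time through its parallel usage rules: a variable may not be assigned in more than one branch of a PAR (nor assigned in one branch and read in another). As a minimal sketch, a fragment like the following is simply rejected by the compiler:

    INT x:
    PAR
      x := 1    -- illegal: `x' is assigned in more than one branch of the PAR
      x := 2    -- so no hidden interference with `x' is possible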

[Aside: in most Object Oriented languages (including Java), the position is even worse ... that guarantee does not even apply to purely serial programming ... but that's another story ...]

Keywords: q2 , non-determinism , choice , alt

2003

Question 122 (2003):

What is actually meant by non-determinism? Is it the fact that you cannot predict the values of variables and the program position? Is that because program flow is unpredictable?

Answer 122:

Almost. What you describe is really an artifact of non-determinism. Non-determinism is the inability to predict which event will happen next, even when you do know what events are being offered. A classic example is the difference between `ALT' and `PRI ALT'. For example:

    ALT                          PRI ALT
      c ? x                        c ? x
        P ()                         P ()
      d ? y                        d ? y
        Q ()                         Q ()

In the first (`ALT') example, if we know that the channels `c' and `d' are ready for communication, we cannot tell which guard will be selected by the `ALT'. Hence, that choice is non-deterministic. In the other (`PRI ALT') example, if both channels are ready, we do know what will happen -- it will pick the `c' channel.

`PRI ALT' isn't always deterministic, however. For example:

    PRI ALT
      tim ? AFTER t
        Q ()
      c ? x
        P ()

This too is non-deterministic, since it relies on timing -- the outcome depends on when the code is run. Another way of thinking about a timeout is as a parallel process that waits for a while and then communicates. The above, for example, is equivalent to:

    CHAN BOOL d:
    PAR
      SEQ                  -- timeout process
        tim ? AFTER t
        d ! TRUE           -- signal that the timeout has expired

      PRI ALT
        BOOL any:
        d ? any            -- timeout signal (highest priority)
          Q ()
        c ? x
          P ()

Unlike the earlier example, where we assumed we knew about the `ready' states of the `c' and `d' channels, we cannot call this example deterministic. The `ready-ness' of `c' depends on timing.

Keywords: non-determinism


This work is licensed under a Creative Commons Attribution-Share Alike 3.0 Unported License.
Last modified Mon May 20 13:50:28 2013
This document is maintained by Fred Barnes, to whom any comments and corrections should be addressed.