So far my FDR analysis of JCSP has been met with total silence, perhaps because most of you are away on holiday, but there was a very interesting issue that I should have highlighted. I didn't realise how significant it was at the time I posted my last message.

In Peter's CSP model the wait method was defined like this:

    WAIT(o,me) = release.o.me -> waita.o.me -> waitb.o.me -> claim.o.me -> SKIP

Using this definition I found that JCSP was incorrect: the channel may deadlock. The reason is that there is a window of time between the events release.o.me and waita.o.me during which a thread neither holds the monitor nor belongs to the auxiliary pool of waiting threads. During this interval it is possible for a notify event to be fired and to miss its target. (A small self-contained FDR script illustrating the race is sketched in the postscript below.)

So I redefined the wait method like this:

    WAIT(o,me) = waita.o.me -> release.o.me -> waitb.o.me -> claim.o.me -> SKIP

Now everything works out fine, because a thread holds onto the monitor until after it has joined the waiting pool.

But was I right to do this? Or do implementations of Java exist where the time interval that Peter modelled really does happen? I hope not, because if that were so JCSP would be prone to arbitrary deadlocks, and we would all have to go back to the drawing board.

Jeremy

-------------------------------------------------------------------------
Dr J. M. R. Martin
Oxford Supercomputing Centre   Tel:   (01865) 273849
Wolfson Building               Fax:   (01865) 273839
Parks Road                     Email: jeremy.martin@comlab.ox.ac.uk
Oxford OX1 3QD                 WWW:   http://users.ox.ac.uk/~jeremy
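
P.S. For anyone who wants to reproduce the race in FDR, here is a minimal
stand-alone sketch. Be warned that it is my own cut-down model, not Peter's:
the reader/writer handshake, the FLAG process and the channels setflag and
readflag are inventions of mine standing in for the channel's internal state;
only claim, release, waita, waitb and notify correspond to the events
discussed above. The essential point is that POOL silently loses a notify
aimed at an empty pool, which is exactly why the ordering of release and
waita matters. SYSTEM(WAIT_BAD) should fail the deadlock check with a trace
exhibiting the lost notification, while SYSTEM(WAIT_OK) should pass.

-- Thread ids: 1 = reader, 2 = writer.
T = {1, 2}

channel claim, release, waita, waitb, notify : T
channel setflag, readflag : Bool

-- The object's lock: strict mutual exclusion.
LOCK = claim?t -> release.t -> LOCK

-- The pool of waiting threads.  A notify that finds the pool empty is
-- silently lost, as in Java; otherwise it wakes one waiter.
POOL(ps) = waita?t -> POOL(union(ps, {t}))
           [] notify?_ -> (if empty(ps) then POOL(ps)
                           else |~| t : ps @ waitb.t -> POOL(diff(ps, {t})))

-- A one-bit "datum present" flag, my stand-in for the channel's state.
FLAG(b) = setflag?b' -> FLAG(b') [] readflag!b -> FLAG(b)

-- The two candidate definitions of wait, exactly as above.
WAIT_BAD(t) = release.t -> waita.t -> waitb.t -> claim.t -> SKIP
WAIT_OK(t)  = waita.t -> release.t -> waitb.t -> claim.t -> SKIP

-- Writer: publish a datum (flag := true), notify, wait until consumed.
WRITER(W) = claim.2 -> setflag!true -> notify.2 -> WCHECK(W)
WCHECK(W) = readflag?b -> (if b then (W(2) ; WCHECK(W))
                           else release.2 -> WRITER(W))

-- Reader: wait until a datum is present, consume it, notify the writer.
READER(W) = claim.1 -> RCHECK(W)
RCHECK(W) = readflag?b ->
              (if b then setflag!false -> notify.1 -> release.1 -> READER(W)
               else (W(1) ; RCHECK(W)))

MonEvents = {| claim, release, waita, waitb, notify, setflag, readflag |}

SYSTEM(W) = (READER(W) ||| WRITER(W))
              [| MonEvents |] (LOCK ||| POOL({}) ||| FLAG(false))

assert SYSTEM(WAIT_BAD) :[deadlock free [F]]  -- fails: the lost notify
assert SYSTEM(WAIT_OK)  :[deadlock free [F]]  -- passes

The failing trace is the one described above: the reader performs release.1
and, before it can perform waita.1, the writer claims the lock, fires its
one notify into an empty pool, and then waits in turn; both threads end up
in the pool with nobody left to wake them.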