CO527 Anonymous Questions and Answers
This page lists the various questions and answers. To submit a question, use the anonymous questions page. You may find the keyword index and/or top-level index useful for locating past questions and answers.
We have taken the liberty of making some minor typographical corrections to some of the questions as originally put. Although most of the questions here will have been submitted anonymously, this page also serves to answer some questions of general interest to those on the course.
Submission reference: IN1197
Hi, as far as I can see there is only one past exam paper for this module. Is there anywhere else that I can find relevant exam questions for the course material?
Look at the past exam papers for CO501 and EL563. You'll have to use your own judgement about which questions you can and cannot answer (going by the material covered in the lectures this year).
Keywords: exams
Submission reference: IN1199
Hi, is there any chance you could do another revision session at some point after Thursday? All of the CS students have an exam that afternoon and will have to cram for it, and learning op sys stuff 2 hours before isn't going to help...
I'm only going to do 1 revision session! I've asked timetabling about a slot on Friday instead of Thursday and will let the group know by email.
Submission reference: IN1198
Is there going to be a recording and material on the CO527 web-page for the revision session, for those who cannot attend?
I'll probably do a recording, but there won't be any specific material (I'll probably just use OHP slides where necessary). If you want to get the most from it, make sure you attend!
Submission reference: IN1185
Regarding question 5c of the 2006 paper:
In terms of the speed-up factor equation, 'k' would be 10 and 6 respectively, but I am unsure how to find 'n' or 't'. t is when every task finishes, and n is the number of tasks. I understand that 1 GHz means 1 billion cycles per second, and that it determines how many instructions per second the microprocessor can execute, but does this map 1:1, i.e. do we set 'n' to 1 billion and 't' to '1', or am I missing something painfully obvious? Any tips would be fantastic, thanks!
The original question was:
Compute the speed-up factor achieved by a 1 GHz 10-stage pipeline processor in comparison with a 400 MHz 6-stage pipeline architecture.
The equation "S = knt / (kt + (n-1)t)" gives the performance increase achieved by adding a pipeline to an architecture. However, the question asks for a comparison between two different architectures, so it doesn't help. Instead we need to consider instruction throughput.
Speedup factor is a function of the number of tasks to be performed. So to calculate relative speed we need to assume a value for n.
Taking, for example, n = 100:
The rate of processing for a 1 GHz, 10-stage pipeline CPU:
1 GHz - so the cycle time is 1 ns
The first result arrives after 10 clock cycles, i.e. 10 ns, and one result arrives every nanosecond thereafter, so the time taken to produce 100 results is 10 ns + 99 ns = 109 ns.
The rate of processing for a 400 MHz, 6-stage pipeline CPU:
400 MHz - so the cycle time is 2.5 ns
The first result arrives after 6 clock cycles, i.e. 15 ns, and one result arrives every 2.5 ns thereafter, so the time taken to produce 100 results is
15 ns + 99 * 2.5 ns = 262.5 ns.
So the 1 GHz machine is faster by a factor of 262.5/109 = 2.4.
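If you want to sanity-check the arithmetic, here is a minimal C sketch (not part of the course material; the function name is just illustrative). It computes the completion time for n instructions on a k-stage pipeline as (k + n - 1) cycles, then takes the ratio:

#include <stdio.h>

/* Time (in ns) to complete n instructions on a k-stage pipeline with the
 * given cycle time: k cycles to produce the first result, then one result
 * per cycle after that. */
static double pipeline_time_ns(int k, long n, double cycle_ns)
{
    return (k + n - 1) * cycle_ns;
}

int main(void)
{
    long n = 100;
    double fast = pipeline_time_ns(10, n, 1.0);   /* 1 GHz, 10 stages: 109 ns */
    double slow = pipeline_time_ns(6, n, 2.5);    /* 400 MHz, 6 stages: 262.5 ns */
    printf("speed-up = %.2f\n", slow / fast);     /* prints roughly 2.41 */
    return 0;
}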
Keywords: architecture
Referrers: Question 14 (2007) , Question 245 (2006)
Submission reference: IN1184
I'm currently revising speed up factor. I'm having some difficulty with the formula. If possible could you give me an example of the speed up factor formula in use, as there don't seem to be any in the notes.
The speed-up factor is the factor by which processing throughput increases due to an enhancement in processor performance (here, pipelining). A pipelined processor has the speed-up factor:
S = knt / (kt + (n-1)t)
Where k = the number of stages in the pipeline, n = the number of tasks to be performed, and t = the time per pipeline stage.
So if a processor is improved by the inclusion of a 5-stage pipeline then speed-up factor for say 1000 tasks is:
S = (5 * 1000 * t) / (5t + 999t) = 5000/1004 = 4.98
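To get a feel for the formula (an illustrative addition, not something from the notes): the t cancels, leaving S = kn / (k + n - 1), which tends towards k as n becomes large. A small C sketch to see this numerically:

#include <stdio.h>

/* Speed-up of a k-stage pipeline over a non-pipelined processor for n tasks.
 * The stage time t cancels out: S = knt / (kt + (n-1)t) = kn / (k + n - 1). */
static double speedup(int k, long n)
{
    return (double)k * n / (k + n - 1);
}

int main(void)
{
    int k = 5;
    long sizes[] = { 10, 100, 1000, 100000 };
    for (int i = 0; i < 4; i++)
        printf("n = %6ld  S = %.3f\n", sizes[i], speedup(k, sizes[i]));
    /* S approaches k (= 5) as n grows: 3.571, 4.808, 4.980, 5.000 */
    return 0;
}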
Keywords: architecture
Referrers: Question 14 (2007)
Submission reference: IN1200
The question is on sockets:
Processes: A, B
If there is a socket from A to B, can we use a single socket to send data in both directions? Or do we have to use 2 sockets ((A to B) and (B to A))? Thanks.
From the application's point of view, a single socket works in both directions. In practice it's a little less straightforward, e.g. depending on which particular socket protocol is being used. But when most people say "socket" they mean "TCP/IP socket", which is a bi-directional buffered stream communication mechanism.
If you want to know the grubby details of what actually goes on, check in the standard OS textbooks, but also W. Richard Stevens's books on TCP/IP and "Advanced Programming in the Unix Environment" (one of the most highly regarded Unix programming texts).
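As a small illustration (a sketch rather than course-provided code; the loopback address and the echo port are made up for the example), a client can write its request and read the reply over the same TCP socket descriptor:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);    /* one TCP socket */
    struct sockaddr_in addr;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(7);                    /* echo service, illustrative only */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }

    /* Send in one direction... */
    write(fd, "hello\n", 6);

    /* ...and receive in the other, on the same descriptor. */
    char buf[64];
    ssize_t n = read(fd, buf, sizeof(buf) - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("got back: %s", buf);
    }
    close(fd);
    return 0;
}

So for TCP there is no need for a second socket in the opposite direction.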
Submission reference: IN1201
I have a question on the Unix I/O model:
What I understand about the model is that processes use the principles of the network protocols to communicate with each other. Is that correct? Or am I completely lost?
That's not quite correct, nope. The essence of the Unix I/O model is that it is blocking, although non-blocking I/O is available (and increasingly asynchronous I/O). For example, if a process tries to read from the keyboard and the user hasn't pressed any keys, that process will block (i.e. wait somewhere in the OS until keyboard data is received, which will wake it up). And similarly for reading from sockets, pipes, streams, etc. Writing tends to block only when buffers between processes (typically inside the OS) fill up — because the readers aren't reading, or the network has been unplugged, etc..
Non-blocking I/O is where a process tries to read from the keyboard (where no keys have been pressed) and gets back an error-code stating that nothing was there to read. Usually this only makes sense where there is some mechanism to put the process to sleep until something happens — in Unix this is the "select()" system-call, which allows a process to wait for events on any number of sockets/pipes/etc., then deal with only those which were active when it wakes up.
Asynchronous I/O is where the OS invokes a certain piece of program code when an I/O event occurs (e.g. data becomes available for reading). This is more complex, but not entirely unlike the Java Swing GUI (which makes asynchronous callbacks in its own thread).
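Here is a minimal C sketch of the select() style (illustrative only, not lecture code): the process blocks inside select() until a watched descriptor becomes readable, then reads only from that descriptor, knowing the read will not block:

#include <stdio.h>
#include <unistd.h>
#include <sys/select.h>

int main(void)
{
    for (;;) {
        fd_set readable;
        FD_ZERO(&readable);
        FD_SET(STDIN_FILENO, &readable);   /* sockets/pipes could be added here too */

        /* Blocks until at least one watched descriptor has data to read. */
        if (select(STDIN_FILENO + 1, &readable, NULL, NULL, NULL) < 0) {
            perror("select");
            return 1;
        }

        if (FD_ISSET(STDIN_FILENO, &readable)) {
            char buf[128];
            ssize_t n = read(STDIN_FILENO, buf, sizeof(buf));
            if (n <= 0)
                break;                      /* EOF or error: stop */
            write(STDOUT_FILENO, buf, n);   /* this read did not block */
        }
    }
    return 0;
}

The alternative of marking the descriptor non-blocking and polling it in a loop would waste CPU time; waiting in select() avoids the busy-waiting.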
Keywords: unix
Referrers: Question 208 (2006)
Submission reference: IN1202
On the Unix devices/drivers:
Process: A
Asynchronous device processing is performed in the context of a process known as a daemon. So if the user runs A, and A calls that asynchronous device processing, then A is run in the context of the daemon.
Is this correct?
This sounds a little confused to me — what question were you trying to answer? We certainly didn't cover asynchronous I/O in any great detail (at most it was probably me mentioning it in a lecture). You seem to be confusing device I/O and daemon operation. A daemon is a system process which manages a service, usually I/O related (see Question 131 (2006)). The daemon code itself can use blocking, non-blocking or asynchronous I/O, depending on requirements (see Question 207 (2006)). In the context of user processes, if the process requires the services of a daemon, it will need to communicate with it in some way — covered under inter-process communication (IPC).
Keywords: unix , device-driver
Submission reference: IN1203
Hi, I was wondering what the split is likely to be between the OS material and the architecture material. Is it likely to be 50/50?
Are you referring to the revision session this afternoon or the exam itself? The revision session this afternoon will primarily cover OS topics requested (Question 200 (2006)); it's not a "general overview" of the module (which I think is fairly pointless, as that information is readily available in the lecture slides). I'm not necessarily expecting the other lecturers to make it to the session, as they have other commitments (and this is the 3rd time this has been rescheduled). If you have any specific topics you want covered (other than those already listed), better post them here soon :-).
Keywords: exams
Submission reference: IN1204
For the spooling: When we use transparent mode, do we need to provide exclusive device access?
Not necessarily — and usually it's not a concern for the mechanism providing the transparent spooling. It might be an issue for the daemon driving the underlying device, however.
Keywords: spooling
Submission reference: IN1205
On the reading list you gave us to read at every lecture, there are some things not even mentioned in the lecture. Are those things going to be examined? Or are we going to be examined on the parts of the lecture, but in more depth?
If you're referring to the (usually) penultimate slide "where to find out more", it's primarily where to find out more — i.e. the content of those references is further reading, and not strictly examinable (except where they overlap with the lecture content, which is examinable). I put them in there in case people want to find out more about particular things (some people find such things interesting!). The "self test questions" on the (usually) final slide are based on the lectured material, not on that further reading.
Keywords: exams
Submission reference: IN1206
Hello, would you be able to make the 2006 resit past paper available on the computing website please?
Not easily, as I don't have a PDF of it, but I'll see if it can be made available (unfortunately this won't be before Tuesday).
Keywords: exams
Submission reference: IN1207
What does 'atomicity' mean in context of atomic swap/load operations etc?
See the past questions relating to "atomic" — they tend to talk about "indivisibility", but it's essentially the same thing ("atomic" is from the Greek word "atomos", which translates as "indivisible").
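If it helps to see it concretely, here is a rough sketch using C11 atomics (this goes beyond what was lectured, so treat it as illustration only): an atomic swap writes a new value and returns the old one as a single indivisible step, which is exactly what makes a simple spin-lock work.

#include <stdatomic.h>
#include <stdio.h>

static atomic_int lock = 0;   /* 0 = free, 1 = held */

static void acquire(void)
{
    /* Atomically write 1 and get the previous value back in one indivisible
     * step; keep trying while someone else already held the lock. */
    while (atomic_exchange(&lock, 1) == 1)
        ;   /* spin */
}

static void release(void)
{
    atomic_store(&lock, 0);
}

int main(void)
{
    acquire();
    printf("in the critical section\n");
    release();
    return 0;
}

The point of the atomicity is that the read of the old value and the write of the new one cannot be interleaved with another processor doing the same swap.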
Keywords: atomic
Submission reference: IN1208
Hi, could you please explain how FIFO penalises heavily modified pages?
This sounds a bit too much like a past exam question.. Consider what happens to pages when the FIFO strategy is used — in particular what will inevitably happen to heavily used pages when other pages are required. Contrast this with the behaviour of the NUR strategy.
Keywords: memory-management
Submission reference: IN1209
Hi, I was wondering, do we need to know the specific implementations of spinlocks for MIPS and Intel for the exam?
Nope.
Submission reference: IN1210
Is "set associative" the same as "fully associative" in terms of the cache? If not, what is the difference between the two and why does virtual memory use fully associative mapping to translate virtual addresses to physical address instead of set associative? finally, are there any disadvantages to set associative mapping? thanks :)
Nope, "set associative" and "fully associative" cache mapping are different, as is a "direct mapped" cache. Set-associative caches are somewhere between a fully-associative cache and a direct mapped cache (a compromise between the two). See the architecture lecture 10 notes (or most architecture books will cover this). Google also throws up various useful references.
Your second question assumes some connection between virtual-to-physical address translation and cache implementation, which isn't really the case. There are two places where these interact, however. First, there is the architectural decision about whether the data/code cache (level-1 or level-2) operates on physical addresses or virtual addresses. Second, there is the TLB, which is itself a cache of page-table entries (used in translating virtual addresses to physical addresses). The paging mechanism itself can be described as a fully associative mapping, based on the way it works — the term "fully associative" would normally be associated with function theory, in this case describing characteristics of the "cache-mapping" and the "virtual-memory mapping" functions.
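To make the difference concrete, here is an illustrative C sketch with made-up cache parameters (not from the lecture notes): in a set-associative cache the middle bits of an address select one set, and the block may then be placed in any of the ways within that set. A fully-associative cache is the extreme with a single set (a block can go anywhere); a direct-mapped cache is the other extreme with one way per set (exactly one possible place).

#include <stdio.h>

/* Made-up example cache: 32-byte lines, 128 sets, 4 ways per set
 * (i.e. a 16 KB, 4-way set-associative cache). */
#define LINE_BYTES  32u
#define NUM_SETS    128u
#define NUM_WAYS    4u

int main(void)
{
    unsigned int addr = 0x0040A3C4u;

    unsigned int offset = addr % LINE_BYTES;               /* byte within the cache line */
    unsigned int set    = (addr / LINE_BYTES) % NUM_SETS;  /* which set to look in */
    unsigned int tag    = (addr / LINE_BYTES) / NUM_SETS;  /* compared against each way in the set */

    printf("address 0x%08X -> set %u, tag 0x%X, offset %u\n", addr, set, tag, offset);
    printf("the block may be placed in any of the %u ways of set %u\n", NUM_WAYS, set);
    return 0;
}

The usual trade-off: more ways per set means fewer conflict misses, but more tag comparisons (and a replacement choice) on every access, which is why set-associative designs are the common compromise.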
Keywords: architecture , memory-management
Submission reference: IN1211
Hi, are the case studies examinable? Thanks!
See Question 188 (2004) and Question 189 (2004).
Keywords: exams
Submission reference: IN1212
Hi, am I correct in thinking that we will not be examined on CREW locks?
These were lectured on, so there's no reason why you wouldn't be examined on them! I'm not expecting you to commit the semaphore-based implementations of these to memory, however.
Keywords: exams
Submission reference: IN1213
Regarding the exam paper, will there be an equal split of questions between the operating systems part and the architecture part? I.e. will there be more questions by you rather than by Winston? Also, is there any chance you can put up the slides from Friday's revision session, as I had an exam and missed it? Thanks.
There will be an equal split of questions between the OS half and the architecture half (because each covers half the module). As to who sets individual questions, that information is not available, but you could probably safely assume that people who teach particular material will set questions on it (not that it helps you much for the exam though).
The slides I used in Friday's revision lecture were just the lecture slides — which are downloadable from the module web-page. It's a bit unfortunate that you had an exam Friday afternoon.. (most students didn't). There were a couple of OHP slides I drew up too, but these won't be of much use at all. A recording of the lecture (at least the first 70 minutes of it) is downloadable from the module web-page as well.
Keywords: exam
Submission reference: IN1218
For the exam: is it going to have the same structure as last year? I.e., the first part consists of questions covering all parts of the module (kind of the basic stuff) and the second part consists of more detailed questions. Thanks.
Not quite — there will be three parts, with the basic learning outcomes tested in the first, and the second two parts covering the OS and architecture learning outcomes respectively. See Question 190 (2006) for a link to the official rubric.
Keywords: exam
Referrers: Question 228 (2006)
Maintained by Fred Barnes, last modified Thu May 8 12:58:00 2014