
CO527 Anonymous Questions and Answers

This page lists the various questions and answers. To submit a question, use the anonymous questions page. You may find the keyword index and/or top-level index useful for locating past questions and answers.

We have taken the liberty of making some minor typographical corrections to some of the questions as originally put. Although most of the questions here will have been submitted anonymously, this page also serves to answer some questions of general interest to those on the course.


Question 221:

Submission reference: IN1215

How old are you?

Answer 221:

With a small amount of resourcefulness, you should be able to find that out for yourself ;-).


Question 222:

Submission reference: IN1216

I didn't quite understand spinlocks and semaphores even in the revision lecture. Am I right in thinking that semaphores are like a queue system that processes are put into when competing for a resource, but this queue has a maximum 'length' and when the queue is full, processes that ask to be put in it are blocked by use of a spinlock? But then I don't understand the wait and signal operations, how do they work? I know wait decrements the value, and signal increments the value, but what do these values actually mean? Where is the semaphore? How many are there? lol Thanks

Answer 222:

Yes, you sound a bit confused :-(. But I'm not sure I can explain any better here than the lecture slides, module textbooks or various on-line resources. I did mention in the lecture that semaphores are a fairly basic building block. Trying to understand the operation of a single semaphore in isolation from any software system is not simple. The flow-diagrams on the lecture-slides show you pictorially what the implementation of each is — which should indicate that sometimes the value is incremented or decremented, but it depends on other conditions. Also make sure that you read through the other questions and answers regarding semaphores and spinlocks.

Semaphores have within them a value and a process queue, and are effectively a basic data-structure in most operating-systems. The process queue has no maximum length. A process blocks itself in this queue if it "wait"s on a semaphore whose value is zero. If the value is non-zero, the value gets decremented (and the process doesn't block). When another process calls "signal", if there are any blocked processes (which must have "wait"ed to get there), one of these gets woken up (removed from the queue and rescheduled). If there are no blocked processes, "signal" will increment the value of the semaphore. This is what the lecture slides show in the "wait" and "signal" flow-charts.
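As an aid to following those flow-charts, here is a toy semaphore sketched in Python. It is purely illustrative (the names are invented; a real implementation lives inside the kernel, and a Python lock stands in for the short "lock"/"unlock" sections of the algorithm):

```python
import collections
import threading

class ToySemaphore:
    """Illustrative sketch of the wait/signal flow-charts: a value plus
    a queue of blocked "processes" (here, threads parked on events)."""

    def __init__(self, value=0):
        self.value = value
        self.queue = collections.deque()   # blocked processes
        self._lock = threading.Lock()      # the short "lock"/"unlock" section

    def wait(self):
        with self._lock:
            if self.value > 0:
                self.value -= 1            # non-zero: decrement, don't block
                return
            ev = threading.Event()
            self.queue.append(ev)          # zero: join the queue...
        ev.wait()                          # ...and block until signalled

    def signal(self):
        with self._lock:
            if self.queue:
                self.queue.popleft().set() # wake one blocked process
            else:
                self.value += 1            # nobody waiting: increment
```

Tracing "wait" on a zero-valued semaphore through this code shows the process joining the queue and blocking; tracing "signal" shows either a wake-up or an increment, exactly as in the flow-charts.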

Semaphores are most often used to implement IPC mechanisms — because processes can interact (albeit fairly indirectly) through a single semaphore, or set of semaphores. The most common example is their use for mutual-exclusion. Here we define the mutex operations and data-type in the following way:

    /* a mutex is just a semaphore initialised with the value 1 */
    Mutex m = new Semaphore (1);

    void claimLock (Mutex m)
    {
        m.wait ();
    }

    void releaseLock (Mutex m)
    {
        m.signal ();
    }

The pattern for code doing mutual-exclusion over some critical section would then look something like this, for example:

    Mutex task_list_lock;
    TaskList the_task_list;

    ...  code doing stuff
    claimLock (task_list_lock);
        {   // critical section
            ...  modify "the_task_list" safely (all other processes excluded)
        }
    releaseLock (task_list_lock);
    ...  more code

For your own understanding, follow the operation of the program (including the structure of the semaphore algorithms) for one process which does the above bit of code. You should see the semaphore have a value of zero inside the critical section, and back to one outside. Then follow the code for two separate processes, where the second tries to enter the critical section whilst the first is already inside it. This should demonstrate to you how the mutual exclusion behaviour actually works — using the semaphore to block processes which are waiting to enter the critical section.
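To make that experiment concrete, here is a simulated version of the pattern in Python, using the standard threading.Semaphore initialised to 1 as the mutex (a list stands in for "the_task_list"; all names are invented for illustration):

```python
import threading

task_list_lock = threading.Semaphore(1)   # a mutex: a semaphore initialised to 1
the_task_list = []                        # shared state guarded by the mutex

def worker(n):
    for i in range(n):
        task_list_lock.acquire()          # claimLock: "wait" on the semaphore
        # critical section: all other threads excluded
        the_task_list.append(i)
        task_list_lock.release()          # releaseLock: "signal" the semaphore

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(the_task_list))                 # 4000: no updates were lost
```

Inside the critical section the semaphore's value is 0; on release it either returns to 1 or, if another thread is already blocked on acquire, that thread is woken instead.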

Other algorithms which use semaphores to build more complex functionality (synchronisation, bounded-buffers, readers-and-writers (CREW)) are given in the lecture slides.

Spinlocks are a different locking mechanism — they are intended to provide mutual-exclusion only. They're needed on multi-processor systems to prevent race-hazards on shared memory between processors (i.e. two or more different processors updating the same bit of memory with destructive consequences for the algorithms using them). The implementation of semaphores on multi-processor systems will typically use a spinlock (one per semaphore) to implement the "lock" and "unlock" parts of the semaphore algorithm. These are short critical sections of the semaphore algorithm itself. On a uniprocessor system, the semaphore algorithm's "lock" and "unlock" would be "disable interrupts" and "enable interrupts" respectively (to prevent race-hazards with interrupt handlers).
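For contrast with the blocking behaviour of a semaphore, here is an illustrative spinlock sketch. Python exposes no atomic test-and-set instruction, so a non-blocking lock acquire stands in for one; this is a simulation of the idea, not how a kernel would write it:

```python
import threading

class SpinLock:
    def __init__(self):
        # stand-in for a memory word manipulated by an atomic test-and-set
        self._flag = threading.Lock()

    def lock(self):
        # busy-wait ("spin") until the test-and-set succeeds; a semaphore
        # would instead deschedule the process and queue it here
        while not self._flag.acquire(blocking=False):
            pass

    def unlock(self):
        self._flag.release()
```

The key difference from a semaphore is visible in lock(): a spinning process keeps the CPU busy while it waits, which is only sensible when the critical section is very short (as in the semaphore algorithm's own "lock"/"unlock").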

Keywords: semaphore , spinlock


Question 223:

Submission reference: IN1220

What is mutual exclusion? I can't find anything in the lectures about what it is.

Answer 223:

Lecture 3 slide 10..? If not, look up "mutual exclusion" on Wikipedia, there's a pretty decent explanation there.

Keywords: ipc


Question 224:

Submission reference: IN1221

Hi, we are currently looking through the dispatcher (low level scheduler) and we are not sure about the process descheduling. In what circumstance would a process voluntarily deschedule?

Answer 224:

Various cases, though it's not something many programmers use. Perhaps there's a particularly active process which notices that the system load is getting higher than normal, in which case it might want to deschedule to give other processes a chance to run (as opposed to doing its own work). Voluntary descheduling is also used in threaded programs when you want to give other threads a relatively higher share of CPU time (i.e. the thread which voluntarily deschedules gets less CPU time, because it is descheduling rather than number-crunching or whatever).

Keywords: scheduling


Question 225:

Submission reference: IN1223

Do the two operations of a semaphore have to be carried out in order? I.e., does the wait operation have to be called before signal can be called? Cheers.

Answer 225:

It depends on what the semaphore is being used for. In mutual exclusion, yes, the pattern is always "wait" then "signal", but if you look at the bounded buffer code, there is one semaphore which is potentially "signal"ed before it is "wait"ed (depending on what order the processes arrive in).
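A minimal sketch of that second case, using Python's threading.Semaphore (release is Python's "signal" and acquire its "wait"; the names buffer/results are invented). A synchronisation semaphore starts at 0, and it's perfectly valid for the signal to arrive before anyone waits:

```python
import threading

item_ready = threading.Semaphore(0)   # synchronisation semaphore: starts at 0
buffer = []
results = []

def producer():
    buffer.append("data")
    item_ready.release()              # "signal" -- here it happens first

def consumer():
    item_ready.acquire()              # "wait" -- blocks only if nothing
    results.append(buffer.pop())      # has been signalled yet

producer()                            # the signal happens before any wait...
t = threading.Thread(target=consumer)
t.start()
t.join()                              # ...yet the consumer still proceeds
```

Because the early signal simply incremented the semaphore's value to 1, the consumer's later wait decrements it and carries straight on without blocking.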

Keywords: semaphore


Question 226:

Submission reference: IN1224

Hi, we are currently looking through the semaphore and mutual exclusion part of the module, but we are a bit confused about the difference between the two.

Mutual exclusion from what we know is a "mutex", which works in the same way as the semaphore to prevent multiple access in the critical section...right? Then why would you in the lecture say that the semaphore is used to prevent mutual exclusion when they are both trying to prevent the same thing?

Answer 226:

If I said in the lecture that "a semaphore is used to prevent mutual exclusion", then that would have been an error on my part. Correctly, "a semaphore is used to implement mutual exclusion", or "to guarantee mutual exclusion". The difference is that mutual exclusion is a programming paradigm (some pattern which we use to get the desired effect, safe access to shared state/data in the case of mutual-exclusion). A "semaphore" is something which can be used to implement this behaviour — but we could conceivably use something other than a semaphore to implement mutual exclusion (it's just that a semaphore is usually the simplest and most straightforward way of doing it).

Keywords: semaphore , ipc


Question 227:

Submission reference: IN1225

Not vital to my revision but I'm curious.. When I'm running a lot of applications I notice that the amount of space used by page files can get pretty big, and relative to this the amount of physical memory use goes up. However, at a certain stage it seems that the amount of physical memory stops growing and doesn't fill up completely. Surely, if it's faster to read straight from physical memory rather than having to page something in, a good technique would be to fill physical memory completely, perhaps using some algorithm to predict pages that may be required in the future, rather than leaving a fairly large amount of free space. The only reason I can see that this isn't done would be that the extra work deciding which pages to remove when more are needed would be excessive... or maybe the free space is assigned for use but hasn't been used by a program yet? But doesn't that defeat the point of virtual memory?

Like I say, not an essential question, but it'd be nice to know! p.s. please excuse me if I've got page files mixed up with page frames or anything like that :)

Answer 227:

There could be various reasons for this, but it's a pretty normal behaviour. Firstly, the OS will probably want to keep a few free page-frames handy for its own use. For instance, if the memory-management code decides that it needs to allocate some real memory for some reason, the last thing it wants to do is force paging — mainly because things get complex in the code.

More relevant perhaps is that on architectures such as the PC, DMA (direct memory access, which allows peripherals to bulk-transfer data into physical memory) can only be done in the first 16 MB of real memory (because of address-size limitations on traditional DMA controllers). Thus it makes sense to keep this memory specifically for DMA buffers, and not allow it to be paged. These limitations are slowly being eroded, but it'll be a while before they're gone completely in the PC architecture.

Keywords: memory-management


Question 228:

Submission reference: IN1226

Hi Fred, I would appreciate it if you could clarify the exam rubric. Am I right in saying that the architecture material will be placed within question 1 (which is mandatory) and the 4 other questions are composed of 2 from yourself and 2 from Bob (of which 2 must be chosen)? Thanks.

Answer 228:

Not quite, see Question 220 (2006). The first section covers the whole of the course, the second section covers operating-systems and the third section covers architecture. The first section contains a single compulsory question. The second and third sections are a choice of one of two (i.e. you must answer one OS question and one architecture question).

Keywords: exam


Question 229:

Submission reference: IN1227

Hi Fred, I've got a question, not strictly on topic, but I was wondering which operating system do you most commonly use? Cheers.

Answer 229:

Linux, specifically Debian.


Question 230:

Submission reference: IN1228

In the lecture about virtual memory you mentioned the 4 virtual memory implementations. Will we be expected to know all 4, or just paging?

Answer 230:

You would be expected to know paging in detail, and be aware of the others. The base+limit mechanism is trivial, and segmentation isn't a huge leap away. Segmentation-with-paging is just a combination of segmentation and paging (best of both mechanisms, though I don't think I covered the reasons why).

Keywords: memory-management


Question 231:

Submission reference: IN1229

Hi, am I correct in thinking that virtual memory is basically the provision of address checking, address transformation and protection? Thanks.

Answer 231:

Yes.

Keywords: memory-management


Question 232:

Submission reference: IN1217

I have a question on the Architecture part: since Kentmail is unavailable, I am going to ask it here and, if possible, have it passed to Dr Waller. For his part of the exam, do we have to know how to implement assembly code?

Answer 232:

No — but you are expected to know about the different types of instruction covered, their operation, the role of the instruction fields etc.

Editor note: in general, please post questions which might be of interest to others taking the module on these anonymous questions pages — it saves us from answering the same student questions multiple times (and/or pasting emails into the anonymous questions system).

Keywords: architecture


Question 233:

Submission reference: IN1222

I'm a bit confused by what is meant by this question on last year's paper:

Explain the difference between: Load and Store instructions.

What would get the marks here? Identifying what they do? The Load instruction loads a value into the destination register and Store moves data from a register to an effective address in memory? Or a more technical difference, such as that they use different 'op' values and the effective addresses are as follows:

Load : rt = mem(rs+offset)
Store : mem(rs+offset) = rt

Answer 233:

The difference between LOAD and STORE — that was a question worth 2 marks so I wasn't looking for an essay — just the fact that LOAD transfers data from Data Memory to a register in the register-file and STORE does the opposite.

Keywords: architecture


Question 234:

Submission reference: IN1230

The answer of the 1st lecture question:

What sort of hardware support is required for a multi-tasking operating-system?

Is it that no particular hardware support is needed, just that the operating system must provide some interprocess communication mechanisms and must also be able to implement virtual machines, in order for the processes to use the resources of the computer interactively? Is this along the correct lines? If not, please give me a hint towards the correct answer.

Answer 234:

Your answer is essentially wrong, and given what you've written, it probably wouldn't attract many marks :-(. The point about virtual-machines and memory-management is half way there; clearly VMs are needed for multi-tasking (at least for it to be safe — protection), and the hardware must provide this support. The OS obviously needs software to control this hardware, but the hardware has to be there in the first place. Lecture 3 has the relevant information, in addition to the memory-management related material. For instance, writing a multi-tasking OS on an 8086, 80186 or 80286 just isn't practical. The 80386 was the first (Intel PC range) processor which did provide sensible memory-management support (the 80286 had something, but it was hard to use productively). You may note that versions of Windows which support multi-tasking do require at least an 80386. Or to put it another way, why can't I run Windows XP on an 80286? — because it lacks the necessary hardware to support memory-management and virtual-machines.

Keywords: hardware

Referrers: Question 236 (2006)


Question 235:

Submission reference: IN1231

Hi, I'm going over the paging stuff again, and I'm trying to get my head around the concept of pages, page frames and page tables. Is it correct that a page frame is an equal-size section of the physical memory, a page is an equal-size section of the virtual memory, and a page table maps the pages to the page frames? Also, does an inverse page table just do the opposite of a page table and map page frames to pages? Thanks

Answer 235:

Yes, that's essentially correct. To make that more robust, you should probably point out that pages and page-frames are the same size (typically a power of 2, e.g. 4096 bytes). Also that the page table may map pages to things other than page-frames (such as the swap-file or a new or invalid page) — the page-to-page-frame mapping is only valid when the 'V' bit in the relevant PTE is set.

Keywords: memory-management


Question 236:

Submission reference: IN1232

With reference to Question 234 (2006):

In the answer, do you want us to mention about semaphores and how they are used in terms of sharing a resource in the system? And one more question: Are semaphore part of the hardware itself? Or are they part of the kernel?

Answer 236:

Nope, mentioning semaphores here probably wouldn't attract any marks, which ties in with your second question. If the hardware provides semaphores, good stuff, and the OS will probably use them (on the grounds that they're more efficient than a software implementation). However, hardware semaphore support is not required (and in fact is not present in most common CPUs); hence, hardware support for semaphores cannot be a requirement for a multi-tasking OS.

Keywords: semaphore


Question 237:

Submission reference: IN1233

I'm a bit confused on how the hardware translates a virtual address into a physical address. Is it something along the lines of: Splitting the virtual address up into the page number and offset then using the page number to look up the page frame number and then adding the offset?

Answer 237:

Yes, in essence it is just that. However, if this were asked in an exam question, I'd expect a more technical answer — e.g. how is the virtual address split? and how is the correct page-table entry located before looking up the page-frame number? The paging slides (rather than handouts) show how this works bit-by-bit.
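As a worked sketch, assuming 4096-byte pages and a made-up page table (real hardware does this lookup in the MMU, and multi-level page tables complicate the PTE lookup):

```python
PAGE_SHIFT = 12                       # assuming 4096-byte (2**12) pages
PAGE_SIZE = 1 << PAGE_SHIFT

# hypothetical page table: page number -> (valid bit, page-frame number)
page_table = {0: (1, 7), 1: (1, 3), 2: (0, None)}

def translate(vaddr):
    page = vaddr >> PAGE_SHIFT            # top bits: virtual page number
    offset = vaddr & (PAGE_SIZE - 1)      # bottom bits: offset within the page
    valid, frame = page_table[page]
    if not valid:
        raise RuntimeError("page fault")  # V bit clear: the OS must intervene
    return (frame << PAGE_SHIFT) | offset # frame number plus the same offset

print(hex(translate(0x1234)))             # -> 0x3234
```

Here 0x1234 splits into page 1 and offset 0x234; page 1 maps to frame 3, so the physical address is 0x3234.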

Keywords: memory-management


Question 238:

Submission reference: IN1234

This is blatantly a question from the mock exam paper, but what is the first step in a bootstrap? I can't see it in the lecture notes; they tell us the different functions a bootstrap may have etc., but I don't see which is the first thing it does. Thanks.

Answer 238:

It's in lecture 7 — the first step in the bootstrap process is that the hardware loads a handful of instructions and starts executing them. On the PC (which starts in real-mode), the CPU simply starts executing code inside the BIOS.

Keywords: bootstrap


Question 239:

Submission reference: IN1235

For bits V, R, M in a Page Table Entry: V is set to 1 when the page it's describing is currently in a page frame, R is set to 1 when it's been referenced (so if it's been used?), and M is set to 1 when it's been modified? Is this correct? Thanks

Answer 239:

Yes, that's correct. Referenced means just that — i.e. the memory address in question was either read or written.
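As a toy illustration of that bookkeeping (the field names are invented; on real hardware the MMU sets these bits automatically on each access):

```python
class PTE:
    def __init__(self):
        self.V = 0   # valid: page is currently in a page-frame
        self.R = 0   # referenced: page read or written since the last clear
        self.M = 0   # modified ("dirty"): page written since being loaded

def access(pte, write=False):
    pte.R = 1        # any access, read or write, sets the R bit
    if write:
        pte.M = 1    # only a write sets the M bit

pte = PTE()
access(pte)                 # after a read:  R=1, M=0
access(pte, write=True)     # after a write: R=1, M=1
```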

Keywords: memory-management

Referrers: Question 240 (2006)


Question 240:

Submission reference: IN1236

Referring to Question 239 (2006), how can a page be modified but not referenced, if referenced means it has been looked at? I don't understand the example in the lecture notes for the NUR replacement strategy.

The OS begins scheduling, reads from page 3 which has "1 1 1", then writes to page 7 which also has "1 1 1". Why didn't it write to page 3?

What does "VM2(0)" mean? Page 0 from virtual machine 2? Thanks.

Answer 240:

See the last point on lecture 6 slide 17:

Periodically scan all PTEs and clear the reference bits — otherwise initial situation and condition (2) could never occur

Regarding the scheduling example, the fact it reads from page 3 and writes to page 7 is an artifact of the program. Maybe it did "x = y + 1;", where "y" is located somewhere in page 3 and "x" is located somewhere in page 7.
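To tie this back to NUR itself, a small sketch may help: pages are grouped into four classes by their (R, M) bits, and the periodic clearing of reference bits is precisely what creates the R=0, M=1 ("modified but not recently referenced") class. The tuple representation here is invented for illustration:

```python
# each page's PTE sketched as an (R, M) pair
def nur_class(r, m):
    # class 0: (0,0) ... class 3: (1,1); lower class = better eviction victim
    return 2 * r + m

def clear_reference_bits(ptes):
    # the periodic scan from the slides: clearing R can leave a page
    # as (0, 1) -- modified, but not referenced since the last scan
    return [(0, m) for (_, m) in ptes]

ptes = [(1, 1), (1, 0), (1, 1)]      # three pages, all recently referenced
ptes = clear_reference_bits(ptes)    # all R bits cleared by the scan
victim = min(range(len(ptes)), key=lambda i: nur_class(*ptes[i]))
print(victim)                        # -> 1: the clean, unreferenced page
```

After the scan the classes are [1, 0, 1], so NUR evicts page 1: it is neither recently referenced nor dirty, so it needn't be written back to the swap-file.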

Yes, "VM2(0)" means page 0 of VM 2. I talked through this in the lecture at the time, did you not take notes? :-).

Keywords: memory-management


Maintained by Fred Barnes, last modified Thu May 8 12:58:00 2014