CO527 Anonymous Questions and Answers Keyword Index

This page provides a keyword index to questions and answers. Clicking on a keyword will take you to a page containing all questions and answers for that keyword, grouped by year.

To submit a question, use the anonymous questions page. You may find the keyword index and/or top-level index useful for locating past questions and answers.

Keyword reference for assess3-1213

2012

Question 22 (2012):

Submission reference: IN2389

I'm trying to implement the C-scan algorithm, which involves jumping from the highest block to the lowest block; however, how do I get the head to do this from within the DSched class?

Answer 22:

By requesting a read from a block on that track. This could either be a real request from the system, or a locally fabricated request to force the heads to move.

Keywords: assess3-1213


Question 21 (2012):

Submission reference: IN2386

Hello, I have done some research and found LOOK to be a good disk-scheduling algorithm. However, for me to implement it I need to know the direction the head is moving in; I can't really see where that is in the file. Could you tell me if there is one? Also, when uploading the code, do we just upload the dsched.c file, or anything else?

Answer 21:

If you want to keep track of the direction the head is moving in, then you'll need to add that yourself, initialising and using it as appropriate. As far as uploading goes, yes, just the dsched.c file (or DSched.java for those who did it in Java). Uploading anything else will be penalised!
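For illustration, here is a minimal sketch (in C, to match dsched.c) of how such a direction variable might drive a LOOK-style choice. The function `look_pick` and its arguments are invented for this example and are not part of the supplied code: it picks the nearest pending request in the current direction, reversing the sweep when nothing lies ahead.

```c
#include <assert.h>
#include <stdlib.h>

/* Pick the index of the next request LOOK-style. 'tracks' holds the track
 * numbers of the pending requests, 'cur' is the current head position and
 * '*dir' is the direction of travel (+1 = towards higher tracks, -1 = towards
 * lower), which is updated in place when the sweep reverses. */
static int look_pick (int tracks[], int n, int cur, int *dir)
{
    for (int pass = 0; pass < 2; pass++) {
        int best = -1;
        for (int i = 0; i < n; i++) {
            if ((tracks[i] - cur) * (*dir) >= 0) {      /* lies ahead (or here) */
                if ((best < 0) || (abs (tracks[i] - cur) < abs (tracks[best] - cur))) {
                    best = i;
                }
            }
        }
        if (best >= 0) {
            return best;
        }
        *dir = -(*dir);     /* nothing ahead of the head: reverse the sweep */
    }
    return -1;              /* empty queue */
}
```

With the head at track 20 moving upwards and pending tracks {10, 50, 3}, this picks 50; once only {10, 3} remain, the sweep reverses and 10 is picked.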

Keywords: assess3-1213


Question 20 (2012):

Submission reference: IN2385

Does it matter whether, when comparing blocks, we compare the track numbers or the block numbers? I've tried both and it doesn't seem to make much difference at the moment.

Answer 20:

Given the way a block number maps to track, sector and head, no, it doesn't really make a difference (though there may be some subtle differences). However, if that mapping were changed elsewhere, then it might make a big difference. Hence it's better to call "block_to_track()" and use the track number it gives, rather than have the scheduling code rely on (assume) explicit knowledge of the block mapping.
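To see why raw block numbers happen to behave like track numbers, here is one plausible mapping (an assumption based on the geometry constants quoted in Question 12, not the actual simulator code; real schedulers should call the supplied "block_to_track()" instead):

```c
#include <assert.h>

#define NHEADS            128
#define NSECTORSPERTRACK   64

/* Hypothetical mapping: blocks numbered track-major, so each track holds
 * NHEADS * NSECTORSPERTRACK consecutive block numbers. Under such a mapping,
 * comparing raw block numbers orders requests by track too -- but only
 * block_to_track() stays correct if the mapping ever changes. */
static int block_to_track_guess (int blk)
{
    return blk / (NHEADS * NSECTORSPERTRACK);
}
```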

Keywords: assess3-1213


Question 19 (2012):

Submission reference: IN2384

For the C version of the disk scheduling work, can we make new arrays or functions? Also for wrap-around (% MAXREQUESTS) do we need to use it for each block read done so it constantly wraps around? Thanks.

Answer 19:

As far as adding new global variables and functions goes, yes, go ahead! The only real constraint is that modifications are limited to dsched.c (or DSched.java) alone. For wrapping around, it depends on how you modify the implementation. The default FIFO scheduler needs to use a circular buffer, because of the way FIFO works (it's a queue, so wrapping around the array with head and tail pointers saves the issue of moving everything around when something is removed xor added; or implementing as a linked-list). The implementation I posted in Question 17 (2012) (LIFO/stack scheduler) does not need this, but doesn't do a very good job either. However, the LIFO scheduler may be a more pleasant starting point, depending on comfort with circular buffers.
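As a concrete illustration of per-access wrapping: yes, apply the modulo to every computed index, not just once. A tiny helper (the name and buffer size here are illustrative, not from dsched.c):

```c
#include <assert.h>

#define MAXREQUESTS 8   /* illustrative size; the real constant is in dsched.c */

/* Turn "n entries past the tail" into a valid circular-buffer slot by
 * wrapping with '%' on each computed index. */
static int wrap_index (int tail, int n)
{
    return (tail + n) % MAXREQUESTS;
}
```

So with the tail at slot 6 in an 8-slot buffer, the third entry past it sits at slot 1, not slot 9.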

Keywords: assess3-1213


Question 18 (2012):

Submission reference: IN2383

Finished at time 301 with 13 pending queues
disk spent 0 idle, 0 seeking to track, 0 reading 0 blocks

Is that a good result or am I miles away?

Answer 18:

This is not good, unfortunately — there are 13 pending request queues, so something has gone wrong somewhere (i.e. requests are not being serviced). This could happen if your code neglects to call "highlevel_didread()" at the appropriate point inside "readcomplete()".

Keywords: assess3-1213


Question 17 (2012):

Submission reference: IN2382

I'm struggling to understand the circular buffer in readqueue; is there any simpler code?

Answer 17:

The FIFO implementation is pretty poor, but there is one (possibly worse) implementation: LIFO (last in, first out), commonly known as a stack. The implementation of this is simpler, but at the expense of being substantially worse. You can download examples of this here:

If you run these, you'll see they have substantially worse times than the FIFO, particularly for the "worst case" (mainly because the first request made by the system will be the last one serviced, so it waits a very long time indeed, as do many others).
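The core of a LIFO scheduler can be sketched as follows (a simplified illustration, not the downloadable code; the names are invented — in the real dsched.c the push would sit in the block-read function and the pop where the next request is dispatched):

```c
#include <assert.h>

#define MAXREQUESTS 64

/* Pending block numbers kept as a stack: push on arrival, pop the most
 * recently added when the disk is ready. No wrap-around logic is needed,
 * which is what makes LIFO simpler than the circular-buffer FIFO. */
static int pending[MAXREQUESTS];
static int pending_top = 0;

static void push_request (int blk)
{
    pending[pending_top++] = blk;
}

static int pop_request (void)
{
    return (pending_top > 0) ? pending[--pending_top] : -1;   /* -1: empty */
}
```

The simplicity comes at a price: the earliest request is only serviced once everything pushed after it has drained, which is exactly the "worst case" behaviour described above.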

Keywords: assess3-1213

Referrers: Question 19 (2012)


Question 16 (2012):

Submission reference: IN2381

I am just wondering if you could explain something to me. I was able to implement an SSTF algorithm successfully, using the following loop to iterate through the queue looking for the request closest to the current position:

    for (int i = readqueue_tail; i < readqueue_head; i++)			

Since then, I have been struggling to implement even the elevator method, and after looking on the anonymous Q&A forum I found that the following loop had been suggested for iterating through:

    for (int i = 0; i < readqueue_size; i++) {
        int idx = (readqueue_tail + i) % readqueue.length;
        ...
    }

Since implementing this loop, both the normal requests file and the requests-big file have seen their times improve. I am just curious as to why these loops produce such a large difference in performance.

Answer 16:

The answer is fairly straightforward: if you're using the default circular buffer implementation, then the first loop is broken. I.e. when the head pointer wraps around, head will be less than tail, so the loop won't execute at all (the condition is false). The second loop (from previous questions (see assess4-1112)) iterates over the queue size, but produces an appropriately wrapped array index. See Question 28 (2011) for a detailed description of what's going on here, and why the first loop is (must be) broken.
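The failure mode is easy to demonstrate in isolation (illustrative code, not from the assessment sources): with tail at 5 and a wrapped head at 2, the first loop visits nothing, while the wrapped loop correctly visits slots 5, 6, 7, 0 and 1.

```c
#include <assert.h>

#define QSIZE 8

/* The broken loop: once the head pointer has wrapped (head < tail), the
 * condition is false immediately and no queue entry is visited. */
static int visited_broken (int tail, int head)
{
    int count = 0;
    for (int i = tail; i < head; i++) {
        count++;
    }
    return count;
}

/* The correct loop: iterate over the number of queued entries and wrap
 * each index modulo the array length. */
static int visited_wrapped (int tail, int size)
{
    int count = 0;
    for (int i = 0; i < size; i++) {
        int idx = (tail + i) % QSIZE;
        (void)idx;      /* a real scheduler would examine readqueue[idx] here */
        count++;
    }
    return count;
}
```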

Keywords: assess3-1213


Question 15 (2012):

Submission reference: IN2380

How many sectors have we passed in the time it takes to change one track?

Answer 15:

Unknown would be a fair answer: the model of the underlying disk is not ideal, but there is some consideration of sector numbering (e.g. reading from increasing sector numbers on the same track is faster than reading from decreasing sector numbers on the same track, on the assumption that the disk can probably read from lots of sectors in a single sitting). All the code is in "disksim.c" (or Java equivalent) if you're really keen to find out!

Keywords: assess3-1213


Question 14 (2012):

Submission reference: IN2379

What are the SSTF results for requests.txt and requests-big.txt?

Answer 14:

Updated 06/03/2013: my stock implementation of SSTF was broken..! Here are the fixed results:

For requests.txt:

    finished at time 534 with 0 pending queues
    disk spent 36 idle, 377 seeking to track, 120 reading 97 blocks
    time for request in queue (min, max, avg): 0    , 279  , 59.2  +/- 71.6
    time for request in read (min, max, avg):  1    , 188  , 5.1  +/- 20.9
    time for request total (min, max, avg):    1    , 281  , 64.3  +/- 75.1

and for requests-big.txt:

    finished at time 7450 with 0 pending queues
    disk spent 36 idle, 5947 seeking to track, 1466 reading 1000 blocks
    time for request in queue (min, max, avg): 0    , 1475 , 177.2  +/- 312.6
    time for request in read (min, max, avg):  1    , 140  , 7.4  +/- 14.6
    time for request total (min, max, avg):    1    , 1476 , 184.7  +/- 313.1

The earlier (and incorrect) results for a broken SSTF:

For requests.txt:

    finished at time 540 with 0 pending queues
    disk spent 36 idle, 382 seeking to track, 121 reading 97 blocks
    time for request in queue (min, max, avg): 0    , 285  , 73.3  +/- 94.2
    time for request in read (min, max, avg):  1    , 188  , 5.2  +/- 21.0
    time for request total (min, max, avg):    1    , 287  , 78.5  +/- 96.3

and for requests-big.txt:

    finished at time 36526 with 0 pending queues
    disk spent 36 idle, 35029 seeking to track, 1460 reading 1000 blocks
    time for request in queue (min, max, avg): 0    , 35711, 1201.2  +/- 4776.5
    time for request in read (min, max, avg):  1    , 252  , 36.5  +/- 54.3
    time for request total (min, max, avg):    1    , 35712, 1237.7  +/- 4781.3

In terms of comparing with your own solutions (which should be better than SSTF!), the important numbers are the "time for request total" averages and standard deviations (the figures highlighted in red on the original page).

Keywords: assess3-1213


Question 13 (2012):

Submission reference: IN2377

I just posted an anonymous question about using two different variations of C-LOOK. The results have now changed:

Algorithm 1 is better than Algorithm 2 for requests.txt (~530 vs ~1300)
Algorithm 2 is better than Algorithm 1 for requests-big.txt (~36000 vs ~8900)

Which one will get me better marks?

Answer 13:

The short answer is: it's hard to say. The total time taken isn't necessarily a good metric; what's more important is the average time per request and its standard deviation. If you want a quick metric for "good" (as in, what I'll use for marking/scaling), then add together the "time for request total" average and its standard deviation. The requests-big.txt file is probably more representative of typical workloads, but your solutions will be tested on a variety of request files. The "requests.txt" file (small) is largely for the purposes of having something to debug with, without producing a multitude of screenfuls of information.
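As a worked example of that metric, using the fixed SSTF figures from Question 14 (lower is better):

```c
#include <assert.h>
#include <math.h>

/* The suggested quick "goodness" metric: the "time for request total"
 * average plus its standard deviation. The fixed SSTF results in
 * Question 14 give 64.3 + 75.1 = 139.4 for requests.txt and
 * 184.7 + 313.1 = 497.8 for requests-big.txt. */
static double goodness (double avg, double stddev)
{
    return avg + stddev;
}
```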

Keywords: assess3-1213


Question 12 (2012):

Submission reference: IN2378

From the sources:

	/* defines (approximately) a 5 GB disk */
	public static final int NHEADS = 128;
	public static final int NSECTORSPERTRACK = 64;
	public static final int NTRACKS = 1250;

There are 128 heads? I thought there was one or two per disk, could you please clarify what this is representing?

Answer 12:

Exactly that: the number of heads per disk. Obviously 128 is not practical: it'd imply a disk with 64 or 65 platters, and they don't often come that tall. The reason is to do with historical limitations, which are still of concern these days. Basically, the way you address a disk through the PC BIOS is given in terms of sector, head and cylinder/track, with a fixed number of bits for each. As media density increased, it drove up the number of cylinders (or tracks) on the disk, but the number of heads and sectors remained mostly unchanged. However, the fixed number of bits available for addressing cylinders presents a problem when you have more than 16384 cylinders (or tracks). Thus the solution was to fake the number of heads, as there were more bits available to specify the head number than you'd reasonably ever have in a physical disk.

It's not uncommon to see PATA disks that report impossibly high (for the physical dimensions) numbers of heads, but a lower number of cylinders. In short: coping with backwards compatibility. Most disks these days just deal in terms of logical block numbers, and where these actually map to on the disk is largely up to the drive electronics (which now incorporate request queues and caches to, mostly, hide latency). In terms of the assessment, you might imagine you're programming the on-board drive electronics or similar, which really does need to be concerned with tracks/heads/etc., since that is how the disk is physically arranged (though not with as many heads!).
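As a sanity check on the "approximately 5 GB" comment, the quoted geometry works out as follows (assuming 512-byte blocks, which is an assumption here; the actual block size is defined in the simulator source):

```c
#include <assert.h>

#define NHEADS            128
#define NSECTORSPERTRACK   64
#define NTRACKS          1250

/* Total capacity = tracks * heads * sectors-per-track * block size.
 * With 512-byte blocks: 1250 * 128 * 64 = 10,240,000 blocks, i.e.
 * 5,242,880,000 bytes -- about 5.24 GB (or 4.88 GiB), matching the
 * "approximately 5 GB" comment in the source. */
static long long disk_bytes (int blocksize)
{
    return (long long)NTRACKS * NHEADS * NSECTORSPERTRACK * blocksize;
}
```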

Keywords: assess3-1213 , disk-scheduling


Question 11 (2012):

Submission reference: IN2375

Hello, I'm trying to implement SSTF in the C code; I gather I would have to arrange the values accordingly in "readqueue". However, as an electronic engineering student, I'm not great at programming. I don't understand which part of the C code actually reads the value of the block: is it "blk", "req", or "readqueue"'s head?

Sorry for asking such an easy question but I'm utterly lost.

Answer 11:

It sounds like you're struggling to understand the operation of the system (not surprising, since there is some amount of [simulated] concurrency involved). There are two methods/functions inside dsched.c that are of concern. The first, "dsched_blockread (int blk, brequest_t *req)", is where new requests arrive. All the default code there does is add the request to the circular buffer in "readqueue" (first couple of lines), then increment the head pointer to record the fact that there's one more thing present (last couple of lines).

The second function, "dsched_readcomplete (int blk, brequest_t *req)", actually does two things. It is called with the arguments 'blk' and 'req', which represent the block and request that have just finished being read (i.e. completed), so the first thing this function does is return that block to the high-level by calling "highlevel_didread (blk, req)". The function is also called if the disk is idle (and ready to accept a new request), in which case "blk" will be negative (hence the first "if()" in here). In answer to your question, though, the particular values of "blk" and "req" are not of interest in terms of what should be read from the disk next, as they belong to the read that has just completed. The second part of the method, which accesses "readqueue", is the part that sends the next request to the disk, taken from the tail of the queue (i.e. this is what gives the FIFO behaviour). You could implement SSTF fairly straightforwardly by searching through the queue (from tail to head) for the closest request and sending that as the next request, but you'd then need to tidy up the request queue so as not to leave gaps in it.

The slides linked on the assessment description try to describe some of this in pictures, that you may find easier to assimilate than my text above!
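To make the SSTF suggestion concrete, here is a sketch of the selection step over a circular queue (the queue size, the 'tracks' array and the function name are all illustrative; real code would apply "block_to_track()" to each queued block and use the actual queue length):

```c
#include <assert.h>
#include <stdlib.h>

#define QSIZE 8   /* stands in for the real queue array length */

/* Scan the queued requests from tail towards head (wrapping the index) and
 * return the slot whose track is nearest the current head position, or -1
 * if the queue is empty. */
static int sstf_pick (int tracks[], int tail, int size, int cur_track)
{
    int best = -1;

    for (int i = 0; i < size; i++) {
        int idx = (tail + i) % QSIZE;
        if ((best < 0) || (abs (tracks[idx] - cur_track) < abs (tracks[best] - cur_track))) {
            best = idx;
        }
    }
    return best;
}
```

Having picked an entry from the middle of the queue, you would then need to close the resulting gap (for instance by moving the tail entry into the freed slot), as noted above.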

Keywords: assess3-1213


Question 10 (2012):

Submission reference: IN2374

disksim.c doesn't work on Windows. Not a question, but I wanted to share:

[snip unified diff]

Answer 10:

Thank-you for that :-). If we run the assessment again next year I'll factor in your diff, as it's a bit prettier than mine! (I guess you'd started or finished this before I sent around the Visual-Studio compatible code).

Keywords: assess3-1213


Question 9 (2012):

Submission reference: IN2373

The "Points to Note" mentions that we cannot use the Java collection classes (LinkedList etc.). My question is: can we make our own linked-list class?

Answer 9:

Yes, that's absolutely fine (for instance, by putting additional field(s) in the inner class, or creating a new wrapper inner-class).

Keywords: assess3-1213


Question 8 (2012):

Submission reference: IN2372

For the Java version, are we allowed to override some basic methods like compareTo and equals? Or is it just the collections that are off limits?

Answer 8:

Nope, overriding basic methods is not allowed :-). Of course, if your code works flawlessly and I don't ever look at it, you'll probably get away with it (but that's something else to add to my list of things to look out for..!). The reason is the same as with collections: an OS implementor will not have access to such things, unless they construct them specifically. So writing a method to compare two objects/structures passed as parameters is fine, but overriding an existing method to do this is OO-specific (and thus not generally available).

Keywords: assess3-1213


Question 7 (2012):

Submission reference: IN2371

For assessment 3 (disk scheduler), I have created a bubbleSort() method, which sorts the list into ascending order in terms of its track number. I have then implemented bubbleSort() into the 'blockread' method.

The outputs I get are faster than the original method, but still not too quick (around half the time of the original). I have some questions:

  1. Is around half the time of the original faster than SSTF, and will it score me more marks?
  2. Do I need to use bubbleSort() to implement the elevator algorithm (and where would I go about starting this)?
  3. Is there any other way I can alter the bubble sort algorithm to make it run more efficiently?

Answer 7:

The actual results for SSTF (for the default request file) are on the assessment description, so you can compare directly with that. As is pointed out there and elsewhere (see disk-scheduling, assess3-1213 and assess4-1112), "faster" is not a straightforward measure in this case.

Regarding the other questions: bubblesort is not a good sorting algorithm if efficiency is any concern (though it may be fine where locality is a constraint). However, it doesn't matter what sorting algorithm you use (if any), because this does not affect the measured times. If you want a better sort, implement a recursive quicksort (reasonably easy from a good description/example). That said, you don't actually need to sort the contents for elevator, though it may simplify things. The question you need to ask is whether you do [part of] the elevator work when a block is requested by the high-level (in 'blockread()') or when dispatching a new request to the disk (which happens in 'readcomplete()'). Both approaches are acceptable for the purposes of this assessment; which one you choose will probably depend on which one you're more comfortable understanding.

Keywords: assess3-1213 , disk-scheduling


Question 6 (2012):

Submission reference: IN2370

In assignment 3 for CO527: can we use Vectors?

Answer 6:

No; Vectors fall under collection classes (or equivalent). It's not the sort of thing that'd be available to an operating system programmer.

Keywords: assess3-1213


Question 4 (2012):

Submission reference: IN2367

For assessment three you state that we will be marked (mostly) on the speed the algorithm works at. Can you provide any rough benchmarks of what results we should be expecting? Or the results of the fastest student last year, so we have a rough mark to aim for?

Will there be additional tests run (other than requests and requests-big)?

Will there be consideration when an algorithm returns slower results than another in a few test cases, but with far lower variance, meaning that in the general case it's faster?

Answer 4:

The top mark (100%) last year was given to a solution that produced the following results. For requests.txt:

    finished at time 526 with 0 pending queues
    disk spent 36 idle, 377 seeking to track, 112 reading 97 blocks
    time for request in queue (min, max, avg): 0    , 276  , 58.0  +/- 70.6
    time for request in read (min, max, avg):  1    , 188  , 5.0  +/- 21.0
    time for request total (min, max, avg):    1    , 277  , 63.0  +/- 74.0

And for "requests-big.txt":

    finished at time 6573 with 0 pending queues
    disk spent 36 idle, 5098 seeking to track, 1438 reading 1000 blocks
    time for request in queue (min, max, avg): 0    , 823  , 218.8  +/- 234.3
    time for request in read (min, max, avg):  1    , 127  , 6.5  +/- 11.7
    time for request total (min, max, avg):    1    , 826  , 225.3  +/- 234.6

However, the top mark this year will go to the 'fastest' solution; whether that is better or worse than last year's best remains to be seen!

As for additional tests, yes, there will be (it says something about this in the assessment description). However, you can write your own easily enough! (or auto-generate them)

For higher-average and lower-variance solutions, they'll come out similarly to lower-average and higher-variance; though the average has a naturally higher impact. In practice though, better implementations give better results regardless of the particular test; the final score is calculated on the relative positioning of your solution (amongst other submissions) for a variety of datasets, equally weighted for scoring.

Keywords: assess3-1213 , disk-scheduling


Maintained by Fred Barnes, last modified Wed May 25 15:07:20 2016