CO527 Anonymous Questions and Answers Keyword Index
This page provides a keyword index to questions and answers. Clicking on a keyword will take you to a page containing all questions and answers for that keyword, grouped by year.
To submit a question, use the anonymous questions page. You may find the keyword index and/or top-level index useful for locating past questions and answers.
Keyword reference for disk-scheduling
2015
Submission reference: IN5050
I've based my C-Scan implementation off the LIFO code, and it appears to be working well. However, I have to make multiple jumps to high extreme values. I think this is happening because not all the data has been put into the stack yet. Is this a fair assumption, or should my code find a way to wait for all the data?
That's probably expected. It will depend, of course, on the request file fed to the program. You can look at the two provided, or (even better!) create your own to test this hypothesis. As for waiting for requests, that's a hard one in the sense that it's usually impossible to tell what the best strategy will be: deal with the request(s) you already have, with a possibly large overhead in moving the read/write heads, or wait in the hope some closer requests come in. Of course, by the time you've waited, and no new requests have arrived, it would have been better to have made the other choice. But such is the unpredictable nature (non-determinacy) of this sort of thing. Whilst it's somewhat outside the scope of the assessment, a real OS might employ some form of heuristic, based on past behaviour or how many requests are outstanding. In the context of the assessment, however, don't try and wait — if you have a request to service, service it. I don't think enough of the simulator framework is exposed (in public methods/etc.) to support it.
Keywords: assess4-1516 , disk-scheduling
Submission reference: IN5021
Let's say there's track 100 on head 1, and track 200 on head 2.
If you submitted track 100 to be read, and then submitted track 200 (located on head 2), would this add a lot more to the time than if track 200 were on the same head as track 100?
There seems to be some terminology confusion here! Heads are the electronic read/write gubbins on the end of the actuator arm that moves laterally across the disk. Thus, for any given track [0 .. N-1] there are heads [0 .. H-1] (assuming H is the number of heads and N is the number of tracks). Thus you have physical locations such as track 1 head 1, track 1 head 2, track 2 head 1, track 2 head 2, etc. (putting sectors aside, which are the locations around the disk). Different tracks are not on different heads. Look at the slides I made for a rough idea of how tracks (aka cylinders), heads and sectors are related.
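If it helps to see the relationship in code, here is a minimal Java sketch of one plausible track-major layout, using the NHEADS and NSECTORSPERTRACK constants from the simulator source. The simulator's own block_to_track()/block_to_sector() functions may carve the block number up differently, so treat this purely as an illustration of the geometry, not as the actual mapping:

```java
// Illustrative only: assumes a track-major layout where consecutive blocks
// fill a whole track (all heads) before moving to the next track.
public class BlockGeom {
    static final int NHEADS = 128;
    static final int NSECTORSPERTRACK = 64;

    static int trackOf(int block) {
        return block / (NHEADS * NSECTORSPERTRACK);
    }

    static int headOf(int block) {
        return (block / NSECTORSPERTRACK) % NHEADS;
    }

    static int sectorOf(int block) {
        return block % NSECTORSPERTRACK;
    }

    public static void main(String[] args) {
        // 128 heads * 64 sectors = 8192 blocks per track, so block 8192
        // is the first block of track 1 (head 0, sector 0)
        System.out.println(trackOf(8192) + " " + headOf(8192) + " " + sectorOf(8192));
    }
}
```

The point to take away is that head number is a component *within* a track's worth of blocks, not a separate region of the disk.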
Keywords: disk-scheduling , assess4-1516
Submission reference: IN5015
Hi Fred, in the disk scheduler are we allowed to remove the requests which have been read from the read-queue, or should these stay there even after being read, which would mean the largest size of our read-queue would get smaller each time a set of requests is made?
Also, am I correct that we are aiming for the smallest possible score? What would be considered a good score to get (70%+)? I am currently getting around 1300.
Not sure I understand the first bit entirely: yes, once a request has been serviced, you remove it from the queue — that's what the existing LIFO and FIFO code do (okay, they don't scrub out the entry in the array/queue, but effectively do as it'll be overwritten with fresh requests coming in). But if you didn't do that, the request queue would just get larger and larger, not smaller!
For score comparison, feed it to the automated checker. You'd be aiming for "better than SSTF" or similar wording in all tests for a good score (where good is at least 65%+ probably). But in value terms, yes, you're trying to minimise the score value — as it represents time, the smaller the better.
Keywords: assess4-1516 , disk-scheduling
Submission reference: IN5010
Hi, are we allowed to implement our own data structure and not stick with the array, i.e. a linked-list or a hash table?
That's fine, but you must not use any of the Java API stuff. E.g. if you want a linked-list, do it yourself, not with the "LinkedList<Blah>" generics stuff. Similar with a hash-table, though I can't see any advantage of that in this case (and it'd be painful to implement in Java regardless).
Keywords: disk-scheduling , assess4-1516
Submission reference: IN5005
For assessment 4, are we explicitly bound to implementing a pre-existing algorithm (C-scan, for example), or are we allowed to invent our own?
Basically, is the aim of the assessment to implement something specific, or do we pass as long as we improve the efficiency of the provided code?
The criteria for marking are clearly described in the assessment description. Any algorithm, provided it improves things, will pass. For a decent mark, however, you should be aiming to improve on SSTF at least (as that has some caveats that are explained in the slide-deck, and elsewhere).
Keywords: assess4-1516 , disk-scheduling
Submission reference: IN4997
Can we use "Math.abs()" in the disk scheduler assessment?
Yes, that's fine (but only on the grounds that it's trivial and can be written inline). E.g.:
int av = Math.abs (v);
Could just as easily (and probably more efficiently) be written:
int av = (v < 0) ? -v : v;
Keywords: disk-scheduling , assess4-1516
Referrers: Question 45 (2015)
Submission reference: IN4984
Right after a rough attempt at implementing the C-Look algorithm, I got some errors out that I can't understand. Could you explain them to me?
    highlevel_didread(): read queue 0 got unexpected block 1
    highlevel_didread(): read queue 0 got unexpected block 1
    highlevel_didread(): read queue 0 got unexpected block 1025
    highlevel_didread(): read queue 11 got unexpected block 2218500
    [snip]
    finished at time 2017 with 11 pending queues
    disk spent 31 idle, 1588 seeking to track, 398 reading 76 blocks
    *** had 35 logical errors!
    time for request in queue (min, max, avg): 0 , 1780 , 709.9 +/- 616.1
    time for request in read (min, max, avg): 4 , 180 , 34.1 +/- 46.0
    time for request total (min, max, avg): 4 , 1818 , 744.0 +/- 606.4
    Score value: 1350
These errors mean some high-level read-queue (0 and 11 shown here) got back a block it hadn't requested. Hence, unexpected block error. This means your code is returning a block that hadn't been requested (or, possibly, had been requested in the past and already returned, so this is a duplicate).
Keywords: disk-scheduling
Submission reference: IN4957
Looking at the disk scheduling assessment; am I right in thinking that we only have to edit the readcomplete method of the DSched class (and add others if we need them)?
There are many possible ways of implementing this — you can probably get away with just editing this method, but I'd recommend changes elsewhere if you're starting from the FIFO version of the code (else you've got all this circular queue stuff in there that's redundant).
Keywords: disk-scheduling , assess4-1516
2014
Submission reference: IN3734
I'm still trying to get SSTF working; do you have any advice on how to tackle an error such as "read queue 0 already done at time 489 (block 0)"?
This is typical of reading and returning a block that had been read previously. I.e. check that when you dispatch a read request to the disk, it is that one which is removed from the request queue/array. I've seen various examples of code that fails because of this (with different errors depending on the circumstances). As a general point though, if you're using the circular queue code, get rid of it and use the LIFO (stack) code as a basis — it's a lot simpler!
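To illustrate the "remove exactly the request you dispatched" point: assuming requests live in the first size entries of a plain (non-circular) array, a simple shift-down like the following is enough. The names here are illustrative, not the actual DSched fields:

```java
public class QueueRemove {
    // Remove the entry at idx from the first `size` elements of queue,
    // shifting the later entries down one place; returns the new size.
    // This guarantees the dispatched request really leaves the queue,
    // rather than some other entry (a common source of "already done"
    // and "unexpected block" errors).
    static int removeAt(int[] queue, int size, int idx) {
        for (int i = idx; i < size - 1; i++) {
            queue[i] = queue[i + 1];
        }
        return size - 1;
    }

    public static void main(String[] args) {
        int[] q = { 5, 9, 2, 7 };
        int size = removeAt(q, 4, 1);   // dispatch request '9', then remove it
        System.out.println(size + " " + q[0] + " " + q[1] + " " + q[2]);
    }
}
```

The shift is O(n), but for queues of this size that cost is irrelevant next to the simulated seek times.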
Keywords: disk-scheduling
Submission reference: IN3733
I've done the C-scan method and get around 2400 for requests-big.txt, which is good; however, I get ~780 for requests.txt. Does that mean I've done something wrong, or is it just worse for smaller sets of data?
Neither. Performance ultimately depends on the specific requests, not how many of them there are. But that value for the smaller requests file isn't overly bad (my implementation of C-scan gives around 670). My 'scan' implementation (non-circular) gives more like 800 for the small requests file, around 2400 still for the big requests file.
Keywords: disk-scheduling
Submission reference: IN3729
It is mentioned we cannot use foreach loops, but DSched.java has a for loop that populates the array with ReadReq instances. Are we allowed to use for loops?
Yes. See also Question 41 (2014).
Keywords: disk-scheduling
Submission reference: IN3728
My sorting algorithm doesn't quite work; it uses a nested for loop. I understand how to iterate over a circular queue with one for loop, but it would appear you can't just do the exact same thing for a nested for loop. Any tips?
Yes, don't use a circular queue! The circular queue's only use is for FIFO; if you're implementing something different (or at the very least sorting its contents), the circular nature of it just makes things hugely more complex. For most things, starting from the LIFO code example makes more sense.
Keywords: disk-scheduling
Submission reference: IN3725
Can we make use of OO by making our own inner classes inside of DSched, and if so, how complex can they be?
I'm not sure how I'm going to implement my preferred algorithm, but it might be helpful to make a data structure, like linked lists or tree maps, only in my own code, rather than relying on the Java library.
You can make use of inner-classes as data-structures, but no OO-specific features please! Real OSs obviously do have things like linked-lists, trees, hash-maps (and a whole lot of more complex stuff), which you're perfectly fine to use, provided you implement them yourself and keep it simple (no inheritance/overriding/polymorphism/interfaces/exceptions/...). See also the answers to Question 42 (2014) and Question 43 (2014).
Keywords: disk-scheduling
Submission reference: IN3724
I have implemented my algorithm using a modified LinkedList that I have written myself as a private class inside DSched.java. Am I okay to submit this, since technically it isn't a Java collection class?
Yes, that's absolutely fine, provided you are only using it in the basic (data-structure) sense. As mentioned in Question 42 (2014), no implementing 'Iterator' interfaces, exceptions, etc.! (and I'm not bothered if you want to make all fields public and access them directly, as these are imitating data-structures and, given the simplicity, filling the code with accessors/mutators is overkill).
Keywords: disk-scheduling
Referrers: Question 44 (2014)
Submission reference: IN3723
I am planning to implement C-SCAN, but am hitting some issues.
Are we allowed to write private classes, like ReadReq, inside our DSched class? Also, how can I find out what position the disk head is in? Should I assume it starts at 0?
Writing private classes for the purposes of creating data-structures is fine, but these should be no more complex than the level of "ReadReq". I.e. no inheritance, overriding, templates/generics, etc. These things are very much high-level OO features, and not something the typical OS programmer would have to use (nor, for efficiency, want to).
As far as assuming the disk heads always start on track zero when the simulation (disk) starts, that's a safe assumption (and realistic as most disks probably put the head here after spin-up and self-tests).
Keywords: disk-scheduling
Referrers: Question 43 (2014) , Question 44 (2014)
Submission reference: IN3722
Are we allowed to use any of the following: if statements; if else statements; while loops; foreach loops?
Yes; yes; yes; no. The first three are things that work on primitive types (booleans in this case). The last typically works on collection classes. Regular for-loops, on the other hand, just work on primitive types (boolean again, as the test for executing the loop body), so are perfectly fine. foreach and for loops are very different things.
Keywords: disk-scheduling
Referrers: Question 48 (2014)
Submission reference: IN3718
I keep on getting unexpected blocks while trying to implement SSTF. Is there any typical problem I should look out for?
This suggests something is wrong somewhere — I'd recommend creating some simple test-case data and/or putting in a bunch of debugging and/or running with "-v" (verbose). The specific error ("unexpected block") arises because you're telling the high-level code ("highlevel_didread") about a block it didn't ask for. In practical terms, the blk or req passed to it is defunct; there may be various reasons for this, but that's where debugging and whatnot can help.
Keywords: disk-scheduling
Submission reference: IN3717
I'm trying to implement SSTF. I think I've pretty much done it, but I don't really understand how to keep the most recently read block. At least, I'm guessing that's what's going wrong.
When read_complete(blk, BRequest) is called, is blk the block the head is currently at? As far as I can tell I've implemented each step in the slides about the problem otherwise.
If "blk" is >= 0 then yes, it's the block the head is currently on (i.e. the requested block has been read, hence read complete). If "blk" is < 0 then it's as stated in the comments in the code (i.e. idle and able to accept a new request). This means you can't rely on "blk" being "the most recently read block" universally, only when it is >= 0.
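In code, the distinction might look like this sketch. The field and method names are invented for illustration; only the >= 0 versus < 0 split reflects the actual interface:

```java
public class HeadTracker {
    int lastBlock = 0;   // best estimate of the head position; disk starts at block 0

    // Called when a read completes (blk >= 0), or when the disk is idle
    // and able to accept a new request (blk < 0).
    void readComplete(int blk) {
        if (blk >= 0) {
            lastBlock = blk;   // the head really is over this block now
        }
        // when blk < 0, lastBlock deliberately keeps its previous value:
        // an idle callback must not clobber the remembered head position
    }

    public static void main(String[] args) {
        HeadTracker t = new HeadTracker();
        t.readComplete(42);    // a read of block 42 completed
        t.readComplete(-1);    // idle callback: position estimate unchanged
        System.out.println(t.lastBlock);
    }
}
```

Keeping your own copy of the last completed block, updated only when blk >= 0, is what lets a shortest-seek or elevator scheduler measure distances correctly.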
Keywords: disk-scheduling
Submission reference: IN3731
Are we allowed to use "Math.abs"?
No. But you are (obviously) allowed to use absolute value functionality, you just have to implement it yourself.
Keywords: disk-scheduling
Submission reference: IN3716
Can I implement the C-Look algorithm?
You can implement whatever algorithm you choose! (the proviso being that it works and hopefully performs better than FIFO and/or SSTF).
Keywords: disk-scheduling
Submission reference: IN3714
I have no idea where to start with the disk scheduler. I know I want to implement C-scan, but that's about it.
First step, understand the code that is there already (i.e. the FIFO implementation). The slides I made should help somewhat here, but you may need to refer to textbooks and the like as well (depending on what you do or don't understand).
Keywords: disk-scheduling
Submission reference: IN3709
How many marks is it possible to gain by implementing SSTF?
This depends largely on the submissions overall, but somewhere in the range 55% to 65%.
Keywords: disk-scheduling
Submission reference: IN3707
For the final assessment, are we allowed to use functions from Java.Math?
No.
Keywords: disk-scheduling
2012
Submission reference: IN2378
From the sources:
    /* defines (approximately) a 5 GB disk */
    public static final int NHEADS = 128;
    public static final int NSECTORSPERTRACK = 64;
    public static final int NTRACKS = 1250;
There are 128 heads? I thought there were one or two per disk; could you please clarify what this is representing?
Exactly that: the number of heads per disk. Obviously 128 is not practical: it'd imply a disk with 64 or 65 platters, and they don't often come that tall. The reason is to do with historical limitations, but which are still of concern these days. Basically the way you address a disk through the PC BIOS is given in terms of sector, head and cylinder/track. There are a fixed number of bits for each. As media density increased, it drove up the number of cylinders (or tracks) on the disk, but the number of heads and sectors remained mostly unchanged. However, the fixed number of bits available in the BIOS for addressing cylinders presents a problem when you have more than 16384 cylinders (or tracks). Thus the solution was to fake the number of heads, as there were more bits available to specify the head number than you'd reasonably ever have in a physical disk.
It's not uncommon to see PATA disks that have impossibly high (for the physical dimensions) numbers of heads, but a lower number of cylinders. In short: coping with backwards compatibility. Most disks these days just deal in terms of logical block numbers, and where these are actually mapped to on the disk is largely up to the drive electronics (which now incorporate request queues and caches to, mostly, hide latency). In terms of the assessment, you might imagine you're programming the on-board drive electronics or similar, which really does need to be concerned with tracks/heads/etc. since that is how the disk is physically arranged (though not with as many heads!).
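For reference, the standard translation from a (cylinder, head, sector) triple to a logical block address, with BIOS-style 1-based sector numbers, is lba = (c * H + h) * S + (s - 1). A tiny sketch:

```java
public class Chs {
    // Standard CHS -> LBA translation. H = heads per cylinder,
    // S = sectors per track; BIOS sector numbers start at 1, hence (s - 1).
    static long toLba(long c, long h, long s, long heads, long sectorsPerTrack) {
        return (c * heads + h) * sectorsPerTrack + (s - 1);
    }

    public static void main(String[] args) {
        // cylinder 1, head 0, sector 1 on a 16-head, 63-sector geometry:
        // exactly one full cylinder (16 * 63 = 1008 sectors) into the disk
        System.out.println(toLba(1, 0, 1, 16, 63));
    }
}
```

You can see from the formula why faking a larger head count works: inflating H stretches the addressable range without needing more cylinder bits.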
Keywords: assess3-1213 , disk-scheduling
Submission reference: IN2371
For assessment 3 (disk scheduler), I have created a bubbleSort() method, which sorts the list into ascending order in terms of its track number. I have then implemented bubbleSort() into the 'blockread' method.
The outputs I get are faster than the original method, but still not too quick (around half the time of the original). I have some questions:
The actual results for SSTF (for the default request file) are on the assessment description, so you can compare directly with that. As is pointed out there and elsewhere (see disk-scheduling, assess3-1213 and assess4-1112), "faster" is not a straightforward measure in this case.
Regarding the other questions, bubblesort is not a good sorting algorithm, if efficiency is any concern (but may be good if locality were a constraint). However, it doesn't matter what sorting algorithm you use (if any) because this does not affect the measured times. If you want a better sort, implement a recursive quicksort (reasonably easy from a good description/example). However, you don't actually need to sort the contents for elevator, though it may simplify things. The question you need to ask is whether you do [part of] the elevator work when a block is requested by the high-level (in 'blockread()') or when dispatching a new request to the disk (which happens in 'readcomplete()'). Both approaches are acceptable for the purposes of this assessment; which one you choose will probably depend on which one you're more comfortable understanding.
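As one example of doing the elevator work at dispatch time, with no sorting at all, the scheduler can simply scan the outstanding requests for the lowest track at or above the current one, wrapping to the lowest track overall once past the top (a C-scan style pick; all names here invented for illustration):

```java
public class CScanPick {
    // Return the index of the request to dispatch next: the lowest track
    // at or above `current`, or (when nothing lies above) the lowest track
    // overall, i.e. the C-scan wrap-around.
    static int pick(int[] tracks, int size, int current) {
        int next = -1, lowest = -1;
        for (int i = 0; i < size; i++) {
            if (lowest < 0 || tracks[i] < tracks[lowest]) {
                lowest = i;
            }
            if (tracks[i] >= current && (next < 0 || tracks[i] < tracks[next])) {
                next = i;
            }
        }
        return (next >= 0) ? next : lowest;
    }

    public static void main(String[] args) {
        int[] tracks = { 50, 10, 80 };
        System.out.println(pick(tracks, 3, 60));   // heading up from 60: track 80
        System.out.println(pick(tracks, 3, 90));   // nothing above 90: wrap to 10
    }
}
```

A single O(n) scan per dispatch like this sidesteps the sorting question entirely, which is why the choice of sorting algorithm has no effect on the measured times.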
Keywords: assess3-1213 , disk-scheduling
Submission reference: IN2367
For assessment three you state that we will be marked (mostly) on the speed the algorithm works at. Can you provide any rough benchmarks of what results we should be expecting? Or the results of the fastest student last year, so we have a rough mark to aim for?
Will there be additional tests run (other than requests and requests-big)?
Will there be consideration when an algorithm may return slower results than another in a few test cases, but the variance is far lower, meaning that in the general case it's faster?
The top mark (100%) last year was given to a solution that produced the following results. For requests.txt:
    finished at time 526 with 0 pending queues
    disk spent 36 idle, 377 seeking to track, 112 reading 97 blocks
    time for request in queue (min, max, avg): 0 , 276 , 58.0 +/- 70.6
    time for request in read (min, max, avg): 1 , 188 , 5.0 +/- 21.0
    time for request total (min, max, avg): 1 , 277 , 63.0 +/- 74.0
And for "requests-big.txt":
    finished at time 6573 with 0 pending queues
    disk spent 36 idle, 5098 seeking to track, 1438 reading 1000 blocks
    time for request in queue (min, max, avg): 0 , 823 , 218.8 +/- 234.3
    time for request in read (min, max, avg): 1 , 127 , 6.5 +/- 11.7
    time for request total (min, max, avg): 1 , 826 , 225.3 +/- 234.6
However, the top mark this year will be the 'fastest' solution, whether or not that is better or worse than the best last year remains to be seen!
As for additional tests, yes, there will be (it says something about this in the assessment description). However, you can write your own easily enough! (or auto-generate them)
For higher-average and lower-variance solutions, they'll come out similarly to lower-average and higher-variance; though the average has a naturally higher impact. In practice though, better implementations give better results regardless of the particular test; the final score is calculated on the relative positioning of your solution (amongst other submissions) for a variety of datasets, equally weighted for scoring.
Keywords: assess3-1213 , disk-scheduling
2011
Submission reference: IN2137
I'll try to explain this clearly but it may be difficult.
The idea behind my algorithm was to have two separate request lists; whilst one is being operated on (as in, its requests being found/fulfilled) the other list fills up, and then the code swaps over, simply using a boolean switch. However, in readcomplete() I have no idea how to search through the entire list at once. I thought I'd be able to do something like:
    while (queue size > 0) {
        readblock()
        didread()    // this is on the block just read
        // then alter the tail and size accordingly
    }
However this throws up loads of errors of all kinds; bad hashes, busy drive etc. Do I have the right idea, or is just a load of rubbish? I've got a basic improvement already submitted so this is just for extra marks really.
The idea you have is quite sensible, but you cannot implement it quite like this. Specifically, for each call of readcomplete() you can only dispatch at most one request. Dispatching many, as your loop will do, will result in the "drive busy" style errors that you are seeing. This is one of the mild complexities of OS programming — you can't always have the logic for one thing all in one place; sometimes it has to be spread about, both spatially and temporally. That said, having two queues and switching between them shouldn't be impossible!
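The "at most one dispatch per call" shape, combined with the two-queue swap, might be sketched like this. All the names here are invented and the real DSched callback signatures differ; the dispatched array simply stands in for requests actually sent to the disk:

```java
public class TwoQueues {
    int[][] queues = new int[2][64];
    int[] sizes = new int[2];
    int active = 0;                  // queue currently being drained
    int[] dispatched = new int[64];  // stand-in for requests sent to the disk
    int ndispatched = 0;

    // Incoming requests land on the *inactive* queue, so the active
    // one can be drained in peace.
    void add(int blk) {
        int q = 1 - active;
        queues[q][sizes[q]++] = blk;
    }

    // Called once per completed read: issue AT MOST one new request.
    void readComplete() {
        if (sizes[active] == 0) {
            active = 1 - active;     // current queue drained: swap over
        }
        if (sizes[active] > 0) {
            dispatched[ndispatched++] = queues[active][--sizes[active]];
        }
    }

    public static void main(String[] args) {
        TwoQueues t = new TwoQueues();
        t.add(5);
        t.add(7);
        t.readComplete();            // exactly one dispatch per callback
        t.readComplete();
        System.out.println(t.ndispatched + " " + t.dispatched[0] + " " + t.dispatched[1]);
    }
}
```

Note the loop from the question has become two separate calls: the "search the whole list" logic is spread across successive readComplete() invocations, which is exactly the spatial/temporal spreading the answer describes.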
Keywords: assess4-1112 , disk-scheduling
Submission reference: IN2130
I have an implementation of C-LOOK that isn't quite working right. It finishes with six pending queues and a large number still waiting to be read. It works by using a third worker variable. Firstly, the code checks whether this is equal to the head; if it is, it sets it to be the same as the tail, as the end of the disk will have been reached. Regardless of this, it then accesses the block at the location of the worker variable, and then moves it along in the array by 1. Any suggestions as to what's wrong?
Logically what you're doing sounds sort of right; the problem probably relates to not scheduling the next block to read whenever one is available -- check each path through the code and make sure that if there are more requests to service (i.e. readqueue_size is greater than zero) then one of them is dispatched. Assuming you're sorting the requests too, I'd recommend getting rid of the circular buffer mechanism and just using array elements 0 through (readqueue_size - 1). That would simplify things a bit. The existing implementation removes the oldest request in the queue as it is dispatched; you should make sure that something in your code is dealing with completed requests, given that you're using a different "worker" variable to track the last one dispatched (whereas the existing implementation uses readqueue_tail for this purpose).
Keywords: assess4-1112 , disk-scheduling
Referrers: Question 24 (2011) , Question 25 (2011)
Submission reference: IN2128
Looking at Question 16 (2011) ...
Your second suggestion is basically SCAN (or one-way elevator), but it should work. You can improve though by alternating the SCAN direction (which makes it C-SCAN or a proper elevator algorithm).
It is my understanding from research that C-SCAN moves in one direction (moving back to 0 or first request in other direction), whilst SCAN moves in both directions.
Wikipedia: One variation of this method ensures all requests are serviced in only one direction, that is, once the head has arrived at the outer edge of the disk, it returns to the beginning and services the new requests in this one direction only (or vice versa). This is known as the "Circular Elevator Algorithm" or C-SCAN.
Is this incorrect?
I'm of a mind to believe that Wikipedia is right and I was wrong. The labelling of a particular algorithm is less important than understanding the mechanics of it though :-).
Keywords: assess4-1112 , disk-scheduling
Submission reference: IN2123
Before I progress too far down my current path, I wanted to clarify exactly how you would like an implementation of C-SCAN to behave. My current attempt involves having all the read requests dumped into the readqueue, and once there they are ordered from smallest to largest and then actually read. This will provide a very swift solution I am sure, but I don't know whether it is representative of the algorithm's usage in real life (it requires having all the read requests at the beginning so they can all be ordered).
The alternative approach I considered was simply dumping all the requests into the readqueue and then, as they are read, having the readcomplete method compare the current read request with the last: if it is larger, read it; if it is smaller, skip it and move on. Once the highest request is reached, have it jump back to 0 and start increasing again. This method would be slower, since this process would potentially need to be repeated many times.
I wanted to check this because the current method I am attempting is throwing up a peculiar error, and before I spend too long trying to fix it I want to know if it would even result in a correct solution.
If all blocks were available at the start then your first thought would be right. However, the blocks are requested at different (effectively arbitrary) times, so at no point are all blocks to be read known. This is one of the issues of non-determinism, which your code needs to be able to cope with. Sorting then processing will deal with all the requests that are currently outstanding, but more may arrive whilst those are being processed — which have to be handled appropriately (there are many different ways you can deal with this bit!).
Your second suggestion is basically SCAN (or one-way elevator), but it should work. You can improve though by alternating the SCAN direction (which makes it C-SCAN or a proper elevator algorithm).
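The alternating-direction selection can be sketched as follows: pick the nearest outstanding request in the current direction of travel, and flip the direction when nothing lies ahead. Names here are invented for illustration:

```java
public class ScanPick {
    // Return the index of the nearest outstanding request in the current
    // direction of travel (`up` true = towards higher tracks), or -1 if
    // nothing lies ahead, in which case the caller flips `up` and retries.
    static int pick(int[] tracks, int size, int current, boolean up) {
        int best = -1;
        for (int i = 0; i < size; i++) {
            boolean ahead = up ? tracks[i] >= current : tracks[i] <= current;
            boolean closer = (best < 0)
                    || (up ? tracks[i] < tracks[best] : tracks[i] > tracks[best]);
            if (ahead && closer) {
                best = i;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        int[] tracks = { 50, 10, 80 };
        System.out.println(pick(tracks, 3, 60, true));    // heading up: 80 next
        System.out.println(pick(tracks, 3, 60, false));   // heading down: 50 next
    }
}
```

The -1 return is where the elevator reverses; a C-SCAN variant would instead jump straight back to the lowest outstanding track without servicing anything on the way down.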
Keywords: assess4-1112 , disk-scheduling
Referrers: Question 21 (2011)
Submission reference: IN2121
After establishing an error with my for() loop, using the answer from Question 13 (2011), I have come across results that do not fall in line with what I was hoping to achieve. In particular, I am attempting to implement C-SCAN (elevator), which sorts the blocks into ascending order before processing them, thus not leaving any block missed. However, my code looks like the below:
    for (int i = 0; i < readqueue_size; i++) {
        int idx = (readqueue_tail + i) % readqueue.length;
        if (the block at readqueue idx is greater than the block of idx+1, AND neither block is -1) {
            swap the read request at idx with the one at idx+1
        }
    }
Which is a small implementation of bubble sort, but produces the following results:
    finished at time 2546 with 0 pending queues
    disk spent 36 idle, 2382 seeking to track, 127 reading 97 blocks
    time for request in queue (min, max, avg): 0 , 2243 , 375.7 +/- 519.6
    time for request in read (min, max, avg): 1 , 188 , 25.9 +/- 53.7
    time for request total (min, max, avg): 1 , 2297 , 401.5 +/- 539.5
which I am sure are slower than the original results. Can you point out where I am going wrong here?
The pseudo-code you give for the bubble-sort isn't right. What you have there will incrementally move a block along the queue until the loop condition returns false -- it assumes, in particular, that the list is already sorted. If it's not, this won't work right! Normally a bubble-sort has two nested loops. Because new blocks get added to the end of the array, rather than the start, a newly added request will be moved at most one space, so most of the time I'd expect this bubble-sort to be doing nothing useful! This is why you're seeing the bad results. If you turned the bubble sort around to go from (head-1) rather than (tail), it should produce better results, but I'd suspect it'd still be a bit broken.
A good idea would be to write a method that prints out the contents of the request queue, and see what it looks like before and after you try and sort it. That should point you in the general direction of where things are going wrong.
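For comparison, a conventional bubble sort over the first size entries of a plain array uses two nested loops, with each outer pass bubbling the largest remaining value to the end (sketched here on bare track numbers rather than the actual ReadReq objects):

```java
public class Bubble {
    // Classic bubble sort over the first `size` elements of a.
    // Each outer pass moves the largest remaining value to the end,
    // so the inner loop can shrink by one element per pass.
    static void sort(int[] a, int size) {
        for (int pass = 0; pass < size - 1; pass++) {
            for (int i = 0; i < size - 1 - pass; i++) {
                if (a[i] > a[i + 1]) {
                    int t = a[i];
                    a[i] = a[i + 1];
                    a[i + 1] = t;
                }
            }
        }
    }

    public static void main(String[] args) {
        int[] a = { 5, 1, 4, 2 };
        sort(a, 4);
        System.out.println(a[0] + " " + a[1] + " " + a[2] + " " + a[3]);
    }
}
```

The single loop in the question corresponds to just one pass of the inner loop here, which is why it only ever moves a newly added request by one place.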
Keywords: assess4-1112 , disk-scheduling
Referrers: Question 15 (2011)
Submission reference: IN2112
For the Systems Programming assessment, I have tried to implement C-LOOK, but the results coming out are basically identical to the program's original output. Is using C-LOOK perhaps a bad idea, or maybe I have just implemented it that badly?
I realise this is not the most efficient way to do it (although I still expected it to be faster than FIFO), my (not so good) implementation of C-LOOK just, on each "readcomplete" where there are still requests in the queue, calls another method to find the next block. In pseudo-code this looks something like:
    current_blk = ?    // Stores current block
    if (still requests to service) then
        clear lowest, highest and next block numbers
        for (each request still to process) do
            if (this request lower than lowest block seen so far) then
                lowest = this request
            endif
            if (this request higher than highest block seen so far) then
                highest = this request
            endif
            if (this request higher than current, but less than next) then
                next = this request
            endif
        done
        // Should now have lowest, highest and next block numbers.
        if (next not set) then
            service lowest request
        else if (current greater than highest) then
            service lowest request
        else
            service next request
        endif
    endif
Any ideas/suggestions?
The logic of this looks pretty sensible and it should give better performance than the default FIFO implementation. If you can't see anything obviously wrong with the implementation, I'd suggest putting some debugging statements in the "readcomplete()" call to print out the contents of the request queue, the current/last block, and then which block your implementation chooses. You should be able to see at a glance if something is going wrong or not.
Keywords: assess4-1112 , disk-scheduling
Submission reference: IN2108
What software should we be compiling this in? I tried Visual Studio 2010 to compile the C sources, but with dsched.c I am getting the following errors:
    disksim.h(24): error C2054: expected '(' to follow 'inline'
    disksim.h(25): error C2085: 'block_to_track' : not in formal parameter list
    disksim.h(25): error C2143: syntax error : missing ';' before '{'
    disksim.h(30): error C2054: expected '(' to follow 'inline'
    disksim.h(31): error C2085: 'block_to_sector' : not in formal parameter list
    disksim.h(31): error C2143: syntax error : missing ';' before '{'
and with disksim.c:
disksim.c(12): fatal error C1083: Cannot open include file: 'sys/mman.h': No such file or directory
For whatever reason, Visual Studio doesn't seem to like the inline declarations, and the Windows platform doesn't have the sys/mman.h include file (which I'm not surprised about). However, any reasonably modern Unix platform should be able to cope with it. You can compile and run the software on 'raptor', and edit the file on your Windows machine (just mount the raptor filestore, via the VPN if off-campus probably). Or install a Linux distribution in VMWare, VirtualBox or QEmu.
The Java version should work fine on Windows though (with the Sun/Oracle JVM). If you know C pretty well, Java should be somewhat simpler by comparison. For the task I'm asking you to complete, the difference between the C and Java code is pretty minimal — this is about low-level OS coding, not about using APIs provided by Java, POSIX or Windows.
Keywords: disk-scheduling , assess4-1112
Maintained by Fred Barnes, last modified Wed May 25 15:07:20 2016