COS431 Prelim 2 Study Guide
Exam Date: November 15, 2002
University of Maine Department of Computer Science
Prelim Time: 1 hour
Remember: You are allowed one 3x5 index card of notes.
- Implementation of Processes:
- Compare and contrast a typical implementation of message passing and
semaphores. What is similar? What is different? How is message passing
implemented in the kernel?
- In class I said that a process is always on some list in the process
table. What lists are these, how might a process transition from one list
to another, and how do these transitions relate to the different states
a process can be in at any one time?
- Send and receive are typically "blocking" commands, in
that a sending/receiving process is blocked until a corresponding
process receives/sends the message. Suppose we wanted to provide a "non-blocking"
send. That is, suppose we wanted to allow a process to send a message
but not wait around until the other process receives it. What are the
ramifications of this? That is, how do the implementation data structures
and algorithms change? What about the number of sends that one process
can do before any process receives any of the messages?
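For reference, here is a minimal sketch of one way the data structures could change: a per-mailbox bounded buffer inside the kernel, so a sender copies its message and continues. All names here (msg_queue, nb_send, nb_receive, QSIZE) are hypothetical, not the course kernel's actual code.

    #include <string.h>

    #define QSIZE  8     /* buffer bound: caps outstanding sends */
    #define MSGLEN 64

    struct msg_queue {
        char slot[QSIZE][MSGLEN];
        int  head, tail, count;    /* circular-buffer bookkeeping */
    };

    /* Non-blocking send: copy the message into the kernel buffer and
     * return at once.  The sender is only held up when the buffer is
     * full, so QSIZE bounds how many sends can complete before any
     * process receives a message. */
    int nb_send(struct msg_queue *q, const char *msg)
    {
        if (q->count == QSIZE)
            return -1;                      /* full: fail (or block) here */
        memcpy(q->slot[q->tail], msg, MSGLEN);
        q->tail = (q->tail + 1) % QSIZE;
        q->count++;
        return 0;
    }

    /* Receive still blocks (here: fails) when the queue is empty;
     * otherwise it just drains the oldest buffered message. */
    int nb_receive(struct msg_queue *q, char *msg)
    {
        if (q->count == 0)
            return -1;                      /* empty: block the receiver */
        memcpy(msg, q->slot[q->head], MSGLEN);
        q->head = (q->head + 1) % QSIZE;
        q->count--;
        return 0;
    }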
- The Multi-process BRAIN02 Interpreter:
- Describe the BRAIN02 virtual machine in detail.
- Write a simple pipeline program in BRAIN02.
- Why are we implementing BRAIN02?
- How does a multi-process kernel using BRAIN02 differ from a multi-process
kernel in Minix? Hint: What is "the problem" in implementing context switching
on a real machine? This is a similar question to one asked on the first
test, but now you have a bit more insight because of your multi-process
interpreter project.
- Analysis of Context Switching: You are performing a number of experiments
in your BRAIN02 project 3. Know the results of those experiments. Specifically,
know how context switching is affected by the time slice, the type of
synchronization mechanism used (see below), and the structure of the application
(frequency and repetition of sends/receives or Ps/Vs). Study the graphs
you are generating on project 3. I might ask you to sketch one or more
of them.
- Race Conditions: This is the underlying problem that all of our
synchronization solutions address. Know how to spot a race condition
and recognize when one can occur. Specifically,
I gave examples that you should know by heart - all based on the load/update/store
operation that is not performed atomically. Recall how two processes
could "race" on a shared variable by both loading it before
either has a chance to update and store its value.
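A minimal runnable sketch of that race, assuming POSIX threads (not code from the course): two threads both perform the non-atomic load/update/store on one shared counter, so updates get lost.

    #include <pthread.h>
    #include <stdio.h>

    volatile int shared = 0;          /* the shared variable being raced on */

    void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100000; i++)
            shared = shared + 1;      /* load / update / store: both threads
                                         can load the same old value before
                                         either stores its result back */
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("shared = %d\n", shared);  /* 200000 expected; less means lost updates */
        return 0;
    }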
- Critical Sections: This concept was proposed to solve the race condition
problem by enforcing mutually exclusive access to shared data. Important:
Know the four conditions that must hold in order for critical sections
to "work" properly, and know how to apply them in subsequent examples.
- Busy Waiting Solutions: Know each of the solutions given in the section
and specifically discussed in class, know the problems that arise with
each, and know how the four conditions are applied to evaluate the effectiveness
of a solution.
- Look at how the test and set lock (TSL) instruction is utilized
(first sketch after this list). There are some variants of this in the
end-of-chapter exercises that might make good test questions. Take a
look at them.
- Know the advantages and disadvantages of strict alternation (second
sketch after this list).
- Know how to trace through the entry and exit protocols of Peterson's
solution, and be able to explain why it works (third sketch after this
list).
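First sketch: minimal TSL usage, with GCC's __sync_lock_test_and_set builtin standing in for the hardware test-and-set instruction (an assumption, not the course's assembly).

    int lock = 0;                     /* 0 = free, 1 = held */

    void enter_region(int *lk)
    {
        /* Atomically set *lk to 1 and get its old value; spin until
         * the old value was 0, meaning we were the one who got it. */
        while (__sync_lock_test_and_set(lk, 1) != 0)
            ;                         /* busy waiting */
    }

    void leave_region(int *lk)
    {
        *lk = 0;                      /* a plain store of 0 releases it */
    }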
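Second sketch: strict alternation for two processes. Its main disadvantage is visible in the code: a process that is not in (or near) its critical section can still hold up the other one, violating one of the four conditions, and both busy wait.

    volatile int turn = 0;            /* whose turn to enter: 0 or 1 */

    void enter_region(int process)    /* process is 0 or 1 */
    {
        while (turn != process)
            ;                         /* busy wait until it is our turn */
    }

    void leave_region(int process)
    {
        turn = 1 - process;           /* hand the turn to the other process */
    }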
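Third sketch: Peterson's solution, following the textbook's version. The key to why it works is the shared turn variable: if both processes try to enter at once, the later store to turn loses the tie-break and waits.

    #define FALSE 0
    #define TRUE  1
    #define N     2                   /* number of processes */

    volatile int turn;                /* whose turn is it?          */
    volatile int interested[N];       /* all values initially FALSE */

    void enter_region(int process)    /* process is 0 or 1 */
    {
        int other = 1 - process;      /* the other process  */
        interested[process] = TRUE;   /* announce interest  */
        turn = process;               /* record who set turn last */
        /* Wait only while the other is interested AND we were the
         * last to set turn; exactly one process gets through. */
        while (turn == process && interested[other] == TRUE)
            ;                         /* busy wait */
    }

    void leave_region(int process)
    {
        interested[process] = FALSE;  /* done with critical section */
    }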
- Blocking Solutions: Know each of the blocking solutions discussed in
class, including Semaphores, Monitors, and Message Passing.
- Usage: Know how each method works, and be prepared to give detailed
examples. There will most likely be a question on Monitors as well
as on Semaphores. Be careful: the points are subtle.
- Implementation: Be prepared to compare how semaphores and sends/receives
are implemented in detail. Specifically, know how the kernel uses
context switching to handle each side of the send/receive properly,
and how it uses this same context switching to implement semaphores
(first sketch after this list).
- Monitors: Using your knowledge of semaphore and message passing
implementation, be prepared to suggest a similar implementation for
monitors (second sketch after this list).
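First sketch: how a kernel might implement P/V using the same blocking and context switching as send/receive. The helpers block_current_on, unblock_one, and schedule are hypothetical stand-ins for the kernel's real list-moving and dispatching routines (declared here as prototypes only).

    struct proc;                       /* a process-table entry */

    struct semaphore {
        int          count;            /* the semaphore's value          */
        struct proc *blocked;          /* list of processes asleep on it */
    };

    /* Hypothetical kernel helpers: move a process between the ready
     * and blocked lists, and pick the next process to run. */
    void block_current_on(struct proc **list);
    void unblock_one(struct proc **list);
    void schedule(void);

    /* P / down: run with interrupts off (or inside the kernel) so the
     * test and decrement are atomic with respect to other processes. */
    void sem_down(struct semaphore *s)
    {
        if (s->count > 0) {
            s->count--;
        } else {
            block_current_on(&s->blocked);  /* off the ready list        */
            schedule();                     /* context switch to another */
        }
    }

    /* V / up: wake one sleeper if any, otherwise bank the count. */
    void sem_up(struct semaphore *s)
    {
        if (s->blocked != NULL)
            unblock_one(&s->blocked);       /* back onto the ready list */
        else
            s->count++;
    }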
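Second sketch: one plausible shape for a monitor, here using a POSIX mutex and condition variable as stand-ins for the lower-level primitives (an assumption; an answer built on semaphores would work just as well). Every monitor procedure brackets its body with lock/unlock, which gives the one-process-in-the-monitor rule automatically.

    #include <pthread.h>

    static pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  not_empty    = PTHREAD_COND_INITIALIZER;
    static int count = 0;              /* state protected by the monitor */

    void monitor_deposit(void)
    {
        pthread_mutex_lock(&monitor_lock);    /* enter the monitor */
        count++;
        pthread_cond_signal(&not_empty);      /* SIGNAL(not_empty) */
        pthread_mutex_unlock(&monitor_lock);  /* leave the monitor */
    }

    void monitor_remove(void)
    {
        pthread_mutex_lock(&monitor_lock);    /* enter the monitor */
        while (count == 0)                    /* WAIT(not_empty) releases */
            pthread_cond_wait(&not_empty,     /* the lock while asleep    */
                              &monitor_lock);
        count--;
        pthread_mutex_unlock(&monitor_lock);  /* leave the monitor */
    }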
- Message Passing: I mentioned message passing above, but I repeat it here
to emphasize that it is the most important synchronization method
for communicating in the client/server model.
- Equivalence of Primitives: The book has an excellent section to study
in order to properly understand each of the primitives that will be on
the prelim. Specifically, we looked at how message passing and semaphores
are related, and also at how semaphores and the rendezvous are related.
Be prepared to compare semaphores with message passing in detail. That
is, show how message passing solutions can be implemented using semaphores
and vice versa. Note: This is a different question than asking
you to compare semaphore and message passing implementations. Here I'm
asking you to compare the relative power of the two mechanisms for solving
similar types of problems. Both expressive power and efficiency are important.
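One direction of that equivalence as a sketch: a counting semaphore built from message passing, assuming buffered (mailbox-style) send/receive so that sem_create can pre-load tokens without blocking. All names here are hypothetical, given as prototypes only.

    /* Assumed blocking, buffered message primitives:
     * receive() blocks until a message is in the mailbox. */
    void send(int mailbox, const char *msg);
    void receive(int mailbox, char *msg);

    /* A semaphore with initial value n is a mailbox holding n tokens. */
    void sem_create(int mailbox, int n)
    {
        for (int i = 0; i < n; i++)
            send(mailbox, "token");
    }

    void sem_P(int mailbox)            /* down: consume one token,  */
    {                                  /* blocking if none are left */
        char token[8];
        receive(mailbox, token);
    }

    void sem_V(int mailbox)            /* up: return a token, waking   */
    {                                  /* one blocked receiver, if any */
        send(mailbox, "token");
    }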
- Classical IPC Problems: We discussed the producer/consumer problem
in depth (a problem that I consider to be a classical IPC problem) as
well as a number of variations on the dining philosophers problem.
Know specifically what the dining philosophers problem is, and
how each of the examples we discussed solves or doesn't solve that problem.
We also discussed the readers/writers problem. Know it.
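For review, a runnable producer/consumer sketch in the textbook's style, using POSIX semaphores (sem_wait/sem_post standing in for P/V):

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define N 10                       /* slots in the shared buffer */

    int buffer[N];                     /* the shared bounded buffer  */
    int in = 0, out = 0;               /* next slot to fill / drain  */

    sem_t empty;                       /* counts empty slots (init N) */
    sem_t full;                        /* counts full slots  (init 0) */
    sem_t mutex;                       /* guards the buffer  (init 1) */

    void *producer(void *arg)
    {
        (void)arg;
        for (int item = 0; item < 100; item++) {
            sem_wait(&empty);          /* P(empty): wait for a free slot   */
            sem_wait(&mutex);          /* P(mutex): enter critical section */
            buffer[in] = item;
            in = (in + 1) % N;
            sem_post(&mutex);          /* V(mutex)                          */
            sem_post(&full);           /* V(full): one more item available  */
        }
        return NULL;
    }

    void *consumer(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100; i++) {
            sem_wait(&full);           /* P(full): wait for an item */
            sem_wait(&mutex);
            int item = buffer[out];
            out = (out + 1) % N;
            sem_post(&mutex);
            sem_post(&empty);          /* V(empty): slot freed */
            printf("consumed %d\n", item);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t p, c;
        sem_init(&empty, 0, N);
        sem_init(&full,  0, 0);
        sem_init(&mutex, 0, 1);
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }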
- Memory Management:
- Memory Management without Swapping or Paging: We went into some
depth on an implementation for our BRAIN02 multi-process language.
- Be prepared to discuss that implementation in detail, including
data structures, philosophy about how to manage free space, and how
memory management data structures relate to process management data
structures.
- Be prepared to discuss the issues concerning the first fit, next fit,
best fit, and worst fit algorithms (see the sketch after this list).
- Be prepared to discuss external vs. internal fragmentation.
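A minimal first-fit sketch over a free list of holes (all names hypothetical). The comments note how next fit and best fit differ, and where external fragmentation comes from.

    #include <stddef.h>

    /* One node per hole, in a free list kept sorted by address. */
    struct hole {
        size_t       start, size;
        struct hole *next;
    };

    /* First fit: take the FIRST hole big enough and split off the
     * remainder.  (Next fit resumes scanning where the last search
     * stopped; best fit scans the whole list for the smallest hole
     * that fits, which tends to leave many tiny, useless holes --
     * external fragmentation.) */
    long first_fit(struct hole **list, size_t request)
    {
        for (struct hole **pp = list; *pp != NULL; pp = &(*pp)->next) {
            struct hole *h = *pp;
            if (h->size >= request) {
                long addr = (long)h->start;
                h->start += request;       /* split: shrink hole from front */
                h->size  -= request;
                if (h->size == 0)          /* exact fit: unlink empty hole  */
                    *pp = h->next;         /* (caller reclaims the node)    */
                return addr;
            }
        }
        return -1;                         /* no hole fits the request */
    }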
- Multiprogramming with Fixed Partitions: This method was essentially
implemented in BRAIN02 project 3. Be prepared to discuss issues of simple
memory management here.
- Multiprogramming with Variable Partitions: including different techniques
for splitting and managing partitions.
Last Updated: 11/7/02