Theory – Assn 3


1. Define the following terms precisely and briefly:

a. Race Condition
A situation in which two or more processes read or write shared data and the final result depends on the exact order in which they run.

b. Mutual Exclusion
A way to avoid race conditions: ensure that while one process is in its critical region, all other processes are blocked from entering theirs (i.e. blocked from accessing the shared data).

c. Critical Region
The part of the program in which the shared memory is accessed.

d. Semaphore
An integer-valued synchronization primitive with two atomic operations:
– down: if the value is 0, the calling thread goes to sleep; if it is greater than 0, the value is decremented by one.
– up: if threads are sleeping on the semaphore, one of them is awakened; otherwise the value is incremented by one.

2. What is the Priority Inversion Problem? How can we avoid it?

No preemption: process L with lower priority holds a resource required by process H with higher priority. H arrives after L has acquired the resource and busy-waits on the CPU; since H never yields, L never runs again to release the resource. -> Neither process can finish.
Solution: use sleep and wakeup calls instead of busy waiting.

With preemption: process L with lower priority holds a resource required by process H with higher priority. H arrives after L has acquired the resource and cannot proceed. A process M with medium priority and no dependence on the shared resource arrives after H, gets the CPU because its priority is higher than L's, and completes its job before H, even though H has the highest priority.
Solution: raise the priority of L to that of H while it holds the resource (priority inheritance).

3. Is there any difference between mutex locks and binary semaphores?
They both guarantee mutual exclusion. A mutex is a simplified semaphore without the ability to count.

A binary semaphore is initialized to 1; down is called when a process enters the critical region and up when it leaves. A binary semaphore also keeps a list of the processes waiting on it.

A mutex has two states: unlocked (represented by 0, the critical region is available) and locked (any other value, entry to the critical region is denied).

4. There are several alternatives for ensuring mutual exclusion. List these methods in rows of a table, and compare their properties in a few columns.




| Method | Hardware support | Busy waiting | Drawbacks |
|---|---|---|---|
| Disabling interrupts | yes | no | must trust (possibly malicious) processes to re-enable interrupts; does not yield the CPU to independent processes while disabled |
| Lock variables | no | yes | incorrect: accessing the lock variable itself needs mutual exclusion |
| Strict alternation | no | yes | hard to program complicated cases; a process outside its critical region can block the other |
| Peterson's solution | no | yes | correct software-only solution; busy waiting wastes CPU |
| The TSL instruction | yes (special instruction) | yes | correct; busy waiting wastes CPU |

5. Consider the scheduling algorithms First-Come-First-Served, Shortest Job First, Shortest Remaining Time Next, and Round Robin. Suppose a set of jobs arrives in the system at about the same time. Rank the above scheduling algorithms by their performance with respect to each of the following criteria (one ranking per criterion):

a. Throughput – the number of processes that complete their execution per time unit;

b. Average waiting time – Waiting time: amount of time a process has been waiting in the ready queue;

c. Average Response Time – Response Time: amount of time it takes from when a request was submitted until the first response is produced (the process starts running);

d. Proportionality – the maximum ratio of the turnaround time to the expected running time;
RR < SRTN < SJF < FCFS (roughly)

e. Average Turnaround Time.

Theory – Assn 2


1. What happens to the children, when the parent process is killed? What happens to them when the parent terminates?

In both cases, the children continue running. In UNIX, children whose parent has terminated become orphans and are adopted by the init process.

2. Does the child process see changes in variables made by the parent process before the fork() statement that creates the child? What about the value changes after the fork() statement?

The child sees changes made by the parent before the fork() call, because fork() gives it a copy of the parent's address space at that moment. Changes made after fork() are not visible: from then on, parent and child modify separate copies of the variables.

3. Suppose process P1 is running and has to block for an I/O operation. A second process P2 is at the front of the ready queue. List the key context-switch steps, referring to P1 and P2.

a. Save context of P1 to PCB

1. Save registers, address space, I/O Information (e.g. list of opened files), accounting information;
2. Update PCB of P1 to reflect new state (Blocked);
3. Move PCB of P1 to the Blocked queue.

b. Flush data and instruction caches;

c. Decide which process to run -> P2

d. Reload P2 using its PCB

1. Update new PCB state, queues.
2. Reload registers, Update memory management data structures.

4. Why is a threading model useful?

A threading model is useful because threads can run in parallel, which increases performance and responsiveness, and because threads within a process can coordinate and share resources easily.

5. Which of the following components of program state are shared across threads in a multithreaded process?

a. Register values – per-thread (not shared)
b. Heap memory – shared
c. Global variables – shared
d. Stack memory – per-thread (not shared)

6. Explain why system calls are needed to set up shared memory between processes. Does sharing memory between multiple threads of the same process also require system calls to be set up?

System calls are needed because each process has its own address space, and only the kernel, which maintains the memory mappings of all processes, can map a shared region into more than one address space.

Threads do not need a system call for this, because all threads of a process already share its address space.

7. Why does a web server usually spawn a thread per Web client request, rather than a process per request?

Threads are lightweight: creating threads and switching between threads is faster than doing so with processes, and threads can access shared resources more easily.

8. What are the models of inter-process communication? Compare the models by outlining their requirements, pros, and cons.

Shared Memory: processes exchange data through a shared region of memory. It is fast, but provides no protection between the communicating processes.

Message Passing: processes exchange data over communication links managed by the kernel. It is more secure, but slower, since every message goes through the kernel.

Attack from Inside the System

Trojan Horses

  • A seemingly innocent program that contains code to perform an unexpected and undesirable function
  • Such functions include:
    1. modifying, deleting, or encrypting the user's files
    2. copying the user's files to a place where the cracker can retrieve them later
    3. sending the files to the cracker, or to a temporary safe hiding place, via email/FTP
  • To run a Trojan horse, the victim must first execute the program (Trojans are often hidden in free games, MP3s, or other things that attract users' attention)
  • Once started, a Trojan horse can do anything the user can do, and its author never has to break into the victim's computer
  • The UNIX PATH variable is another way of getting a Trojan horse run: a malicious program named like a common command, placed in a directory searched early in the path, runs instead of the real command

Login Spoofing

  • get the username and password by presenting a fake login screen that tricks users into entering their passwords
    –> Windows asks users to hit Ctrl+Alt+Del before the login screen for this reason (that key combination cannot be intercepted by user programs)

Logic Bombs

  • A piece of code written by one of a company’s (currently employed) programmers and secretly inserted into the production operating system.
  • As long as the programmer feeds it its daily password, nothing happens; if the programmer is fired one day, no password is provided and the logic bomb “goes off”
    –> Has happened in payroll systems
  • “Going off” might mean wiping the disk, erasing files at random, or making other hard-to-detect changes to key programs.
  • The company can call the police, but that will not bring the files back.

Trap Doors

  • Allows a system programmer to bypass authentication entirely
    –> e.g. a login name for which access is granted no matter what password is typed
  • This can be prevented by code reviews, in which the programmers periodically explain their code line by line.

Buffer Overflow

  • Particular to C programs
  • C compilers do no array bounds checking, so it is possible to overwrite bytes of memory outside an array.
    • Suppose user input is copied into a fixed-size array on the stack (e.g. a name buffer)
    • If the input is longer than the array, the copy overflows the buffer and can overwrite the return address, corrupting the flow of control
  • How an attacker finds an exploitable overflow:
    1. Feed the program an unreasonably long input first and see if it dumps core.
    2. Analyze the core dump to see where the long stream is stored.
    3. Work out which data were overwritten from there.

Generic Security Attack

  • tiger/penetration team: a group of experts hired by a company to see whether they can break into its system
  • Common successful attacks:
    1. Request memory pages, disk space, or tapes and just read them
    2. Try illegal system calls, legal calls with illegal parameters, or legal calls with legal but unreasonable parameters
    3. Start logging in and hit break keys (e.g. DEL) to kill the login checking program
    4. Try modifying complex operating system structure kept in user space
    5. Read the manual, find the “Do not do…” warnings, and try them
    6. Convince the system programmer to add a trap door with your user name
    7. Bribe the secretary

Design Principles

  1. The system design should be public
  2. The default should be no access.
  3. Check for current authority
  4. Give each process the least privilege possible
  5. The protection mechanism should be simple, uniform, and built into the lowest layers of the system.
  6. The scheme chosen must be psychologically acceptable.



Translation Lookaside Buffers

Let us now look at widely implemented schemes for speeding up paging and for handling large virtual address spaces, starting with the former. The starting point of most optimization techniques is that the page table is in memory. Potentially, this design has an enormous impact on performance. Consider, for example, a 1-byte instruction that copies one register to another. In the absence of paging, this instruction makes only one memory reference, to fetch the instruction. With paging, at least one additional memory reference will be needed, to access the page table. Since execution speed is generally limited by the rate at which the CPU can get instructions and data out of the memory, having to make two memory references per memory reference reduces performance by half. Under these conditions, no one would use paging.

Computer designers have known about this problem for years and have come up with a solution. Their solution is based on the observation that most programs tend to make a large number of references to a small number of pages, and not the other way around. Thus only a small fraction of the page table entries are heavily read; the rest are barely used at all.

The solution that has been devised is to equip computers with a small hardware device for mapping virtual addresses to physical addresses without going through the page table. The device, called a TLB (Translation Lookaside Buffer) or sometimes an associative memory, is usually inside the MMU and consists of a small number of entries. Each entry contains information about one page, including the virtual page number, a bit that is set when the page is modified, the protection code (read/write/execute permissions), and the physical page frame in which the page is located. These fields have a one-to-one correspondence with the fields in the page table, except for the virtual page number, which is not needed in the page table. Another bit indicates whether the entry is valid (i.e., in use) or not.


[Table omitted: a TLB to speed up paging — columns Valid, Virtual page, Modified, Protection, Page frame, one row per TLB entry.]

An example that might generate the TLB of the table above is a process in a loop that spans virtual pages 19, 20, and 21, so that these TLB entries have protection codes for reading and executing. The main data currently being used (say, an array being processed) are on pages 129 and 130. Page 140 contains the indexes used in the array calculations. Finally, the stack is on pages 860 and 861.

Let us now see how the TLB functions. When a virtual address is presented to the MMU for translation, the hardware first checks to see if its virtual page number is present in the TLB by comparing it to all the entries simultaneously (i.e., in parallel). If a valid match is found and the access does not violate the protection bits, the page frame is taken directly from the TLB, without going to the page table. If the virtual page number is present in the TLB, but the instruction is trying to write on a read-only page, a protection fault is generated.

The interesting case is what happens when the virtual page number is not in the TLB. The MMU detects the miss and does an ordinary page table lookup. It then evicts one of the entries from the TLB and replaces it with the page table entry just looked up, so that if the same page is referenced again soon, the second access will result in a TLB hit rather than a miss. When an entry is purged from the TLB, the modified bit is copied back into the page table entry in memory. The other values are already there, except the reference bit. When the TLB is loaded from the page table, all the fields are taken from memory.

Software TLB Management

Up until now, we have assumed that every machine with paged virtual memory has page tables recognized by the hardware, plus a TLB. In this design, TLB management and handling TLB faults are done entirely by the MMU hardware. Traps to the operating system occur only when a page is not in memory.

In the past, this assumption was true. However, many modern RISC machines, including the SPARC, MIPS, and HP PA, do nearly all of this page management in software. On these machines, the TLB entries are explicitly loaded by the operating system. When a TLB miss occurs, instead of the MMU just going to the page tables to find and fetch the needed page reference, it just generates a TLB fault and tosses the problem into the lap of the operating system. The system must find the page, remove an entry from the TLB, enter the new one, and restart the instruction that faulted. And, of course, all of this must be done in a handful of instructions because TLB misses occur much more frequently than page faults.

Surprisingly enough, if the TLB is reasonably large (say, 64 entries) to reduce the miss rate, software management of the TLB turns out to be acceptably efficient. The main gain here is a much simpler MMU, which frees up a considerable amount of area on the CPU chip for caches and other features that can improve performance. Software TLB management is discussed by Uhlig et al. (1994).

Various strategies have been developed to improve performance on machines that do TLB management in software. One approach attacks both reducing TLB misses and reducing the cost of a TLB miss when it does occur. To reduce TLB misses, sometimes the operating system can use
its intuition to figure out which pages are likely to be used next and to preload entries for them in the TLB. For example, when a client process sends a message to a server process on the same machine, it is very likely that the server will have to run soon. Knowing this, while processing the trap to do the send, the system can also check to see where the server’s code, data, and stack pages are and map them in before they get a chance to cause TLB faults.

The normal way to process a TLB miss, whether in hardware or in software, is to go to the page table and perform the indexing operations to locate the page referenced. The problem with doing this search in software is that the pages holding the page table may not be in the TLB, which will cause additional TLB faults during the processing. These faults can be reduced by maintaining a large software cache of TLB entries in a fixed location whose page is always kept in the TLB. By first checking the software cache, the operating system can substantially reduce TLB misses.

When software TLB management is used, it is essential to understand the difference between two kinds of misses. A soft miss occurs when the page referenced is not in the TLB, but is in memory. All that is needed here is for the TLB to be updated. No disk I/O is needed. Typically a soft miss takes 10-20 machine instructions to handle and can be completed in a few nanoseconds. In contrast, a hard miss occurs when the page itself is not in memory (and of course, also not in the TLB). A disk access is required to bring in the page, which takes several milliseconds. A hard miss is easily a million times slower than a soft miss.

From Modern Operating System by Andrew S. Tanenbaum Chapter 3