
Attack from Inside the System

Trojan Horses

  • A seemingly innocent program that contains code to perform an unexpected and undesirable function
  • That function may include:
    1. modifying, deleting, or encrypting the user's files
    2. copying the user's files to a place where the cracker can retrieve them later
    3. sending the files to the cracker, or to a temporary safe hiding place, via email or FTP
  • To work, a Trojan horse must first be executed by the victim (it is often hidden in free games, MP3 files, or other things that attract users' attention).
  • Once started, the Trojan horse can do anything the user can do, and it does not require its author to break into the victim's computer.
  • The Unix PATH variable is another way of getting a Trojan horse run on a machine: if the current directory is in the search path, a malicious program named after a common command placed there may be executed instead of the real one.

Login Spoofing

  • Obtain usernames and passwords by displaying a fake login screen that tricks users into entering their passwords
    –> This is why Windows asks users to hit Ctrl+Alt+Del before the login screen: that key combination cannot be intercepted by a user program, so it always brings up the genuine login screen.

Logic Bombs

  • A piece of code written by one of a company’s (currently employed) programmers and secretly inserted into the production operating system.
  • The programmer feeds it a daily password; if the programmer is fired one day, no password is supplied and the logic bomb “goes off”
    –> Has happened in payroll systems
  • “Going off” might mean wiping the disk, erasing files at random, or making other hard-to-detect changes to key programs.
  • The company can call the police, but that will never bring the files back.

Trap Doors

  • Code that allows a system programmer to bypass authentication entirely
    –> e.g. a special login name for which access is granted no matter what password the user types
  • This can be prevented by code review: having programmers periodically explain their code line by line to their peers.
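The classic illustration of a trap door is a login check with a hard-coded magic name. The sketch below follows that idea; the function names and the magic string "zzzzz" are illustrative, not taken from any real system.

```c
#include <string.h>

/* Normal check: grant access only if the password matches. */
int check_login_normal(const char *name, const char *passwd,
                       const char *stored_passwd) {
    (void)name;  /* name not needed once the password is verified */
    return strcmp(passwd, stored_passwd) == 0;
}

/* Trap-doored check: the hard-coded name "zzzzz" is accepted with
 * ANY password, silently bypassing authentication. */
int check_login_trapdoor(const char *name, const char *passwd,
                         const char *stored_passwd) {
    if (strcmp(name, "zzzzz") == 0)
        return 1;                               /* the trap door */
    return strcmp(passwd, stored_passwd) == 0;
}
```

A line-by-line code review makes this hard to hide: the extra comparison serves no legitimate purpose and would have to be explained.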

Buffer Overflow

  • Particular to C programming
  • C compilers do not perform array bounds checking, so it is possible to overwrite bytes of memory outside an array.
    • Suppose a variable-length string is copied into a fixed-size buffer on the stack (e.g. a name field).
    • If the string is longer than the buffer, the excess characters overflow it and can overwrite the saved return address, corrupting it (or redirecting it to code of the attacker’s choosing).
  • Attack procedure (probing a program for this flaw):
    1. Feed it an unreasonably long input and see if it dumps core.
    2. Analyze the core dump to see where the long string is stored.
    3. Figure out from there which data were overwritten.
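A minimal sketch of the vulnerable pattern and a bounded fix. The function names and the 8-byte buffer size are illustrative; the point is that `strcpy` copies until the terminating NUL with no regard for the destination's size, while a bounded copy truncates instead of overrunning the stack.

```c
#include <stdio.h>
#include <string.h>

/* Vulnerable: strcpy copies until the NUL terminator.  A name longer
 * than 7 characters overruns buf and can clobber the saved return
 * address on the stack.  (Only called with short inputs here.) */
void copy_name_unsafe(char dst[8], const char *name) {
    char buf[8];
    strcpy(buf, name);              /* no bounds check: classic overflow */
    strcpy(dst, buf);
}

/* Fixed: snprintf writes at most sizeof(buf)-1 characters plus a NUL,
 * so an oversized name is truncated instead of smashing the stack. */
void copy_name_safe(char dst[8], const char *name) {
    char buf[8];
    snprintf(buf, sizeof(buf), "%s", name);
    memcpy(dst, buf, sizeof(buf));
}
```

With `copy_name_safe`, an attacker-controlled 25-character name simply arrives truncated to 7 characters; with `copy_name_unsafe` it would overwrite 17 bytes beyond the buffer.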

Generic Security Attack

  • Tiger/penetration team: a group of experts hired by a company to see if they can break into its system
  • Common successful attacks:
    1. Request memory pages, disk space, or tapes and just read them (looking for leftover data that was never erased)
    2. Try illegal system calls; legal calls with illegal parameters; or legal calls with legal but unreasonable parameters
    3. Start logging in and hit break keys (e.g. DEL) to kill the login checking program
    4. Try modifying complex operating system structures kept in user space
    5. Look in the manual for the “Do not do…” warnings and do them
    6. Convince a system programmer to add a trap door for your user name
    7. Bribe the secretary

Design Principles

  1. The system design should be public
  2. The default should be no access.
  3. Check for current authority
  4. Give each process the least privilege possible
  5. The protection mechanism should be simple, uniform, and built into the lowest layers of the system.
  6. The scheme chosen must be psychologically acceptable.
  7. KEEP THE DESIGN SIMPLE!


TLB

Translation Lookaside Buffers

Let us now look at widely implemented schemes for speeding up paging and for handling large virtual address spaces, starting with the former. The starting point of most optimization techniques is that the page table is in memory. Potentially, this design has an enormous impact on performance. Consider, for example, a 1-byte instruction that copies one register to another. In the absence of paging, this instruction makes only one memory reference, to fetch the instruction. With paging, at least one additional memory reference will be needed, to access the page table. Since execution speed is generally limited by the rate at which the CPU can get instructions and data out of memory, having to make two memory references per memory reference reduces performance by half. Under these conditions, no one would use paging.

Computer designers have known about this problem for years and have come up with a solution. Their solution is based on the observation that most programs tend to make a large number of references to a small number of pages, and not the other way around. Thus only a small fraction of the page table entries are heavily read; the rest are barely used at all.

The solution that has been devised is to equip computers with a small hardware device for mapping virtual addresses to physical addresses without going through the page table. The device, called a TLB (Translation Lookaside Buffer) or sometimes an associative memory, is usually inside the MMU and consists of a small number of entries. Each entry contains information about one page, including the virtual page number, a bit that is set when the page is modified, the protection code (read/write/execute permissions), and the physical page frame in which the page is located. These fields have a one-to-one correspondence with the fields in the page table, except for the virtual page number, which is not needed in the page table. Another bit indicates whether the entry is valid (i.e., in use) or not.

Valid  Virtual page  Modified  Protection  Page frame
  1        140           1         RW          31
  1         20           0         R X         38
  1        130           1         RW          29
  1        129           1         RW          62
  1         19           0         R X         50
  1         21           0         R X         45
  1        860           1         RW          14
  1        861           1         RW          75

An example that might generate the TLB of the table above is a process in a loop that spans virtual pages 19, 20, and 21, so these TLB entries have protection codes for reading and executing. The main data currently being used (say, an array being processed) are on pages 129 and 130. Page 140 contains the indexes used in the array calculations. Finally, the stack is on pages 860 and 861.

Let us now see how the TLB functions. When a virtual address is presented to the MMU for translation, the hardware first checks to see if its virtual page number is present in the TLB by comparing it to all the entries simultaneously (i.e., in parallel). If a valid match is found and the access does not violate the protection bits, the page frame is taken directly from the TLB, without going to the page table. If the virtual page number is present in the TLB, but the instruction is trying to write on a read-only page, a protection fault is generated.

The interesting case is what happens when the virtual page number is not in the TLB. The MMU detects the miss and does an ordinary page table lookup. It then evicts one of the entries from the TLB and replaces it with the page table entry just looked up, so that if the same page is referenced again soon, the second access will result in a TLB hit rather than a miss. When an entry is purged from the TLB, the modified bit is copied back into the page table entry in memory. The other values are already there, except the reference bit. When the TLB is loaded from the page table, all the fields are taken from memory.
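The lookup logic just described can be sketched in C. The struct mirrors the columns of the table above (field names are invented); real hardware compares all entries in parallel, which the loop here can only simulate sequentially. For simplicity the sketch reports a protection violation the same way as a miss, whereas real hardware would raise a protection fault.

```c
#include <stdbool.h>
#include <stdint.h>

#define PROT_R 4
#define PROT_W 2
#define PROT_X 1
#define TLB_SIZE 8

/* One TLB entry, mirroring the table's columns. */
struct tlb_entry {
    bool     valid;       /* entry in use? */
    uint32_t vpage;       /* virtual page number */
    bool     modified;    /* set when the page has been written */
    uint8_t  prot;        /* protection code, e.g. PROT_R | PROT_W */
    uint32_t frame;       /* physical page frame */
};

/* Fill a TLB with the entries from the example table. */
void tlb_demo_fill(struct tlb_entry tlb[TLB_SIZE]) {
    struct tlb_entry demo[TLB_SIZE] = {
        { true, 140, true,  PROT_R | PROT_W, 31 },
        { true,  20, false, PROT_R | PROT_X, 38 },
        { true, 130, true,  PROT_R | PROT_W, 29 },
        { true, 129, true,  PROT_R | PROT_W, 62 },
        { true,  19, false, PROT_R | PROT_X, 50 },
        { true,  21, false, PROT_R | PROT_X, 45 },
        { true, 860, true,  PROT_R | PROT_W, 14 },
        { true, 861, true,  PROT_R | PROT_W, 75 },
    };
    for (int i = 0; i < TLB_SIZE; i++)
        tlb[i] = demo[i];
}

/* Returns true on a hit and stores the frame; false means the MMU
 * must fall back to an ordinary page table walk. */
bool tlb_lookup(const struct tlb_entry tlb[TLB_SIZE],
                uint32_t vpage, uint8_t access, uint32_t *frame) {
    for (int i = 0; i < TLB_SIZE; i++) {
        if (tlb[i].valid && tlb[i].vpage == vpage) {
            if ((tlb[i].prot & access) != access)
                return false;   /* really a protection fault */
            *frame = tlb[i].frame;
            return true;        /* hit: no page table access needed */
        }
    }
    return false;               /* miss: walk the page table */
}
```

With the table's contents loaded, a read of page 140 hits and yields frame 31 directly, while a write to the read/execute page 20 is refused.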

Software TLB Management

Up until now, we have assumed that every machine with paged virtual memory has page tables recognized by the hardware, plus a TLB. In this design, TLB management and handling TLB faults are done entirely by the MMU hardware. Traps to the operating system occur only when a page is not in memory.

In the past, this assumption was true. However, many modern RISC machines, including the SPARC, MIPS, and HP PA, do nearly all of this page management in software. On these machines, the TLB entries are explicitly loaded by the operating system. When a TLB miss occurs, instead of the MMU just going to the page tables to find and fetch the needed page reference, it generates a TLB fault and tosses the problem into the lap of the operating system. The system must find the page, remove an entry from the TLB, enter the new one, and restart the instruction that faulted. And, of course, all of this must be done in a handful of instructions because TLB misses occur much more frequently than page faults.
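A rough sketch of what such a software refill handler must do, under simplifying assumptions: a single-level page table indexed directly by virtual page number, a victim entry chosen by the caller, and invented type names. A real handler is a few hand-tuned assembly instructions, not C.

```c
#include <stdint.h>

#define TLB_SIZE 8

struct tlb_entry {
    int valid; uint32_t vpage; int modified; uint8_t prot; uint32_t frame;
};

struct pte {                       /* one page table entry */
    int present; int modified; uint8_t prot; uint32_t frame;
};

/* On a TLB fault the OS must: walk the page table, evict a victim
 * entry (writing its modified bit back to the page table), install
 * the new translation, and restart the faulting instruction.
 * Returns 0 on success, -1 if the page is not in memory at all
 * (a genuine page fault, handled elsewhere). */
int tlb_refill(struct tlb_entry tlb[TLB_SIZE], struct pte pagetable[],
               uint32_t vpage, int victim) {
    struct pte *p = &pagetable[vpage];       /* "page table walk" */
    if (!p->present)
        return -1;                           /* real page fault */
    if (tlb[victim].valid)                   /* write back modified bit */
        pagetable[tlb[victim].vpage].modified |= tlb[victim].modified;
    tlb[victim] = (struct tlb_entry){ 1, vpage, p->modified,
                                      p->prot, p->frame };
    return 0;                                /* restart instruction */
}
```

Note that eviction only needs to copy the modified bit back: every other field of the evicted entry is already present in the page table.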

Surprisingly enough, if the TLB is reasonably large (say, 64 entries) to reduce the miss rate, software management of the TLB turns out to be acceptably efficient. The main gain here is a much simpler MMU, which frees up a considerable amount of area on the CPU chip for caches and other features that can improve performance. Software TLB management is discussed by Uhlig et al. (1994).

Various strategies have been developed to improve performance on machines that do TLB management in software. One approach attacks two problems at once: reducing TLB misses, and reducing the cost of a TLB miss when one does occur. To reduce TLB misses, the operating system can sometimes use its intuition to figure out which pages are likely to be used next and preload entries for them in the TLB. For example, when a client process sends a message to a server process on the same machine, it is very likely that the server will have to run soon. Knowing this, while processing the trap to do the send, the system can also check to see where the server’s code, data, and stack pages are and map them in before they get a chance to cause TLB faults.

The normal way to process a TLB miss, whether in hardware or in software, is to go to the page table and perform the indexing operations to locate the page referenced. The problem with doing this search in software is that the pages holding the page table may not be in the TLB, which will cause additional TLB faults during the processing. These faults can be reduced by maintaining a large software cache of TLB entries in a fixed location whose page is always kept in the TLB. By first checking the software cache, the operating system can substantially reduce TLB misses.

When software TLB management is used, it is essential to understand the difference between two kinds of misses. A soft miss occurs when the page referenced is not in the TLB, but is in memory. All that is needed here is for the TLB to be updated. No disk I/O is needed. Typically a soft miss takes 10-20 machine instructions to handle and can be completed in a few nanoseconds. In contrast, a hard miss occurs when the page itself is not in memory (and, of course, also not in the TLB). A disk access is required to bring in the page, which takes several milliseconds. A hard miss is easily a million times slower than a soft miss.
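The "million times slower" figure is simple arithmetic under assumed ballpark timings: a soft miss of roughly 10 ns (a few nanoseconds, as above) versus a hard miss of roughly 10 ms of disk latency.

```c
/* Ratio between a hard miss (disk access, in milliseconds) and a
 * soft miss (TLB update, in nanoseconds).  Both inputs are assumed
 * ballpark values, not measurements. */
double miss_ratio(double soft_ns, double hard_ms) {
    return (hard_ms * 1e6) / soft_ns;   /* 1 ms = 1e6 ns */
}
```

With soft_ns = 10 and hard_ms = 10, the ratio comes out to exactly one million.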

From Modern Operating Systems by Andrew S. Tanenbaum, Chapter 3