Development of operating systems
In early computers, the user typed programs onto punched tape or cards, from which they were read into the computer. The computer subsequently assembled or compiled the programs and then executed them, and the results were transmitted to a printer. It soon became evident that much valuable computer time was wasted between users and also while jobs (programs to be executed) were being read or while the results were being printed. The earliest operating systems consisted of software residing in the computer that handled “batches” of user jobs—i.e., sequences of jobs stored on magnetic tape that were read into computer memory and executed one at a time without intervention by the user or an operator. Accompanying each job in a batch were instructions to the operating system (OS) detailing the resources needed by the job—for example, the amount of CPU time, the files and the storage devices on which they resided, the output device, whether the job consisted of a program that needed to be compiled before execution, and so forth. From these beginnings came the key concept of an operating system as a resource allocator. This role became more important with the rise of multiprogramming, in which several jobs reside in the computer simultaneously and share resources—for example, being allocated fixed amounts of CPU time in turn. More sophisticated hardware allowed one job to be reading data while another wrote to a printer and still another performed computations. The operating system was the software that managed these tasks in such a way that all the jobs were completed without interfering with one another.
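The time slicing just described can be made concrete with a short simulation. The sketch below is not drawn from the text; the round_robin function, the job names, and the run times are all invented, but they show how allocating fixed amounts of CPU time in turn lets several jobs share one processor without any job monopolizing it.

```python
# A minimal sketch of round-robin CPU allocation in multiprogramming.
# The job names and run times are hypothetical.
from collections import deque

def round_robin(jobs, quantum):
    """Give each job a fixed slice of CPU time in turn.

    jobs: list of (name, remaining_time) pairs; quantum: slice length.
    Returns job names in the order they finish.
    """
    queue = deque(jobs)
    finished = []
    while queue:
        name, remaining = queue.popleft()
        remaining -= quantum                 # job runs for one time slice
        if remaining > 0:
            queue.append((name, remaining))  # unfinished: back of the queue
        else:
            finished.append(name)            # done within this slice
    return finished

print(round_robin([("A", 5), ("B", 2), ("C", 8)], quantum=3))
# ['B', 'A', 'C'] -- the short job finishes first, yet every job makes progress
```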
Further work was required of the operating system with the advent of interactive computing, in which the user enters commands directly at a terminal and waits for the system to respond. Processes known as terminal handlers were added to the system, along with mechanisms like interrupts (to get the attention of the operating system to handle urgent tasks) and buffers (for temporary storage of data during input/output to make the transfer run more smoothly). A large computer can now interact with hundreds of users simultaneously, giving each the perception of being the sole user. The first personal computers used relatively simple operating systems, such as some variant of DOS (disk operating system), with the main jobs of managing the user’s files, providing access to other software (such as word processors), and supporting keyboard input and screen display. Perhaps the most important trend in operating systems today is that they are becoming increasingly machine-independent. Hence, users of modern, portable operating systems like UNIX, Microsoft Corporation’s Windows NT, and Linux are not compelled to learn a new operating system each time they purchase a new, faster computer (possibly using a completely different processor).
Deadlock and synchronization
Among the problems that need to be addressed by computer scientists in order for sophisticated operating systems to be built are deadlock and process synchronization. Deadlock occurs when two or more processes (programs in execution) request the same resources and are allocated them in such a way that a circular chain of processes forms, with each process waiting for a resource held by the next process in the chain. As a result, no process can continue; they are deadlocked. An operating system can handle this situation with various prevention or detection and recovery techniques. For example, resources might be numbered 1, 2, 3, and so on. If each process must request resources in this order, it is impossible for a circular chain of deadlocked processes to develop. Another approach is simply to allow deadlocks to occur, detect them by examining blocked processes and the resources they are holding, and break any deadlock by aborting one of the processes in the chain and releasing its resources.
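The resource-numbering technique can be sketched in a few lines of code. The fragment below is illustrative rather than taken from the text: two Python locks stand in for the numbered resources, and the tasks are invented. Because every task acquires resource 1 before resource 2, the circular wait that defines deadlock cannot form.

```python
# Sketch of deadlock prevention by resource ordering (hypothetical tasks).
import threading

resource1 = threading.Lock()   # resource numbered 1
resource2 = threading.Lock()   # resource numbered 2

def task(name):
    # Every process requests the lower-numbered resource first, so no
    # process can hold resource 2 while waiting for resource 1, and a
    # circular chain of waiting processes can never develop.
    with resource1:
        with resource2:
            print(name, "holds both resources")

threads = [threading.Thread(target=task, args=(n,)) for n in ("A", "B")]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Had one task instead requested the resources in the order 2, 1 while the other used the order 1, 2, each could end up holding exactly the resource the other is waiting for.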
Process synchronization is required when one process must wait for another to complete some operation before proceeding. For example, one process (called a writer) may be writing data to a certain main memory area, while another process (a reader) may be reading data from that area and sending it to the printer. The reader and writer must be synchronized so that the writer does not overwrite existing data with new data until the reader has processed it. Similarly, the reader should not start to read until data has actually been written to the area. Various synchronization techniques have been developed. In one method, the operating system provides special commands that allow one process to signal to the second when it begins and completes its operations, so that the second knows when it may start. In another approach, shared data, along with the code to read or write them, are encapsulated in a protected program module. The operating system then enforces rules of mutual exclusion, which allow only one reader or writer at a time to access the module. Process synchronization may also be supported by an interprocess communication facility, a feature of the operating system that allows processes to send messages to one another.
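The signaling method can be illustrated with a minimal sketch. Nothing below comes from the original text: one writer and one reader share a single invented memory area, and a Python condition variable supplies both the mutual exclusion and the signals described above, so the writer never overwrites unread data and the reader never reads before data has been written.

```python
# Sketch of writer/reader synchronization over one shared memory area.
import threading

area = None                   # the shared main-memory area (hypothetical)
has_data = False              # True while the area holds unread data
cond = threading.Condition()  # provides mutual exclusion plus signaling

def writer():
    global area, has_data
    for value in range(3):
        with cond:
            cond.wait_for(lambda: not has_data)  # don't overwrite unread data
            area, has_data = value, True
            cond.notify()                        # signal: data is ready

def reader():
    global has_data
    for _ in range(3):
        with cond:
            cond.wait_for(lambda: has_data)      # don't read before a write
            print("read", area)
            has_data = False
            cond.notify()                        # signal: area is free again

w = threading.Thread(target=writer)
r = threading.Thread(target=reader)
w.start(); r.start(); w.join(); r.join()
```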
Designing software as a group of cooperating processes has been made simpler by the concept of “threads.” A single process may contain several independently executing streams of instructions (threads) that work together as a coherent whole. One thread might, for example, handle error signals, another might send a message about the error to the user, and a third might execute the actual task of the process. Modern operating systems provide management services (e.g., scheduling, synchronization) for such multithreaded processes.
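As a minimal illustration (the worker function and data are invented, and real multithreaded programs divide labor less symmetrically), the sketch below runs three threads inside one Python process; because all threads of a process share its memory, a lock protects the list they update in common.

```python
# Sketch of a multithreaded process: three threads share one address space.
import threading

results = []             # shared by every thread of this process
lock = threading.Lock()  # mutual exclusion for the shared list

def worker(part):
    value = sum(part)    # this thread's share of the overall task
    with lock:
        results.append(value)

parts = [[1, 2], [3, 4], [5, 6]]
threads = [threading.Thread(target=worker, args=(p,)) for p in parts]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("total:", sum(results))  # total: 21
```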
Virtual memory
Another area of operating-system research has been the design of virtual memory. Virtual memory is a scheme that gives users the illusion of working with a large block of contiguous memory space (perhaps even larger than real memory), when in actuality most of their program and data reside on auxiliary storage (disk). Fixed-size blocks (pages) or variable-size blocks (segments) of the job are read into main memory as needed. Questions such as how much actual main memory space to allocate to users and which page should be returned to disk (“swapped out”) to make room for an incoming page must be addressed in order for the system to execute jobs efficiently. Some virtual memory issues must be continually reexamined; for example, the optimal page size may change as main memory becomes larger and faster.
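One of the questions raised above, which page to swap out, can be sketched concretely. The function and the page-reference string below are hypothetical; the code simulates least-recently-used (LRU) replacement, one common policy, for a main memory that holds a fixed number of page frames.

```python
# Sketch of LRU page replacement (hypothetical page-reference string).
from collections import OrderedDict

def lru_faults(references, frames):
    """Count page faults when main memory holds `frames` pages."""
    memory = OrderedDict()  # pages in memory, least recently used first
    faults = 0
    for page in references:
        if page in memory:
            memory.move_to_end(page)        # hit: now most recently used
        else:
            faults += 1                     # fault: fetch page from disk
            if len(memory) == frames:
                memory.popitem(last=False)  # swap out least recently used
            memory[page] = True
    return faults

print(lru_faults([1, 2, 3, 1, 4, 2], frames=3))  # 5 faults
```

Changing the number of frames, or the policy itself, changes the fault count, which is one reason such parameters are periodically reexamined as hardware evolves.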