Multiprocessor design
Creating a multiprocessor from a number of uniprocessors (single-CPU machines) requires physical links and a mechanism for communication among the processors so that they may operate in parallel. Tightly coupled multiprocessors share memory and hence may communicate by storing information in memory accessible by all processors. Loosely coupled multiprocessors, including computer networks (see the section Network protocols), communicate by sending messages to each other across the physical links. Computer scientists investigate various aspects of such multiprocessor architectures. For example, the possible geometric configurations in which hundreds or even thousands of processors may be linked together are examined to find the geometry that best supports computations. A much-studied topology is the hypercube, in which each processor is connected directly to some fixed number of neighbours: two for the two-dimensional square, three for the three-dimensional cube, and similarly for the higher-dimensional hypercubes. Computer scientists also investigate methods for carrying out computations on such multiprocessor machines: for example, algorithms to make optimal use of the architecture, measures to avoid conflicts as data and instructions are transmitted among processors, and so forth. The machine-resident software that makes possible the use of a particular machine, in particular its operating system (see below Operating systems), is in many ways an integral part of its architecture.
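The hypercube's regular structure is easy to express in code: label each of the 2^d processors of a d-dimensional hypercube with a d-bit number, and link two processors exactly when their labels differ in a single bit, so that every processor has d neighbours. The short Python sketch below illustrates only that labelling scheme; it does not describe any particular machine.

```python
# Sketch: neighbours of a node in a d-dimensional hypercube network.
# Node labels are d-bit integers; two nodes are directly linked when
# their labels differ in exactly one bit, so each node has d neighbours.

def hypercube_neighbours(node: int, d: int) -> list[int]:
    """Return the labels of the d processors directly linked to `node`."""
    return [node ^ (1 << bit) for bit in range(d)]

# In a 3-dimensional hypercube (an ordinary cube of 8 processors),
# node 0 (binary 000) is linked to 1 (001), 2 (010), and 4 (100).
print(hypercube_neighbours(0, 3))   # [1, 2, 4]
```

Routing algorithms exploit the same property: a message can travel between any two nodes in at most d steps by correcting one differing bit of the address at a time.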
Network protocols
Another important architectural area is the computer communications network, in which computers are linked together via computer cables, infrared light signals, or low-power radio-wave transmissions over short distances to form local area networks (LANs) or via telephone lines, television cables, or satellite links to form wide area networks (WANs). By the 1990s the Internet, a network of networks, had made it feasible for nearly all computers in the world to communicate. Linking computers physically is easy; the challenge for computer scientists has been the development of protocols (standardized rules for the format and exchange of messages) that allow processes running on host computers to interpret the signals they receive and to engage in meaningful “conversations” in order to accomplish tasks on behalf of users. Network protocols also include flow control, which keeps a data sender from swamping a receiver with messages it has no time to process or space to store, and error control, which involves error detection and the automatic resending of messages to compensate for errors in transmission. For some of the technical details of error detection and error correction, see the article information theory.
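The “detect and resend” idea behind error control can be sketched in a few lines. The Python fragment below is only a toy model: the noisy_link function is an invented stand-in for the physical channel, and a CRC-32 checksum stands in for whatever error-detecting code a real protocol would use. The sender keeps resending a frame until the receiver verifies that it arrived intact.

```python
# Toy model of error control: the sender attaches a checksum, the receiver
# verifies it, and a damaged frame is automatically resent.
import random
import zlib

def frame(payload: bytes) -> bytes:
    """Prepend a CRC-32 checksum (4 bytes) to the payload."""
    return zlib.crc32(payload).to_bytes(4, "big") + payload

def check(framed: bytes) -> bytes | None:
    """Return the payload if the checksum matches, else None (error detected)."""
    crc, payload = framed[:4], framed[4:]
    return payload if zlib.crc32(payload).to_bytes(4, "big") == crc else None

def noisy_link(framed: bytes, error_rate: float = 0.3) -> bytes:
    """Simulate a transmission error by flipping one bit some of the time."""
    if random.random() < error_rate:
        i = random.randrange(len(framed))
        framed = framed[:i] + bytes([framed[i] ^ 0x01]) + framed[i + 1:]
    return framed

def send_with_retry(payload: bytes, max_tries: int = 5) -> bytes:
    """Resend until the receiver accepts a clean frame (automatic repeat request)."""
    for _ in range(max_tries):
        received = check(noisy_link(frame(payload)))
        if received is not None:
            return received            # receiver got an undamaged copy
    raise RuntimeError("link too unreliable")

print(send_with_retry(b"hello, network"))
```

Flow control works in the opposite direction, with the receiver telling the sender how much data it is prepared to accept at a time; real protocols such as TCP combine both mechanisms.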
The standardization of protocols has been an international effort for many years. Since it would otherwise be impossible for different kinds of machines running diverse operating systems to communicate with one another, the key concern has been that system components (computers) be “open,” i.e., open for communication with other open components. This terminology comes from the Open Systems Interconnection (OSI) communication standards, established by the International Organization for Standardization. The OSI reference model specifies protocol standards in seven “layers.” The layering provides a modularization of the protocols and hence of their implementations. Each layer is defined by the functions it relies upon from the layer below it and by the services it provides to the layer above it. At the lowest level, the physical layer, rules for the transport of bits across a physical link are defined. Next, the data-link layer handles standard-size “packets” of data bits and adds reliability in the form of error detection and flow control. The network and transport layers (often combined in implementations) break up messages into the standard-size packets and route them to their destinations. The session layer supports interactions between application processes on two hosts (machines). For example, it provides a mechanism with which to insert checkpoints (saving the current status of a task) into a long file transfer so that, in case of a failure, only the data sent after the last checkpoint need to be retransmitted. The presentation layer is concerned with such functions as the transformation of data encodings, so that heterogeneous systems may engage in meaningful communication. At the highest, or application, layer are protocols that support specific applications. An example of such an application is the transfer of files from one host to another. Another allows a user working at any kind of terminal or workstation to access any host as if the user were local.
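One way to picture the layering is as successive encapsulation: each layer treats whatever it receives from the layer above as opaque data and prefixes its own header before handing it down, and the receiving host strips the headers off in the reverse order. The sketch below is purely schematic; the layer names follow the OSI model, but the “headers” are invented placeholders rather than real protocol fields.

```python
# Schematic of OSI-style encapsulation: each layer wraps the data handed
# down from the layer above with its own (invented) header.
LAYERS = ["application", "presentation", "session",
          "transport", "network", "data-link", "physical"]

def send_down(message: str) -> str:
    """Wrap the message with one placeholder header per layer, top to bottom."""
    unit = message
    for layer in LAYERS:
        unit = f"[{layer}-hdr]{unit}"
    return unit

def receive_up(unit: str) -> str:
    """On the receiving host, strip the headers in the reverse order."""
    for layer in reversed(LAYERS):
        prefix = f"[{layer}-hdr]"
        assert unit.startswith(prefix), f"malformed {layer} header"
        unit = unit[len(prefix):]
    return unit

wire = send_down("GET file.txt")
print(wire)              # physical-layer header outermost, application header innermost
print(receive_up(wire))  # original message recovered: GET file.txt
```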
Distributed computing
The building of networks and the establishment of communication protocols have led to distributed systems, in which computers linked in a network cooperate on tasks. A distributed database system, for example, consists of databases (see the section Information systems and databases) residing on different network sites. Data may be deliberately replicated on several different computers for enhanced availability and reliability, or an enterprise may simply find itself with distributed data because the computers on which its databases already reside have become linked. Software that provides coherent access to such distributed data then forms a distributed database management system.
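A minimal sketch of the availability argument for replication: if a record is copied to several sites, a read can be answered by whichever copy happens to be reachable. The “sites” below are just in-memory dictionaries standing in for remote databases; a real distributed database management system would also have to keep the copies consistent when the data are updated.

```python
# Sketch: reading a replicated record from whichever copy is available.
# The sites and data are hypothetical stand-ins for remote databases.
replicas = {
    "db1.example.com": None,                            # this site is down
    "db2.example.com": {"acct-42": "balance: 100.00"},  # working copy
    "db3.example.com": {"acct-42": "balance: 100.00"},  # working copy
}

def read_replicated(key: str) -> str:
    """Return the value from the first reachable replica holding `key`."""
    for site, data in replicas.items():
        if data is not None and key in data:
            return data[key]           # one working copy is enough for a read
    raise LookupError(f"no reachable replica holds {key!r}")

print(read_replicated("acct-42"))      # served by db2 even though db1 is down
```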
Client-server architecture
The client-server architecture has become important in designing systems that reside on a network. In a client-server system, one or more clients (processes) and one or more servers (also processes, such as database managers or accounting systems) reside on various host sites of a network. Client-server communication is supported by facilities for interprocess communication both within and between hosts. Clients and servers together allow for distributed computation and presentation of results. Clients interact with users, providing an interface to allow the user to request services of the server and to display the results from the server. Clients usually do some interpretation or translation, formulating commands entered by the user into the formats required by the server. Clients may provide system security by verifying the identity and authorization of the users before forwarding their commands. Clients may also check the validity and integrity of user commands; for example, they may restrict bank account transfers to certain maximum amounts. In contrast, servers never initiate communications; instead they wait to respond to requests from clients. Ideally, a server should provide a standardized interface to clients that is transparent, i.e., an interface that does not require clients to be aware of the specifics of the server system (hardware and software) that is providing the service. Because local area networks are now common, the client-server architecture is very attractive: client processes run on individual workstations or personal computers, while servers are located elsewhere on the network, usually on more powerful machines. In some discussions the machines on which client and server processes reside are themselves referred to as clients and servers.
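The division of labour just described can be sketched with ordinary TCP sockets: the server binds to an address and waits, while the client checks the user's request, translates it into the format the server expects, and displays the reply. The port number and the “TRANSFER” command format below are invented for the example, and both processes run on one machine only for convenience.

```python
# Minimal client-server sketch over loopback TCP sockets (invented protocol).
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 54321        # a port assumed to be free for the demo

def server() -> None:
    """Server process: never initiates contact, only answers client requests."""
    with socket.create_server((HOST, PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024).decode()
            ok = request.startswith("TRANSFER ")
            conn.sendall((f"OK {request}" if ok else "ERR unknown command").encode())

def client(amount: float) -> str:
    """Client process: validates the user's request, then forwards it in the server's format."""
    if amount > 1000:                  # client-side check, e.g. a transfer limit
        return "rejected locally: transfer limit exceeded"
    with socket.create_connection((HOST, PORT)) as sock:
        sock.sendall(f"TRANSFER {amount:.2f}".encode())
        return sock.recv(1024).decode()

threading.Thread(target=server, daemon=True).start()
time.sleep(0.2)                        # give the server a moment to start listening
print(client(250.00))                  # e.g. "OK TRANSFER 250.00"
```

In a real deployment the two processes would run on different hosts, and the client would need to know only the server's advertised address and message format, not how the server is implemented.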
Middleware
A major disadvantage of a pure client-server approach to system design is that clients and servers must be designed together. That is, to work with a particular server application, the client must be using compatible software. One common solution is the three-tier client-server architecture, in which a middle tier, known as middleware, is placed between the server and the clients to handle the translations necessary for different client platforms. Middleware also works in the other direction, allowing clients easy access to an assortment of applications on heterogeneous servers. For example, middleware could allow a company’s sales force to access data from several different databases and to interact with customers who are using different types of computers.
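The translating role of the middle tier can be pictured as a thin dispatch layer: every client issues requests in one uniform shape, and the middleware maps each request onto whatever interface the target server actually exposes. The two backend “servers” in the sketch below are hypothetical placeholder functions with deliberately different calling conventions.

```python
# Sketch of the three-tier idea: one uniform client request format,
# translated by middleware for two heterogeneous (hypothetical) servers.

def legacy_sales_db(query_string: str) -> str:
    """Older server that expects a semicolon-delimited query string."""
    return f"sales rows for [{query_string}]"

def modern_crm_api(payload: dict) -> str:
    """Newer server that expects a dictionary-style request."""
    return f"CRM records for {payload['customer']}"

def middleware(request: dict) -> str:
    """Translate a uniform client request into the format its target needs."""
    if request["source"] == "sales":
        return legacy_sales_db(f"customer={request['customer']};region={request['region']}")
    if request["source"] == "crm":
        return modern_crm_api({"customer": request["customer"]})
    raise ValueError(f"unknown source {request['source']!r}")

# Any client, on any platform, issues the same simple request shape:
print(middleware({"source": "sales", "customer": "Acme", "region": "EU"}))
print(middleware({"source": "crm", "customer": "Acme", "region": "EU"}))
```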