Computational methods and numerical analysis

The mathematical methods needed for computations in engineering and the sciences must be transformed from the continuous to the discrete in order to be carried out on a computer. For example, the computer integration of a function over an interval is accomplished not by applying integral calculus to the function expressed as a formula but rather by approximating the area under the function graph by a sum of geometric areas obtained from evaluating the function at discrete points. Similarly, the solution of a differential equation is obtained as a sequence of discrete points determined, roughly speaking, by approximating the true solution curve by a sequence of short tangential line segments. When discretized in this way, many problems can be recast in the form of an equation involving a matrix (a rectangular array of numbers) that is solvable with techniques from linear algebra. Numerical analysis is the study of such computational methods. Several factors must be considered when applying numerical methods: (1) the conditions under which the method yields a solution, (2) the accuracy of the solution, (3) whether the process is stable, in the sense that errors do not grow as the computation proceeds (a particular concern for iterative methods), and (4) how long (in terms of the number of steps) it will generally take to obtain a solution of the desired accuracy.
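
As a concrete illustration, the short Python sketch below approximates an integral by summing trapezoid areas obtained from discrete function evaluations and traces the solution of a differential equation by following tangential line segments (Euler's method). The function names and sample problems are illustrative, not taken from the text above.

```python
# A minimal sketch of the discretization described above.

def trapezoid_integral(f, a, b, n=1000):
    """Approximate the integral of f over [a, b] by summing n trapezoid areas."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)          # evaluate the function at discrete points
    return total * h

def euler_ode(f, y0, t0, t1, n=1000):
    """Approximate y' = f(t, y), y(t0) = y0, by following short tangent segments."""
    h = (t1 - t0) / n
    t, y = t0, y0
    points = [(t, y)]
    for _ in range(n):
        y += h * f(t, y)               # step along the tangent line at (t, y)
        t += h
        points.append((t, y))
    return points

# Example: integrate x**2 on [0, 1] (exact value 1/3) and solve y' = y, y(0) = 1
# up to t = 1 (exact value e ≈ 2.71828).
print(trapezoid_integral(lambda x: x * x, 0.0, 1.0))
print(euler_ode(lambda t, y: y, 1.0, 0.0, 1.0)[-1])
```

The accuracy of both approximations improves as the number of discrete steps n grows, at the cost of more computation, which is exactly the trade-off numerical analysis studies.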

The need to study ever-larger systems of equations, combined with the development of large and powerful multiprocessors (supercomputers) that allow many operations to proceed in parallel by assigning them to separate processing elements, has sparked much interest in the design and analysis of parallel computational methods that may be carried out on such parallel machines.

Data structures and algorithms

A major area of study in computer science has been the storage of data for efficient search and retrieval. The main memory of a computer is linear, consisting of a sequence of memory cells that are numbered 0, 1, 2,… in order. Similarly, the simplest data structure is the one-dimensional, or linear, array, in which array elements are numbered with consecutive integers and array contents may be accessed by the element numbers. Data items (a list of names, for example) are often stored in arrays, and efficient methods are sought to handle the array data. Search techniques must address, for example, how a particular name is to be found. One possibility is to examine the contents of each element in turn. If the list is long, it is important to sort the data first—in the case of names, to alphabetize them. Just as the alphabetizing of names in a telephone book greatly facilitates their retrieval by a user, the sorting of list elements significantly reduces the search time required by a computer algorithm as compared to a search on an unsorted list. Many algorithms have been developed for sorting data efficiently. These algorithms have application not only to data structures residing in main memory but even more importantly to the files that constitute information systems and databases.
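
The benefit of sorting can be sketched in a few lines of Python. The comparison below, using an invented list of names, pairs a straightforward element-by-element search with a binary search that halves the remaining interval at every step; the halving is what makes sorted data so much faster to query.

```python
from bisect import bisect_left

def linear_search(items, target):
    """Examine each element in turn; up to len(items) comparisons."""
    for i, item in enumerate(items):
        if item == target:
            return i
    return -1

def binary_search(sorted_items, target):
    """Halve the search interval each step; about log2(len) comparisons."""
    i = bisect_left(sorted_items, target)
    if i < len(sorted_items) and sorted_items[i] == target:
        return i
    return -1

names = sorted(["Lovelace", "Turing", "Hopper", "Knuth", "Dijkstra"])
print(linear_search(names, "Knuth"))   # works on sorted or unsorted data
print(binary_search(names, "Knuth"))   # requires sorted data, but far fewer steps
```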

Although data items are stored consecutively in memory, they may be linked together by pointers (essentially, memory addresses stored with an item to indicate where the “next” item or items in the structure are found) so that the data can be traversed in a logical order different from the order in which the items are physically stored. An example of such a structure is the linked list, in which noncontiguously stored items may be accessed in a prespecified order by following the pointers from one item in the list to the next. The list may be circular, with the last item pointing to the first, or it may have pointers in both directions to form a doubly linked list. Algorithms have been developed for efficiently manipulating such lists: searching for, inserting, and removing items.
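
A minimal Python sketch of a singly linked list is given below; the class and method names are illustrative, but the structure shows how following pointers supports searching for, inserting, and removing items without moving the other items in memory.

```python
class Node:
    def __init__(self, value, next_node=None):
        self.value = value
        self.next = next_node            # "pointer" to the next item, or None at the end

class LinkedList:
    def __init__(self):
        self.head = None

    def insert_front(self, value):
        """Insert by creating a node that points at the old first item."""
        self.head = Node(value, self.head)

    def find(self, value):
        """Follow the pointers from item to item until the value is found."""
        node = self.head
        while node is not None:
            if node.value == value:
                return node
            node = node.next
        return None

    def remove(self, value):
        """Unlink the first node holding value by redirecting one pointer."""
        prev, node = None, self.head
        while node is not None:
            if node.value == value:
                if prev is None:
                    self.head = node.next    # removed item was the first one
                else:
                    prev.next = node.next    # bypass the removed item
                return True
            prev, node = node, node.next
        return False

lst = LinkedList()
for name in ["Ada", "Grace", "Alan"]:
    lst.insert_front(name)
print(lst.find("Grace") is not None)   # True
print(lst.remove("Ada"))               # True
```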

Pointers provide the ability to link data in other ways. Graphs, for example, consist of a set of nodes (items) and linkages between them (known as edges). Such a graph might represent a set of cities and the highways joining them or the layout of circuit elements and connecting wires on a VLSI chip. Typical graph algorithms include solutions to traversal problems, such as how to follow the links from node to node (perhaps searching for a node with a particular property) in such a way that each node is visited only once. A related problem is the determination of the shortest path between two given nodes. (For background on the mathematical theory of networks, see the article graph theory.) A problem of practical interest in designing any network is to determine how many “broken” links can be tolerated before communications begin to fail. Similarly, in VLSI chip design it is important to know whether the graph representing a circuit is planar, that is, whether it can be drawn in two dimensions without any links crossing each other.
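
The following Python sketch illustrates one such traversal: a breadth-first search that visits each node at most once and, in an unweighted graph, also yields a shortest path between two given nodes. The small city graph is invented for the example.

```python
from collections import deque

def shortest_path(graph, start, goal):
    """graph: dict mapping each node to a list of adjacent nodes (its edges)."""
    queue = deque([start])
    came_from = {start: None}            # also serves as the set of visited nodes
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:      # walk the recorded links back to the start
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for neighbor in graph[node]:
            if neighbor not in came_from:    # visit each node only once
                came_from[neighbor] = node
                queue.append(neighbor)
    return None                           # goal not reachable from start

cities = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}
print(shortest_path(cities, "A", "E"))   # ['A', 'B', 'D', 'E']
```

Because breadth-first search explores nodes in order of their distance from the start, the first time it reaches the goal it has found a path with the fewest possible links, which is the shortest-path notion relevant to unweighted networks such as the one above.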
