Parallel Computing

Although distributed computing is a distinct method for harnessing the unused power of networked computers, it bears a close resemblance to another multiple-processor computing architecture: parallel computing, the practice of employing multiple processors at the same location to break down a computing task.  In fact, because the two are so similar, many authors fail to distinguish between the two computing strategies.  For a clear distinction between the two approaches, we once again look to Leopold’s book:

“Parallel computing splits an application up into tasks that are executed at the same time whereas distributed computing splits an application up into tasks that are executed at different locations using different resources” (Leopold 3).

Parallel computing is a computational method that closely resembles distributed computing, and it is, for the most part of this discussion, outside the scope of this website.  The basics of parallel computing are explained well in Claudia Leopold’s text entitled Parallel and Distributed Computing: A Survey of Models, Paradigms, and Approaches.  The basic practice of parallel computing splits an application or process into subtasks that are solved at the same time (sometimes in a “tightly coupled manner”).  Each subtask must be able to be handled individually by any given machine in a system of “homogeneous architectures,” which may or may not have shared memory.
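To make the idea concrete, here is a minimal sketch (not taken from Leopold’s text) of splitting one computation into subtasks that run at the same time on the processor cores of a single machine, using Python’s standard concurrent.futures module.  The chunk sizes and worker count are illustrative assumptions.

# A minimal sketch: one application (summing a list) is split into
# subtasks that execute at the same time on local processor cores.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    # Each subtask works on its own slice of the data independently.
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    # Split the work into four subtasks (four is an arbitrary choice here).
    chunks = [data[i::4] for i in range(4)]
    with ProcessPoolExecutor(max_workers=4) as pool:
        # The subtasks run concurrently, then their partial results
        # are combined into the final answer.
        total = sum(pool.map(partial_sum, chunks))
    print(total)  # 499999500000

Unlike the distributed case, every subtask here runs on processors at the same location; whether those processors share memory is a property of the particular parallel architecture.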
