Computer Sciences and Information Technology
Topic:

Parallel Processing and the Future
Assignment:
– Draw a graph to demonstrate how parallel processing works, and provide an explanation.

Assignment Expectations:
– Minimum 3 pages, single spaced, excluding cover page and references.
– Address all questions in this assignment.
– Demonstrates writing proficiency at the academic level of the course. Assignment is well organized and follows the structure of a well-written paper.
– Uses relevant and credible sources to support assertions.
– Uses in-text citations. References are properly formatted in APA style.

Introduction
Parallel processing can be described as a computing methodology in which two or more CPUs or processors run together to handle different sections of a given task (Waivio, 2007). By breaking a task into parts shared among multiple processors, it reduces the overall time taken to run a specific program. Systems with two or more CPUs can undertake parallel processing, as can the multi-core processors common in most computers today. Parallel processing is known to enhance performance and efficiency when performing different tasks while also reducing power consumption. Waivio (2007) suggests that today parallel processing is used to perform complex computations and data-intensive tasks. Using a graph, this discussion gives a brief description of parallel processing, its types, its benefits, and an analysis of its future outlook.
How Parallel Processing Works
Although modern microprocessors tend to be small, they are usually very powerful, as they can interpret millions of instructions within a matter of seconds. However, some computational problems are so complex that even a powerful microprocessor would take a long time to solve them (Waivio, 2007). As such, computer scientists often apply different tactics to solve this problem, one of them being to build still more powerful microprocessors. Unfortunately, building more powerful microprocessors can be a very expensive and intense production process that could take years to accomplish. To overcome these challenges, computer scientists prefer to employ parallel processing.
Parallel processing involves using a minimum of two processors to handle the parts of a specific task. Computer scientists break down a complex task into different parts through the use of special software designed specifically for this purpose (Waivio, 2007). Each of these parts is then assigned to a dedicated processor. Each processor then solves its part of the computational problem, after which the software is used again to reassemble the data to conclude the original complex task (Waivio, 2007). During parallel processing, each processor operates normally while performing operations as instructed. Moreover, the processors rely on the special software to maintain communication with each other, allowing them to stay in sync with changes occurring in data values.
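The split, solve, and reassemble cycle described above can be sketched with Python's standard multiprocessing module. The task chosen here (summing squares over a range), the chunking helper, and the worker count are illustrative assumptions, not a prescription.

```python
# A minimal sketch of parallel processing: break a problem into parts,
# hand each part to its own processor, then reassemble the results.
from multiprocessing import Pool

def sum_of_squares(chunk):
    """Worker: solve one part of the larger problem."""
    lo, hi = chunk
    return sum(n * n for n in range(lo, hi))

def split(n, parts):
    """The 'special software' step: break the problem into parts."""
    step = n // parts
    return [(i * step, n if i == parts - 1 else (i + 1) * step)
            for i in range(parts)]

if __name__ == "__main__":
    chunks = split(1_000_000, parts=4)               # break the task apart
    with Pool(processes=4) as pool:
        partials = pool.map(sum_of_squares, chunks)  # each part runs on its own process
    total = sum(partials)                            # reassemble the results
    assert total == sum(n * n for n in range(1_000_000))
    print(total)
```

Note that the splitting and reassembling logic is ordinary serial code; only the `pool.map` call fans the work out across processors.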
Serial Processing
Traditionally, computer processing was done as serial computation, where a given task or problem was broken down into a discrete series of instructions. These instructions were executed sequentially, and they had to be executed on a single processor. Unfortunately, only one instruction could execute at any given moment in time. This process is shown clearly in the diagram below.
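As a minimal illustration of this sequential model, the sketch below (with illustrative names) breaks a small problem into a discrete series of instructions and executes them one at a time on a single processor.

```python
# Serial processing: a discrete series of instructions, executed in order,
# one at a time, on a single processor.
def run_serial(instructions, state=0):
    for instruction in instructions:  # exactly one instruction at a time
        state = instruction(state)
    return state

# A "problem" decomposed into a discrete series of instructions:
program = [lambda x: x + 10, lambda x: x * 3, lambda x: x - 5]

result = run_serial(program)  # 0 -> 10 -> 30 -> 25
print(result)
```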

Figure 1: Serial Processing – Retrieved from (Barney, 2019)
Parallel Processing

Figure 2: Parallel processing – Retrieved from (Barney, 2019)
As shown in the diagram above, parallel processing involves using a minimum of two processors to handle the parts of a specific problem. A complex problem is broken down into different parts, or instructions, through the use of special software designed specifically for this task. Each of these instructions is then assigned to a dedicated processor, as shown above. Each processor then solves its part of the computational problem, after which the software is used again to reassemble the data to conclude the original complex task.
The difference between serial and parallel processing is that while parallel processing completes numerous tasks using two or more processors, sequential or serial processing completes only a single task at any given time, using only one processor (Pissaloux, 2013). This means that when a computer has to complete multiple tasks under serial processing, it has to accomplish one task after another. As such, compared to parallel processing, serial processing takes a longer time to complete a given complex task.
Types of parallel processing
Parallel processing exists in different types, the most commonly used being SIMD and MIMD. Single Instruction Multiple Data (SIMD) is a type of parallel processing in which a computer has two or more processors following the same instruction set, with each processor handling different data (Pissaloux, 2013). SIMD parallel processing is commonly used to handle and analyze large datasets based on a similarly specified benchmark. On the other hand, Multiple Instruction Multiple Data (MIMD) involves each computer having two or more processors getting data from different data streams (Pissaloux, 2013).
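The distinction between the two types can be sketched conceptually in Python. This models the programming idea only, not actual hardware SIMD lanes, and all names are illustrative.

```python
# SIMD: one instruction, applied to multiple data elements.
def simd(instruction, data):
    return [instruction(x) for x in data]

# MIMD: multiple instructions, each applied to its own data stream.
def mimd(instruction_streams, data_streams):
    return [instr(x) for instr, x in zip(instruction_streams, data_streams)]

print(simd(lambda x: x * 2, [1, 2, 3, 4]))                # same operation on every element
print(mimd([lambda x: x + 1, lambda x: x * x], [10, 5]))  # each stream runs its own program
```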
The Benefits of Parallel Processing
There are numerous benefits associated with parallel processing, key among them being enhanced performance and efficiency when performing different tasks, along with reduced power consumption. Parallel processing helps overcome the limits of single-CPU computing by enhancing the available memory and overall performance (Danelutto, 2004). Although modern microprocessors tend to be small, they are usually very powerful, as they can interpret millions of instructions within a matter of seconds. However, some computational problems are so complex that even a powerful microprocessor would take a long time to solve them (Danelutto, 2004). Using parallel processing helps to solve this problem.
By breaking down a complex task into different parts through the use of special software designed specifically for this purpose, parallel processing helps to solve problems that could not be solved on just one CPU. Moreover, these tasks are solved within a reasonable timeframe (Waivio, 2007). With each processor solving its part of the computational problem, the specialized software is used to reassemble the data to conclude the original complex task (Waivio, 2007). This allows large problems to be solved faster and more efficiently. Waivio (2007) adds that with most computers applying parallel processing today, it is becoming easier to build smarter devices such as the iPhone 4S, which has two cores.
The Future of Parallel Processing
The future of parallel processing looks very promising due to the numerous benefits that come with it. Just a while ago, parallel computers were only available in research laboratories, where they were mainly used for computation-intensive applications such as numerical simulations. However, that has changed, with many parallel computers now available in today's market (Waivio, 2007). In the future, parallel processing will be applied to execute data-intensive applications in various disciplines such as commerce, science, and engineering.
As the number of processors in symmetric multiprocessing systems increases, the overall time required to propagate data from one part of the system to the others also increases. Pissaloux (2013) notes that although the performance and efficiency of a given process increase with the number of parallel processors in place, the performance benefits of adding more processors begin to diminish. Long propagation times have been a serious problem affecting parallel processing, and they can be overcome using message-passing systems (Pissaloux, 2013). As such, future systems will require programs that can share data and send messages to one another to announce that specific operands have been assigned new values. Instead of broadcasting the new value of an operand to all parts of the system, the new value is communicated only to the programs requiring knowledge of it. Moreover, in the future, a network that supports message transfers between different programs will be used instead of shared memory (Pissaloux, 2013). This simplifies the design of the numerous processors so that they can work together efficiently within one system.
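This message-passing style can be sketched with Python's standard multiprocessing module: instead of sharing memory, a producer announces the new value of an operand only on the channel of the process that needs it. The operand name and value here are illustrative, and real message-passing systems such as MPI are far more elaborate.

```python
# Message passing between processes: values travel over explicit channels,
# not shared memory.
from multiprocessing import Process, Queue

def producer(updates):
    """Announce new operand values on the channel that needs them."""
    updates.put(("operand_a", 42))  # operand_a has been assigned a new value
    updates.put(None)               # sentinel: no more updates

def consumer(updates, results):
    """Learn new values from messages, not from shared memory."""
    values = {}
    while (msg := updates.get()) is not None:
        name, value = msg
        values[name] = value
    results.put(values)

if __name__ == "__main__":
    updates, results = Queue(), Queue()
    p = Process(target=producer, args=(updates,))
    c = Process(target=consumer, args=(updates, results))
    p.start(); c.start()
    print(results.get())  # the consumer learned the new value via messages
    p.join(); c.join()
```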
Today, new applications are emerging that demand faster computers. Parallel processing is mostly used in commercial applications where a computer is required to process large data volumes in a very sophisticated manner (Waivio, 2007). These applications include graphics, decision support, medical diagnosis, and virtual reality, among others. As such, we can conclude that commercial applications will have a huge impact on how future parallel processing and computer architecture are designed. Scientific applications will also remain important users of parallel computing technology.
Conclusion
In summary, parallel processing is used today to perform complex computations and data-intensive tasks because it can reduce the overall time taken to run a specific program. Parallel processing involves using a minimum of two processors to handle the parts of a specific task. Computer scientists break down a complex task into different parts through the use of special software designed specifically for this purpose. Each of these parts is then assigned to a dedicated processor. With each processor solving its part of the computational problem, the specialized software is used to reassemble the data to conclude the original complex task. This allows large problems to be solved faster and more efficiently. Future systems will require programs that can share data and send messages to one another to announce that specific operands have been assigned new values. Parallel processing exists in different types, the most common being SIMD and MIMD. SIMD is a type of parallel processing where a computer has two or more processors following the same instruction set, with each processor handling different data; it is commonly used to handle and analyze large datasets based on a similarly specified benchmark. MIMD, on the other hand, involves each computer having two or more processors getting data from different data streams. In conclusion, the future of parallel processing looks very promising due to the numerous benefits that come with it.

References
Barney, B. (2019). Introduction to Parallel Computing. Retrieved from https://computing.llnl.gov/tutorials/parallel_comp/
Danelutto, M. (2004). Parallel processing. Berlin: Springer.
Pissaloux, E. (2013). Parallel vision processing and dedicated parallel architectures. Proceedings International Parallel and Distributed Processing Symposium. DOI: 10.1109/ipdps.2003.1213422
Waivio, N. (2007). Parallel test description and analysis of parallel test system speedup through Amdahl’s law. 2007 IEEE Autotestcon. DOI: 10.1109/autest.2007.4374292
