
Concurrency Vs Parallelism

Concurrency refers to the ability of a system to make progress on multiple tasks over overlapping time periods, often by interleaving their execution on a single processor. Parallelism, on the other hand, involves executing multiple tasks simultaneously by using multiple processing resources, such as separate cores. In essence, concurrency is about dealing with many things at once, while parallelism is about doing many things at once.
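
The distinction can be made concrete with a short example. The following C++ sketch is only an illustration and is not taken from any of the excerpted books; the task names, step counts, and thread usage are invented here. A single thread first interleaves two tasks (concurrency), and then two threads run the tasks at the same time (parallelism). It assumes a C++17 compiler with threads enabled (e.g. g++ -std=c++17 -pthread).

    #include <iostream>
    #include <thread>

    // One "step" of an illustrative task; the names are placeholders.
    void do_step(const char* task, int step) {
        std::cout << task << " step " << step << '\n';
    }

    int main() {
        // Concurrency: a single thread makes progress on both tasks
        // by interleaving their steps.
        for (int step = 0; step < 3; ++step) {
            do_step("A", step);
            do_step("B", step);
        }

        // Parallelism: two threads execute the tasks at the same time
        // on separate cores (output order is nondeterministic).
        std::thread a([] { for (int s = 0; s < 3; ++s) do_step("A", s); });
        std::thread b([] { for (int s = 0; s < 3; ++s) do_step("B", s); });
        a.join();
        b.join();
        return 0;
    }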

Written by Perlego with AI assistance

3 Key excerpts on "Concurrency Vs Parallelism"

Index pages curate the most relevant extracts from our library of academic textbooks. Each page has been created using an in-house natural language model (NLM), adding context and meaning to key research topics.
  • Software Engineering for Embedded Systems

    Methods, Practical Techniques, and Applications

• Robert Oshana, Mark Kraeling (Authors)
    • 2019 (Publication Date)
    • Newnes (Publisher)

    ...22 shows the scalable nature of data parallelism.
    Fig. 22 Data parallelism is scalable with the data size.

    In the example given in Fig. 23 an image is decomposed into sections or “chunks” and partitioned to multiple cores to process in parallel. The “image in” and “image out” management tasks are usually performed by one of the cores (an upcoming case study will go into this in more detail).
    Fig. 23 Data parallel approach.

    4.3 Task Parallelism

    Task parallelism distributes different applications, processes, or threads to different units. This can be done either manually or with the help of the operating system. The challenge with task parallelism is how to divide the application into multiple threads. For systems with many small units, such as a computer game, this can be straightforward. However, when there is only one heavy and well-integrated task the partitioning process can be more difficult and often faces the same problems associated with data parallelism.

    Fig. 24 is an example of task parallelism. Instead of partitioning data to different cores the same data are processed by each core (task), but each task is doing something different on the data.
    Fig. 24 Task parallel approach.

    Task parallelism is about functional decomposition. The goal is to assign tasks to distinct functions in the program. This can only scale to a constant factor. Each functional task, however, can also be data parallel. Fig. 25 shows this. Each of these functions (atmospheric, ocean, data fusion, surface, wind) can be allocated to a dedicated core, but only the scalability is constant.
    Fig. 25 Function allocation in a multicore system (scalability limited).

    5 Multicore Programming Models

    A “programming model” defines the languages and libraries that create an abstract view of a machine. For multicore programming the programming model should consider the following:

    • Control—this part of the programming model defines how parallelism is created and how dependencies (orderings) are enforced...

  • Intelligent Data Analysis for e-Learning

    Enhancing Security and Trustworthiness in Online Learning Systems

• Jorge Miguel, Santi Caballé, Fatos Xhafa (Authors)
    • 2016 (Publication Date)
    • Academic Press (Publisher)

    ...This multidisciplinary research group was formed by expert researchers on programming languages, instruction set architectures, interconnection protocols, circuit design, computer architecture, massively parallel computing, embedded hardware and software, compilers, scientific programming, and numerical analysis. One major conclusion of this report was that, since real world applications are naturally parallel and hardware is naturally parallel, what we need is a programming model, software system, and a supporting architecture that are naturally parallel. In addition, the main reasons and advantages for parallel computing and the reasons why parallelism is a topic of interest are [157]:

    1. The real world is inherently parallel, thus it is natural to express computing about the real world in a parallel way, or at least in a way that does not preclude parallelism.
    2. Parallelism makes more computational performance available than is available in any single processor, although getting this performance from parallel computers is not straightforward.
    3. There are limits to sequential computing performance that arise from fundamental physical limits.
    4. Parallel computing is more cost-effective for many applications than using sequential models.

    To exemplify the study, this chapter uses the MapReduce model and Hadoop framework, as parallel programming architectures and disciplines (see also Chapter 2 for further information). To this end, we will consider hardware and computer architecture topics, which are closely related to the particular MapReduce cluster implementation and deployment, while our purpose is to efficiently process massive data and use the analysis results to eventually enhance security in e-Learning.

    5.2.1 Parallel Processing for P2P Student Activity

    In Chapter 3, we presented a trustworthiness-based approach for the design of secure learning tasks in e-Learning groups...

  • Parallel Programming for Modern High Performance Computing Systems

    ...Furthermore, volunteer based computing systems are discussed that can be lower cost alternative approaches suitable for selected problems. In these cases, however, reliability of computations as well as privacy might be a concern. Finally, for completeness, a grid based approach is discussed as a way to integrate clusters into larger computing systems.

    Chapter 3 first describes main concepts related to parallelization. These include data partitioning and granularity, communication, allocation of data, load balancing and how these elements may impact execution time of a parallel application. Furthermore, the chapter introduces important metrics such as speed-up and parallel efficiency that are typically measured in order to evaluate the quality of parallelization. The chapter presents main parallel processing paradigms, their concepts, control and data flow and potential performance issues and optimizations. These are abstracted from programming APIs and are described in general terms and are then followed by implementations in following chapters.

    Chapter 4 introduces basic and important parts of selected popular APIs for programming parallel applications. For each API a sample application is presented. Specifically, the following APIs are presented:

    1. Message Passing Interface (MPI) for parallel applications composed of processes that can exchange messages between each other...
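
The data-parallel and task-parallel decompositions contrasted in the embedded-systems excerpt can be sketched in a few lines of C++. The sketch below is illustrative only and is not code from the book: the "image" is just a vector of integers, and the brighten, sum, and maximum operations are invented stand-ins for real processing stages. Compile with g++ -std=c++17 -pthread.

    #include <algorithm>
    #include <numeric>
    #include <thread>
    #include <vector>

    // Data parallelism: the same operation applied to different chunks of the data.
    void brighten_chunk(std::vector<int>& image, std::size_t begin, std::size_t end) {
        for (std::size_t i = begin; i < end; ++i) image[i] += 16;   // invented "processing"
    }

    int main() {
        std::vector<int> image(1024, 100);            // stand-in for the "image in" data
        const std::size_t cores = 4;                  // one worker per core
        const std::size_t chunk = image.size() / cores;

        // Data-parallel approach: partition the image into chunks, one chunk per core.
        std::vector<std::thread> workers;
        for (std::size_t c = 0; c < cores; ++c)
            workers.emplace_back(brighten_chunk, std::ref(image), c * chunk, (c + 1) * chunk);
        for (auto& w : workers) w.join();

        // Task-parallel approach: different operations (tasks) run over the same data.
        long sum = 0;
        int maximum = 0;
        std::thread t1([&] { sum = std::accumulate(image.begin(), image.end(), 0L); });
        std::thread t2([&] { for (int px : image) maximum = std::max(maximum, px); });
        t1.join();
        t2.join();
        return 0;
    }

As the excerpt notes, the data-parallel version scales with the data size (more chunks, more cores), while the task-parallel version scales only with the fixed number of distinct tasks.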
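
The e-Learning excerpt names MapReduce and Hadoop as its parallel programming model. The toy word count below shows only the shape of that model on a single machine: a parallel map phase followed by a merge/reduce phase. The shards, thread count, and sample documents are invented for illustration and have nothing to do with the book's actual Hadoop cluster.

    #include <map>
    #include <sstream>
    #include <string>
    #include <thread>
    #include <vector>

    using Counts = std::map<std::string, int>;

    // "Map" phase: each worker counts words in its own shard of the input documents.
    Counts map_shard(const std::vector<std::string>& docs) {
        Counts local;
        for (const auto& doc : docs) {
            std::istringstream in(doc);
            std::string word;
            while (in >> word) ++local[word];
        }
        return local;
    }

    int main() {
        // Two invented shards of documents, each handled by its own mapper thread.
        std::vector<std::string> shard_a = {"parallel systems scale", "parallel code"};
        std::vector<std::string> shard_b = {"systems are parallel by nature"};

        Counts partial_a, partial_b;
        std::thread m1([&] { partial_a = map_shard(shard_a); });
        std::thread m2([&] { partial_b = map_shard(shard_b); });
        m1.join();
        m2.join();

        // "Reduce" phase: merge the per-shard counts into one global word count.
        Counts total = partial_a;
        for (const auto& [word, n] : partial_b) total[word] += n;
        return 0;
    }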
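
Finally, the HPC excerpt lists MPI as a basic API for parallel applications built from processes that exchange messages, and it mentions speed-up and parallel efficiency as the usual quality metrics (conventionally, speed-up S(p) = T(1)/T(p) and efficiency E(p) = S(p)/p, where T(p) is the run time on p processing units). The minimal sketch below is not taken from the book: every non-zero rank sends one integer to rank 0. It would be built with an MPI compiler wrapper such as mpic++ and launched with mpirun.

    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);

        int rank = 0, size = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   // this process's id within the communicator
        MPI_Comm_size(MPI_COMM_WORLD, &size);   // total number of processes

        if (rank == 0) {
            // Rank 0 receives one integer message from every other rank.
            for (int src = 1; src < size; ++src) {
                int payload = 0;
                MPI_Recv(&payload, 1, MPI_INT, src, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                std::printf("rank 0 received %d from rank %d\n", payload, src);
            }
        } else {
            int payload = rank * 10;            // arbitrary example data
            MPI_Send(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }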