Concurrent Programming
Concurrent programming is a programming paradigm that allows multiple tasks to be executed simultaneously. It involves designing and implementing programs that can handle multiple threads of execution, each of which can run independently and concurrently with other threads. This approach can improve the performance and responsiveness of software applications.
Written by Perlego with AI-assistance
12 Key excerpts on "Concurrent Programming"
- eBook - PDF
- Arvind Kumar Bansal (Author)
- 2013 (Publication Date)
- Chapman and Hall/CRC (Publisher)
Chapter 8: Concurrent Programming Paradigm
BACKGROUND CONCEPTS: Abstract concepts in computation (Section 2.4); Abstractions and information exchange (Chapter 4); Control abstractions (Section 4.2); Discrete structure concepts (Section 2.2); Grammar (Section 3.2); Graphs (Section 2.3.6); Principle of locality (Section 2.4.8); Nondeterministic computation (Section 4.7); Operating system concepts (Section 2.5); Program and components (Section 1.4).
Concurrency is concerned with dividing a task into multiple subtasks and executing each subtask as independently as possible. There are two potential advantages of exploiting concurrency: (1) efficient execution of programs; and (2) efficient utilization of multiple resources, since each subtask can potentially use a different resource. With the available multiprocessor and multicore technology, concurrent execution of programs has tremendous potential for speeding up execution. The goal of exploiting concurrency is the speedup of large grand-challenge software, such as weather modeling, genome sequencing, designing aircraft with minimal drag, reasoning about nuclear particles, and air-traffic control. In recent years, because of the availability of multicore processors, concurrent execution is also available on personal computers. With the availability of multiple processors, it is natural to map multiple tasks onto different processors for efficient execution. Parallelization can be incorporated at many levels in solving a problem by (1) designing a new algorithm more suitable for parallel execution of a task, (2) taking an existing algorithm and identifying subtasks that can be done concurrently, (3) taking an existing sequential program and developing a smart compilation process for incorporating parallelism, and (4) writing a parallel program with concurrency constructs, as sketched below.
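A minimal sketch of option (4), writing a parallel program with explicit concurrency constructs, in C++. The task (summing an array) and all names are illustrative assumptions, not from Bansal's text; the point is that each subtask works on an independent slice, so the subtasks need no coordination until their partial results are combined.

```cpp
// A sketch of a parallel sum, assuming C++ threads; names are illustrative.
#include <algorithm>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    std::vector<int> data(1'000'000, 1);          // input for the overall task
    const unsigned workers =
        std::max(1u, std::thread::hardware_concurrency());
    std::vector<long long> partial(workers, 0);   // one slot per subtask
    std::vector<std::thread> pool;
    const std::size_t chunk = data.size() / workers;

    for (unsigned w = 0; w < workers; ++w) {
        // Each subtask sums an independent slice; no locking is needed
        // because no two subtasks touch the same elements or the same slot.
        const std::size_t begin = w * chunk;
        const std::size_t end =
            (w + 1 == workers) ? data.size() : begin + chunk;
        pool.emplace_back([&, w, begin, end] {
            partial[w] = std::accumulate(data.begin() + begin,
                                         data.begin() + end, 0LL);
        });
    }
    for (auto& t : pool) t.join();                // wait for every subtask

    // Composition step: combine the partial results.
    std::cout << std::accumulate(partial.begin(), partial.end(), 0LL) << '\n';
}
```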
- Adele Kuzmiakova (Author)
- 2023 (Publication Date)
- Arcler Press (Publisher)
This is a fairly wide concept that includes the usage of computer applications as well as the production of computer software (Burckhardt, 2014). Additionally, it also includes the creation of computer hardware. This chapter concentrates on the most recent of these endeavors, which is the creation of computer software. In addition, several forms of computing have developed over time. Concurrent programming, parallel processing, and distributed computing are the three paradigms included here (Fujimoto, 2000).
1.2. OVERVIEW OF CONCURRENT COMPUTING
The capability of separate portions or components of a program, algorithm, or problem to be executed out of order, or in partial order, without impacting the final outcome is referred to in computer science as concurrency. This permits concurrent units to be executed in parallel, which may considerably enhance total execution speed on multi-processor and multi-core computers. In more technical words, concurrency is the capacity of a program, algorithm, or problem to be decomposed into order-independent or partially ordered pieces or units (Figure 1.1) (Teich et al., 2011).
[Figure 1.1. Source: https://computingstudy.wordpress.com/concurrent-parallel-and-distributed-systems/]
Petri nets, process calculi, the parallel random-access machine model, the actor model, as well as the Reo Coordination Language are some of the mathematical models that have been created for general concurrent computation. "While concurrent program execution had been contemplated for decades, the computer science of concurrency started with Edsger Dijkstra's famous 1965 article that presented the mutual exclusion problem," writes Leslie Lamport (2015). In the decades thereafter, there has been a massive increase in interest in concurrency, especially in distributed-
- eBook - PDF
- John C. Mitchell (Author)
- 2002 (Publication Date)
- Cambridge University Press (Publisher)
PART 4: Concurrency and Logic Programming
14 Concurrent and Distributed Programming
A concurrent program defines two or more sequences of actions that may be executed simultaneously. Concurrent programs may be executed in two general ways:
Multiprogramming. A single physical processor may run several processes simultaneously by interleaving the steps of one process with steps of another. Each individual process will proceed sequentially, but actions of one process may occur between two adjacent steps of another.
Multiprocessing. Two or more processors may share memory or be connected by a network, allowing processes on one processor to interact with processes running simultaneously on another.
Concurrency is important for a number of reasons. Concurrency allows different tasks to proceed at different speeds. For example, multiprogramming allows one program to do useful work while another is waiting for input. This makes more efficient use of a single processor. Concurrency also provides programming concepts that are important in user interfaces, such as window systems that display independent windows simultaneously, and for networked systems that need to send and receive data to other computers at different times. Multiprocessing makes more raw processing power available to solve a computational problem and introduces additional issues such as unreliability in network communication and the possibility of one processor proceeding while another crashes. Interaction between sequential program segments, whether they are on the same processor or different processors, raises significant programming challenges. Concurrent programming languages provide control and communication abstractions for writing concurrent programs. In this chapter, we look at some general issues in concurrent programming and three language examples: the actor model, Concurrent ML, and Java concurrency.
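A minimal sketch, assuming C++ threads rather than the languages Mitchell surveys, of the interleaving that multiprogramming produces: two control flows whose steps the scheduler may interleave arbitrarily, so the output order varies from run to run.

```cpp
// A sketch of multiprogramming-style interleaving, assuming C++ threads;
// process names "A" and "B" are illustrative.
#include <iostream>
#include <thread>

void process(const char* name) {
    for (int i = 0; i < 3; ++i)
        // Each chained << is a separate step, so the scheduler may run the
        // other thread between them; output order varies from run to run.
        std::cout << name << " step " << i << '\n';
}

int main() {
    std::thread a(process, "A");   // two sequential processes...
    std::thread b(process, "B");   // ...whose steps may interleave
    a.join();
    b.join();
}
```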
- eBook - ePub
Real-Time Embedded Systems
Open-Source Operating Systems Perspective
- Ivan Cibrario Bertolotti, Gabriele Manduchi (Authors)
- 2017 (Publication Date)
- CRC Press (Publisher)
3 Real-Time Concurrent Programming Principles
CONTENTS
3.1 The Role of Parallelism
3.2 Definition of Process
3.3 Process State
3.4 Process Life Cycle and Process State Diagram
3.5 Multithreading
3.6 Summary
This chapter lays the foundation of real-time concurrent programming theory by introducing what is probably its most central concept, that is, the definition of process as the abstraction of an executing program. This definition is also useful to clearly distinguish between sequential and concurrent programming, and to highlight the pitfalls of the latter.
3.1 The Role of Parallelism
Most contemporary computers are able to perform more than one activity at the same time, at least apparently. This is particularly evident with personal computers, in which users ordinarily interact with many different applications at the same time through a graphical user interface. In addition, even if this aspect is often overlooked by the users themselves, the same is true also at a much finer level of detail. For example, contemporary computers are usually able to manage user interaction while they are reading and writing data to the hard disk, and are actively involved in network communication. In most cases, this is accomplished by having peripheral devices interrupt the current processor activity when they need attention. Once it has finished taking care of the interrupting devices, the processor goes back to whatever it was doing before.
A key concept here is that all these activities are not performed in a fixed, predetermined sequence, but they all seemingly proceed in parallel, or concurrently.
- Matthew J. Sottile, Timothy G. Mattson, Craig E. Rasmussen (Authors)
- 2009 (Publication Date)
- Chapman and Hall/CRC (Publisher)
Consider two streams of operations that are, for the intents of this discussion, independent and unrelated: for example, a user application and an operating system daemon. The beauty of modern multitasking operating systems is that an abstraction is presented to the user that gives the appearance of these two tasks executing at the same time — they are concurrent. On the other hand, on most single-processor systems, they are actually executing one at a time by interleaving instructions from each stream so that each is allowed to progress a small amount in a relatively short period of time. The speed of processors makes this interleaving give the appearance of the processes running at the same time, when in fact they are not. Of course, this simplified view of the computer ignores the fact that operations such as I/O can occur for one stream in hardware outside of the CPU while the other stream executes on the CPU. This is, in fact, a form of parallelism. We define a concurrent program as one in which multiple streams of instructions are active at the same time. One or more of the streams is available to make progress in a single unit of time. The key to differentiating parallelism from concurrency is the fact that through time-slicing or multitasking one can give the illusion of simultaneous execution when in fact only one stream makes progress at any given time. In systems where we have multiple processing units that can perform operations at the exact same time, we are able to have instruction streams that execute in parallel. The term parallel refers to the fact that each stream not only has an abstract timeline that executes concurrently with others, but these timelines are in reality occurring simultaneously instead of as an illusion of simultaneous execution based on interleaving within a single timeline.
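The distinction the authors draw, interleaved progress versus truly simultaneous execution, can be probed from C++ with std::thread::hardware_concurrency(). A minimal sketch; the printed messages are illustrative assumptions, not from the book.

```cpp
// A sketch, assuming C++: probing whether the machine can execute streams
// in parallel or only interleave them. Messages are illustrative.
#include <iostream>
#include <thread>

int main() {
    // hardware_concurrency() reports the number of instruction streams the
    // hardware supports; it may return 0 when the value is unknown.
    unsigned n = std::thread::hardware_concurrency();
    if (n <= 1)
        std::cout << "single stream: concurrency by interleaving only\n";
    else
        std::cout << n << " streams can truly execute in parallel\n";
}
```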
- eBook - ePub
C++ High Performance
Master the art of optimizing the functioning of your C++ code, 2nd Edition
- Björn Andrist, Viktor Sehr (Authors)
- 2020 (Publication Date)
- Packt Publishing (Publisher)
- Sharing state between multiple threads in a safe manner is hard. Whenever we have data that can be read and written to at the same time, we need some way of protecting that data from data races. You will see many examples of this later on.
- Concurrent programs are usually more complicated to reason about because of the multiple parallel execution flows.
- Concurrency complicates debugging. Bugs that occur because of data races can be very hard to debug since they are dependent on how threads are scheduled. These kinds of bugs can be hard to reproduce and, in the worst-case scenario, they may even cease to exist when running the program using a debugger. Sometimes an innocent debug trace to the console can change the way a multithreaded program behaves and make the bug temporarily disappear. You have been warned!
Before we start looking at concurrent programming using C++, a few general concepts related to concurrent and parallel programming will be introduced.
Concurrency and parallelism
Concurrency and parallelism are two terms that are sometimes used interchangeably. However, they are not the same, and it is important to understand the differences between them. A program is said to run concurrently if it has multiple individual control flows running during overlapping time periods. In C++, each individual control flow is represented by a thread. The threads may or may not execute at the exact same time, though. If they do, they are said to execute in parallel. For a concurrent program to run in parallel, it needs to be executed on a machine that has support for parallel execution of instructions; that is, a machine with multiple CPU cores.
At first glance, it might seem obvious that we always want concurrent programs to run in parallel if possible, for efficiency reasons. However, that is not necessarily always true. A lot of synchronization primitives (such as mutex locks) covered in this chapter are required only to support the parallel execution of threads. Concurrent tasks that are not run in parallel do not require the same locking mechanisms and can be a lot easier to reason about.
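A minimal sketch of the kind of protection the bullet list above calls for: shared state read and written by several threads is guarded with a std::mutex, one of the synchronization primitives the chapter mentions. The Counter class and the loop counts are illustrative assumptions, not from the book.

```cpp
// A sketch of guarding shared state with a mutex, assuming C++ threads;
// the Counter class and the loop counts are illustrative.
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

class Counter {
    std::mutex m_;
    long value_ = 0;
public:
    void increment() {
        std::lock_guard<std::mutex> lock(m_);  // one thread at a time
        ++value_;                              // protected read-modify-write
    }
    long value() {
        std::lock_guard<std::mutex> lock(m_);
        return value_;
    }
};

int main() {
    Counter c;
    std::vector<std::thread> threads;
    for (int i = 0; i < 4; ++i)
        threads.emplace_back([&c] {
            for (int j = 0; j < 100'000; ++j) c.increment();
        });
    for (auto& t : threads) t.join();
    std::cout << c.value() << '\n';  // always 400000 with the mutex held
}
```

Without the lock_guard, the four threads' read-modify-write sequences could interleave and lose increments: exactly the hard-to-reproduce data-race bug the excerpt warns about.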
- eBook - PDF
- Zbigniew J. Czech (Author)
- 2017 (Publication Date)
- Cambridge University Press (Publisher)
1 Concurrent Processes
1.1 BASIC CONCEPTS
A sequential program describes how to solve a computational problem in a sequential computer. An example is the traveling salesman problem, in which the number of cities and the distances between each pair of cities are given. These are the input data on which the output data are determined, which form the solution to the problem. The solution is a closed route of the minimum length of the salesman passing through every city exactly once. More precisely, a sequential program is a sequence of instructions that solves the problem by transforming the input data into the output data. It is assumed that a sequential program is executed by a single processor. If more processors are to be used to solve the problem, it must be partitioned into a number of subproblems that may be solved in parallel. The solution to the original problem is a composition of solutions to the subproblems. The subproblems are solved by separate components that are the parts of a concurrent program. Each component is a traditional sequential program called a computational task, or, in short, task. A concurrent program consists of a number of tasks describing computation that may be executed in parallel. The concurrent program defines how the tasks cooperate with each other applying partial results of computation, and how they synchronize their actions. Tasks are executed in a parallel computer under supervision of the operating system. A single task is performed as a sequential (serial) process, that is, as a sequence of operations, by a conventional processor that we call a virtual processor. In a sequential process, resulting from execution of a single instruction sequence, the next operation commences only after completion of the previous operation. Thus, the order of operations is clearly defined. Let oᵢ′ and oᵢ″ denote the events of the beginning and the end of an operation oᵢ.
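A minimal sketch of the decomposition Czech describes, using C++ futures as the concurrency construct. The problem here (minimum of an array, rather than the traveling salesman problem from the text) and all names are illustrative: two tasks solve subproblems in parallel, and the final answer is a composition of their partial results.

```cpp
// A sketch of task decomposition and composition, assuming C++ futures; the
// problem (minimum of an array, not the TSP from the text) is illustrative.
#include <algorithm>
#include <future>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> data{7, 3, 9, 1, 8, 4, 6, 2};

    // Each task is a small sequential program solving one subproblem.
    auto min_of = [&data](std::size_t begin, std::size_t end) {
        return *std::min_element(data.begin() + begin, data.begin() + end);
    };
    auto left  = std::async(std::launch::async, min_of, 0, data.size() / 2);
    auto right = std::async(std::launch::async, min_of, data.size() / 2,
                            data.size());

    // The solution to the original problem composes the partial results.
    std::cout << std::min(left.get(), right.get()) << '\n';  // prints 1
}
```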
- eBook - PDF
Human Movement Understanding
From Computational Geometry to Artificial Intelligence
- P. Morasso, V. Tagliasco (Authors)
- 1986 (Publication Date)
- North Holland (Publisher)
…with particular regard to their motivations and limitations.
1. THE NOTION OF CONCURRENCY IN COMPUTER SCIENCE
Concurrency is one of the major building blocks of human purposive behavior and also one of the major sources of its complexity. However, it is a subtle notion that still lacks a satisfactory formalization even in man-made machines like digital computers. In this chapter we address some aspects of concurrency as they have been maturing in Computer Science. The approach is quite informal and covers just a fraction of problems, methods, and techniques. In a sense, this chapter is at the same time a discussion section for all the previous chapters and an introduction to the following chapter. Computer Science (CS) is the science that deals with systems for the automatic solution of problems. How to formalize problems, and how to reason about them, is also part of the discipline. Therefore, CS is a wide science: it includes theoretical topics (theory of computation, formal languages, etc.), methodological topics (architectural design, software engineering, operating systems, etc.), technological topics, and so on. As any Computer Scientist knows, most of these topics lead apparently independent lives. Computer science involves a great amount of human creativity as regards both using and designing computers, and we are mostly interested in which kind of languages and models are used by man when dealing with a heterogeneous, sparse, highly complex, typically concurrent and asynchronous system like a computer or a system of computers. In the following we attempt to single out some subtle connections between the basic concepts of time and parallelism in CS and purposive behavior of a robot. The strongest connection between CS and Anthropomorphic Robotics may be found in the history of the (linguistic, semantic) formalizations of computation and computing systems.
- David Riley, Kenny A. Hunt (Authors)
- 2014 (Publication Date)
- Chapman and Hall/CRC (Publisher)
Chapter 11: Concurrent Activity
Multitasking? I can't do two things at once. I can't even do one thing at a time.
—HELENA BONHAM CARTER
OBJECTIVES
• To be able to explain the difference between parallelism and concurrency, and the role of supercomputers and distributed computing in modern problem solving
• To recognize basic constraints that prohibit simultaneous execution
Events do not often occur in a nice, neat, one-at-a-time sequence in real life. Sometimes the wind blows at the same time the rain falls. Other times you might choose to walk and talk simultaneously. Aircraft often bank and accelerate all at once. Not surprisingly, computer programs can also be expected to multitask as well. When software can perform multiple tasks at once, we call it concurrent execution, or just concurrency. The discussion of concurrency has been delayed until this point because humans are more adept at a single line of reasoning. As a result, thinking in nonconcurrent ways and writing nonconcurrent software tends to be easier than their concurrent counterparts. This chapter explores some of these challenges, along with both the advantages and the pitfalls of concurrency.
11.1 PARALLELISM OR CONCURRENCY?
As computer hardware marches faster and faster, a significant performance barrier looms. Electricity travels by moving electrons at roughly the speed of light. As mentioned in the previous chapter, miniaturization has led to circuits in which the width of electrical pathways can be measured in molecules. Reducing circuit sizes (and, therefore, increasing their performance and storage capacity) is becoming increasingly expensive. At some future date such miniaturization will no longer be cost effective. An alternative to miniaturization becoming popular among computer processor manufacturers is to include several processors in the same integrated circuit.
- eBook - PDF
Concurrency
State Models and Java Programs
- Jeff Magee, Jeff Kramer (Authors)
- 2014 (Publication Date)
- Wiley (Publisher)
3 Concurrent Execution
The execution of a concurrent program consists of multiple processes active at the same time. As discussed in the last chapter, each process is the execution of a sequential program. A process progresses by submitting a sequence of instructions to a processor for execution. If the computer has multiple processors, then instructions from a number of processes, equal to the number of physical processors, can be executed at the same time. This is sometimes referred to as parallel or real concurrent execution. However, it is usual to have more active processes than processors. In this case, the available processors are switched between processes. Figure 3.1 depicts this switching for the case of a single processor supporting three processes, A, B and C. The solid lines represent instructions from a process being executed on the processor. With a single processor, each process makes progress but, as depicted in Figure 3.1, instructions from only one process at a time can be executed.
[Figure 3.1: Process switching.]
The switching between processes occurs voluntarily or in response to interrupts. Interrupts signal external events, such as the completion of an I/O operation or a clock tick, to the processor. As can be seen from Figure 3.1, processor switching does not affect the order of instructions executed by each process. The processor executes a sequence of instructions which is an interleaving of the instruction sequences from each individual process. This form of concurrent execution using interleaving is sometimes referred to as pseudo-concurrent execution since instructions from different processes are not executed at the same time but are interleaved.
- eBook - PDF
- Ann McHoes, Ida M. Flynn (Authors)
- 2017 (Publication Date)
- Cengage Learning EMEA (Publisher)
Hardware and software mechanisms are used to synchronize the many processes, but care must be taken to avoid the typical problems of synchronization: missed waiting customers, the synchronization between producers and consumers, and the mutual exclusion of readers and writers. Continuing innovations in concurrent processing, including threads and multi-core processors, are fundamentally changing how operating systems use these new technologies. Research in this area is expected to grow significantly over time. In the next chapter, we look at the module of the operating system that manages the printers, disk drives, tape drives, and terminals: the Device Manager.
Key Terms
busy waiting: a method by which processes, waiting for an event to occur, continuously test to see if the condition has changed, thereby remaining in unproductive, resource-consuming wait loops.
COBEGIN: a command used with COEND to indicate to a multiprocessing compiler the beginning of a section where instructions can be processed concurrently.
COEND: a command used with COBEGIN to indicate to a multiprocessing compiler the end of a section where instructions can be processed concurrently.
compiler: a computer program that translates programs from a high-level programming language (such as C or Ada) into machine language.
concurrent processing: execution by a single processor of a set of processes in such a way that they appear to be happening at the same time.
critical region: a part of a program that must complete execution before other processes can begin.
explicit parallelism: a type of concurrent programming that requires the programmer to explicitly state which instructions can be executed in parallel.
implicit parallelism: a type of concurrent programming in which the compiler automatically detects which instructions can be performed in parallel.
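The COBEGIN/COEND entries above describe a bracketed region whose statements may run concurrently. A minimal sketch of the same idea expressed with C++ threads (an assumed translation, not from the text): the spawn points play the role of COBEGIN and the joins play the role of COEND.

```cpp
// A sketch, assuming C++ threads, of what a COBEGIN/COEND block expresses:
// the statements between the markers may run concurrently, and control
// passes COEND only after all of them finish. Statement bodies illustrative.
#include <iostream>
#include <thread>

void statement1() { std::cout << "statement 1\n"; }
void statement2() { std::cout << "statement 2\n"; }

int main() {
    // COBEGIN: launch the concurrent section
    std::thread t1(statement1);
    std::thread t2(statement2);
    // COEND: wait until every concurrent statement has completed
    t1.join();
    t2.join();
    std::cout << "after COEND\n";
}
```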
- (Author)
- 2014 (Publication Date)
- Learning Press (Publisher)
Chapter 1: Parallel Computing
Parallel computing is a form of computation in which many calculations are carried out simultaneously, operating on the principle that large problems can often be divided into smaller ones, which are then solved concurrently (in parallel). There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallelism has been employed for many years, mainly in high-performance computing, but interest in it has grown lately due to the physical constraints preventing frequency scaling. As power consumption (and consequently heat generation) by computers has become a concern in recent years, parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multicore processors. Parallel computers can be roughly classified according to the level at which the hardware supports parallelism, with multi-core and multi-processor computers having multiple processing elements within a single machine, while clusters, MPPs, and grids use multiple computers to work on the same task. Specialized parallel computer architectures are sometimes used alongside traditional processors for accelerating specific tasks. Parallel computer programs are more difficult to write than sequential ones, because concurrency introduces several new classes of potential software bugs, of which race conditions are the most common. Communication and synchronization between the different subtasks are typically some of the greatest obstacles to getting good parallel program performance. The maximum possible speed-up of a program as a result of parallelization is given by Amdahl's law.
Background
Traditionally, computer software has been written for serial computation. To solve a problem, an algorithm is constructed and implemented as a serial stream of instructions.
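The excerpt cites Amdahl's law as the bound on speed-up. In its standard formulation (not quoted from the text), with p the parallelizable fraction of the program and N the number of processors:

```latex
% Amdahl's law: speed-up from parallelizing a fraction p of a program
% across N processors. (Standard formulation; not quoted from the excerpt.)
S(N) = \frac{1}{(1 - p) + \frac{p}{N}}
% Worked example: p = 0.9, N = 8 gives S = 1 / (0.1 + 0.1125) \approx 4.7.
% As N grows without bound, S approaches 1 / (1 - p): here at most 10x.
```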
Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.