Computer Science
Distributed Programming
Distributed programming is a method of designing and implementing software systems that are composed of multiple independent components running on different computers and communicating with each other over a network. It involves the use of various techniques and technologies to ensure that the components work together seamlessly and efficiently.
Written by Perlego with AI-assistance
10 Key excerpts on "Distributed Programming"
- (Author)
- 2014(Publication Date)
- Learning Press(Publisher)
Chapter 3: Distributed Computing
Distributed computing is a field of computer science that studies distributed systems. A distributed system consists of multiple autonomous computers that communicate through a computer network. The computers interact with each other in order to achieve a common goal. A computer program that runs in a distributed system is called a distributed program, and distributed programming is the process of writing such programs. Distributed computing also refers to the use of distributed systems to solve computational problems. In distributed computing, a problem is divided into many tasks, each of which is solved by one computer.
Introduction
The word distributed in terms such as distributed system, distributed programming, and distributed algorithm originally referred to computer networks where individual computers were physically distributed within some geographical area. The terms are nowadays used in a much wider sense, even referring to autonomous processes that run on the same physical computer and interact with each other by message passing. While there is no single definition of a distributed system, the following defining properties are commonly used:
• There are several autonomous computational entities, each of which has its own local memory.
• The entities communicate with each other by message passing.
Here, the computational entities are called computers or nodes. A distributed system may have a common goal, such as solving a large computational problem. Alternatively, each computer may have its own user with individual needs, and the purpose of the distributed system is to coordinate the use of shared resources or provide communication services to the users. Other typical properties of distributed systems include the following:
• The system has to tolerate failures in individual computers.
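The two defining properties above (local memory, communication only by message passing) can be sketched in a few lines. This is an illustrative example, not from the excerpt; the names (`worker`, `inbox`, `outbox`) are invented, and Python's multiprocessing primitives stand in for a real network:

```python
# A minimal sketch of the two defining properties: autonomous entities,
# each with its own local memory, communicating only by message passing.
from multiprocessing import Process, Queue

def worker(inbox: Queue, outbox: Queue) -> None:
    total = 0                      # local memory: never shared directly
    while True:
        msg = inbox.get()          # receive a message
        if msg is None:            # sentinel: report result and stop
            outbox.put(total)
            return
        total += msg

if __name__ == "__main__":
    inbox, outbox = Queue(), Queue()
    p = Process(target=worker, args=(inbox, outbox))
    p.start()
    for task in [1, 2, 3]:         # the problem divided into tasks
        inbox.put(task)
    inbox.put(None)
    print(outbox.get())            # prints 6
    p.join()
```

The same exchange could run over sockets between physically separate machines; only the transport changes, not the message-passing structure.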
- eBook - PDF
Integrated Project Support Environments
The Aspect Project
- Alan W. Brown(Author)
- 2013(Publication Date)
- Academic Press(Publisher)
We have concentrated on how distributed programs should be structured so that they can be partitioned for execution. Two basic approaches may be identified [28]: (i) distribute fragments of a single program across machines, and use normal intra-program communication mechanisms for interaction; (ii) write a separate program for each machine and devise a means of inter-program interaction. In this section we consider only (i), as it is more within the 'spirit' of distributed programs than (ii). The basic characteristic of this approach is that the application software is viewed as a single program, distributed across the target system. The main advantage of this approach, over (ii) above, is that all interfaces between the distributed program fragments can be type-checked by the compiler. Therefore, the type checking of the distributed program is that of the source language. Within this approach two general strategies can be identified [28]: post-partitioning and pre-partitioning.
Post-partitioning. As the name implies, this strategy is based on partitioning the program after it has been written. The program is designed without regard to a target architecture: the programmer produces an appropriate solution to the problem at hand and has the full language at his/her disposal. It is left to other software tools, provided by the programming support environment, to:
• describe the target configuration (which may be chosen by the designer or forced upon him/her);
• partition the program into components for distribution; and
• allocate the components to individual nodes.
The argument behind this strategy is threefold. First, most languages provide no facilities for configuration management, so it is considered inappropriate for a program to contain configuration information. Second, the strategy promotes portable software: the same program can be mapped onto different hardware configurations.
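The allocation step in the tool list above can be caricatured in a few lines. This is a hypothetical sketch, not the Aspect project's actual tooling: the component names and the round-robin policy are invented for illustration.

```python
# Hypothetical post-partitioning sketch: components are written without
# regard to the target, then a separate allocation step maps them onto a
# given configuration. All names and the policy are illustrative.

def allocate(components, nodes):
    """Assign components to nodes round-robin (a deliberately naive policy)."""
    return {c: nodes[i % len(nodes)] for i, c in enumerate(components)}

components = ["sensor_reader", "filter", "logger", "ui"]

# The same program mapped onto two different target configurations:
print(allocate(components, ["node_a", "node_b"]))
print(allocate(components, ["node_a", "node_b", "node_c"]))
```

The point mirrored here is the portability argument: the component list never changes, only the configuration handed to the allocator.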
- Adele Kuzmiakova(Author)
- 2023(Publication Date)
- Arcler Press(Publisher)
Chapter 5: Distributed Computing. Contents:
5.1. Introduction
5.2. Association to Computer System Modules
5.3. Motivation
5.4. Association to Parallel Multiprocessor/Multicomputer Systems
5.5. Message-Passing Systems vs. Shared Memory Systems
5.6. Primitives for Distributed Communication
5.7. Synchronous vs. Asynchronous Executions
5.8. Design Problems and Challenges
References
5.1. Introduction
A distributed system is a collection of autonomous entities that work together to solve a problem that cannot be solved separately. Distributed systems have, in a sense, been around since the beginning of time: communication among mobile autonomous agents occurs throughout nature, from schools of fish to flocks of birds to entire ecosystems of microbes. The concept of distributed systems as a valuable and extensively deployed technology is now becoming a reality, thanks to the broad utilization of the Internet and the emergence of the globalized world (Figure 5.1).
Figure 5.1. Distributed computing system. Source: https://www.sciencedirect.com/topics/computer-science/distributed-computing.
A distributed system can be described in several ways in computing systems:
• You know you are using one when the crash of a computer you have never heard of prevents you from doing work (Alboaie).
- eBook - PDF
- Gerard Tel(Author)
- 2000(Publication Date)
- Cambridge University Press(Publisher)
As already mentioned, the present text concentrates on algorithms for distributed systems. Section 1.3 explains why the design of distributed algorithms differs from the design of centralized algorithms, sketches the research field of distributed algorithms, and outlines the remainder of the book.
1.1 What is a Distributed System?
In this chapter we shall use the term distributed system to mean an interconnected collection of autonomous computers, processes, or processors. The computers, processes, or processors are referred to as the nodes of the distributed system. (In the subsequent chapters we shall use a more technical notion, see Definition 2.6.) To be qualified as autonomous, the nodes must at least be equipped with their own private control; thus, a parallel computer of the single-instruction, multiple-data (SIMD) model does not qualify as a distributed system. To be qualified as interconnected, the nodes must be able to exchange information. As (software) processes can play the role of nodes of a system, the definition includes software systems built as a collection of communicating processes, even when running on a single hardware installation. In most cases, however, a distributed system will at least contain several processors, interconnected by communication hardware. More restrictive definitions of distributed systems are also found in the literature. Tanenbaum [Tan96], for example, considers a system to be distributed only if the existence of autonomous nodes is transparent to users of the system. A system distributed in this sense behaves like a virtual, stand-alone computer system, but the implementation of this transparency requires the development of intricate distributed control algorithms.
1.1.1 Motivation
Distributed computer systems may be preferred over sequential systems, or their use may simply be unavoidable, for various reasons, some of which are discussed below.
- eBook - PDF
Communication and Control in Electric Power Systems
Applications of Parallel and Distributed Processing
- Mohammad Shahidehpour, Yaoyu Wang(Authors)
- 2004(Publication Date)
- Wiley-IEEE Press(Publisher)
2.2.2 Distributed Systems
Quite similar to a parallel system, a distributed system is the physical arrangement for distributed processing. But unlike a parallel system, a distributed system is usually a computer network that is geographically distributed over a larger area. The computers of a distributed system are not necessarily the same and can be heterogeneous. A distributed system can be used for information acquisition; for instance, a distributed system could be a network of sensors for environmental measurements, where a set of geographically distributed sensors would obtain information on the state of the environment and may also process it cooperatively. A distributed system can be used for the computation and control of large-scale systems such as airline reservation and weather prediction systems. The correct and timely routing of messages traveling in the data communication network of a distributed system is controlled in a distributed manner through the cooperation of the computers that reside on the distributed system. The communication links among computers of a distributed system are usually very long, and data communications over a distributed system are relayed several times and can be disturbed by various communication noises. In general, a distributed system is designed to be able to operate properly in the presence of limited, sometimes unreliable, communication links, and often in the absence of a central control mechanism. The time delays of data communication among distributed computers can be very difficult to predict, and this is especially true of distributed processing with rigid time requirements, such as voltage/VAR control in a power system.
2.2.3 Comparison of Parallel and Distributed Processing
Parallel processing employs more than one processor concurrently to solve a common problem. Historically, the only purpose for parallel processing was to obtain a faster solution.
- eBook - PDF
- Peter Eades, Kang Zhang(Authors)
- 1996(Publication Date)
- World Scientific(Publisher)
For programming heterogeneous distributed systems, PEDS provides a visual language, with which a range of specification tools are combined and established. These tools are specially designed to utilize various software packages and to construct distributed programs. By organizing tools at multiple levels, PEDS can specify and construct distributed applications efficiently, utilizing a variety of distributed resources.
1 Introduction
In recent years, due to the decreasing costs of hardware and networking facilities, organizations have made great investments in networks of computers, which create the potential for more information and resource sharing, speedup of computations, and reliability. Designing efficient distributed computations is, however, a challenging task [11, 20]. Programming on a heterogeneous distributed system is much harder than writing sequential code for a uniprocessor system. This is particularly true when a programmer attempts to utilize many resources on a distributed system to solve a large and complex problem. A distributed computing system may be defined as one in which multiple autonomous processors, possibly of different kinds, are interconnected by a communication network to interact in a cooperative way to achieve an overall goal [1]. The major characteristics of any distributed computing system are:
• the support for an arbitrary number of systems and application processes;
• modular design of physical architecture; and
• a message-passing facility through a common communication system.
(D.-Q. Zhang, K. Zhang and J. Cao)
In a heterogeneous distributed system, the processors and software resources available are of different types. It is often difficult for a user to interface cooperative processes which are implemented with different software resources and located on different processors.
- eBook - PDF
Distributed Computer Control Systems 1981
Proceedings of the Third IFAC Workshop, Beijing, China, 15-17 August 1981
- William E. Miller(Author)
- 2014(Publication Date)
- Pergamon(Publisher)
All these trends of development will evidently cause the three types of multiple-computer systems to merge together to yield new types of highly cost-effective computer systems, of which distributed computer control systems may be the typical examples.
COMPARISON OF DISTRIBUTED COMPUTER SYSTEMS AND MULTIPROCESSORS
For a better understanding of the attributes of distributed computer systems, a comparative study of the definitions taken from different sources would be helpful. The first definition originated from the IEEE Computer Society when announcing the First International Conference on Distributed Computing in 1979. The following statement plays the role of a definition: There is a multiplicity of interconnected processing resources able to cooperate under system-wide control on a single problem with minimal reliance on centralized procedures, data, and hardware.
The second definition was given by the Science Research Council's Computing Science Committee of Great Britain: A distributed computing system is considered to be one in which there are a number of autonomous but interacting computers cooperating on a common problem.
The third definition is cited from Hensen's paper (1978): A multiplicity of processors that are physically and logically interconnected to form a single system in which overall execution control is exercised through the cooperation of decentralized system elements...
The fourth, a more detailed research and development definition, has been expressed by Enslow (1978) in five components:
• A multiplicity of general-purpose resource components, including both physical and logical resources, that can be assigned to specific tasks on a dynamic basis.
• A physical distribution of these physical and logical components of the system, interacting through a communication network.
• A high-level operating system that unifies and integrates the control of the distributed components.
Large Scale and Big Data
Processing and Management
- Sherif Sakr, Mohamed Gaber, Sherif Sakr, Mohamed Gaber(Authors)
- 2014(Publication Date)
- Auerbach Publications(Publisher)
As per data, in the time of Big Data, or the Era of Tera as denoted by Intel [13], distributed programs typically cope with Web-scale data in the order of hundreds and thousands of gigabytes, terabytes, or petabytes. Also, Internet services such as e-commerce and social networks deal with sheer volumes of data generated by millions of users every day [83]. As per resources, cloud datacenters already host tens and hundreds of thousands of machines (e.g., Amazon EC2 is estimated to host almost half a million machines [46]), and projections for scaling up machine counts to extra folds have already been set forth. As pointed out in Section 1.3, upon scaling up the number of machines, what programmers/users expect is escalated performance. Specifically, programmers expect from distributed execution of their programs on n nodes, vs. on a single node, an n-fold improvement in performance. Unfortunately, this never happens in reality, due to several reasons. First, as shown in Figure 1.13, parts of programs can never be parallelized (e.g., initialization parts). Second, load imbalance among tasks is highly likely, especially in distributed systems like clouds. One of the reasons for load imbalance is the heterogeneity of the cloud, as discussed in the previous section. As depicted in Figure 1.13b, load imbalance usually delays programs, wherein a program becomes bound to the slowest task. Particularly, even if all tasks in a program finish, the program cannot commit before the last task finishes (which might greatly linger!). Lastly, other serious overheads such as communication and synchronization can highly impede scalability. Such overheads are significantly important when measuring speedups obtained by distributed programs compared with sequential ones. A standard law that allows measuring speedups attained by distributed programs and, additionally, accounting for various overheads is known as Amdahl's law.
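The basic form of Amdahl's law is compact enough to state as a one-line function. The sketch below models only the serial fraction (communication and synchronization overheads, which the excerpt also mentions, are not included); the function and variable names are ours, not the book's:

```python
# Amdahl's law, basic form: with parallelizable fraction p and n nodes,
# the speedup is bounded by the serial part of the program.

def amdahl_speedup(p: float, n: int) -> float:
    """Speedup = 1 / ((1 - p) + p / n); tends to 1 / (1 - p) as n grows."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of a program parallelizable, 1,000 nodes fall far short
# of the n-fold improvement programmers expect:
print(round(amdahl_speedup(0.95, 1000), 1))   # prints 19.6
print(round(1 / (1 - 0.95), 1))               # the asymptotic limit: 20.0
```

This is exactly the "parts of programs can never be parallelized" point above: the 5% serial fraction caps the speedup at 20x no matter how many machines are added.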
Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.









