
Prim's Algorithm

Prim's Algorithm is a greedy algorithm for finding a minimum spanning tree of a weighted, connected, undirected graph. It starts from a single vertex and repeatedly adds the minimum-weight edge joining a vertex already in the tree to a vertex outside it, until every vertex is included. This greedy strategy is guaranteed to produce a minimum spanning tree.

Written by Perlego with AI-assistance
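
As a purely illustrative sketch (not taken from any of the excerpts below), the priority-queue formulation of Prim's Algorithm can be written in C++ as follows; the adjacency-list representation, the function name primMST, and the lazy-deletion trick are assumptions made for this example:

    #include <functional>
    #include <iostream>
    #include <queue>
    #include <utility>
    #include <vector>

    // Minimal sketch of Prim's algorithm with a binary heap and lazy deletion.
    // adj[u] holds pairs (weight, v) for every edge {u, v}.
    // Returns the total weight of a minimum spanning tree, assuming the graph is connected.
    long long primMST(const std::vector<std::vector<std::pair<int,int>>>& adj) {
        int n = adj.size();
        std::vector<bool> inTree(n, false);
        // Min-heap of (edge weight, vertex outside the tree reached by that edge).
        std::priority_queue<std::pair<int,int>,
                            std::vector<std::pair<int,int>>,
                            std::greater<>> pq;
        long long total = 0;
        pq.push({0, 0});                      // start from vertex 0 with a zero-cost "edge"
        while (!pq.empty()) {
            auto [w, u] = pq.top(); pq.pop();
            if (inTree[u]) continue;          // lazy deletion: skip entries made stale by a cheaper edge
            inTree[u] = true;
            total += w;
            for (auto [wv, v] : adj[u])
                if (!inTree[v]) pq.push({wv, v});
        }
        return total;
    }

    int main() {
        // Tiny 4-vertex example; the weights are arbitrary illustrative values.
        std::vector<std::vector<std::pair<int,int>>> adj(4);
        auto addEdge = [&](int u, int v, int w) {
            adj[u].push_back({w, v});
            adj[v].push_back({w, u});
        };
        addEdge(0, 1, 1); addEdge(1, 2, 2); addEdge(0, 2, 4); addEdge(2, 3, 3);
        std::cout << primMST(adj) << "\n";    // prints 6 (edges 0-1, 1-2, 2-3)
    }

With a binary-heap priority queue this runs in roughly O(m log n) time for a graph with n vertices and m edges, the bound quoted in the Algorithm Design and Applications excerpt below.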

10 Key excerpts on "Prim's Algorithm"

  • Simulation for Applied Graph Theory Using Visual C++
    5.2.2 Prim's Algorithm: Prim's algorithm is a greedy algorithm used to obtain the minimum spanning tree. With this algorithm, the minimum spanning tree, which initially consists of only a single node, grows continuously by adding one edge at a time until it spans all the nodes. In Prim's algorithm, the initial set of nodes selected for the minimum spanning tree is V_new = {v_i}, where v_i is an arbitrary node of the original graph, while the initial set of edges is empty, E_new = ∅. The number of elements of V_new and E_new keeps increasing, as a node and an edge are added in each iteration. At every iteration, the selected edge e_ij must satisfy the condition that the node v_i (the initial node of the edge) is in V_new but v_j (the terminal node of the edge) is not in V_new. One logical reason for this condition is to avoid creating a cycle. Among all the edges considered, the one with minimum weight is selected and added to E_new, and the node v_j of that edge is automatically added to V_new. Suppose the original graph has n nodes; this procedure is repeated until all the nodes are in V_new and a total of n − 1 edges have been selected into E_new. [Figure 5.4 illustrates the execution of Prim's algorithm: (a) the graph, (b) v1 is selected, (c) v3 and e3 are selected, (d) v6 and e36 are selected, (e) v4 and e64 are selected, (f) v2 and e32 are selected, and (g) the minimum spanning tree.] The original graph, illustrated in (a), has 6 nodes and 10 edges with weights w_ij.
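
    To make the selection rule in this excerpt concrete, here is a small illustrative C++ fragment (the Edge type, the selectEdge name, and the std::set representation of V_new are assumptions for this sketch, not the book's code); it scans all edges and picks the cheapest one with exactly one endpoint already in V_new:

        #include <optional>
        #include <set>
        #include <vector>

        struct Edge { int u, v, w; };   // endpoints and weight (illustrative types)

        // Return the cheapest edge with exactly one endpoint in V_new, if any exists.
        // Requiring one endpoint inside and one outside the tree is what prevents cycles.
        std::optional<Edge> selectEdge(const std::vector<Edge>& edges,
                                       const std::set<int>& V_new) {
            std::optional<Edge> best;
            for (const Edge& e : edges) {
                bool uIn = V_new.count(e.u) > 0;
                bool vIn = V_new.count(e.v) > 0;
                if (uIn != vIn && (!best || e.w < best->w))
                    best = e;
            }
            return best;   // the endpoint of *best not yet in V_new is then added to V_new
        }

    Repeating this selection n − 1 times, and inserting the newly reached node into V_new after each pick, reproduces the procedure described above at the cost of one pass over the edge list per iteration.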
  • Graphs and Networks
    • S. R. Kingan (Author)
    • 2022(Publication Date)
    • Wiley
      (Publisher)
    The step-by-step details of the application of Prim's Algorithm to the graph in Figure 5.2 are as follows:
    ● At Step 1, the algorithm chooses a starting vertex at random. Let us assume that vertex v1 is chosen. After Steps 2 and 3, S = {v1} and T = ∅.
    ● At the first iteration of the loop at Step 4, edges v1v6 and v1v2 each have one end in S and one end not in S. Since v1v2 has lower weight than v1v6, it is added to T and v2 is added to S.
    ● At the second iteration, S = {v1, v2} and T = {v1v2}. Edges v1v6, v2v6, and v2v3 have one end in S and one end outside of S. Of these, v2v6 has the lowest weight, so v2v6 is added to T and v6 is added to S.
    ● At the third iteration, S = {v1, v2, v6} and T = {v1v2, v2v6}. Edges v2v3, v5v6, and v3v6 have one end in S and one end outside of S. Of these, v3v6 has the lowest weight, so v3v6 is added to T and v3 is added to S.
    ● At the fourth iteration, S = {v1, v2, v3, v6} and T = {v1v2, v2v6, v3v6}. Edges v5v6, v3v5, and v3v4 all have one end in S and one end outside of S. Of these, v5v6 has the lowest weight, so it is added to T and v5 is added to S.
    ● At the fifth iteration, S = {v1, v2, v3, v5, v6} and T = {v1v2, v2v6, v3v6, v5v6}. Edges v4v5 and v3v4 have one end in S and one end outside of S. Of these, v4v5 has the lower weight, so v4v5 is added to T and v4 is added to S. After this iteration S = V, so the loop terminates.
    When the algorithm is done, T = {v1v2, v2v6, v3v6, v4v5, v5v6} is a minimum spanning tree. Note that in this example, the minimum spanning tree is the same using both Prim's Algorithm and Kruskal's Algorithm. This is not necessarily the case when some edges have the same weight.
  • Data Structures and Algorithms in C++
    • Michael T. Goodrich, Roberto Tamassia, David M. Mount (Authors)
    • 2011(Publication Date)
    • Wiley
      (Publisher)
    The development of efficient algorithms for the minimum spanning tree problem predates the modern notion of computer science itself. In this section, we discuss two classic algorithms for solving the MST problem. These algorithms are both applications of the greedy method, which, as was discussed briefly in the previous section, is based on choosing objects to join a growing collection by iteratively picking an object that minimizes some cost function. The first algorithm we discuss is Kruskal's algorithm, which "grows" the MST in clusters by considering edges in order of their weights. The second algorithm we discuss is the Prim-Jarník algorithm, which grows the MST from a single root vertex, much in the same way as Dijkstra's shortest-path algorithm. As in Section 13.5.2, in order to simplify the description of the algorithms, we assume, in the following, that the input graph G is undirected (that is, all its edges are undirected) and simple (that is, it has no self-loops and no parallel edges). Hence, we denote the edges of G as unordered vertex pairs (u, z). Before we discuss the details of these algorithms, however, let us give a crucial fact about minimum spanning trees that forms the basis of the algorithms. A Crucial Fact About Minimum Spanning Trees: The two MST algorithms we discuss are based on the greedy method, which in this case depends crucially on the following fact. [Figure 13.17: The crucial fact about minimum spanning trees.] Proposition 13.24: Let G be a weighted connected graph, and let V1 and V2 be a partition of the vertices of G into two disjoint nonempty sets. Furthermore, let e be an edge in G with minimum weight from among those with one endpoint in V1 and the other in V2. There is a minimum spanning tree T that has e as one of its edges. Justification: Let T be a minimum spanning tree of G. If T does not contain edge e, the addition of e to T must create a cycle.
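
    Restated compactly (this LaTeX rendering is an editorial addition, not the book's own notation), Proposition 13.24 is the standard cut property:

        % Cut property, restating Proposition 13.24 above
        Let $G=(V,E)$ be a weighted connected graph with edge weights $w$, and let
        $V_1, V_2$ be a partition of $V$ into two disjoint nonempty sets. If
        \[
          e \;\in\; \operatorname*{arg\,min}_{\{x,y\} \in E,\; x \in V_1,\; y \in V_2} w(\{x,y\}),
        \]
        then some minimum spanning tree of $G$ contains $e$.

    Both Kruskal's algorithm and the Prim-Jarník algorithm can be justified by applying this fact to a suitably chosen partition at each step.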
  • Data Structure Practice for Collegiate Programming Contests and Education
    • Yonghui Wu, Jiande Wang (Authors)
    • 2016(Publication Date)
    • CRC Press
      (Publisher)
    The reason for using Prim's algorithm is as follows: 1. Based on the quality of a derivation plan, the weighted connected graph is a dense graph. 2. The upper limit of the edges' weights is 7. In the priority queue Q, vertices that aren't in the spanning tree are sorted based on distances. All vertices with distance i are stored in a linear list Q[i]. This improves the time complexity. 12.3.3 Slim Span: Given an undirected weighted graph G, you should find one of the spanning trees specified as follows. The graph G is an ordered pair (V, E), where V is a set of vertices {v1, v2, ..., vn} and E is a set of undirected edges {e1, e2, ..., em}. Each edge e ∈ E has its weight w(e). A spanning tree T is a tree (a connected subgraph without cycles) that connects all the n vertices with n − 1 edges. The slimness of a spanning tree T is defined as the difference between the largest weight and the smallest weight among the n − 1 edges of T. For example, a graph G in Figure 12.2a has four vertices {v1, v2, v3, v4} and five undirected edges {e1, e2, e3, e4, e5}. The weights of the edges are w(e1) = 3, w(e2) = 5, w(e3) = 6, w(e4) = 6, and w(e5) = 7, as shown in Figure 12.2b. There are several spanning trees for G. Four of them are depicted in Figure 12.3a–d. The spanning tree Ta in Figure 12.3a has three edges whose weights are 3, 6, and 7. The largest weight is 7 and the smallest weight is 3, so the slimness of the tree Ta is 4. The slimnesses of spanning trees Tb, Tc, and Td shown in Figure 12.3b–d are 3, 2, and 1, respectively. You can easily see that the slimness of any other spanning tree is greater than or equal to 1; thus, the spanning tree Td in Figure 12.3d is one of the slimmest spanning trees, whose slimness is 1. Your job is to write a program that computes the smallest slimness.
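
    The bucket idea mentioned at the start of this excerpt can be sketched as follows; this is an illustrative C++ fragment rather than the book's code, and the primBuckets name, the MAXW constant, and the adjacency-list representation are assumptions (the excerpt's bound of 7 is used as the default):

        #include <limits>
        #include <list>
        #include <utility>
        #include <vector>

        constexpr int MAXW = 7;                                // assumed upper bound on edge weights
        const int INF = std::numeric_limits<int>::max();

        // Sketch of Prim's algorithm with a bucket "priority queue":
        // dist[v] is the cheapest known edge from v into the current tree,
        // and bucket[d] lists vertices whose current dist equals d.
        // adj[u] holds pairs (weight, v) for every edge {u, v}.
        long long primBuckets(const std::vector<std::vector<std::pair<int,int>>>& adj) {
            int n = adj.size();
            std::vector<int> dist(n, INF);
            std::vector<bool> inTree(n, false);
            std::vector<std::list<int>> bucket(MAXW + 1);
            long long total = 0;
            dist[0] = 0;
            bucket[0].push_back(0);                            // start the tree at vertex 0
            for (int picked = 0; picked < n; ) {
                int u = -1;
                for (int d = 0; d <= MAXW && u == -1; ++d) {   // scan buckets from cheapest upward
                    while (!bucket[d].empty()) {
                        int cand = bucket[d].front(); bucket[d].pop_front();
                        if (!inTree[cand] && dist[cand] == d) { u = cand; break; }   // skip stale entries
                    }
                }
                if (u == -1) break;                            // no reachable vertex left (graph not connected)
                inTree[u] = true; total += dist[u]; ++picked;
                for (auto [w, v] : adj[u])
                    if (!inTree[v] && w < dist[v]) {
                        dist[v] = w;
                        bucket[w].push_back(v);                // lazy insertion; the old entry becomes stale
                    }
            }
            return total;
        }

    Because every weight lies in a small fixed range, each extraction scans at most MAXW + 1 buckets instead of maintaining a comparison-based heap, which is the time-complexity improvement the excerpt alludes to.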
  • Combinatorial Optimization
    • William J. Cook, William H. Cunningham, William R. Pulleyblank, Alexander Schrijver (Authors)
    • 2011(Publication Date)
    Prim's Algorithm can be restated, using an observation from the proof of Theorem 2.5, as follows. Prim's Algorithm: Initialize H = (V(H), T) as ({r}, ∅); while H is not a spanning tree, add to T a minimum-cost edge from δ(V(H)). Here is a straightforward implementation of this algorithm. We keep V(H) as a characteristic vector x. (That is, x_u = 1 if u ∈ V(H), and x_u = 0 if u ∉ V(H).) At each step we run through E, checking for each f = uv whether f ∈ δ(V(H)) by checking whether x_u ≠ x_v, and if so comparing c_f to the current minimum encountered. So e can be chosen in O(m) time. Then x is updated by putting x_v = 1, where v is the end of e for which x_v was 0. This will be done n − 1 times, so we have a running time of O(nm). Now we describe the improvement to this running time found by Prim and Dijkstra. We keep, for each v ∉ V(H), an edge h(v) joining v to a node of H such that c_h(v) is minimum. Then e can be chosen as the h(v) that has smallest cost. The advantage of this is that only O(n) elementary steps are required to choose e. The disadvantage is that the values h(v) need to be changed after each step. Say that w was added to V(H) and v remains outside V(H). Then h(v) may have to be changed, but only if there is an edge f = wv with c_f < c_h(v). We can do all of these changes by going through L_w (the list of edges incident to w) once, which is again O(n) work. So we do O(n) elementary steps per step of the algorithm and get a running time of O(n^2), an improvement on O(nm). Further improvements are presented, for example, in the monograph of Tarjan [1983]. Now we turn to the implementation of Kruskal's Algorithm. Notice that, once an edge e = vw becomes unavailable to add to F, that is, H contains a path from v to w, it remains so. This means that finding the next edge to be added can be done simply by considering the edges in order of cost. That is, Kruskal's Algorithm can be restated as follows.
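
    As a hedged illustration of the bookkeeping just described (the adjacency-matrix representation and the names updateNearest, nearestEdgeCost, and nearestEnd are mine, not the book's), the O(n) update performed after a vertex w joins V(H) looks roughly like this in C++:

        #include <limits>
        #include <vector>

        const int INF = std::numeric_limits<int>::max();

        // cost[w][v] is the weight of edge wv, or INF if there is no such edge.
        // nearestEdgeCost[v] plays the role of c_h(v): the cheapest known edge from v into V(H);
        // nearestEnd[v] records the tree endpoint of that edge.
        // After w is added to V(H), each remaining v only needs to be compared against the edge wv.
        void updateNearest(int w,
                           const std::vector<std::vector<int>>& cost,
                           const std::vector<bool>& inTree,
                           std::vector<int>& nearestEdgeCost,
                           std::vector<int>& nearestEnd) {
            int n = cost.size();
            for (int v = 0; v < n; ++v)
                if (!inTree[v] && cost[w][v] < nearestEdgeCost[v]) {
                    nearestEdgeCost[v] = cost[w][v];
                    nearestEnd[v] = w;
                }
        }

    Choosing the next vertex is then a second O(n) scan for the smallest nearestEdgeCost among vertices not yet in the tree, giving the O(n^2) total running time quoted above.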
  • Applied Combinatorics
    • Alan Tucker (Author)
    • 2012(Publication Date)
    • Wiley
      (Publisher)
    (a) Prove that the edges can be ordered so that Prim's algorithm will yield any given minimum spanning tree. (b) Prove that the edges can be ordered so that Kruskal's algorithm will yield any given minimum spanning tree. (c) Prove that with ordered edges, both algorithms give the same tree. 15. Given an undirected, connected n-vertex graph G with lengths assigned to each edge, we form a graph G_N whose vertices correspond to minimum spanning trees of G, with two vertices v1, v2 adjacent if the corresponding minimum spanning trees T1, T2 differ by one edge, that is, T1 = T2 − e' + e'' (for some e', e''). (a) Produce an 8-vertex network G such that G_N is a chordless 4-circuit. (b) Prove that if T1 and T2 are minimum spanning trees which differ by k edges, that is, |T1 ∩ T2| = n − k, then in G_N there is a path of length k between the corresponding vertices. 16. Write a computer program to implement (as efficiently as possible) the following: (a) Kruskal's algorithm (b) Prim's algorithm. 4.3 NETWORK FLOWS: In this section we interpret the integer k(e) associated with edge e in a network as a capacity. We seek to maximize a "flow" from vertex a to vertex z such that the flow in each edge does not exceed that edge's capacity. Many transport problems are of this general form: for example, maximizing the flow of oil from Houston to New York through a large pipeline network (here the capacity of an edge represents the capacity in barrels per minute of a section of pipeline), or maximizing the number of telephone calls possible between New York and Los Angeles through the cables in a telephone network. It is convenient to assume initially that all networks are directed. We define an a–z flow f(e) in a directed network N to be an integer-valued function f(e) defined on each edge e (f(e) is the flow in e), together with a source vertex a and a sink vertex z satisfying the following conditions.
  • A Quick Reference to Data Structures and Computer Algorithms
    • Divya Joseph, Raji Ramakrishnan Nair, Alen Joseph (Authors)
    • 2019(Publication Date)
    • BPB Publications
      (Publisher)
    The simplest method is to draw each and every spanning tree of the graph and find the cost of each one; the tree with the lowest cost is the minimum-cost spanning tree for the given graph. But this method is too time-consuming.
    Other methods are available for finding the minimum-cost spanning tree without generating all possible spanning trees. For this, we use the greedy method. Two algorithms follow the greedy approach to find the minimum-cost spanning tree:
    1. Prim’s algorithm
    2. Kruskal’s algorithm

    6.2.1 Prim’s Algorithm

    Algorithm Prim(E, cost, n, t)
    // E is the set of edges in G. cost[1:n, 1:n] is the cost
    // adjacency matrix of an n-vertex graph such that cost[i,j] is
    // either a positive real number or ∞ if no edge (i,j) exists.
    // A minimum spanning tree is computed and stored as a set of
    // edges in the array t[1:n-1, 1:2]. (t[i,1], t[i,2]) is an edge in
    // the minimum-cost spanning tree. The final cost is returned.
    {
        Let (k, l) be an edge of minimum cost in E;
        mincost := cost[k, l];
        t[1,1] := k; t[1,2] := l;
        for i := 1 to n do                // Initialize near[].
            if (cost[i, l] < cost[i, k]) then near[i] := l;
            else near[i] := k;
        near[k] := 0; near[l] := 0;
        for i := 2 to n-1 do
        {   // Find n-2 additional edges for t.
            Let j be an index such that near[j] ≠ 0 and cost[j, near[j]] is minimum;
            t[i,1] := j; t[i,2] := near[j];
            mincost := mincost + cost[j, near[j]];
            near[j] := 0;
            for k := 1 to n do            // Update near[].
                if ((near[k] ≠ 0) and (cost[k, near[k]] > cost[k, j])) then
                    near[k] := j;
        }
        return mincost;
    }
    Explanation of Prim’s Algorithm
    Let us see how Prim's method works. Here, we have 7 vertices. From the given graph, we have to find the edge with the minimum cost; here edge (1,6) has minimum cost 10. Selection of the minimum-cost edge in E is done in statement 9 of the book's numbered listing. In statement 10, mincost = cost[k,l] = cost[1,6] = 10.
    The array t[1:n-1, 1:2] stores the set of edges in the minimum-cost spanning tree.
    ∴ t[1,1] := k = 1; t[1,2] := l = 6. In lines 12 to 14 of the listing, the near[] array is initialized. The next edge with the minimum cost then has to be added to the set of edges in t.
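
    For readers who want to run the near[]-based formulation directly, here is a hedged C++ translation of the pseudocode above (the 0-based indexing, the primNear name, and the plain cost-matrix interface are assumptions of this sketch, not the book's code):

        #include <limits>
        #include <utility>
        #include <vector>

        const int INF = std::numeric_limits<int>::max() / 2;   // "no edge" marker

        // cost is an n x n matrix with cost[i][j] = weight of edge (i,j), or INF if absent.
        // tree receives the n-1 chosen edges; the total cost is returned.
        // near[v] plays the same role as in the pseudocode: the tree vertex currently closest to v.
        // The graph is assumed to be connected.
        long long primNear(const std::vector<std::vector<int>>& cost,
                           std::vector<std::pair<int,int>>& tree) {
            int n = cost.size();
            int k = -1, l = -1;                                 // find a minimum-cost edge (k, l) to start with
            for (int i = 0; i < n; ++i)
                for (int j = i + 1; j < n; ++j)
                    if (cost[i][j] < INF && (k == -1 || cost[i][j] < cost[k][l])) { k = i; l = j; }
            long long mincost = cost[k][l];
            tree.push_back({k, l});
            std::vector<int> near(n);
            for (int i = 0; i < n; ++i)                         // initialize near[]
                near[i] = (cost[i][l] < cost[i][k]) ? l : k;
            near[k] = near[l] = -1;                             // -1 marks "already in the tree" (0 in the pseudocode)
            for (int e = 1; e < n - 1; ++e) {                   // find the n-2 remaining edges
                int j = -1;
                for (int v = 0; v < n; ++v)
                    if (near[v] != -1 && (j == -1 || cost[v][near[v]] < cost[j][near[j]])) j = v;
                tree.push_back({j, near[j]});
                mincost += cost[j][near[j]];
                near[j] = -1;
                for (int v = 0; v < n; ++v)                     // update near[] against the new tree vertex j
                    if (near[v] != -1 && cost[v][near[v]] > cost[v][j]) near[v] = j;
            }
            return mincost;
        }

    Each of the n − 2 iterations performs two O(n) scans of near[], so the overall running time is O(n^2) for a graph stored as a cost matrix.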
  • Graphs: Theory and Algorithms
    • K. Thulasiraman, M. N. S. Swamy (Authors)
    • 2011(Publication Date)
    Let G' be the graph obtained by contracting the edges of T_i. Again, by Theorem 11.5, the edge e_{i+1} is contained in a minimum weight spanning tree T'_min of G'. By Theorem 11.6, T_i ∪ T'_min is a minimum weight spanning tree of G. More specifically, T_{i+1} = T_i ∪ {e_{i+1}} is contained in a minimum weight spanning tree of G, and the correctness of Kruskal's algorithm follows. We next present another algorithm, due to Prim [11.49], to construct a minimum weight spanning tree of a weighted connected graph. As we shall see soon, this algorithm is also based on Theorems 11.5 and 11.6 and can be viewed as a variant of Kruskal's algorithm. Algorithm 11.7. Minimum Weight Spanning Tree (Prim): S0. G is the given nontrivial n-vertex weighted connected graph. S1. Set i = 1 and E_0 = ∅. Select any vertex, say v, of G and set V_1 = {v}. S2. Select an edge e_i = (p, q) of minimum weight such that e_i has exactly one end vertex, say p, in V_i. Define V_{i+1} = V_i ∪ {q}, E_i = E_{i-1} ∪ {e_i}, and T_i = (V_{i+1}, E_i). S3. If i < n − 1, set i = i + 1 and return to S2. Otherwise, let T_min = T_{n-1} and HALT. As in Kruskal's algorithm, Prim's Algorithm also constructs a sequence of acyclic subgraphs T_1, T_2, ..., and terminates with T_{n-1}, a minimum weight spanning tree of G. The subgraph T_{i+1} is constructed from T_i by adding an edge of minimum weight with exactly one end vertex in T_i. This construction ensures that all the T_i are connected. If G' denotes the graph obtained by contracting the edges of T_i, and v' denotes the vertex of G' which corresponds to the vertex set of T_i, then e_{i+1} is in fact a minimum weight edge incident on v' in G'. This observation and Theorems 11.5 and 11.6 can be used (as in the proof of Theorem 11.7) to prove that each T_i is contained in a minimum weight spanning tree of G. This would then establish that the spanning tree produced by Prim's Algorithm is a minimum weight spanning tree of G.
  • Algorithm Design and Applications
    • Michael T. Goodrich, Roberto Tamassia (Authors)
    • 2014(Publication Date)
    • Wiley
      (Publisher)
    Analyzing the Prim-Jarník Algorithm: Let n and m denote the number of vertices and edges of the input graph G, respectively. The implementation issues for the Prim-Jarník algorithm are similar to those for Dijkstra's algorithm. If we implement the priority queue Q as a heap that supports the locator-based priority queue methods (see Section 5.5), we can extract the vertex u in each iteration in O(log n) time. In addition, we can update each D[z] value in O(log n) time as well, a computation considered at most once for each edge (u, z). The other steps in each iteration can be implemented in constant time. Thus, the total running time is O((n + m) log n), which is O(m log n). Hence, we can summarize as follows: Theorem 15.6: Given a simple connected weighted graph G with n vertices and m edges, the Prim-Jarník algorithm constructs a minimum spanning tree for G in O(m log n) time. We illustrate the Prim-Jarník algorithm in Figures 15.9 and 15.10. 15.4 Barůvka's Algorithm: Each of the two minimum spanning tree algorithms we have described previously has achieved its efficient running time by utilizing a priority queue Q, which could be implemented using a heap (or even a more sophisticated data structure). This usage should seem natural, for minimum spanning tree algorithms involve applications of the greedy method, and, in this case, the greedy method must explicitly be optimizing certain priorities among the vertices of the graph in question. It may be a bit surprising, but as we show in this section, we can actually design an efficient minimum spanning tree algorithm without using a priority queue. Moreover, what may be even more surprising is that the insight behind this simplification comes from the oldest known minimum spanning tree algorithm: the algorithm of Barůvka.
  • Binary Digital Image Processing
    • Stéphane Marchand-Maillet, Yazid M. Sharaiha (Authors)
    • 1999(Publication Date)
    • Academic Press
      (Publisher)
    Chapter 3 ALGORITHMIC GRAPH THEORY. Graph theory is an important mathematical approach which can be used for mapping complex problems onto simple representations and models. It is based on a robust mathematical background which allows for the definition of optimal solution techniques for such problems. Efficient algorithms can thus be derived which can solve a particular problem based on its graph representation. Graph theory is an area of research by itself and finds applications in many fields such as operational research and theoretical computer science. It also finds applications in image processing, where the discrete nature of image representations makes its use consistent. In fact, it is often the case that graph theory is implicitly used when developing a solution to a particular problem related to image processing. The aim here is to relate image processing concepts to algorithmic graph theory in order to take advantage of this approach for further developments. In this chapter, we first present in Section 3.1 terminology and definitions used in graph theory. We summarise well-known algorithms which exist for the solution of problems defined in the context of graph theory. Typically, classes of equivalent problems are defined and solutions are presented for abstract representations of each class. Section 3.2 presents algorithms for the solution of two classic optimisation problems. More precisely, a number of algorithms are presented for the shortest path problem and the minimum weighted spanning tree problem. Finally, Section 3.3 relates these results to digital image processing concepts presented in Chapters 1 and 2. 3.1 Definitions: Major definitions used in algorithmic graph theory are reviewed here. The concept of a graph is introduced via the definition of vertices and arcs. These concepts are then extended to that of paths and trees on a graph.
Index pages curate the most relevant extracts from our library of academic textbooks. They have been created using an in-house natural language model (NLM), with each page adding context and meaning to key research topics.