Computer Science

Tree data structure

A tree data structure is a hierarchical way of organizing data using nodes and edges. It consists of a root node, which has child nodes connected by edges, forming a branching structure. Each node can have multiple children, and the structure is commonly used in computer science for organizing and representing data in a way that allows for efficient searching and retrieval.
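The branching structure described above can be sketched in a few lines of code. This is a minimal illustration, assuming each node stores a value and a list of child nodes; the class and function names are invented here, not taken from any particular library.

```python
# A minimal sketch of a general tree: one root, each node holding a value
# and an edge to each of its children.

class TreeNode:
    def __init__(self, value):
        self.value = value
        self.children = []   # child nodes connected to this node by edges

    def add_child(self, value):
        child = TreeNode(value)
        self.children.append(child)
        return child

def height(node):
    """Length (in edges) of the longest root-to-leaf path; a lone node has height 0."""
    if not node.children:
        return 0
    return 1 + max(height(c) for c in node.children)

root = TreeNode("root")
a = root.add_child("a")
b = root.add_child("b")
a.add_child("a1")

print(height(root))  # 2: the deepest path is root -> a -> a1
```

Because each node holds a list of children rather than a fixed pair, this shape accommodates any branching factor; binary trees are the special case with at most two children per node.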

Written by Perlego with AI-assistance

3 Key excerpts on "Tree data structure"

Index pages curate the most relevant extracts from our library of academic textbooks. They have been created using an in-house natural language model (NLM), with each page adding context and meaning to key research topics.
  • Mesh Generation: Application to Finite Elements
    eBook - ePub

    • Pascal Frey, Paul Louis George (Authors)
    • 2013 (Publication Date)
    • Wiley-ISTE (Publisher)

    ...To this end, we look at:
    • general data structures allowing us to store, retrieve, or analyze sets of objects;
    • structures allowing selective access to some entities already stored. The access can be performed according to several criteria of selection. We find here, for instance, the approaches where the smallest item (in some sense), the first or the last recorded, the neighbor(s) of a given item, etc., is sought. Here we will find data structures like the stack (LIFO), the queue (FIFO), priority queues, arrays with sorting, and binary search trees;
    • data structures like dictionaries that can provide answers to questions like “does this item exist?” and allow items to be inserted or removed. We will find here BSTs and hash-coding techniques.

    In Section 2.6, we discuss how to use data structures in two and three dimensions for fast storing and retrieving of items such as points, segments (edges), or polygons. Section 2.7 is devoted to the computer implementation of topological data. After this overview of basic data structures and algorithms, we discuss robustness problems inherent to any implementation of a mathematical expression in a computer. The degree of the problems and the notion of a predicate are then analyzed, as well as the cost in terms of the number of operations and of memory requirements (Sections 2.8 and 2.9). To conclude, we mention some applications where the previously described material can be used, in the specific context of the development of mesh generation and modification algorithms (Section 2.10).

    2.2 Elementary structures
    In this section, we describe tables (arrays), pointers, lists, stacks, and queues. These structures are briefly introduced below on deliberately simple examples.

    Table or array
    The table or array is most certainly the simplest and the most efficient data structure for numerous applications...
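The binary search tree mentioned in the excerpt supports both kinds of access it describes: selective access by key order, and the dictionary-style question "does this item exist?". Below is a hedged sketch of BST insertion and lookup; the names are illustrative, not from the book.

```python
# A minimal binary search tree (BST): keys smaller than a node's key go
# into its left subtree, larger keys into its right subtree.

class BSTNode:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(node, key):
    """Insert key into the subtree rooted at node; return the subtree root."""
    if node is None:
        return BSTNode(key)
    if key < node.key:
        node.left = insert(node.left, key)
    elif key > node.key:
        node.right = insert(node.right, key)
    return node  # duplicate keys are ignored

def contains(node, key):
    """Answer the dictionary-style question: does this item exist?"""
    while node is not None:
        if key == node.key:
            return True
        node = node.left if key < node.key else node.right
    return False

root = None
for k in [8, 3, 10, 1, 6]:
    root = insert(root, k)

print(contains(root, 6))   # True
print(contains(root, 7))   # False
```

Each comparison discards one subtree, so lookup cost is proportional to the tree's height; on reasonably balanced trees that is logarithmic in the number of stored items, which is what makes BSTs competitive with the hash-coding techniques the excerpt pairs them with.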

  • Machine Learning: An Algorithmic Perspective, Second Edition
    eBook - ePub

    ...CHAPTER 12 Learning with Trees

    We are now going to consider a rather different approach to machine learning, starting with one of the most common and powerful data structures in the whole of computer science: the binary tree. The computational cost of making the tree is fairly low, but the cost of using it is even lower: 𝒪(log N), where N is the number of datapoints. This is important for machine learning: querying the trained algorithm happens more often than training it, the result is often wanted immediately, and so querying should be as fast as possible. This is sufficient to make trees seem attractive for machine learning. However, they have other benefits, such as the fact that they are easy to understand (following a tree to get a classification answer is transparent, which makes people trust it more than getting an answer from a ‘black box’ neural network). For these reasons, classification by decision trees has grown in popularity over recent years. You are very likely to have been subjected to decision trees if you’ve ever phoned a helpline, for example for computer faults. The phone operators are guided through the decision tree by your answers to their questions.

    The idea of a decision tree is that we break classification down into a set of choices about each feature in turn, starting at the root (base) of the tree and progressing down to the leaves, where we receive the classification decision. The trees are very easy to understand, and can even be turned into a set of if-then rules, suitable for use in a rule induction system. In terms of optimisation and search, decision trees use a greedy heuristic to perform search, evaluating the possible options at the current stage of learning and taking the one that seems optimal at that point. This works well a surprisingly large amount of the time.

    12.1 Using Decision Trees

    As a student it can be difficult to decide what to do in the evening...

  • Data Classification: Algorithms and Applications
    eBook - ePub

    ...Chapter 4 Decision Trees: Theory and Algorithms

    Victor E. Lee, John Carroll University, University Heights, OH, [email protected]
    Lin Liu, Kent State University, Kent, OH, [email protected]
    Ruoming Jin, Kent State University, Kent, OH, [email protected]

    4.1 Introduction

    One of the most intuitive tools for data classification is the decision tree. It hierarchically partitions the input space until it reaches a subspace associated with a class label. Decision trees are appreciated for being easy to interpret and easy to use. They are enthusiastically used in a range of business, scientific, and health care applications [12, 15, 71] because they provide an intuitive means of solving complex decision-making tasks. For example, in business, decision trees are used for everything from codifying how employees should deal with customer needs to making high-value investments. In medicine, decision trees are used for diagnosing illnesses and making treatment decisions for individuals or for communities.

    A decision tree is a rooted, directed tree akin to a flowchart. Each internal node corresponds to a partitioning decision, and each leaf node is mapped to a class label prediction. To classify a data item, we imagine the item traversing the tree, beginning at the root. Each internal node is programmed with a splitting rule, which partitions the domain of one (or more) of the data’s attributes. Based on the splitting rule, the data item is sent forward to one of the node’s children. This testing and forwarding is repeated until the data item reaches a leaf node. Decision trees are nonparametric in the statistical sense: they are not modeled on a probability distribution for which parameters must be learned...
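The classification procedure the excerpt describes, testing a splitting rule at each internal node and forwarding the item to a child until a leaf's label is reached, can be sketched generically. The node layout, attribute names, and labels below are assumptions made for illustration, not the chapter's own code.

```python
# A sketch of decision-tree classification: internal nodes hold a splitting
# rule over one attribute, leaves hold class labels, and a data item is
# forwarded from the root until it reaches a leaf.

class Leaf:
    def __init__(self, label):
        self.label = label           # the class label prediction

class Internal:
    def __init__(self, attribute, threshold, left, right):
        self.attribute = attribute   # which attribute the splitting rule tests
        self.threshold = threshold   # values <= threshold go left, others right
        self.left = left
        self.right = right

def classify(node, item):
    """Send the item down the tree until it reaches a leaf node."""
    while isinstance(node, Internal):
        if item[node.attribute] <= node.threshold:
            node = node.left
        else:
            node = node.right
    return node.label

# A tiny hypothetical tree: first split on 'age', then (for older items) on 'income'.
tree = Internal("age", 30,
                Leaf("low-risk"),
                Internal("income", 50000, Leaf("high-risk"), Leaf("low-risk")))

print(classify(tree, {"age": 25, "income": 40000}))   # low-risk
print(classify(tree, {"age": 45, "income": 20000}))   # high-risk
```

Note that nothing here is parametric: the tree encodes the partitioning directly as structure, with no probability distribution whose parameters must be learned, matching the excerpt's closing remark.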