Self-Learning and Adaptive Algorithms for Business Applications

A Guide to Adaptive Neuro-Fuzzy Systems for Fuzzy Clustering Under Uncertainty Conditions

Zhengbing Hu, Yevgeniy V. Bodyanskiy, Oleksii Tyshchenko

About This Book

In today's data-driven world, more sophisticated algorithms for data processing are in high demand, especially when the data cannot be handled with traditional techniques. Self-learning and adaptive algorithms are now widely used by leading giants such as Google, Tesla, Microsoft, and Facebook in their projects and applications. In this guide, designed for researchers and students of computer science, readers will find a resource showing how to apply methods that work on real-life problems to their own challenging applications, and a go-to work that makes the issues and aspects of fuzzy clustering clear. Covering research relevant to those studying cybernetics, applied mathematics, statistics, engineering, and bioinformatics who work in the areas of machine learning, artificial intelligence, complex system modeling and analysis, neural networks, and optimization, this is an ideal read for anyone interested in learning more about these fascinating new developments in machine learning.


Information

Year
2019
ISBN
9781838671730
Sub-subject
R&D

CHAPTER 1

REVIEW OF THE PROBLEM AREA

1.1. LEARNING AND SELF-LEARNING PROCEDURES

A learning process always involves a system which can be described by a vector of parameters to be learned $w$ and input data given as a sequence of observations $x(1), x(2), \dots, x(k), \dots$, where each observed object is described by an $n$-dimensional feature vector $x(k) = (x_1(k), x_2(k), \dots, x_n(k))^T \in \mathbb{R}^n$. We can define a function describing the system's error by Eq. 1.1 (Tsypkin, 1970):

$$J(w) = \int Q(x, w)\, p(x)\, dx \qquad (1.1)$$

where $Q(x, w)$ is some predefined objective function and $p(x)$ is the distribution density of $x$ in $\mathbb{R}^n$.
For a continuous function $Q(x, w)$, the purpose of the learning procedure is to bring the system to an optimum state $w^*$ at which the functional (Eq. 1.1) attains an extremum value $J(w^*)$. Usually, the state $w^*$ cannot be determined exactly due to the lack of information, since, in general, the distribution density $p(x)$ is also unknown.
If some information about the desired reaction of the system is available for a subset of observations $X = \{x(1), x(2), \dots, x(N)\}$, this subset is called a learning set (a training set), and the responses $y(k)$ corresponding to each of its elements are called a learning signal (a training signal). The purpose of the learning procedure, in this case, is to minimize the difference between the system's actual output $\hat{y}(k)$ and the desired output $y(k)$. The objective function $Q(x, w)$ minimized within the functional (Eq. 1.1) can then be defined (in its purest form) as

$$Q(x(k), w) = \bigl(y(k) - \hat{y}(k)\bigr)^2.$$
Analytical or numerical minimization of these objective functions leads to a variety of supervised learning algorithms.
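As a hedged illustration only (the linear model, the learning-rate value, and the function name sgd_squared_error are assumptions made for this sketch, not taken from the book), the following Python snippet minimizes the empirical squared-error objective over a training set by a simple stochastic gradient procedure:

```python
import numpy as np

def sgd_squared_error(X, y, lr=0.01, epochs=100, seed=0):
    """Minimize the empirical squared-error objective Q = (y - y_hat)^2
    for an assumed linear model y_hat = w @ x, using stochastic gradient descent.

    X : (N, n) array of observations, y : (N,) array of training signals.
    """
    rng = np.random.default_rng(seed)
    N, n = X.shape
    w = np.zeros(n)                      # tunable parameter vector
    for _ in range(epochs):
        for k in rng.permutation(N):     # one observation at a time
            y_hat = w @ X[k]             # system's actual output
            error = y[k] - y_hat         # deviation from the desired output
            w += lr * error * X[k]       # descent step on (y - y_hat)^2
    return w

# Toy usage: recover a known parameter vector from noisy observations
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + 0.05 * rng.normal(size=200)
print(sgd_squared_error(X, y))           # approaches true_w
```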
If the training signal $y(k)$ is unavailable, construction of the objective function becomes less trivial. In the most general terms, the self-learning process aims to minimize a divergence between the actual density function $p(x)$ and the approximation $\hat{p}(x, w)$ based on the system's performance:

$$J(w) = D\bigl(p(x)\,\|\,\hat{p}(x, w)\bigr) \to \min_{w}.$$

Since neither $p(x)$ nor $\hat{p}(x, w)$ can be measured directly, this leads to a wide variety of objective functions and algorithms based on them.
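A minimal sketch of this idea, assuming a one-dimensional sample, a single-Gaussian approximation $\hat{p}(x, w)$ with $w = (\mu, \sigma)$, and a histogram estimate of $p(x)$ (none of these specific choices come from the book):

```python
import numpy as np

def kl_histogram_vs_gaussian(x, bins=30):
    """Estimate the divergence D(p || p_hat) between the empirical density
    of a 1-D sample and a Gaussian approximation fitted to it."""
    # Histogram as a crude estimate of the true density p(x)
    p, edges = np.histogram(x, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    width = edges[1] - edges[0]

    # Gaussian approximation p_hat(x, w), with w = (mu, sigma) fitted to the sample
    mu, sigma = x.mean(), x.std()
    p_hat = np.exp(-0.5 * ((centers - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

    # Discretized Kullback-Leibler divergence over the non-empty bins
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / p_hat[mask]) * width)

x = np.random.default_rng(2).normal(loc=1.0, scale=2.0, size=5000)
print(kl_histogram_vs_gaussian(x))   # close to 0 for a truly Gaussian sample
```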
The main feature of intelligent systems is their ability to learn and self-learn, i.e., to make generalizations based on available and incoming data. This allows them to be used for solving problems automatically under specific conditions, such as a lack of a priori information about the nature of the data and the subject area.
Learning procedures can be described in the form of stochastic difference or differential equations for the tunable parameters of a system. In some cases, these equations have an exact solution, but numerical methods are commonly used to ensure asymptotic convergence to an optimal solution. As a result, most learning procedures are iterative.
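For instance, a typical recursive (stochastic difference) form of such a procedure is the gradient-type update below, shown only as a common illustration with an assumed step parameter $\eta(k)$, not as the book's specific algorithm:

$$w(k) = w(k-1) - \eta(k)\,\nabla_{w} Q\bigl(x(k), w(k-1)\bigr), \qquad \eta(k) > 0,$$

where asymptotic convergence is usually ensured by standard stochastic-approximation conditions such as $\sum_{k} \eta(k) = \infty$ and $\sum_{k} \eta^{2}(k) < \infty$.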
Most learning procedures can be attributed to one of two basic classes: supervised learning and unsupervised learning (self-learning). In the supervised case, the data contain both input information and examples of the desired system responses to that input, which makes it possible to train the system by comparing its output signal with these samples. In the unsupervised case, the system has no information about the desired outputs, and its task is to detect patterns in a dataset in which no data element represents a ready-made solution.

1.2. CLUSTERING

Clustering (automatic classification) is one of the primary tasks in data mining; in the most general case, it implies isolating groups of similar observations in a dataset. Formally, the clustering problem is stated as follows: given is a data sample $X = \{x(1), x(2), \dots, x(N)\}$ consisting of $N$ observations $x(k)$, $k = 1, 2, \dots, N$, where each observation is an $n$-dimensional feature vector $x(k) = (x_1(k), x_2(k), \dots, x_n(k))^T \in \mathbb{R}^n$. It is often convenient to represent the data sample in matrix form, $X = [x_{ki}] \in \mathbb{R}^{N \times n}$. These two forms are interchangeable.
A solution of the clustering problem is a partition matrix $U = [u_{qk}]$, where $u_{qk}$ stands for the membership level of the observation $x(k)$ in the $q$-th cluster, $q = 1, 2, \dots, m$. A general formulation of the problem does not specify whether the number of clusters is set beforehand or is found by the algorithm.
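As an illustrative toy example (the sizes $N = 5$, $n = 2$, $m = 2$ and all membership values are made up for this sketch), a data matrix and a fuzzy partition matrix could look as follows; in the common fuzzy setting, the memberships of each observation sum to one:

```python
import numpy as np

# Data sample: N = 5 observations, each an n = 2 dimensional feature vector
X = np.array([[0.1, 0.2],
              [0.0, 0.3],
              [0.2, 0.1],
              [5.0, 5.1],
              [4.9, 5.2]])                      # shape (N, n)

# Partition matrix U = [u_qk]: u_qk is the membership level of
# observation x(k) in the q-th cluster (m = 2 clusters here).
U = np.array([[0.95, 0.90, 0.97, 0.05, 0.02],   # cluster q = 1
              [0.05, 0.10, 0.03, 0.95, 0.98]])  # cluster q = 2

# A common (fuzzy) constraint: memberships of each observation sum to one
assert np.allclose(U.sum(axis=0), 1.0)

# A crisp partition is the special case where every u_qk is either 0 or 1
crisp_labels = U.argmax(axis=0)
print(crisp_labels)   # [0 0 0 1 1]
```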
The feature that differentiates this problem statement from classification is that no group-membership label is specified for any subset of the data, i.e., clustering is an unsupervised task.
A solution to a clustering problem is fundamentally ambiguous, for several reasons:
(1) There is no best or universal quality criterion or objective function for a clustering problem. Instead, there is a vast number of heuristic criteria, as well as algorithms without a clearly expressed criterion, and all of them can give different results.
(2) The number of clusters is usually unknown in advance and is set on the basis of subjective considerations.
(3) A clustering result strongly depends on the metric, which is typically subjective and chosen by an expert.
(4) Evaluation of clustering quality is also subjective.

1.2.1. Clustering Methods

Although there are many clustering approaches, this book mainly focuses on prototype-based methods (Borgelt, 2005; Xu & Wunsch, 2009). These methods select a small number of the most typical (averaged) observations, also called prototypes or centroids, from a sample (or generate them from the data) and divide the rest of the sample into clusters based on proximity to these prototypes.
According to the clustering problem statement, the sample should be divided into clusters so that similar observations are placed in one cluster and each cluster differs from the others as much as possible. From a mathematical point of view, this statement can be interpreted as minimizing intra-cluster distances in some metric. Using prototypes makes it possible to minimize the distance between each observation and each cluster prototype...
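To make the prototype idea concrete, here is a minimal sketch of one well-known prototype-based procedure, hard k-means with Euclidean distance; it is given only as a representative example of minimizing observation-to-prototype distances, not as the specific method developed in this book:

```python
import numpy as np

def k_means(X, m, iters=100, seed=0):
    """Minimal prototype-based clustering: alternate between assigning each
    observation to its nearest prototype and recomputing the prototypes as
    cluster means, which reduces the sum of intra-cluster distances."""
    rng = np.random.default_rng(seed)
    prototypes = X[rng.choice(len(X), size=m, replace=False)]
    for _ in range(iters):
        # Euclidean distance of every observation to every prototype
        d = np.linalg.norm(X[:, None, :] - prototypes[None, :, :], axis=2)
        labels = d.argmin(axis=1)                 # nearest-prototype assignment
        new_prototypes = np.array([
            X[labels == q].mean(axis=0) if np.any(labels == q) else prototypes[q]
            for q in range(m)
        ])
        if np.allclose(new_prototypes, prototypes):
            break                                  # prototypes stopped moving
        prototypes = new_prototypes
    return prototypes, labels

# Toy usage: two well-separated groups of observations
X = np.vstack([np.random.default_rng(3).normal(0, 0.3, size=(50, 2)),
               np.random.default_rng(4).normal(5, 0.3, size=(50, 2))])
prototypes, labels = k_means(X, m=2)
print(prototypes)   # approximately the two group centers, near (0, 0) and (5, 5)
```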
