Heterogeneous Computing Architectures

Challenges and Vision

Olivier Terzo, Karim Djemame, Alberto Scionti, Clara Pezuela


About this book

Heterogeneous Computing Architectures: Challenges and Vision provides an updated vision of the state of the art of heterogeneous computing systems, covering all the aspects related to their design: from the architecture and programming models to hardware/software integration and orchestration, to real-time and security requirements. The transitions to multicore processors, GPU computing, and Cloud computing are not separate trends, but aspects of a single trend: mainstream computers, from desktops to smartphones, are being permanently transformed into heterogeneous supercomputer clusters. The reader will get an organic perspective of modern heterogeneous systems and their future evolution.


Information

Publisher: CRC Press
Year: 2019
ISBN: 9780429680038
Edition: 1
Pages: 316
Language: English

1 Heterogeneous Data Center Architectures: Software and Hardware Integration and Orchestration Aspects
A. Scionti, F. Lubrano, and O. Terzo
LINKS Foundation — Leading Innovation & Knowledge for Society, Torino, Italy
S. Mazumdar
Simula Research Laboratory, Lysaker, Norway
CONTENTS
1.1 Heterogeneous Computing Architectures: Challenges and Vision
1.2 Backgrounds
1.2.1 Microprocessors and Accelerators Organization
1.3 Heterogeneous Devices in Modern Data Centers
1.3.1 Multi-/Many-Core Systems
1.3.2 Graphics Processing Units (GPUs)
1.3.3 Field-Programmable Gate Arrays (FPGAs)
1.3.4 Specialised Architectures
1.4 Orchestration in Heterogeneous Environments
1.4.1 Application Descriptor
1.4.2 Static Energy-Aware Resource Allocation
1.4.2.1 Modelling Accelerators
1.4.3 Dynamic Energy-Aware Workload Management
1.4.3.1 Evolving the Optimal Workload Schedule
1.5 Simulations
1.5.1 Experimental Setup
1.5.2 Experimental Results
1.6 Conclusions
Machine learning (ML) and deep learning (DL) algorithms are emerging as the new driving force for the evolution of computer architecture. With the ever-larger adoption of ML/DL techniques in the Cloud and high-performance computing (HPC) domains, several new architectures (spanning from chips to entire systems) have been pushed onto the market to better support applications based on ML/DL algorithms. While HPC and Cloud long remained distinct domains with their own challenges (i.e., HPC aims at maximising FLOPS, while Cloud favours COTS components), an ever-larger number of new applications is pushing for their rapid convergence. In this context, many accelerators (GP-GPUs, FPGAs) and customised ASICs (e.g., Google TPUs, Intel Neural Network Processor (NNP)) with dedicated functionalities have been proposed, further enlarging the data center heterogeneity landscape. Internet of Things (IoT) devices have also started integrating specific acceleration functions, still aimed at preserving energy. Application acceleration is also common outside ML/DL; here, scientific applications popularised the use of GP-GPUs, as well as other architectures such as Accelerated Processing Units (APUs), Digital Signal Processors (DSPs), and many-cores (e.g., Intel Xeon Phi). On one hand, training complex deep learning models requires powerful architectures capable of crunching a large number of operations per second while limiting power consumption; on the other hand, flexibility in supporting the execution of a broader range of applications (the HPC domain requires support for double-precision floating-point arithmetic) remains mandatory. From this viewpoint, heterogeneity is also pushed downwards: chip architectures sport a mix of general-purpose cores and dedicated accelerating functions.
Supporting such large-scale heterogeneity demands an adequate software environment able to maximise productivity and to extract maximum performance from the underlying hardware. This challenge is addressed when one looks at single platforms (e.g., Nvidia provides the CUDA programming framework to support a flexible programming environment, while OpenCL has been proposed as a vendor-independent solution targeting different devices, from GPUs to FPGAs); however, moving at scale, effectively exploiting heterogeneity remains a challenge. Managing a large number of such varied heterogeneous resources poses new challenges as well. Most of the tools and frameworks (e.g., OpenStack) for managing the allocation of resources to jobs still provide limited support for heterogeneous hardware.
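To make the device-level side of this concrete, the following minimal Python sketch (an illustration only, assuming the third-party pyopencl package and at least one OpenCL runtime are installed on the node) enumerates the heterogeneous devices an orchestrator would ultimately have to reason about, reporting each one as a CPU, GPU or other accelerator.

```python
# Minimal sketch: list the OpenCL-visible devices on a node.
# Assumes pyopencl is installed and at least one OpenCL platform/runtime is present.
import pyopencl as cl

# Human-readable names for the OpenCL device-type flags.
DEVICE_TYPES = {
    cl.device_type.CPU: "CPU",
    cl.device_type.GPU: "GPU",
    cl.device_type.ACCELERATOR: "ACCELERATOR",  # e.g., FPGA boards with an OpenCL runtime
}

def list_devices():
    """Print one line per (platform, device) pair found on this node."""
    for platform in cl.get_platforms():
        for dev in platform.get_devices():
            kind = DEVICE_TYPES.get(dev.type, "OTHER")
            print(f"{platform.name:30s} {kind:12s} {dev.name} "
                  f"({dev.max_compute_units} compute units, "
                  f"{dev.global_mem_size // (1024 ** 2)} MiB global memory)")

if __name__ == "__main__":
    list_devices()
```

On a node hosting a discrete GPU and an FPGA board exposed through an OpenCL runtime, the same loop would list both alongside the host CPUs; this kind of inventory is the starting point for any resource manager that wants to exploit heterogeneity rather than hide it.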
The contribution of this chapter is twofold: (i) presenting a comprehensive vision of hardware and software heterogeneity, covering the whole spectrum of a modern Cloud/HPC system architecture; and (ii) presenting ECRAE, an orchestration solution devised to explicitly deal with heterogeneous devices.
1.1 Heterogeneous Computing Architectures: Challenges and Vision
Cloud computing represents a well-established paradigm in the modern computing domain, showing ever-growing adoption over the years. The continuous improvement in processing, storage and interconnection technologies, along with the recent explosion of machine learning (ML) and, more specifically, deep learning (DL) technologies, has pushed Cloud infrastructures beyond their traditional role. Besides traditional services, we are witnessing the diffusion of (Cloud) platforms supporting the ingestion and processing of massive data sets coming from ever-larger sensor networks. Also, the availability of a huge amount of cheap computing resources has attracted the scientific community to use Cloud computing resources to run complex HPC-oriented applications without requiring access to expensive supercomputers [175, 176]. Figure 1.1 shows the various levels of the computing continuum within which heterogeneity manifests itself.
FIGURE 1.1: The heterogeneity over the computing continuum: From the IoT level (smart-connected sensors) to the data center (compute nodes and accelerators).
In this context, data centers have a key role in providing the necessary IT resources to a broad range of end-users. Thus, unlike in past years, they have become more heterogeneous, starting to include processing technologies, as well as storage and interconnection systems, that were once the prerogative of high-performance systems (i.e., high-end clusters, supercomputers). Cloud providers have been pushed to embed several types of accelerators, ranging from well-known (GP-)GPUs to reconfigurable devices (mainly FPGAs) to more exotic hardware (e.g., Google TPU, Intel Neural Network Processor (NNP)). CPUs have also become more variegated in terms of micro-architectural features and ISAs. With ever-more complex chip-manufacturing processes, heterogeneity has reached IoT devices too: here, hardware specialisation minimises energy consumption. Such vast diversity in hardware systems, along with the demand for more energy efficiency at every level and the growing complexity of applications, makes it mandatory to rethink the approach used to manage Cloud resources.
Among them, FPGAs have recently emerged as a valuable candidate providing an adequate level of performance, energy efficiency and programmability for a large variety of applications. Thanks to their favourable trade-off between performance and flexibility (high-level synthesis (HLS) compilers ease the mapping between high-level code and circuit synthesis), most (scientific) applications nowadays can benefit from FPGA acceleration. Energy (power) consumption is the main concern for Cloud providers, since the rapid growth of the demand for computational power has led to the creation of large-scale data centers and, consequently, to the consumption of enormous amounts of electricity, resulting in high operational costs and carbon dioxide emissions. In the IoT world, on the other hand, reducing energy consumption is mandatory to keep devices working for a longer time, even when disconnected from the power supply grid. Data centers need proper (energy) power management strategies to cope with the growing energy consumption, which involves high economic costs and environmental issues. New challenges emerge in providing adequate management strategies. Specifically, resource allocation is made more effective by dynamically selecting the most appropriate resource for a given task to run, taking into account the availability of specialised processing elements. Furt...
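As a purely illustrative sketch of the dynamic selection just described (this is not the ECRAE algorithm; all device names, power figures and runtimes below are invented for the example), the following Python snippet picks, among the free resources able to execute a task, the one with the lowest estimated energy, computed as average power times predicted runtime.

```python
# Illustrative energy-aware placement sketch (not the ECRAE algorithm).
# All names, power figures and runtimes are assumed values for the example.
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    kind: str          # e.g. "cpu", "gpu", "fpga"
    power_w: float     # average power draw while busy, in watts
    busy: bool = False

@dataclass
class Task:
    name: str
    runtime_s: dict    # predicted runtime (seconds) per resource kind able to run it

def pick_resource(task, resources):
    """Return the free, compatible resource minimising estimated energy (joules)."""
    candidates = [r for r in resources
                  if not r.busy and r.kind in task.runtime_s]
    if not candidates:
        return None  # no free compatible device: the task must wait
    return min(candidates, key=lambda r: r.power_w * task.runtime_s[r.kind])

# Toy example: a DL inference task that runs much faster on accelerators.
pool = [Resource("node0-cpu", "cpu", power_w=120.0),
        Resource("node0-gpu", "gpu", power_w=250.0),
        Resource("node1-fpga", "fpga", power_w=60.0)]
task = Task("inference-job", runtime_s={"cpu": 40.0, "gpu": 5.0, "fpga": 9.0})

chosen = pick_resource(task, pool)
print(f"{task.name} -> {chosen.name} "
      f"(~{chosen.power_w * task.runtime_s[chosen.kind]:.0f} J)")
```

In this toy setting the FPGA wins (about 540 J) even though the GPU finishes sooner, because its much lower power draw outweighs the longer runtime; a real orchestrator would refine such estimates with monitoring data and also account for queueing and co-location effects.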
