Technology & Engineering

Rendering

Rendering refers to the process of generating an image from a model by means of computer software. In the context of technology and engineering, rendering is commonly used in computer graphics, 3D modeling, and animation to create realistic visual representations of objects and environments. It involves techniques such as shading, texturing, and lighting to produce lifelike images.

Written by Perlego with AI-assistance

4 Key excerpts on "Rendering"

  • Essential Skills for 3D Modeling, Rendering, and Animation
    CHAPTER 4 Lighting, Materials, Textures, and UVs

    WHAT IS “Rendering”?

    You will often hear the term “rendering” in connection with 3D computer graphics. It is tossed around a lot, often without any real explanation. After a while (say, 10 years), you start to get what rendering is, but you might be embarrassed that you can’t clearly define it in dictionary-like terms. Rendering is the act of converting any geometry (2D or 3D) into pixels on your screen, or saved in an image file. Although this sounds incredibly simple, the calculations needed to perform this operation are incredibly complex and have been in development for almost 50 years. It takes many calculations just to make a single pixel appear on your screen based on geometry in a 2D or 3D computer graphics package.
    The important thing to understand first, in terms of 3D geometry and rendering, is that the stuff you see on your screen isn’t really the same thing as the geometry. It is the visual representation of the math going on under the hood. Your surfaces and polygons exist without pixels; they are all just plotted points in space and mathematical equations. There are no pixels associated with them in their raw state, just a bunch of numbers in a format based on a Cartesian grid. It is important to remember that, because if you do, you will start to peel back the mystical connection between what you see on your screen and what the computer sees. In Figure 4.1, you can see the text description of a sphere in Maya. In Figure 4.2, you can see that sphere appear on your screen.
    FIGURE 4.1 3D is mostly numbers.
    FIGURE 4.2 Those numbers converted into an image on the screen.
    So how does this stuff get to the screen? That is where rendering comes into play. Unless you are some kind of mathematical genius who can see grid numbers in 3D space in your head, you will need some visual method of making sense of all these plotted points. That is where the rendering “engine” comes in. It is called an engine because it does a lot of work: it draws lines between those points in space (as NURBS curves or polygon edges) and shows them to you by calculating the pixels sent to the monitor as RGB data. It also fills in all the spaces between the lines, which are calculated as triangles, with pixels. This is why we can see a 3D image that seems to have depth and volume; we call this shading.
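The first step the excerpt describes, turning plotted points on a Cartesian grid into pixel coordinates, can be sketched in a few lines. This is an illustrative toy, not code from Maya or any real engine; the focal length and screen size are assumed values.

```python
# Minimal sketch of the first job of a rendering engine: perspective-project
# 3D points (plain numbers in space) to 2D pixel coordinates on the screen.

def project_point(x, y, z, focal_length=500.0, width=640, height=480):
    """Project a 3D point to integer pixel coordinates.

    The camera sits at the origin looking down the +z axis; points with a
    larger z are farther away and land closer to the image centre.
    """
    if z <= 0:
        raise ValueError("point is behind the camera")
    screen_x = int(width / 2 + focal_length * x / z)
    screen_y = int(height / 2 - focal_length * y / z)  # screen y grows downward
    return screen_x, screen_y

# Four vertices of a square face sitting 5 units in front of the camera:
vertices = [(-1, -1, 5), (1, -1, 5), (1, 1, 5), (-1, 1, 5)]
pixels = [project_point(*v) for v in vertices]
print(pixels)  # four pixel positions forming a square on screen
```

A real engine then connects these projected points with edges and fills the triangles between them, as the excerpt goes on to say.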
  • Media Production, Delivery and Interaction for Platform Independent Systems
    • Oliver Schreer, Jean-François Macq, Omar Aziz Niamut, Javier Ruiz-Hidalgo, Ben Shirley, Georg Thallinger, Graham Thomas (Authors)
    • 2013(Publication Date)
    • Wiley
      (Publisher)
    Figure 8.2) by switching to different video streams. These are either provided ‘live’ with the main programme, ‘online’ via a feedback channel requesting data from additional internet channels, or prerecorded on storage media (Blu-ray discs for sports or concert features, etc.).
    Figure 8.2 Rendering for end terminals (Multi-Angle Player (Youswitch Inc, 2013)), left: concert, right: sports. Reproduced by permission of Youswitch Inc.
    Video rendering is considered in this chapter as the translation of a model of the real world for projection on 2D displays (Borsum et al., 2010). This is basically the process of generating device-dependent pixel data from device-independent image data, that is, the rasterisation of geometric information (Poynton, 2003). However, the term rendering, originally used in computer graphics modelling (Ulichney, 1993), is nowadays combined with techniques from computer vision and also includes operations such as scaling/filtering, colour adjustment, quantisation and colour space conversion. This also applies to operations in the data pipeline that prepare the raw video data for the rendering process. As a consequence, it has been proposed that elements of the complete rendering process be distributed across the distribution and presentation chain to balance the computational load (Wang et al., 2011), for example.
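One of the pipeline operations the passage lists, colour space conversion, is easy to make concrete. The sketch below computes luma from R'G'B' using the BT.601 weighting, a common choice in video pipelines; treat it as an illustration rather than the specific conversion any particular system uses.

```python
# Colour space conversion as a rendering-pipeline step: derive 8-bit luma
# (Y') from 8-bit R'G'B' using the BT.601 luma coefficients.

def rgb_to_luma(r, g, b):
    """Return the BT.601 luma of an 8-bit R'G'B' triple, rounded to int."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return round(y)

print(rgb_to_luma(255, 255, 255))  # white -> 255
print(rgb_to_luma(255, 0, 0))      # pure red -> 76
```

The unequal weights reflect the eye's differing sensitivity to red, green and blue; quantising the result back to 8 bits is itself one of the quantisation steps the chapter mentions.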
    While computer-generated imagery (CGI) is becoming common in film and television production to create visual effects (VFX) or unique scenes, the level of detail required for flawless presentation on large screens still demands significant effort to produce content that looks natural. Here, the model that represents the video data and the transmission of this data to the renderer (the data pipeline) are the weak points for broadcast video production. Furthermore, how the user can access this data for smooth content personalisation, such as navigating a virtual camera, depends largely on swift updates of the model data required for the individual perspective selection.
  • 3DTV

    3DTV

    Processing and Transmission of 3D Video Signals

    • Anil Fernando, Stewart T. Worrall, Erhan Ekmekcioğlu (Authors)
    • 2013(Publication Date)
    • Wiley
      (Publisher)

    Chapter 5

    Rendering, Adaptation and 3D Displays

    Visualization of 3D media can be realized in a variety of ways, in contrast to traditional 2D media. A number of 3D display technologies are available, all of which differ in terms of the applied image processing and rendering techniques and the requirements for input media size and format. Thus, the mode of displaying the reconstructed 3D scene also differs. A standard 3D display on the market can display two fixed views at a time, regardless of the viewing angle, whereas multi-view displays usually have the capability of displaying more than two views simultaneously, so that viewers looking from different angles see different pictures. This necessitates a series of specific processing tasks within the display or the display driver dedicated to the 3D mode of viewing.
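The two fixed views of a standard stereoscopic display come from rendering the scene twice, from two horizontally offset camera positions. A minimal sketch, assuming a parallel camera rig and a 65 mm interaxial separation (a typical human interocular distance, not a mandated standard):

```python
# Sketch of how a two-view (stereoscopic) renderer positions its cameras:
# one camera per eye, offset symmetrically along the horizontal (x) axis.

def stereo_camera_positions(center, interaxial=0.065):
    """Return (left, right) camera positions, in metres, for a parallel rig."""
    cx, cy, cz = center
    half = interaxial / 2.0
    left = (cx - half, cy, cz)
    right = (cx + half, cy, cz)
    return left, right

# A camera rig centred at eye height (1.6 m), at the world origin:
left, right = stereo_camera_positions((0.0, 1.6, 0.0))
print(left, right)
```

Each position then drives one full render pass; a multi-view display simply extends this to more than two offset positions.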
    This chapter explains the stages of 3D video rendering and outlines the details of various 3D display technologies. Adaptation is another subject that needs special attention in the context of 3D and multi-view applications. This chapter also discusses several elements of the adaptation schemes devised for 3D media systems, while outlining their inherent differences from 2D media adaptation.

    5.1 Why Rendering?

    Rendering refers to the process of generating a display image from a binary model. A binary scene model contains objects with a particular data structure, such as the geometry, texture and lighting comprising the description of the scene. This data is processed within the rendering engine to output a digital image, or graphics. Rendering is an inherent part of the majority of multimedia applications, such as movies, television, streaming media and video games. In its simplest form, a video rendering engine integrated into a media player carries out the task of displaying a reconstructed two-dimensional image sequence in real time. In the wider sense, however, video rendering is usually composed of a group of computation-intensive and memory-demanding tasks, for which purpose-built hardware structures called Graphics Processing Units (GPUs) have been designed and integrated within modern computing architectures. Rendering algorithms can be grouped into real-time and non-real-time processes, depending upon the type of application. Especially in the 3D graphics case, processing can be done offline, as in movie post-production. In video games, on the other hand, the rendering engine must operate in real time.
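The passage says the engine combines geometry and lighting from the scene model to produce pixel values. The most basic such calculation is Lambertian (diffuse) shading: a pixel's brightness is proportional to the cosine of the angle between the surface normal and the light direction. The sketch below is illustrative, not any particular engine's code.

```python
# Lambertian shading: the simplest lighting calculation a rendering engine
# performs per pixel, combining a surface normal with a light direction.
import math

def lambert_shade(normal, light_dir, base_intensity=255):
    """Return a 0-255 intensity for a surface lit from light_dir."""
    def normalise(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)
    n = normalise(normal)
    l = normalise(light_dir)
    cos_theta = sum(a * b for a, b in zip(n, l))
    # Surfaces facing away from the light receive no direct illumination.
    return round(base_intensity * max(cos_theta, 0.0))

print(lambert_shade((0, 0, 1), (0, 0, 1)))   # facing the light: 255
print(lambert_shade((0, 0, 1), (1, 0, 1)))   # lit at 45 degrees: 180
print(lambert_shade((0, 0, 1), (0, 0, -1)))  # facing away: 0
```

A real-time engine evaluates something like this (plus texture and many refinements) for every pixel of every frame, which is why the GPU hardware the passage mentions exists.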
  • CAD Fundamentals for Architecture
    A CAD 3D model of a typical street. The camera allows you to navigate and select endless perspective views.
    Material and light
    The ability to add material and lighting to a model has now reached a photorealistic stage. For architects, interior designers and clients this provides a wonderful opportunity to be specific about material and lighting choices. Sketch rendering is very important in the design process, but the ability to check how one material reads against another is invaluable. In lighting terms, it is now possible to track accurately, and in real time, how sunlight enters a built structure. These models are having a significant influence on the design process of contemporary buildings.
    Rendering
    Although perspective is key, rendering is equally important to all the views available in the architectural palette. Isometric views or a rendered section help to describe a design more accurately. The process of rendering follows a similar format in all CAD programs, although different settings will give you different-quality results. The basics of the render interface have changed little in 30 years, but it has evolved to become more accurate and, to a certain extent, more complex. The way that game rendering has moved from Atari’s Pong to the current Xbox LIVE experience is just one example of how far it has developed.
    Output
    In rendering, resolution issues are often ignored, which can lead to problems in both production and realization. In the early stages of image production a reverse principle applies: you should keep the resolution of your image low, because quick updates of a view are invaluable when developing the composition of a render.
    However, at later stages, an image that is of a low resolution often has limited use. If you work on the principle of outputting any image at the maximum quality you can attain (within reason) you will be able to use it at different print sizes. Other settings within the render environment will also be discussed here, which will allow you to tweak visual outputs, making them crisper or more detailed.
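The advice above (compose at low resolution, output at maximum quality for print) comes down to simple arithmetic: required pixels = print size in inches times dots per inch. The sketch below uses 300 DPI as a common print target and A4 as an example size; both are illustrative assumptions.

```python
# Pixel dimensions needed for a given print size and DPI. Rendering at
# these dimensions (or above) keeps the image usable at that print size.

def pixels_for_print(width_in, height_in, dpi=300):
    """Return the (width, height) in pixels for the given size in inches."""
    return round(width_in * dpi), round(height_in * dpi)

print(pixels_for_print(8.27, 11.69))          # A4 portrait at 300 DPI
print(pixels_for_print(8.27, 11.69, dpi=72))  # quick screen-proof resolution
```

Working the other way, an image rendered at only 1024 x 768 prints sharply at 300 DPI only up to about 3.4 x 2.6 inches, which is why low-resolution drafts have limited use at later stages.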
Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.