Introduction to Rendering
Rendering is the final stage in the 3D computer graphics production process.
Though the wider context of rendering begins with shading and texturing objects and lighting your scene, the rendering process ends when surfaces, materials, lights, and motion are processed into a final image or image sequence.
Visualization vs. the final render
As you build your scenes (shade and texture objects, light scenes, position cameras, and so on), you’ll want to visualize your scene many times before you produce the final rendered image or image sequence. This process may involve (depending on your particular project) creating and setting up additional cameras.
Visualize a scene during early iterations to detect and correct image quality problems, and to estimate and reduce the time the final render will take, before you commit to performing it.
When you are satisfied with the results of your scene during test renders, you can perform the final render.
You can visualize and final render a single frame, part of an animation (multiple frames), or an entire animation in Autodesk® Maya®.
The key to successful rendering
The key to rendering is finding a balance between the visual complexity required and the rendering speed that determines how many frames can be rendered in a given period of time.
Rendering involves a large number of complex calculations that can keep your computer busy for a long time. It pulls data together from every sub-system within Maya and interprets that data for tessellation, texture mapping, shading, clipping, and lighting.
Producing rendered images always involves making choices that affect the quality (anti-aliasing and sampling) of the images, the speed with which the images are rendered, or both.
The highest quality images typically take the most time to render. The key to working efficiently is to produce good-enough quality images in as little time as possible in order to meet production deadlines. In other words, choose only the most economical values for options that let you produce images of acceptable quality for your particular project.
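To make that trade-off concrete, here is a minimal sketch in Python of per-pixel supersampling, the basic mechanism behind the anti-aliasing and sampling options mentioned above. The shade(x, y) function is a hypothetical stand-in for a real renderer's shading step, not any actual API:

import random

def render_pixel(shade, px, py, samples_per_pixel=4):
    # Average several jittered samples inside one pixel footprint.
    # `shade(x, y)` is an assumed stand-in returning an (r, g, b) tuple;
    # more samples mean smoother edges but proportionally longer renders.
    r = g = b = 0.0
    for _ in range(samples_per_pixel):
        sr, sg, sb = shade(px + random.random(), py + random.random())
        r, g, b = r + sr, g + sg, b + sb
    n = float(samples_per_pixel)
    return (r / n, g / n, b / n)

Doubling samples_per_pixel roughly doubles the cost of that loop, which is why dialing in the lowest acceptable sampling quality is often the single biggest lever on render time.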
A rendered image can be understood in terms of a number of visible features. Rendering research and development has largely been motivated by finding ways to simulate these efficiently. Some relate directly to particular algorithms and techniques, while others are produced jointly by several techniques working together. (A minimal shading sketch follows the list below.)
Shading – how the color and brightness of a surface varies with lighting
Texture-mapping – a method of applying detail to surfaces
Bump-mapping – a method of simulating small-scale bumpiness on surfaces
Fogging/participating medium – how light dims when passing through non-clear atmosphere or air
Shadows – the effect of obstructing light
Soft shadows – varying darkness caused by partially obscured light sources
Reflection – mirror-like or highly glossy reflection
Transparency or opacity – sharp transmission of light through solid objects
Translucency – highly scattered transmission of light through solid objects
Refraction – bending of light associated with transparency
Diffraction – bending, spreading, and interference of light passing by an object or aperture that disrupts the ray
Indirect illumination – surfaces illuminated by light reflected off other surfaces, rather than directly from a light source (also known as global illumination)
Caustics (a form of indirect illumination) – reflection of light off a shiny object, or focusing of light through a transparent object, to produce bright highlights on another object
Depth of field – objects appear blurry or out of focus when too far in front of or behind the object in focus
Motion blur – objects appear blurry due to high-speed motion, or the motion of the camera
Non-photorealistic rendering – rendering of scenes in an artistic style, intended to look like a painting or drawing.
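As a taste of how the first feature in that list works under the hood, here is a minimal diffuse (Lambertian) shading sketch in Python; the function name and arguments are illustrative, not any particular renderer's API:

def lambert(normal, light_dir, light_color, albedo):
    # Diffuse brightness falls off with the cosine of the angle between
    # the surface normal and the light direction (both unit vectors).
    n_dot_l = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return tuple(a * c * n_dot_l for a, c in zip(albedo, light_color))

# A surface facing the light head-on is fully lit...
print(lambert((0, 0, 1), (0, 0, 1), (1.0, 1.0, 1.0), (0.8, 0.2, 0.2)))
# ...while one edge-on to the light receives nothing.
print(lambert((0, 0, 1), (1, 0, 0), (1.0, 1.0, 1.0), (0.8, 0.2, 0.2)))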
Many rendering algorithms have been researched, and software used for rendering may employ a number of different techniques to obtain a final image.
Tracing every particle of light in a scene is nearly always completely impractical and would take a stupendous amount of time. Even tracing a portion large enough to produce an image takes an inordinate amount of time if the sampling is not intelligently restricted.
Therefore, a few loose families of more-efficient light transport modelling techniques have emerged:
rasterization, including scanline rendering, geometrically projects objects in the scene to an image plane, without advanced optical effects;
ray casting considers the scene as observed from a specific point of view, calculating the observed image based only on geometry and very basic optical laws of reflection intensity, and perhaps using Monte Carlo techniques to reduce artifacts;
ray tracing is similar to ray casting, but employs more advanced optical simulation, and usually uses Monte Carlo techniques to obtain more realistic results, at a speed that is often orders of magnitude slower.
The fourth type of light transport technique, radiosity, is not usually implemented as a rendering technique on its own, but instead calculates the passage of light as it leaves the light source and illuminates surfaces. These surfaces are usually rendered to the display using one of the other three techniques.
Most advanced software combines two or more of the techniques to obtain good-enough results at reasonable cost.
Another distinction is between image order algorithms, which iterate over pixels of the image plane, and object order algorithms, which iterate over objects in the scene. Generally object order is more efficient, as there are usually fewer objects in a scene than pixels.
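As a concrete illustration of the image-order approach (and of ray casting from the list above), here is a toy Python sketch that visits every pixel, fires one ray per pixel, and tests it against a single hard-coded sphere, printing an ASCII silhouette. Everything here is simplified for illustration:

import math

def hit_sphere(origin, direction, center, radius):
    # Solve the ray/sphere quadratic; direction is assumed unit length.
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return False
    return (-b - math.sqrt(disc)) / 2.0 > 0  # nearest hit in front of camera

width, height = 24, 12
for y in range(height):
    row = ""
    for x in range(width):
        # Image order: iterate over pixels, map each onto a view plane at z = -1.
        u = (x + 0.5) / width * 2 - 1
        v = 1 - (y + 0.5) / height * 2
        norm = math.sqrt(u * u + v * v + 1)
        ray = (u / norm, v / norm, -1 / norm)
        row += "#" if hit_sphere((0, 0, 0), ray, (0.0, 0.0, -3.0), 1.0) else "."
    print(row)

An object-order renderer would invert the loops: visit each object once and rasterize it into whichever pixels it covers.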
Understanding the Current Market of Render Engines
If you've spent any time looking into the various render engines on the market, or read about stand-alone rendering solutions, chances are you've come across terms like biased and unbiased, GPU acceleration, Reyes, and Monte Carlo.
The latest wave of next-generation renderers has generated a tremendous amount of hype, but it can sometimes be tough to tell the difference between a marketing buzzword and an honest-to-god feature.
What Is the Difference Between Biased and Unbiased Rendering?
The discussion of what constitutes unbiased versus biased rendering can get technical pretty quickly, so I'll keep it as basic as possible.
Unbiased - Unbiased renderers like Maxwell, Indigo, and LuxRender are typically hailed as "physically accurate" render engines. Although "physically accurate" is something of a misnomer (nothing in CG is truly physically accurate), the term is meant to imply that an unbiased renderer calculates the path of light as accurately as is statistically possible within the confines of current-gen rendering algorithms. In other words, no systematic error or "bias" is willfully introduced. Any variance will manifest as noise, but given enough time an unbiased renderer will eventually converge on a mathematically "correct" result.
Biased - Biased renderers, on the other hand, make certain concessions in the interest of efficiency. Instead of chugging away until a sound result has been reached, biased renderers will introduce sample bias, and use subtle interpolation or blurring to reduce render time. Biased renderers can typically be fine-tuned more than their unbiased counterparts, and in the right hands, a biased renderer can potentially produce a thoroughly accurate result with significantly less CPU time.
So ultimately, the choice is between an unbiased engine, which requires more CPU time but fewer artist-hours to operate, and a biased renderer, which gives the artist quite a bit more control but requires a larger time investment from the render technician.
Although there are always exceptions to the rule, unbiased renderers work quite well for still images, especially in the architectural visualization sector; in motion graphics, film, and animation, however, the efficiency of a biased renderer is usually preferable.
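That "converges given enough time" behavior is easy to demonstrate in miniature. The Python sketch below is a generic Monte Carlo estimator, not any actual render engine: it has no systematic bias, so its error shows up purely as noise that shrinks as samples accumulate:

import random

random.seed(1)  # fixed seed so the run is repeatable

def estimate_pi(samples):
    # Unbiased Monte Carlo estimate: the fraction of random points in the
    # unit square that land inside the quarter disc, scaled by 4.
    hits = sum(
        1 for _ in range(samples)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    return 4.0 * hits / samples

for n in (100, 10_000, 1_000_000):
    print(n, estimate_pi(n))  # noise falls off roughly as 1/sqrt(n)

A biased engine, by analogy, trades some of that statistical purity for a smoother answer sooner, for instance by interpolating between cached samples, which is exactly the interpolation/blurring concession described above.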
How Does GPU Acceleration Factor In?
GPU acceleration is a relatively new development in rendering technology. Game engines have depended on GPU-based graphics for years; however, it's only fairly recently that GPU integration has been explored for non-real-time rendering applications, where the CPU has always been king.
With the widespread proliferation of NVIDIA's CUDA platform, it became possible to use the GPU in tandem with the CPU in offline rendering tasks, giving rise to an exciting new wave of rendering applications.
GPU-accelerated renderers can be unbiased, like Indigo or Octane, or biased, like Redshift.
What Does It All Mean for the End-User?
First of all, it means there are more options than ever. Not so long ago, rendering was something of a black art in the CG world, and only the most technically minded artists held the keys. Over the course of the past decade, the playing field has leveled a great deal, and photo-realism has become perfectly attainable for a one-person team (in a still image, at least).
Check out our recently published list of render engines to get a feel for how many new solutions have emerged. Rendering technology has jumped way out of the box, and newer solutions like Octane or Redshift are so different from old standbys like RenderMan that it almost doesn't make sense to compare them.
What Is Virtual, Mixed & Augmented Reality?
Virtual reality (VR) is about simulating a reality based on 3D models within a computer.
Mixed reality (MR) concerns the amalgamation of the actual world with a virtual one.
Augmented reality (AR) is adding information to the actual world. This additional information can be presented to the user by means of a smartphone, tablet, projector, smart glasses, or a head-mounted display (HMD).
Visual 1: Virtual, mixed and augmented reality.
Development of the first VR glasses started in the 1960s. In the mid-1990s, games and new devices like Nintendo's 'Virtual Boy' gave the appearance of a breakthrough in VR. Eventually, technical constraints like weight, inadequate graphics, and a shortage of available content hindered development. In the past few years, advancements, especially in smartphone technology, have raised expectations for an actual breakthrough in VR/AR.
It is widely expected that virtual and augmented reality may cause a technological revolution. At the very least, these technologies will lead to new lines of business and new ways to work and communicate. Within companies, departments at the end of a process, like services, will come into direct contact with departments at the beginning of the process, like research and development. For example, some companies use VR/AR to train their service technicians in maintaining machines that are still in development (see the Océ/Canon use case).
Some say the adoption of VR and AR technologies can be compared to the emergence of the Internet and the smartphone. One aspect that always plays a role in adoption processes is the so-called 'horseless carriage syndrome'. In other words, when people are faced with new innovations, they revert to older, more familiar technology applications. For example, the first automobiles looked like carriages, and the first websites were similar to brochures. Only as time progresses can people fully grasp the possibilities and unique applications of new technologies.
The 'horseless carriage syndrome' offers opportunities to first movers: because they are already experimenting with these technologies, they can more quickly understand the potential for new products and services.
“I think there is a world market for about five computers.”
- Thomas J. Watson of IBM, 1943
VR and AR technologies enable new ways of interaction and communication. People can enter environments that do not exist in the real world and, at the same time, interact with them. Additionally, information and interaction can be added to an actual environment, making it richer.
The current status of the VR/AR market resembles the Internet around 1995.
In 2016, the first headsets like the Oculus Rift and the HTC Vive were shipped to consumers worldwide. At this point, it can be argued that the international VR/AR market resembles the Internet around 1995. A lack of available and compelling content is a reminder that VR/AR is still an undeveloped market.