Shaders

Shaders are a core component of computer graphics and continue to transform all forms of rendering, physical simulation, and visualization.

PassThrough Shader

The pass-through shader serves as a basic "Hello World" of the programmable graphics pipeline. This shader doesn't alter the appearance of the loaded 3D objects; it passes their geometric and preset color information (a predefined or hard-coded color) directly through the stages of the pipeline, minimizing the amount of shader code required to complete the rendering process. The primary purpose of a pass-through shader is to maintain the integrity of the object's surface characteristics without introducing any modifications. In the final rendered output, the pass-through shader displays the loaded object in a solid color, preserving the inherent attributes defined by the original mesh and the hard-coded surface color.
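A minimal sketch of what such a pass-through vertex/fragment pair might look like in GLSL; the attribute and uniform names are illustrative rather than taken from this repository.

```glsl
// passthrough.vert -- forwards positions unchanged apart from the standard transforms.
#version 330 core
layout(location = 0) in vec3 aPosition;

uniform mat4 uModel;        // illustrative uniform names
uniform mat4 uView;
uniform mat4 uProjection;

void main()
{
    gl_Position = uProjection * uView * uModel * vec4(aPosition, 1.0);
}
```

```glsl
// passthrough.frag -- outputs a single hard-coded surface color.
#version 330 core
out vec4 FragColor;

void main()
{
    FragColor = vec4(0.8, 0.8, 0.8, 1.0);   // predefined solid color
}
```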

Color Mapping Shader

The color-mapping shader provides a basic demonstration of how the surface of an object can be shaded using a linear interpolation of colors based on the surface normal n̂ provided by the geometry of the loaded mesh. The surface normal components n̂ = (x, y, z) are directly mapped to the color components (R, G, B) of the fragment color. The resulting multi-color blend is due to the orientation of the surface normals about the ŷ axis. This process defines the overall operation of the shader as a basic color mapping.
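A minimal fragment-shader sketch of this mapping (the varying name is illustrative):

```glsl
// colormap.frag -- maps the interpolated surface normal directly to an RGB color.
#version 330 core
in vec3 vNormal;       // interpolated surface normal from the vertex shader
out vec4 FragColor;

void main()
{
    vec3 n = normalize(vNormal);
    // Direct mapping: (x, y, z) -> (R, G, B). Negative components clamp to 0 on output;
    // a common variant remaps with n * 0.5 + 0.5 so every orientation gets a unique color.
    FragColor = vec4(n, 1.0);
}
```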

The Ambient-Diffuse-Specular (ADS) Lighting Model

The most common illumination model that has persisted through the development of computer graphics over the course of several years is based on a three-component form that is computationally efficient and easy to implement using modern shader languages. This lighting model has become known as the Ambient, Diffuse, Specular (ADS) shading model. The ADS lighting model provides the basis for understanding and implementing the simplest form of basic light effects. There are different methods for implementing this lighting model, and the choice impacts both the performance and visual quality of the result; these include the Gouraud and Phong shading models, both of which build on the same underlying ideas.

Gouraud Shading:

For each vertex, the light intensity contributed by the light source is evaluated at the object surface using the vertex normal. Once this intensity value is determined for each vertex, the intensity values are linearly interpolated between vertices (color interpolation).

Phong Shading:

For each fragment, the surface normal is linearly interpolated from the vertex normals of the current face based on the fragment's position on the surface. The interpolated normal then determines the light intensity for each fragment (normal interpolation).
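A condensed per-fragment ADS sketch with a single light; Gouraud shading would evaluate the same three terms in the vertex shader and interpolate the resulting colors instead. The uniform and varying names are illustrative.

```glsl
// phong.frag -- per-fragment ADS lighting using the interpolated normal.
#version 330 core
in vec3 vNormal;       // interpolated normal (world space)
in vec3 vWorldPos;     // fragment position (world space)
out vec4 FragColor;

uniform vec3  uLightPos;      // illustrative names; a full shader would also take a light color
uniform vec3  uViewPos;
uniform vec3  uObjectColor;
uniform float uShininess;

void main()
{
    vec3 n = normalize(vNormal);
    vec3 l = normalize(uLightPos - vWorldPos);   // fragment -> light
    vec3 v = normalize(uViewPos - vWorldPos);    // fragment -> camera
    vec3 r = reflect(-l, n);                     // mirror direction of the light

    vec3 ambient  = 0.1 * uObjectColor;
    vec3 diffuse  = max(dot(n, l), 0.0) * uObjectColor;
    vec3 specular = pow(max(dot(v, r), 0.0), uShininess) * vec3(1.0);

    FragColor = vec4(ambient + diffuse + specular, 1.0);
}
```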

Phong Multi-Light Shading:

This example takes an implementation of the Phong light shader for one light source and extends it to multiple lights. Each light uses the Phong (ADS) illumination model, and together the lights illuminate a scene with two loaded objects: a display object (the mesh) and a ground (plane mesh), arranged so that four differently colored lights are clearly visible. For an example of the overall scene setup and what the final result should look like, see the image below; this result includes 4 lights with colors: white, red, blue, and green. Implementing this requires defining the properties of all 4 lights in C++ code, passing this information to the vertex/fragment shaders through uniforms, and evaluating the Phong lighting model for each light.
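A fragment-shader sketch of the multi-light loop; the light count, struct layout, and uniform names are illustrative and may differ from the repository's shaders.

```glsl
// multilight.frag -- evaluates the same ADS terms once per light and sums the results.
#version 330 core
#define NUM_LIGHTS 4

struct Light {
    vec3 position;
    vec3 color;
};

in vec3 vNormal;
in vec3 vWorldPos;
out vec4 FragColor;

uniform Light uLights[NUM_LIGHTS];   // filled from C++ by setting each member uniform
uniform vec3  uViewPos;
uniform vec3  uObjectColor;
uniform float uShininess;

void main()
{
    vec3 n = normalize(vNormal);
    vec3 v = normalize(uViewPos - vWorldPos);
    vec3 result = 0.05 * uObjectColor;               // small shared ambient term

    for (int i = 0; i < NUM_LIGHTS; ++i) {
        vec3 l = normalize(uLights[i].position - vWorldPos);
        vec3 r = reflect(-l, n);
        vec3 diffuse  = max(dot(n, l), 0.0) * uObjectColor * uLights[i].color;
        vec3 specular = pow(max(dot(v, r), 0.0), uShininess) * uLights[i].color;
        result += diffuse + specular;
    }
    FragColor = vec4(result, 1.0);
}
```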

Surface Modeling:

Surface modeling incorporates the process of replicating surface color, curvature, roughness, depth, and reflectance properties that are required to make a three-dimensional material appear like a real-world material. From this definition, surface modeling requires a base lighting model that describes the relationship between light sources, reflections, diffusion, translucency, and the viewing angle or camera that defines how the surface will be rendered. To define how more complex surfaces interact with their environment, the lighting model is taken as a foundation that can be extended with additional parameters. One of the most popular lighting models to build from is the Phong reflection model. This provides a base surface illumination model that can support a more complex set of parameters used to express more intricate surface definitions. Each aspect of surface modeling can then be modified to incorporate more complex surface properties, including how the ambient, diffuse, and specular components are defined. This is the foundation for most material shaders, which combine different types of information stored in textures to create advanced lighting effects. Examples of the maps that provide these advanced effects include base textures (left), normal maps (center), and specular maps (right).

Normal Mapping:

Normal mapping is a widely used technique in graphics shading that enhances the visual realism of 3D models by simulating fine surface details without the need for additional geometric complexity. Instead of altering the actual geometry of the object, normal mapping manipulates the shading calculations by perturbing the surface normals based on information stored in a texture map. In this technique, a normal map texture is created, encoding local surface details in RGB values. During rendering, the fragment shader samples this normal map to obtain perturbed normals, providing the illusion of intricate surface features such as bumps, grooves, or scratches. This results in improved lighting and shading effects, as the altered normals interact with light sources more convincingly.
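A fragment-shader sketch of the sampling and perturbation step, assuming the vertex shader supplies a TBN (tangent, bitangent, normal) basis; all names are illustrative.

```glsl
// normalmap.frag -- perturbs the shading normal with a tangent-space normal map.
#version 330 core
in vec2 vTexCoord;
in mat3 vTBN;          // tangent, bitangent, normal basis built in the vertex shader
out vec4 FragColor;

uniform sampler2D uDiffuseMap;
uniform sampler2D uNormalMap;
uniform vec3 uLightDir;       // world-space direction toward the light (illustrative)

void main()
{
    // Normal maps store components in [0, 1]; remap to [-1, 1] before use.
    vec3 n = texture(uNormalMap, vTexCoord).rgb * 2.0 - 1.0;
    n = normalize(vTBN * n);   // tangent space -> world space

    float diff  = max(dot(n, normalize(uLightDir)), 0.0);
    vec3 albedo = texture(uDiffuseMap, vTexCoord).rgb;
    FragColor   = vec4(diff * albedo, 1.0);
}
```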

Specular Intensity:

Specular intensity defines a scalar factor, sampled from a grayscale map, that scales the reflection term used for the specular component of the Phong model. This parameterizes the strength of the specular reflection across the surface of a material. In this example, a metal-flake paint material is approximated by using a Gaussian-noise image as the specular intensity map.

Specular Mapping:

Specular mapping provides a material model built from the contribution of three components: (1) the base or diffuse texture, (2) the normal map, and (3) the specular texture. The material combines all three of these textures along with the Phong illumination model to form the foundation of a material model that is commonly used in 3D modeling and game design.
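A sketch of a fragment shader combining the three maps with a single Phong light; names and constants are illustrative. The specular map acts exactly like the specular intensity value above, but varies per texel.

```glsl
// material.frag -- diffuse, normal, and specular maps combined with one Phong light.
#version 330 core
in vec2 vTexCoord;
in vec3 vWorldPos;
in mat3 vTBN;
out vec4 FragColor;

uniform sampler2D uDiffuseMap;
uniform sampler2D uNormalMap;
uniform sampler2D uSpecularMap;
uniform vec3  uLightPos;
uniform vec3  uViewPos;
uniform float uShininess;

void main()
{
    vec3 albedo   = texture(uDiffuseMap, vTexCoord).rgb;
    vec3 n        = normalize(vTBN * (texture(uNormalMap, vTexCoord).rgb * 2.0 - 1.0));
    float specMask = texture(uSpecularMap, vTexCoord).r;   // per-texel specular strength

    vec3 l = normalize(uLightPos - vWorldPos);
    vec3 v = normalize(uViewPos - vWorldPos);
    vec3 r = reflect(-l, n);

    vec3 ambient  = 0.1 * albedo;
    vec3 diffuse  = max(dot(n, l), 0.0) * albedo;
    vec3 specular = specMask * pow(max(dot(v, r), 0.0), uShininess) * vec3(1.0);

    FragColor = vec4(ambient + diffuse + specular, 1.0);
}
```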

Parallax Mapping:

Parallax mapping is an advanced technique utilized in graphics shading to simulate depth and relief on flat surfaces, thereby enhancing the perception of three-dimensionality without significantly increasing the geometric complexity of a scene. This method achieves its effect by dynamically offsetting texture coordinates based on the observer's viewing angle, creating the illusion of depth. Unlike traditional texture mapping, parallax mapping incorporates a depth map, which encodes depth information, often using grayscale values, corresponding to the texture coordinates. During rendering, the fragment shader samples both the color texture and the depth map to calculate the final texture coordinates, adjusting them based on the viewing angle to simulate parallax effects.
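A fragment-shader sketch of simple (non-steep) parallax offsetting, assuming the vertex shader provides the tangent-space view direction; names are illustrative.

```glsl
// parallax.frag -- offsets the texture lookup opposite the view direction by the sampled depth.
#version 330 core
in vec2 vTexCoord;
in vec3 vViewDirTS;    // view direction in tangent space, built in the vertex shader
out vec4 FragColor;

uniform sampler2D uColorMap;
uniform sampler2D uDepthMap;
uniform float uHeightScale;   // controls how strong the parallax effect appears

void main()
{
    vec3 v = normalize(vViewDirTS);
    float depth = texture(uDepthMap, vTexCoord).r;          // grayscale depth value
    vec2 uv = vTexCoord - v.xy / v.z * depth * uHeightScale; // shifted lookup coordinates

    FragColor = vec4(texture(uColorMap, uv).rgb, 1.0);
}
```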

Displacement Mapping:

The displacement mapping shader provides an example of how to operate on the vertices of a mesh by using a height map to alter their coordinates. The provided tessellated mesh is modified by the loaded height-map texture, which displaces its vertices. The resulting displaced vertices are then rendered as a wireframe mesh in the fragment shader.
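A sketch of the displacement step written as a plain vertex shader for brevity (the project applies it to a tessellated mesh); names are illustrative.

```glsl
// displace.vert -- offsets each vertex along its normal by a sampled height value.
#version 330 core
layout(location = 0) in vec3 aPosition;
layout(location = 1) in vec3 aNormal;
layout(location = 2) in vec2 aTexCoord;

uniform sampler2D uHeightMap;
uniform float uDisplacementScale;   // illustrative names
uniform mat4 uModel;
uniform mat4 uView;
uniform mat4 uProjection;

void main()
{
    // Vertex shaders have no implicit derivatives, so sample an explicit LOD.
    float height = textureLod(uHeightMap, aTexCoord, 0.0).r;
    vec3 displaced = aPosition + aNormal * height * uDisplacementScale;
    gl_Position = uProjection * uView * uModel * vec4(displaced, 1.0);
}
```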

Material Display:

A basic scene that includes 3 materials and 3 lights. Each material consists of diffuse (albedo), normal, and specular maps. The materials are applied to spheres arranged in a small scene where the 3 lights can be positioned to show the three different materials clearly. This example covers how to load textures and how to add the lights by passing their properties from C++ to the GLSL shader code.

Environment mapping is a technique commonly used in computer graphics to simulate reflections on surfaces without the need for expensive ray tracing calculations. It involves wrapping a texture around an object to simulate the reflections of its surrounding environment. This texture, often called an environment map or cube map, contains pre-rendered images of the scene as seen from different directions. When applied to an object, the environment map provides an approximation of how light reflects off its surface, giving the illusion of dynamic reflections. By utilizing texture data, environment mapping efficiently approximates static reflections, enhancing the realism of virtual scenes while minimizing computational overhead compared to more complex methods like ray tracing.

Environment Mapping:

Environment mapping predates ray tracing and replicates real-time reflective materials at a substantially lower computational cost. To formulate a computationally inexpensive technique for replicating the effect of reflecting the environment in which an object resides, several assumptions are imposed to simplify the required reflection calculations: the reflected environment is static, lighting is precomputed, and care is taken to minimize how obvious these limitations are. While this limits the realism of the reflection characteristics of the material, it provides an adequate approximation in most instances of reflective surfaces. More recent advancements have also changed the formulation from a set of static textures to one that reflects environmental updates through techniques such as Render to Texture.

Environment Material:

Environment material mapping using cube-maps is a technique used to simulate the reflection of the surrounding environment onto an object's surface combined with Phong material properties. A cube-map is essentially a collection of six 2D textures, each representing a different face of a cube (front, back, left, right, top, and bottom). These textures are typically captured from a real-world environment or synthesized procedurally. When rendering a scene, the cube-map is projected onto the object's surface, with the appropriate texture sampled along the reflection direction derived from the surface normal and the view direction. This creates a convincing illusion of reflections, making the object appear to reflect its surroundings realistically, even though the surrounding geometry might not be present in the scene.
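A fragment-shader sketch of the reflective cube-map lookup (illustrative names):

```glsl
// envmap.frag -- samples a cube map along the reflected view direction.
#version 330 core
in vec3 vNormal;
in vec3 vWorldPos;
out vec4 FragColor;

uniform samplerCube uEnvironmentMap;
uniform vec3 uCameraPos;

void main()
{
    vec3 n = normalize(vNormal);
    vec3 i = normalize(vWorldPos - uCameraPos);   // incident view direction
    vec3 r = reflect(i, n);                       // reflection used as the cube-map lookup
    FragColor = vec4(texture(uEnvironmentMap, r).rgb, 1.0);
}
```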

Refraction Mapping:

Modeling reflections of an object's surface through the use of an environment map can also form the basis for modeling other surface reflectance properties such as refraction for simulating transparency. Building on the environmental mapping technique introduced for reflections using a cube map, simulated refraction can be implemented through the use of an additional refraction vector. The primary difference between calculating the reflection direction and the refraction direction is that for refraction, the vector extends into the surface of the object (travels through the object), leading it to appear transparent due to how the background (cube-map) is sampled. Once this direction has been established, it will be used as a texture coordinate within the cube-map (the same as the reflection result above) to determine the resulting fragment color based on the provided environment map textures.
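The refraction variant changes only the lookup direction, using refract() with a ratio of refractive indices; the value below is an illustrative air-to-glass ratio.

```glsl
// refraction.frag -- samples the cube map along the refracted view direction.
#version 330 core
in vec3 vNormal;
in vec3 vWorldPos;
out vec4 FragColor;

uniform samplerCube uEnvironmentMap;
uniform vec3 uCameraPos;

void main()
{
    float eta = 1.0 / 1.52;                        // e.g. air into glass
    vec3 n = normalize(vNormal);
    vec3 i = normalize(vWorldPos - uCameraPos);
    vec3 t = refract(i, n, eta);                   // direction bent into the surface
    FragColor = vec4(texture(uEnvironmentMap, t).rgb, 1.0);
}
```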

Distance Fog:

Distance fog, a visual effect often implemented in shaders, enhances realism and depth perception in virtual environments. It simulates atmospheric scattering, where distant objects appear increasingly hazy or washed out as they recede into the distance. Achieved by calculating the distance from the camera to each fragment or vertex in the scene, distance fog modifies the color or opacity of rendered objects based on their distance from the viewer. This technique adds atmospheric depth, conveying a sense of scale and atmosphere. The effect of fog can be described as the introduction of a uniform color value applied to an object's surface as a function of distance: as the object's distance within the fog increases, so does the amount of fog color applied to the object's surface, until the object becomes completely enveloped by the fog. This same principle can easily be applied within a fragment shader to provide the illusion of fog within an environment, and it can have a large aesthetic impact on the resulting image when combined with other advanced shaders (similar to the rendering provided within the introduction of this Module). The fog component is formulated as a linear color gradient based on four components: (1) the fog start (depth) value, (2) the fog end (depth) value, (3) the RGB color definition of the fog, and (4) the depth value z of any given fragment.
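A fragment-shader sketch of the linear fog gradient built from those four components; the lit surface color is stood in by a uniform here for brevity, and all names are illustrative.

```glsl
// fog.frag -- blends the surface color toward the fog color based on fragment depth.
#version 330 core
in vec3 vWorldPos;
out vec4 FragColor;

uniform vec3  uCameraPos;
uniform vec3  uSurfaceColor;  // stand-in for the lit surface color from the lighting model
uniform vec3  uFogColor;      // (3) RGB fog color
uniform float uFogStart;      // (1) depth at which fog begins
uniform float uFogEnd;        // (2) depth at which fog fully envelops the surface

void main()
{
    float z = length(vWorldPos - uCameraPos);                     // (4) fragment depth
    float fogFactor = clamp((uFogEnd - z) / (uFogEnd - uFogStart), 0.0, 1.0);
    vec3 color = mix(uFogColor, uSurfaceColor, fogFactor);        // 0 = all fog, 1 = no fog
    FragColor = vec4(color, 1.0);
}
```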

Texture Blending:

Many surfaces are composed of more than one individual material, which can be represented by blending two or more textures together according to a function that defines how the materials are combined. One of the primary motivations behind texture blending is terrain rendering. A naive approach to providing multiple materials on a terrain model is to modify the texture within image-editing software to contain multiple materials. Several problems with this approach can be identified immediately: (1) as the size of the terrain is scaled larger, the perceived quality of the surface decreases as the texture is stretched across the larger region, (2) the texture memory required for an individual image of any substantial quality is extremely expensive, and (3) modifications of the texture require editing the original image within the external editing software. These problems illustrate why most implementations of terrain modeling employ texture blending through alpha maps and transparency. With texture blending, each texture of the terrain can be represented by a reasonably sized tileable (repeats without sharp seams) texture that is replicated over the surface of the model. Then, for the entire terrain region, a single alpha (or transparency) map can be used to mask which regions should be shared between the provided textures.
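A fragment-shader sketch of blending two tiling textures with a single alpha map; the texture names and tiling factor are illustrative.

```glsl
// terrainblend.frag -- mixes two tileable detail textures using an alpha mask.
#version 330 core
in vec2 vTexCoord;
out vec4 FragColor;

uniform sampler2D uGrassTexture;   // illustrative texture names
uniform sampler2D uRockTexture;
uniform sampler2D uAlphaMap;       // single-channel mask stretched over the whole terrain
uniform float uTiling;             // how many times the detail textures repeat

void main()
{
    vec3 grass = texture(uGrassTexture, vTexCoord * uTiling).rgb;
    vec3 rock  = texture(uRockTexture,  vTexCoord * uTiling).rgb;
    float a    = texture(uAlphaMap, vTexCoord).r;    // sampled without tiling
    FragColor  = vec4(mix(grass, rock, a), 1.0);
}
```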

Normal Visualization:

Mesh geometry includes surface normals that can be visualized as lines extending from the vertex they belong to. This example shows one of the simplest geometry-shader implementations for visualizing vertex normals: at each vertex, a line segment is generated that extends from the vertex in the direction of the normal for a specified length.
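A geometry-shader sketch of this, assuming the vertex shader outputs view-space positions and normals so the projection can be applied after the segment endpoints are built; names are illustrative.

```glsl
// normals.geom -- emits a short line segment along each vertex normal.
#version 330 core
layout(triangles) in;
layout(line_strip, max_vertices = 6) out;

in vec3 vNormal[];            // one view-space normal per incoming vertex
uniform mat4 uProjection;
uniform float uNormalLength;  // length of the visualized line segments

void main()
{
    for (int i = 0; i < 3; ++i) {
        vec4 p = gl_in[i].gl_Position;   // view-space position from the vertex shader
        gl_Position = uProjection * p;
        EmitVertex();
        gl_Position = uProjection * (p + vec4(normalize(vNormal[i]) * uNormalLength, 0.0));
        EmitVertex();
        EndPrimitive();                  // one two-vertex line per vertex of the triangle
    }
}
```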

Geometry Inset:

The geometry inset example implements a common geometric operation that can be performed on polygon meshes. For each face within the loaded mesh, an inset is defined as a 'shrinking' operation that moves the face inward within the original geometry while maintaining the same angle relationships. To demonstrate how this can be changed over time in the geometry shader, the implementation drives the inset value with sin(t). This results in the faces transitioning between their original state and the inset state, as shown in the images below.
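A geometry-shader sketch of the inset, assuming world-space positions arrive from the vertex shader; the inset range and names are illustrative.

```glsl
// inset.geom -- shrinks each face toward its centroid by a time-varying amount.
#version 330 core
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

in vec3 vWorldPos[];           // world-space positions passed through by the vertex shader
uniform mat4 uViewProjection;
uniform float uTime;           // drives the sin(t) inset amount

void main()
{
    vec3 c = (vWorldPos[0] + vWorldPos[1] + vWorldPos[2]) / 3.0;   // face centroid
    float inset = 0.5 * (sin(uTime) * 0.5 + 0.5);                  // 0 = original, 0.5 = half inset

    for (int i = 0; i < 3; ++i) {
        // Uniform shrink about the centroid preserves the face's angle relationships.
        vec3 p = mix(vWorldPos[i], c, inset);
        gl_Position = uViewProjection * vec4(p, 1.0);
        EmitVertex();
    }
    EndPrimitive();
}
```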

Spotlight:

Spotlights are restricted point lights, i.e. the light rays are only emitted in a restricted set of directions. Commonly a cone is used to define this restriction, but other shapes are possible. It is up to the fragment shader to determine whether a fragment is inside the cone and light it accordingly: the fragment is inside the cone when the angle between the spotlight's direction and the direction from the light to the fragment is less than the cutoff angle, which is equivalent to the corresponding dot product being greater than the cosine of the cutoff. In practice, instead of passing the angle, we pass the cosine of the angle to avoid having to compute an inverse cosine in the shader.
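A fragment-shader sketch of the cone test using a precomputed cosine cutoff (diffuse term only, illustrative names):

```glsl
// spotlight.frag -- lights a fragment only if it falls inside the spotlight cone.
#version 330 core
in vec3 vWorldPos;
in vec3 vNormal;
out vec4 FragColor;

uniform vec3  uSpotPos;        // light position
uniform vec3  uSpotDir;        // direction the spotlight points
uniform vec3  uSpotColor;
uniform float uSpotCosCutoff;  // cos(cutoff angle), precomputed on the CPU
uniform float uSpotExponent;   // falloff toward the cone edge
uniform vec3  uAlbedo;

void main()
{
    vec3 l = normalize(uSpotPos - vWorldPos);          // fragment -> light
    float spotCos = dot(-l, normalize(uSpotDir));      // alignment with the spot axis

    vec3 color = vec3(0.0);
    if (spotCos > uSpotCosCutoff) {                    // inside the cone
        float atten = pow(spotCos, uSpotExponent);
        float diff  = max(dot(normalize(vNormal), l), 0.0);
        color = atten * diff * uAlbedo * uSpotColor;
    }
    FragColor = vec4(color, 1.0);
}
```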

Multi SpotLight Shader:

This example implements spotlights that illuminate a target model object. A small scene is created, composed of: (1) a ground plane rendered with a textured surface (diffuse, normal, and specular maps), (2) a model object rendered with an environmental (cube-map) material, and (3) three spotlights that shine on the model object. Each spotlight is characterized by its direction (position and target), color, exponent, and cutoff.
