How To Be CREATIVE


1. Ignore everybody.
The more original your idea is, the less good advice other people will be able to give you.
When I first started with the cartoon-on-back-of-business-card format, people thought I was nuts. Why wasnʼt I trying to do something easier for markets to digest, i.e., cutie-pie greeting cards or whatever?
You donʼt know if your idea is any good the moment itʼs created. Neither does anyone else. The most you can hope for is a strong gut feeling that it is. And trusting your feelings is not as easy as the optimists say it is. Thereʼs a reason why feelings scare us. And asking close friends never works quite as well as you hope, either. Itʼs not that they deliberately want to be unhelpful. Itʼs just they donʼt know your world one millionth as well as you know your world, no matter how hard they try, no matter how hard you try to explain.
Plus, a big idea will change you. Your friends may love you, but they donʼt want you to change. If you change, then their dynamic with you also changes. They like things the way they are, thatʼs how they love you—the way you are, not the way you may become.


2. The idea doesnʼt have to be big. It just has to change the world.
The two are not the same thing. We all spend a lot of time being impressed
by folks weʼve never met. Somebody featured in the media whoʼs got a big company, a big product, a big movie, a big bestseller. Whatever. And we spend even more time trying unsuccessfully to keep up with them. Trying to start up our own companies, our own products, our own film projects, books and whatnot.
Iʼm as guilty as anyone. I tried lots of different things over the years, trying desperately to pry
my career out of the jaws of mediocrity. Some to do with business, some to do with art, etc. One evening, after one false start too many, I just gave up. Sitting at a bar, feeling a bit burned out by work and life in general, I just started drawing on the back of business cards for no reason. I didnʼt really need a reason. I just did it because it was there, because it amused me in a kind of random, arbitrary way.
Of course it was stupid. Of course it wasnʼt commercial. Of course it wasnʼt going to go anywhere. Of course it was a complete and utter waste of time. But in retrospect, it was this built-in futility that gave it its edge. Because it was the exact opposite of all the “Big Plans”

The sovereignty you have over your work will inspire far more people than the actual content ever will.

3. Put the hours in.
Doing anything worthwhile takes forever. 90% of what separates successful people from failed people is time, effort, and stamina. I get asked a lot, “Your business card format is very simple. Arenʼt you worried about somebody ripping it off?”
Standard Answer: Only if they can draw more of them than me, better than me.
What gives the work its edge is the simple fact that Iʼve spent years drawing them. Iʼve drawn thousands. Tens of thousands of man-hours.
So if somebody wants to rip my idea off, go ahead. If somebody wants to overtake me in the business card doodle wars, go ahead. Youʼve got many long years in front of you. And unlike me, you wonʼt be doing it for the joy of it. Youʼll be doing it for some self-loathing, ill-informed, lame-ass mercenary reason. So the years will be even longer and far, far more painful. Lucky you.
If somebody in your industry is more successful than you, itʼs probably because he works harder at it than you do. Sure, maybe heʼs more inherently talented, more adept at networking, etc., but I donʼt consider that an excuse. Over time, that advantage counts for less and less. Which is why the world is full of highly talented, network-savvy, failed mediocrities.

Put the hours in; do it for long enough and magical, life-transforming things happen eventually.

The point is, an hour or two on the train is very manageable for me. The fact I have a job means I donʼt feel pressured to do something market-friendly. Instead, I get to do whatever the hell I want. I get to do it for my own satisfaction. And I think that makes the work more powerful in the long run. It also makes it easier to carry on with it in a calm fashion, day-in day-out, and not go crazy in insane, creative bursts brought on by money worries.
The day job, which I really like, gives me something productive and interesting to do among fellow adults. It gets me out of the house in the daytime. If I were a professional cartoonist, Iʼd just be chained to a drawing table at home all day, scribbling out a living in silence, interrupted only by frequent trips to the coffee shop. No, thank you.
Simply put, my method allows me to pace myself over the long haul, which is important. Stamina is utterly important. And stamina is only possible if itʼs managed well. People think all they need to do is endure one crazy, intense, job-free creative burst and their dreams will come true. They are wrong; they are stupidly wrong.

4. If your biz plan depends on you suddenly being “discovered” by some big shot, your plan will probably fail.

Nobody suddenly discovers anything. Things are made slowly and in pain. I was offered a quite substantial publishing deal a year or two ago. Turned it down. The company sent me a contract. I looked it over.

5. You are responsible for your own experience.
Nobody can tell you if what youʼre doing is good, meaningful or worthwhile. The more
compelling the path, the lonelier it is. Every creative person is looking for “The Big Idea.” You know, the one that is going to catapult them out from the murky depths of obscurity and on to the highest planes of incandescent lucidity.
The one thatʼs all love-at-first-sight with the Zeitgeist. The one thatʼs going to get them invited to all the right parties, metaphorical or otherwise. So naturally you ask yourself, if and when you finally come up with The Big Idea, after years of toil, struggle and doubt, how do you know whether or not it is “The One?”
Answer: You donʼt.
Thereʼs no glorious swelling of existential triumph. Thatʼs not what happens. All you get is this rather kvetchy voice inside you that seems to say, “This is totally stupid. This is utterly moronic. This is a complete waste of time. Iʼm going to do it anyway.” And you go do it anyway. Second-rate ideas like glorious swellings far more. Keeps them alive longer.

6. Everyone is born creative; everyone is given a box of crayons in kindergarten.
Then when you hit puberty they take the crayons away and replace them with books
on algebra, etc. Being suddenly hit years later with the creative bug is just a wee voice telling you, “Iʼd like my crayons back, please.” So youʼve got the itch to do something. Write a screenplay, start a painting, write a book, turn your recipe for fudge brownies into a proper business, whatever. You donʼt know where the itch came from; itʼs almost like it just arrived on your doorstep, uninvited. Until now you were quite happy holding down a real job, being a regular person... Until now.
You donʼt know if youʼre any good or not, but youʼd like to think you could be. And the idea terrifies you. The problem is, even if you are good, you know nothing about this kind of business.
You donʼt know any publishers or agents or all these fancy-shmancy kind of folk. You have a friend whoʼs got a cousin in California whoʼs into this kind of stuff, but you havenʼt talked to your friend for over two years...
Besides, if you write a book, what if you canʼt find a publisher? If you write a screenplay, what
if you canʼt find a producer? And what if the producer turns out to be a crook? Youʼve always
worked hard your whole life; youʼll be damned if youʼll put all that effort into something if there ainʼt no pot of gold at the end of this dumb-ass rainbow...

They’re only crayons. You didn’t fear them in kindergarten, why fear them now?

Your wee voice doesnʼt want you to sell something. Your wee voice wants you to make something. Thereʼs a big difference. Your wee voice doesnʼt give a damn about publishers or Hollywood producers.
Go ahead and make something. Make something really special. Make something amazing that
will really blow the mind of anybody who sees it.
If you try to make something just to fit your uninformed view of some hypothetical market, you will fail. If you make something special and powerful and honest and true, you will succeed.
The wee voice didnʼt show up because it decided you need more money or you need to hang out with movie stars. Your wee voice came back because your soul somehow depends on it. Thereʼs something you havenʼt said, something you havenʼt done, some light that needs to be switched on, and it needs to be taken care of. Now.
So you have to listen to the wee voice or it will die…taking a big chunk of you along with it.

The creative person basically has two kinds of jobs. One is the sexy, creative kind. The other is the kind that pays the bills.

OpenGL Programming


The OpenGL Shading Language code that is intended for execution on one of the OpenGL programmable processors is called a SHADER. The term OPENGL SHADER is sometimes used to differentiate a shader written in the OpenGL Shading Language from a shader written in another shading language such as RenderMan. Because two programmable processors are defined in OpenGL, there are two types of shaders: VERTEX SHADERS and FRAGMENT SHADERS. OpenGL provides mechanisms for compiling shaders and linking them to form executable code called a PROGRAM. A program contains one or more EXECUTABLES that can run on the programmable processing units.
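The compile-and-link model just described maps onto a small set of API calls. The following is a minimal sketch, assuming a current OpenGL 2.0 context; the compile and link status checks one would normally make through glGetShader and glGetProgram are omitted for brevity:

```c
/* Build a program object from a vertex and a fragment shader
   (assumes a current OpenGL 2.0 context). */
const GLchar *vsSrc =
    "void main() { gl_Position = ftransform(); }";
const GLchar *fsSrc =
    "void main() { gl_FragColor = vec4(1.0); }";

GLuint vs = glCreateShader(GL_VERTEX_SHADER);
GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(vs, 1, &vsSrc, NULL);  /* load source code strings */
glShaderSource(fs, 1, &fsSrc, NULL);
glCompileShader(vs);                  /* compile each shader */
glCompileShader(fs);

GLuint prog = glCreateProgram();      /* container for the executables */
glAttachShader(prog, vs);
glAttachShader(prog, fs);
glLinkProgram(prog);                  /* link into executable code */
glUseProgram(prog);                   /* install as part of current state */
```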

The OpenGL Shading Language has its roots in C and has features similar to RenderMan and other shading languages. The language has a rich set of types, including vector and matrix types to make code more concise for typical 3D graphics operations. A special set of type qualifiers manages the unique forms of input and output needed by shaders. Some mechanisms from C++, such as function overloading based on argument types and the capability to declare variables where they are first needed instead of at the beginning of blocks, have also been borrowed. The language includes support for loops, subroutine calls, and conditional expressions. An extensive set of built-in functions provides many of the capabilities needed for implementing shading algorithms. In brief,

The OpenGL Shading Language is a high-level procedural language.

As of OpenGL 2.0, it is part of standard OpenGL, the leading cross-platform, operating-environment-independent API for 3D graphics and imaging.

The same language, with a small set of differences, is used for both vertex and fragment shaders.

It is based on C and C++ syntax and flow control.

It natively supports vector and matrix operations since these are inherent to many graphics algorithms.

It is stricter with types than C and C++, and functions are called by value-return.

It uses type qualifiers rather than reads and writes to manage input and output.

It imposes no practical limits to a shader's length, nor does the shader length need to be queried.
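To make these points concrete, here is a small, hypothetical fragment of shader code illustrating the vector and matrix types, built-in functions, C-style flow control, and declaration of variables where they are first needed (the names rotation, lightDir, and shade are chosen for illustration):

```glsl
uniform mat3 rotation;       // matrix type, supplied by the application
uniform vec3 lightDir;       // vector type

vec3 shade(vec3 normal)      // function with typed parameters
{
    vec3 n = rotation * normal;            // native matrix * vector
    float d = max(dot(n, lightDir), 0.0);  // built-in functions
    if (d > 0.5)                           // ordinary flow control
        d = 0.5 + (d - 0.5) * 0.25;
    return vec3(d);                        // constructor syntax
}
```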

The following sections contain some of the key concepts that you will need to understand in order to use the OpenGL Shading Language effectively.

Why Write Shaders?
Until recently, OpenGL has presented application programmers with a flexible but static interface for putting graphics on the display device. As described in Chapter 1, you could think of OpenGL as a sequence of operations that occurred on geometry or image data as it was sent through the graphics hardware to be displayed on the screen. Various parameters of these pipeline stages could be altered to select variations on the processing that occurred for that pipeline stage. But neither the fundamental operation of the OpenGL graphics pipeline nor the order of operations could be changed through the OpenGL API.

By exposing support for traditional rendering mechanisms, OpenGL has evolved to serve the needs of a fairly broad set of applications. If your particular application was well served by the traditional rendering model presented by OpenGL, you may never need to write shaders. But if you have ever been frustrated because OpenGL did not allow you to define area lights, or because lighting calculations are performed per-vertex rather than per-fragment, or if you have run into any of the many limitations of the traditional OpenGL rendering model, you may need to write your own OpenGL shader.

The OpenGL Shading Language and its supporting OpenGL API entry points allow application developers to define the processing that occurs at key points in the OpenGL processing pipeline by using a high-level programming language specifically designed for this purpose. These key points in the pipeline are defined to be programmable in order to give developers complete freedom to define the processing that occurs. This lets developers utilize the underlying graphics hardware to achieve a much wider range of rendering effects.

To get an idea of the range of effects possible with OpenGL shaders, take a minute now and browse through the color images that are included in this book. This book presents a variety of shaders that only begin to scratch the surface of what is possible. With each new generation of graphics hardware, more complex rendering techniques can be implemented as OpenGL shaders and can be used in real-time rendering applications. Here's a brief list of what's possible with OpenGL shaders:

Increasingly realistic materials: metals, stone, wood, paints, and so on

Increasingly realistic lighting effects: area lights, soft shadows, and so on

Natural phenomena: fire, smoke, water, clouds, and so on

Advanced rendering effects: global illumination, ray-tracing, and so on

Non-photorealistic materials: painterly effects, pen-and-ink drawings, simulation of illustration techniques, and so on

New uses for texture memory: storage of normals, gloss values, polynomial coefficients, and so on

Procedural textures: dynamically generated 2D and 3D textures, not static texture images

Image processing: convolution, unsharp masking, complex blending, and so on

Animation effects: key frame interpolation, particle systems, procedurally defined motion

User-programmable antialiasing methods

General computation: sorting, mathematical modeling, fluid dynamics, and so on

Many of these techniques have been available before now only through software implementations. If they were at all possible through OpenGL, they were possible only in a limited way. The fact that these techniques can now be implemented with hardware acceleration provided by dedicated graphics hardware means that rendering performance can be increased dramatically and at the same time the CPU can be off-loaded so that it can perform other tasks.

Vertex Processor
The VERTEX PROCESSOR is a programmable unit that operates on incoming vertex values and their associated data. The vertex processor usually performs traditional graphics operations such as the following:

Vertex transformation

Normal transformation and normalization

Texture coordinate generation

Texture coordinate transformation

Lighting

Color material application

Because of its general purpose programmability, this processor can also be used to perform a variety of other computations. Shaders that are intended to run on this processor are called vertex shaders. Vertex shaders can specify a completely general sequence of operations to be applied to each vertex and its associated data. Vertex shaders that perform some of the computations in the preceding list must contain the code for all desired functionality from the preceding list. For instance, it is not possible to have the existing fixed functionality perform the vertex and normal transformation but to have a vertex shader perform a specialized lighting function. The vertex shader must be written to perform all three functions.

The vertex processor does not replace graphics operations that require knowledge of several vertices at a time or that require topological knowledge. OpenGL operations that remain as fixed functionality in between the vertex processor and the fragment processor include perspective divide and viewport mapping, primitive assembly, frustum and user clipping, backface culling, two-sided lighting selection, polygon mode, polygon offset, selection of flat or smooth shading, and depth range.

Variables defined in a vertex shader can be qualified as ATTRIBUTE VARIABLES. These represent values that are frequently passed from the application to the vertex processor. Because this type of variable is used only for data from the application that defines vertices, it is permitted only as part of a vertex shader. Applications can provide attribute values between calls to glBegin and glEnd or with vertex array calls, so they can change as often as every vertex.

There are two types of attribute variables: built in and user defined. Standard attribute variables in OpenGL include things like color, surface normal, texture coordinates, and vertex position. The OpenGL calls glColor, glNormal, glVertex, and so on, and the OpenGL vertex array drawing commands can send standard OpenGL vertex attributes to the vertex processor. When a vertex shader is executing, it can access these data values through built-in attribute variables named gl_Color, gl_Normal, gl_Vertex, and so on.

Because this method restricts vertex attributes to the set that is already defined by OpenGL, a new interface allows applications to pass arbitrary per-vertex data. Within the OpenGL API, generic vertex attributes are defined and referenced by numbers from 0 up to some implementation-dependent maximum value. The command glVertexAttrib sends generic vertex attributes to OpenGL by specifying the index of the generic attribute to be modified and the value for that generic attribute.

Vertex shaders can access these generic vertex attributes through user-defined attribute variables. Another new OpenGL command, glBindAttribLocation, allows an application to tie together the index of a generic vertex attribute and the name with which to associate that attribute in a vertex shader.
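A sketch of how the two sides fit together, with an illustrative attribute name ("tangent") and index (6); note that glBindAttribLocation must be called before linking for the binding to take effect:

```c
/* In the vertex shader source, the application-supplied data is
   declared as:   attribute vec3 tangent;
   On the API side, tie generic attribute index 6 to that name,
   then relink the program object. */
glBindAttribLocation(prog, 6, "tangent");
glLinkProgram(prog);

/* Later, per vertex (between glBegin/glEnd or via vertex arrays),
   send a value for generic attribute 6: */
glVertexAttrib3f(6, 1.0f, 0.0f, 0.0f);
```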

UNIFORM VARIABLES pass data values from the application to either the vertex processor or the fragment processor. Uniform variables typically provide values that change relatively infrequently. A shader can be written so that it is parameterized with uniform variables. The application can provide initial values for these uniform variables, and the end user can manipulate them through a graphical user interface to achieve a variety of effects with a single shader. But uniform variables cannot be specified between calls to glBegin and glEnd, so they can change at most once per primitive.

The OpenGL Shading Language supports both built-in and user-defined uniform variables. Vertex shaders and fragment shaders can access current OpenGL state through built-in uniform variables containing the reserved prefix "gl_". Applications can make arbitrary data values available directly to a shader through user-defined uniform variables. glGetUniformLocation obtains the location of a user-defined uniform variable that has been defined as part of a shader. Data can be loaded into this location with another new OpenGL command, glUniform. Variations of this command facilitate loading of floating-point, integer, Boolean, and matrix values, as well as arrays of these.
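For example, a shader declaring `uniform vec3 lightDir;` (an illustrative name) could be parameterized from the application roughly as follows; the program must have been linked, and installed with glUseProgram, before the value is loaded:

```c
/* Query the location the linker assigned to the user-defined
   uniform "lightDir" and load a value into it. A return value
   of -1 means the name is not an active uniform. */
GLint loc = glGetUniformLocation(prog, "lightDir");
if (loc != -1)
    glUniform3f(loc, 0.0f, 0.0f, 1.0f);
```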

Another new feature is the capability of vertex processors to read from texture memory. This allows vertex shaders to implement displacement mapping algorithms, among other things. (However, the minimum number of vertex texture image units required by an implementation is 0, so texture-map access from the vertex processor still may not be possible on all implementations that support the OpenGL Shading Language.) For accessing mipmap textures, level of detail can be specified directly in the shader. Existing OpenGL parameters for texture maps define the behavior of the filtering operation, borders, and wrapping.

Conceptually, the vertex processor operates on one vertex at a time (but an implementation may have multiple vertex processors that operate in parallel). The vertex shader is executed once for each vertex passed to OpenGL. The design of the vertex processor is focused on the functionality needed to transform and light a single vertex. Output from the vertex shader is accomplished partly with special output variables. Vertex shaders must compute the homogeneous position of the coordinate in clip space and store the result in the special output variable gl_Position. Values to be used during user clipping and point rasterization can be stored in the special output variables gl_ClipVertex and gl_PointSize.
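The smallest legal vertex shader therefore does nothing but compute this mandatory output, using built-in uniform and attribute variables:

```glsl
// Minimal vertex shader: the one required output is the clip-space
// position written to the special output variable gl_Position.
void main()
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```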

Variables that define data that is passed from the vertex processor to the fragment processor are called VARYING VARIABLES. Both built-in and user-defined varying variables are supported. They are called varying variables because the values are potentially different at each vertex and perspective-correct interpolation is performed to provide a value at each fragment for use by the fragment shader. Built-in varying variables include those defined for the standard OpenGL color and texture coordinate values. A vertex shader can use a user-defined varying variable to pass along anything that needs to be interpolated: colors, normals (useful for per-fragment lighting computations), texture coordinates, model coordinates, and other arbitrary values.
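A sketch of a matched vertex/fragment shader pair using a user-defined varying variable (the name modelPos is illustrative); the declaration must agree in both shaders:

```glsl
// Vertex shader: declare and write a user-defined varying.
varying vec3 modelPos;
void main()
{
    modelPos    = gl_Vertex.xyz;   // interpolated across the primitive
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

// Fragment shader: the matching declaration reads the
// perspective-correct interpolated value for this fragment.
varying vec3 modelPos;
void main()
{
    gl_FragColor = vec4(fract(modelPos), 1.0);
}
```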

There is actually no harm (other than a possible loss of performance) in having a vertex shader calculate more varying variables than are needed by the fragment shader. A warning may be generated if the fragment shader consumes fewer varying variables than the vertex shader produces. But you may have good reasons to use a somewhat generic vertex shader with a variety of fragment shaders. The fragment shaders can be written to use a subset of the varying variables produced by the vertex shader. Developers of applications that manage a large number of shaders may find that reducing the costs of shader development and maintenance is more important than squeezing out a tiny bit of additional performance.

The vertex processor output (special output variables and user-defined and built-in varying variables) is sent to subsequent stages of processing that are defined exactly the same as they are for fixed-function processing: primitive assembly, user clipping, frustum clipping, perspective divide, viewport mapping, polygon offset, polygon mode, shade mode, and culling.

Fragment Processor
The FRAGMENT PROCESSOR is a programmable unit that operates on fragment values and their associated data. The fragment processor usually performs traditional graphics operations such as the following:

Operations on interpolated values

Texture access

Texture application

Fog

Color sum

A wide variety of other computations can be performed on this processor. Shaders that are intended to run on this processor are called fragment shaders. Fragment shaders express the algorithm that executes on the fragment processor and produces output values based on the input values that are provided. A fragment shader cannot change a fragment's x/y position. Fragment shaders that perform some of the computations from the preceding list must perform all desired functionality from the preceding list. For instance, it is not possible to use the existing fixed functionality to compute fog but have a fragment shader perform specialized texture access and texture application. The fragment shader must be written to perform all three functions.

The fragment processor does not replace graphics operations that require knowledge of several fragments at a time. To support parallelism at the fragment-processing level, fragment shaders are written in a way that expresses the computation required for a single fragment, and access to neighboring fragments is not allowed. An implementation may have multiple fragment processors that operate in parallel.

The fragment processor can perform operations on each fragment that is generated by the rasterization of points, lines, polygons, pixel rectangles, and bitmaps. If images are first downloaded into texture memory, the fragment processor can also be used for pixel processing that requires access to a pixel and its neighbors. A rectangle can be drawn with texturing enabled, and the fragment processor can read the image from texture memory and apply it to the rectangle while performing traditional operations such as the following:

Pixel zoom

Scale and bias

Color table lookup

Convolution

Color matrix

The fragment processor does not replace the fixed functionality graphics operations that occur at the back end of the OpenGL pixel processing pipeline such as coverage, pixel ownership test, scissor test, stippling, alpha test, depth test, stencil test, alpha blending, logical operations, dithering, and plane masking.

The primary inputs to the fragment processor are the interpolated varying variables (both built in and user defined) that are the results of rasterization. User-defined varying variables must be defined in a fragment shader, and their types must match those defined in the vertex shader.

Values computed by fixed functionality between the vertex processor and the fragment processor are made available through special input variables. The window coordinate position of the fragment is communicated through the special input variable gl_FragCoord. An indicator of whether the fragment was generated by rasterizing a front-facing primitive is communicated through the special input variable gl_FrontFacing.
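A small fragment shader sketch using both special input variables (the varying names are illustrative and would be written by a companion vertex shader):

```glsl
// Two-sided shading via the special input variables.
varying vec3 frontColor;
varying vec3 backColor;
void main()
{
    // gl_FrontFacing selects between the two interpolated colors;
    // gl_FragCoord.xy is this fragment's window-space position.
    vec3 c = gl_FrontFacing ? frontColor : backColor;
    if (gl_FragCoord.y < 1.0)
        c = vec3(0.0);             // darken the bottom row of pixels
    gl_FragColor = vec4(c, 1.0);
}
```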

Just as in the vertex shader, existing OpenGL state is accessible to a fragment shader through built-in uniform variables. All of the OpenGL state that is available through built-in uniform variables is available to both vertex and fragment shaders. This makes it easy to implement traditional vertex operations such as lighting in a fragment shader.

User-defined uniform variables allow the application to pass relatively infrequently changing values to a fragment shader. The same uniform variable can be accessed by both a vertex shader and a fragment shader if both shaders declare the variable using the same data type.

One of the biggest advantages of the fragment processor is that it can access texture memory an arbitrary number of times and combine in arbitrary ways the values that it reads. A fragment shader is free to read multiple values from a single texture or multiple values from multiple textures. The result of one texture access can be used as the basis for performing another texture access (a DEPENDENT TEXTURE READ). There is no inherent limitation on the number of such dependent reads that are possible, so ray-casting algorithms can be implemented in a fragment shader.
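A dependent texture read can be sketched as follows; the sampler and varying names are illustrative, and the application binds the samplers through uniform variables:

```glsl
// The result of the first texture access supplies the coordinates
// for the second (a dependent texture read).
uniform sampler2D offsetMap;
uniform sampler2D colorMap;
varying vec2 texCoord;
void main()
{
    vec2 offset  = texture2D(offsetMap, texCoord).rg;      // first read
    gl_FragColor = texture2D(colorMap, texCoord + offset); // dependent read
}
```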

The OpenGL parameters for texture maps continue to define the behavior of the filtering operation, borders, wrapping, and texture comparison modes. These operations are applied when a texture is accessed from within a shader. The shader is free to use the resulting value however it chooses. The shader can read multiple values from a texture and perform a custom filtering operation. It can also use a texture to perform a lookup table operation.

The fragment processor defines almost all the capabilities necessary to implement the fixed-function pixel transfer operations defined in OpenGL, including those in the imaging subset. This means that advanced pixel processing is supported with the fragment processor. Lookup table operations can be done with 1D texture accesses, allowing applications to fully control their size and format. Scale and bias operations are easily expressed through the programming language. The color matrix can be accessed through a built-in uniform variable. Convolution and pixel zoom are supported by accessing a texture multiple times to compute the proper result. Histogram and minimum/maximum operations are left to be defined as extensions because these prove to be quite difficult to support at the fragment level with high degrees of parallelism.

For each fragment, the fragment shader may compute color, depth, and arbitrary values (writing these values into the special output variables gl_FragColor, gl_FragDepth, and gl_FragData) or completely discard the fragment. If the fragment is not discarded, the results of the fragment shader are sent on for further processing. The remainder of the OpenGL pipeline remains as defined for fixed-function processing. Fragments are submitted to coverage application, pixel ownership testing, scissor testing, alpha testing, stencil testing, depth testing, blending, dithering, logical operations, and masking before ultimately being written into the frame buffer. The back end of the processing pipeline remains as fixed functionality because it is easy to implement in nonprogrammable hardware. Making these functions programmable is more complex because read/modify/write operations can introduce significant instruction scheduling issues and pipeline stalls. Most of these fixed functionality operations can be disabled, and alternative operations can be performed within a fragment shader if desired (albeit with possibly lower performance).
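As an illustration of these outputs, here is a hypothetical alpha-test-style fragment shader that either discards the fragment or writes its color and depth (the uniform and varying names are invented for the example):

```glsl
// Discard some fragments; write color and depth for the rest.
uniform float alphaThreshold;
uniform sampler2D baseMap;
varying vec2 texCoord;
void main()
{
    vec4 color = texture2D(baseMap, texCoord);
    if (color.a < alphaThreshold)
        discard;                    // fragment is dropped entirely
    gl_FragColor = color;
    gl_FragDepth = gl_FragCoord.z;  // same depth as fixed functionality
}
```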

OpenGL Shading Language API
As of OpenGL 2.0, support for the OpenGL Shading Language is available as part of standard OpenGL. The following OpenGL entry points support the OpenGL Shading Language:

glAttachShader Attach a shader object to a program object

glBindAttribLocation Specify the generic vertex attribute index to be used for a particular user-defined attribute variable in a vertex shader

glCompileShader Compile a shader

glCreateProgram Create a program object

glCreateShader Create a shader object

glDeleteProgram Delete a program object

glDeleteShader Delete a shader object

glDetachShader Detach a shader object from a program object

glDisableVertexAttribArray Disable a generic vertex attribute from being sent to OpenGL with vertex arrays

glEnableVertexAttribArray Enable a generic vertex attribute to be sent to OpenGL with vertex arrays

glGetActiveAttrib Obtain the name, size, and type of an active attribute variable for a program object

glGetActiveUniform Obtain the name, size, and type of an active uniform variable for a program object

glGetAttachedShaders Get the list of shader objects attached to a program object

glGetAttribLocation Return the generic vertex attribute index that is bound to a specified user-defined attribute variable

glGetProgram Query one of the parameters of a program object

glGetProgramInfoLog Obtain the information log for a program object

glGetShader Query one of the parameters of a shader object

glGetShaderInfoLog Obtain the information log for a shader object

glGetShaderSource Get the source code for a specific shader object

glGetUniform Query the current value of a uniform variable

glGetUniformLocation Query the location assigned to a uniform variable by the linker

glGetVertexAttrib Return current state for the specified generic vertex attribute

glGetVertexAttribPointer Return the vertex array pointer value for the specified generic vertex attribute

glIsProgram Determine if an object name corresponds to a program object

glIsShader Determine if an object name corresponds to a shader object

glLinkProgram Link a program object to create executable code

glShaderSource Load source code strings into a shader object

glUniform Set the value of a uniform variable

glUseProgram Install a program object's executable code as part of current state

glValidateProgram Return validation information for a program object

glVertexAttrib Send generic vertex attributes to OpenGL one vertex at a time

glVertexAttribPointer Specify location and organization of generic vertex attributes to be sent to OpenGL with vertex arrays

Key Benefits
The following key benefits are derived from the choices that were made during the design of the OpenGL Shading Language.


Tight integration with OpenGL The OpenGL Shading Language was designed for use in OpenGL. It is designed in such a way that an existing, working OpenGL application can easily be modified to take advantage of the capabilities of programmable graphics hardware. Built-in access to existing OpenGL state, reuse of API entry points that are already familiar to application developers, and a close coupling with the existing architecture of OpenGL are all key benefits of using the OpenGL Shading Language for shader development.



Runtime compilation Source code stays as source code, in its easiest-to-maintain form, for as long as possible. An application passes source code to any conforming OpenGL implementation that supports the OpenGL Shading Language, and it will be compiled and executed properly. There is no need for a multitude of binaries for a multitude of different platforms.[1]

[1] At the time of this writing, the OpenGL ARB is still considering the need for an API that allows shaders to be specified in a form other than source code. The primary issues are the protection of intellectual property that may be embedded in string-based shader source code and the performance that would be gained by allowing shaders to be at least partially precompiled. When such an API is defined, shader portability may be reduced, but application developers will have the option of getting better code security and better performance.




No reliance on cross-vendor assembly language Both DirectX and OpenGL have widespread support for assembly language interfaces to graphics programmability. High-level shading languages could be (and have been) built on top of these assembly language interfaces, and such high-level languages can be translated into these assembly language interfaces completely outside the environment of OpenGL or DirectX. This does have some advantages, but relying on an assembly language interface as the primary interface to hardware programmability restricts innovation by graphics hardware designers. Hardware designers have many more choices for acceleration of an expressive high-level language than they do for a restrictive assembly language. It is much too early in the development of programmable graphics hardware technology to establish an assembly language standard for graphics programmability. C, on the other hand, was developed long before any CPU assembly languages that are in existence today, and it is still a viable choice for application development.



Unconstrained opportunities for compiler optimization plus optimal performance on a wider range of hardware As we've learned through experience with CPUs, compilers are much better at quickly generating efficient code than humans are. By allowing high-level source code to be compiled within OpenGL, rather than outside of OpenGL, individual hardware vendors have the best possible opportunity to deliver optimal performance on their graphics hardware. In fact, compiler improvements can be made with each OpenGL driver release, and applications won't need to change any source code, recompile, or even relink. Furthermore, most of the current crop of assembly language interfaces are string based. This makes them inefficient for use as intermediate languages for compilation because two string translations are required. First, the string-based, high-level source must be translated into string-based assembly language, and then that string-based assembly language must be passed to OpenGL and translated from string-based assembly language to machine code.



A truly open, cross-platform standard No other high-level graphics shading language has been approved as part of an open, multivendor standard. Like OpenGL itself, the OpenGL Shading Language will be implemented by a variety of different vendors for a variety of different environments.



One high-level language for all programmable graphics processing The OpenGL Shading Language is used to write shaders for both the vertex processor and the fragment processor in OpenGL, with very small differences in the language for the two types of shaders. In the future, it is intended that the OpenGL Shading Language will bring programmability to other areas of OpenGL as well. Areas that have already received some discussion include programmability for packing/unpacking arbitrary image formats and support for programmable tessellation of higher-order surfaces in the graphics hardware.



Support for modular programming By defining compilation and linking as two separate steps, shader writers have a lot more flexibility in how they choose to implement complex shading algorithms. Rather than implement a complex algorithm as a single, monolithic shader, developers are free to implement it as a collection of shaders that can be independently compiled and attached to a program object. Shaders can be designed with common interfaces so that they are interchangeable, and a link operation joins them to create a program.



No additional libraries or executables The OpenGL Shading Language and the compiler and linker that support it are defined as part of OpenGL. Applications need not worry about linking against any additional runtime libraries. Compiler improvements are delivered as part of OpenGL driver updates.

Example Shader Pair
A program typically contains two shaders: one vertex shader and one fragment shader. More than one shader of each type can be present, but there must be exactly one function main among all the fragment shaders and exactly one function main among all the vertex shaders. Frequently, it's easiest to just have one shader of each type.
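As a hypothetical sketch of splitting one fragment shader across two separately compiled shader objects (the function name shadeTemperature is invented for illustration, while CoolestColor and HottestColor echo the uniforms used later in this chapter), a function can be declared in the shader object that calls it and defined in a second object; the link operation resolves the call:

```glsl
// fragment shader object 1: contains the single main()
vec3 shadeTemperature(float t);   // defined in shader object 2

varying float Temperature;

void main()
{
    gl_FragColor = vec4(shadeTemperature(Temperature), 1.0);
}

// fragment shader object 2 (compiled separately): no main(),
// just the implementation of the shared interface
// uniform vec3 CoolestColor;
// uniform vec3 HottestColor;
//
// vec3 shadeTemperature(float t)
// {
//     return mix(CoolestColor, HottestColor, t);
// }
```

Both objects are attached to the same program object, and the link step joins them into one fragment-shader executable.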

The following is a simple vertex and fragment shader pair that can smoothly express a surface temperature with color. The range of temperatures and their colors are parameterized. First, we show the vertex shader. It is executed once for each vertex.

// uniform qualified variables are changed at most once per primitive
uniform float CoolestTemp;
uniform float TempRange;

// attribute qualified variables are typically changed per vertex
attribute float VertexTemp;

// varying qualified variables communicate from the vertex shader to
// the fragment shader
varying float Temperature;

void main()
{
    // compute a temperature to be interpolated per fragment,
    // in the range [0.0, 1.0]
    Temperature = (VertexTemp - CoolestTemp) / TempRange;

    /*
       The vertex position written in the application using
       glVertex() can be read from the built-in variable
       gl_Vertex. Use this value and the current model
       view transformation matrix to tell the rasterizer where
       this vertex is.
    */
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}


That's it for the vertex shader. Primitive assembly follows the preceding vertex processing, providing the rasterizer with enough information to create fragments. The rasterizer interpolates the Temperature values written per vertex to create values per fragment. Each fragment is then delivered to a single execution of the fragment shader, as follows:

// uniform qualified variables are changed at most once per primitive
// by the application, and vec3 declares a vector of three
// floating-point numbers
uniform vec3 CoolestColor;
uniform vec3 HottestColor;

// Temperature contains the now interpolated per-fragment
// value of temperature set by the vertex shader
varying float Temperature;

void main()
{
    // get a color between coolest and hottest colors, using
    // the mix() built-in function
    vec3 color = mix(CoolestColor, HottestColor, Temperature);

    // make a vector of 4 floating-point numbers by appending an
    // alpha of 1.0, and set this fragment's color
    gl_FragColor = vec4(color, 1.0);
}


Both shaders receive user-defined state from the application through the declared uniform qualified variables. The vertex shader gets information associated with each vertex through the attribute qualified variable. Information is passed from the vertex shader to the fragment shader through varying qualified variables, whose declarations must match between the vertex and fragment shaders. The fixed functionality located between the vertex and fragment processors will interpolate the per-vertex values written to this varying variable. When the fragment shader reads this same varying variable, it reads the value interpolated for the location of the fragment being processed.

Shaders interact with the fixed functionality OpenGL pipeline by writing built-in variables. OpenGL prefixes built-in variables with "gl_". In the preceding examples, writing to gl_Position tells the OpenGL pipeline where the transformed vertices are located, and writing to gl_FragColor tells the OpenGL pipeline what color to attach to a fragment.

Execution of the preceding shaders occurs multiple times to process a single primitive, once per vertex for the vertex shader and once per fragment for the fragment shader. Many such executions of the same shader can happen in parallel. In general, there is no direct tie or ordering between shader executions. Information can be communicated neither from vertex to vertex nor from fragment to fragment.


Data Types
We saw vectors of floating-point numbers in the example in the previous section. Many other built-in data types are available to ease the expression of graphical operations. Booleans, integers, matrices, vectors of other types, structures, and arrays are all included. Each is discussed in the following sections. Notably missing are string and character types, since there is little use for them in processing vertex and fragment data.

3.2.1. Scalars
The scalar types available are

float
 declares a single floating-point number

int
 declares a single integer number

bool
 declares a single Boolean number





These declare variables, as is familiar from C/C++.

float f;
float g, h = 2.4;
int NumTextures = 4;
bool skipProcessing;


Unlike the original C, the OpenGL Shading Language requires you to provide the type name because there are no default types. As in C++, declarations may appear when needed, not just after an open curly brace ({).

Literal floating-point numbers are also specified as in C, except there are no suffixes to specify precision since there is only one floating-point type.

3.14159
3.
0.2
.609
1.5e10
0.4E-4
etc.


In general, floating-point values and operations act as they do in C.

Integers are not the same as in C. There is no requirement that they appear to be backed in hardware by a fixed-width integer register. Consequently, wrapping behavior, when arithmetic would overflow or underflow a fixed-width integer register, is undefined. Bit-wise operations like left-shift (<<) and bit-wise and (&) are also not supported.

What can be said about integers? They are guaranteed to have at least 16 bits of precision; they can be positive, negative, or zero; and integer arithmetic that stays within this range gives the expected results. Note that the precision truly is 16 bits plus the sign of the value; that is, a full range of [-65535, 65535] or greater.

Literal integers can be given as decimal values, octal values, or hexadecimal values, as in C.

42    // a literal decimal integer
052   // a literal octal integer
0x2A  // a literal hexadecimal integer


Again, there are no suffixes to specify precision since there is only one integer type. Integers are useful as sizes of structures or arrays and as loop counters. Graphical types, such as color or position, are best expressed in floating-point variables within a shader.

Boolean variables are as bool in C++. They can have only one of two values: true or false. Literal Boolean constants true and false are provided. Relational operators like less-than (<) and logical operators like logical and (&&) always result in Boolean values. Flow-control constructs like if-else accept only Boolean-typed expressions. In these regards, the OpenGL Shading Language is more restrictive than C++.
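A minimal sketch of this restriction (the uniform name Density is invented for illustration):

```glsl
uniform float Density;

void main()
{
    // if (Density)        // illegal: the condition must be Boolean
    if (Density > 0.5)     // legal: a relational operator yields bool
        gl_FragColor = vec4(1.0);
    else
        gl_FragColor = vec4(0.0);
}
```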

3.2.2. Vectors
Vectors of float, int, or bool are built-in basic types. They can have two, three, or four components and are named as follows:

vec2
 Vector of two floating-point numbers

vec3
 Vector of three floating-point numbers

vec4
 Vector of four floating-point numbers





ivec2
 Vector of two integers

ivec3
 Vector of three integers

ivec4
 Vector of four integers





bvec2
 Vector of two Booleans

bvec3
 Vector of three Booleans

bvec4
 Vector of four Booleans





Vectors are quite useful. They conveniently store and manipulate colors, positions, texture coordinates, and so on. Built-in variables and built-in functions make heavy use of these types. Also, special operations are supported. Finally, hardware is likely to have vector-processing capabilities that mirror vector expressions in shaders.

Note that the language does not distinguish between a color vector and a position vector or other uses of a floating-point vector. These are all just floating-point vectors from the language's perspective.

Special features of vectors include component access that can be done either through field selection (as with structures) or as array accesses. For example, if position is a vec3, it can be considered as the vector (x, y, z), and position.x will select the first component of the vector.

In all, the following names are available for selecting components of vectors:

x, y, z, w
 Treat a vector as a position or direction

r, g, b, a
 Treat a vector as a color

s, t, p, q
 Treat a vector as a texture coordinate





There is no explicit way of stating that a vector is a color, a position, a coordinate, and so on. Rather, these component selection names are provided simply for readability in a shader. The only compile-time checking done is that the vector is large enough to provide a specified component. Also, if multiple components are selected (swizzling, discussed in Section 3.7.2), all the components are from the same family.

Vectors can also be indexed as a zero-based array to obtain components. For instance, position[2] returns the third component of position. Variable indices are allowed, making it possible to loop over the components of a vector. Multiplication takes on special meaning when operating on a vector since linear algebraic multiplies with matrices are understood. Swizzling, indexing, and other operations are discussed in detail in Section 3.7.
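For example, the same vec4 can be read through field selection or through array-style indexing (the variable names here are invented):

```glsl
vec4 color = vec4(0.2, 0.5, 0.8, 1.0);

vec3  rgb      = color.rgb;   // several components from the color family
float blue     = color.b;     // one component by name
float alsoBlue = color[2];    // the same component by zero-based index

// a variable index permits looping over the components
float sum = 0.0;
for (int i = 0; i < 4; ++i)
    sum += color[i];
```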

3.2.3. Matrices
Built-in types are available for matrices of floating-point numbers. There are 2 x 2, 3 x 3, and 4 x 4 sizes.

mat2
 2 x 2 matrix of floating-point numbers

mat3
 3 x 3 matrix of floating-point numbers

mat4
 4 x 4 matrix of floating-point numbers





These are useful for storing linear transforms or other data. They are treated semantically as matrices, particularly when a vector and a matrix are multiplied together, in which case the proper linear-algebraic computation is performed. When relevant, matrices are organized in column major order, as is the tradition in OpenGL.

You may access a matrix as an array of column vectors; that is, if transform is a mat4, transform[2] is the third column of transform. The resulting type of transform[2] is vec4. Column 0 is the first column. Because transform[2] is a vector and you can also treat vectors as arrays, transform[3][1] is the second component of the vector forming the fourth column of transform. Hence, it ends up looking as if transform is a two-dimensional array. Just remember that the first index selects the column, not the row, and the second index selects the row.
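A short sketch of this column-first indexing (transform is an invented name):

```glsl
mat4 transform = mat4(1.0);        // 4 x 4 identity matrix

vec4  column2 = transform[2];      // third column, of type vec4
float rowOne  = transform[3][1];   // second component of the fourth column
transform[0][3] = 0.0;             // first index selects the column,
                                   // second index selects the row
```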

3.2.4. Samplers
Texture lookups require some indication as to which texture or texture unit will do the lookup. The OpenGL Shading Language doesn't really care about the underlying implementation of texture units or other forms of organizing texture lookup hardware. Hence, it provides a simple opaque handle to encapsulate what to look up. These handles are called SAMPLERS. The sampler types available are

sampler1D
 Accesses a one-dimensional texture

sampler2D
 Accesses a two-dimensional texture

sampler3D
 Accesses a three-dimensional texture

samplerCube
 Accesses a cube-map texture

sampler1DShadow
 Accesses a one-dimensional depth texture with comparison

sampler2DShadow
 Accesses a two-dimensional depth texture with comparison





When the application initializes a sampler, the OpenGL implementation stores into it whatever information is needed to communicate what texture to access. Shaders cannot themselves initialize samplers. They can only receive them from the application, through a uniform qualified sampler, or pass them on to user or built-in functions. As a function parameter, a sampler cannot be modified, so there is no way for a shader to change a sampler's value.

For example, a sampler could be declared as

uniform sampler2D Grass;


(Uniform qualifiers are discussed in more detail in Section 3.5.2.)

This variable can then be passed into a corresponding texture lookup function to access a texture:

vec4 color = texture2D(Grass, coord);


where coord is a vec2 holding the two-dimensional position used to index the grass texture, and color is the result of doing the texture lookup. Together, the compiler and the OpenGL driver validate that Grass really references a two-dimensional texture and that Grass is passed only into two-dimensional texture lookups.

Shaders may not manipulate sampler values. For example, the expression Grass + 1 is not allowed. If a shader wants to combine multiple textures procedurally, an array of samplers can be used as shown here:

const int NumTextures = 4;
uniform sampler2D textures[NumTextures];


These can be processed in a loop:

for (int i = 0; i < NumTextures; ++i)
    . . . =  texture2D(textures[i], . . .);


The idiom Grass + 1 could then become something like

textures[GrassIndex+1]


which is a valid way of manipulating the sampler.

3.2.5. Structures
The OpenGL Shading Language provides user-defined structures similar to C. For example,

struct light
{
    vec3 position;
    vec3 color;
};


As in C++, the name of the structure is the name of this new user-defined type. No typedef is needed. In fact, the typedef keyword is still reserved because there is not yet a need for it. A variable of type light from the preceding example is simply declared as

light ceilingLight;


Most other aspects of structures mirror C. They can be embedded and nested. Embedded structure type names have the same scope as the structure in which they are declared. However, embedded structures must be named. Structure members can also be arrays. Finally, each level of structure has its own name space for its members' names, as is familiar.

Bit-fields (the capability to declare an integer with a specified number of bits) are not supported.

Currently, structures are the only user-definable type. The keywords union, enum, and class are reserved for possible future use.

3.2.6. Arrays
Arrays of any type can be created. The declaration

vec4 points[10];


creates an array of ten vec4 variables, indexed starting with zero. There are no pointers; the only way to declare an array is with square brackets. Declaring an array as a function parameter also requires square brackets and a size because, currently, array arguments are passed as if the whole array is a single object, not as if the argument is a pointer.

Arrays, unless they are function parameters, do not have to be declared with a size. A declaration like

vec4 points[];


is allowed, as long as either of the following two cases is true:

Before the array is referenced, it is declared again with a size, with the same type as the first declaration. For example,

vec4 points[];    // points is an array of unknown size
vec4 points[10];  // points is now an array of size 10
This cannot be followed by another declaration:

vec4 points[];    // points is an array of unknown size
vec4 points[10];  // points is now an array of size 10
vec4 points[20];  // this is illegal
vec4 points[];    // this is also illegal
All indices that statically reference the array are compile-time constants. In this case, the compiler will make the array large enough to hold the largest index it sees used. For example,

vec4 points[];         // points is an array of unknown size
points[2] = vec4(1.0); // points is now an array of size 3
points[7] = vec4(2.0); // points is now an array of size 8
In this case, at runtime the array has only one size, determined by the largest index the compiler sees. Such automatically sized arrays cannot be passed as function arguments.

This feature is quite useful for handling the built-in array of texture coordinates. Internally, this array is declared as

varying vec4 gl_TexCoord[];


If a program uses only compile-time constant indices of 0 and 1, the array is implicitly sized as gl_TexCoord[2]. If a shader uses a nonconstant variable to index the array, that shader must explicitly declare the array with the desired size. Of course, keeping the size to a minimum is important, especially for varying variables, which are likely a limited hardware resource.

Multiple shaders sharing the same array must declare it with the same size. The linker verifies this.
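For instance, a vertex shader along these lines (a sketch; gl_MultiTexCoord0 and gl_MultiTexCoord1 are the standard built-in texture coordinate attributes) relies on implicit sizing:

```glsl
void main()
{
    // only constant indices 0 and 1 are used, so gl_TexCoord is
    // implicitly sized as if declared with size 2
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_TexCoord[1] = gl_MultiTexCoord1;

    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```

A shader that instead indexed gl_TexCoord with a variable would have to redeclare the array with an explicit size, such as varying vec4 gl_TexCoord[3];.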

3.2.7. Void
The type void declares a function that returns no value. For example, the function main returns no value and must be declared as type void.

void main()
{
    . . .
}


Other than for functions that return nothing, the void type is not useful.

3.2.8. Declarations and Scope
Variables are declared quite similarly to the way they are declared in C++. They can be declared where needed and have scope as in C++. For example,

float f;

f = 3.0;

vec4 u, v;

for (int i = 0; i < 10; ++i)
    v = f * u + v;


The scope of a variable declared in a for statement ends at the end of the loop's substatement. However, variables may not be declared in an if statement. This simplifies implementation of scoping across the else substatement, with little practical cost.

As in C, variable names are case sensitive, must start with a letter or underscore (_), and contain only letters, numbers, and underscores (_). User-defined variables cannot start with the string "gl_", because those names are reserved for future use by OpenGL. Names containing consecutive underscores (__) are also reserved.

3.2.9. Type Matching and Promotion
The OpenGL Shading Language is strict with type matching. In general, types being assigned must match, argument types passed into functions must match formal parameter declarations, and types being operated on must match the requirements of the operator. There are no automatic promotions from one type to another. This may occasionally make a shader have an extra explicit conversion. However, it also simplifies the language, preventing some forms of obfuscated code and some classes of defects. For example, there are no ambiguities in which overloaded function should be chosen for a given function call.
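A brief sketch of the kind of explicit conversion this strictness requires (the variable names are invented):

```glsl
int count = 2;

// float f = count;              // illegal: no implicit int-to-float promotion
float f = float(count);          // legal: explicit constructor conversion

// vec3 v = vec3(1.0) * count;   // illegal: mixed int and vec3 operands
vec3 v = vec3(1.0) * float(count);
```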

Initializers and Constructors
A shader variable may be initialized when it is declared. As is familiar from C, the following example initializes b at declaration time and leaves a and c undefined:

float a, b = 3.0, c;


Constant qualified variables must be initialized.

const int Size = 4;  // initializer is required


Attribute, uniform, and varying variables cannot be initialized when declared.

attribute float Temperature;  // no initializer allowed,
                              // the vertex API sets this

uniform int Size;             // no initializer allowed,
                              // the uniform setting API sets this

varying float density;        // no initializer allowed, the vertex
                              // shader must programmatically set this


To initialize aggregate types, at either declaration time or elsewhere, CONSTRUCTORS are used. No initializer uses the brace syntax "{. . .}" from C. Syntactically, constructors look like function calls that have a type name where the function name would go. For example, to initialize a vec4 with the values (1.0, 2.0, 3.0, 4.0), use

vec4 v = vec4(1.0, 2.0, 3.0, 4.0);


Or, because constructor syntax is the same whether it's in an initializer or not, use

vec4 v;
v = vec4(1.0, 2.0, 3.0, 4.0);


There are constructors for all the built-in types (except samplers) as well as for structures. Some examples:

vec4 v = vec4(1.0, 2.0, 3.0, 4.0);
ivec2 c = ivec2(3, 4);
vec3 color = vec3(0.2, 0.5, 0.8);
vec4 color4 = vec4(color, 1.0);
struct light
{
    vec4 position;
    struct tLightColor
    {
        vec3 color;
        float intensity;
    } lightColor;
} light1 = light(v, tLightColor(color, 0.9));

Built-in constructors for vectors can also take a single argument, which is replicated into each component.

vec3 v = vec3(0.6);


is equivalent to

vec3 v = vec3(0.6, 0.6, 0.6);


This is true only for vectors. Structure constructors must receive one argument per member being constructed. Matrix constructors also have a single argument form, but in this case it initializes just the diagonal of the matrix. The remaining components are initialized to 0.0.

mat2 m = mat2(1.0);  // makes a 2 x 2 identity matrix


is equivalent to

mat2 m = mat2(1.0, 0.0, 0.0, 1.0);    // makes a 2 x 2 identity matrix


Constructors can also have vectors and matrices as arguments. However, constructing matrices from other matrices is reserved for future definition.

vec4 v = vec4(1.0);
vec2 u = vec2(v);  // the first two components of v initialize u
mat2 m = mat2(v);


Matrix components are read out of arguments in column major order and written in column major order.

Extra components within a single constructor argument are silently ignored. Normally, this is useful for shrinking a value, like eliminating alpha from a color or w from a position. It is an error to have completely unused arguments passed to a constructor.

vec2 t = vec2(1.0, 2.0, 3.0);  // illegal; third argument is unused


Type Conversions
Explicit type conversions are performed with constructors. For example,

float f = 2.3;
bool b = bool(f);


sets b to true. This is useful for flow-control constructs, like if, which require Boolean values. Boolean constructors convert non-zero numeric values to true and zero numeric values to false.

The OpenGL Shading Language does not provide C-style typecast syntax, which can be ambiguous as to whether a value is converted to a different type or is simply reinterpreted as a different type. In fact, there is no way of reinterpreting a value as a different type in the OpenGL Shading Language. There are no pointers, no type unions, no implicit type changes, and no reinterpret casts. Instead, constructors perform conversions. The arguments to a constructor are converted to the type they are constructing. Hence, the following are allowed:

float f = float(3);  // convert integer 3 to floating-point 3.0
float g = float(b);  // convert Boolean b to floating point
vec4 v = vec4(2);    // set all components of v to 2.0


For conversion from a Boolean, true is converted to 1 or 1.0, and false is converted to a zero.


Qualifiers and Interface to a Shader
Qualifiers prefix both variables and formal function parameters.

The following is the complete list of qualifiers (used outside of formal function parameters).

attribute
 For frequently changing information, from the application to a vertex shader

uniform
 For infrequently changing information, from the application to either a vertex shader or a fragment shader

varying
 For interpolated information passed from a vertex shader to a fragment shader

const
 For declaring nonwritable, compile-time constant variables, as in C





Getting information into and out of a shader is quite different from more typical programming environments. Information is transferred to and from a shader by reading and writing built-in variables and user-defined attribute, uniform, and varying variables. The most common built-in variables were shown in the example at the beginning of this chapter. They are gl_Position for output of the homogeneous coordinates of the vertex position and gl_FragColor for output of the fragment's color from a fragment shader. The complete set of built-in variables is provided in Chapter 4. Examples of attribute, uniform, and varying qualified variables were seen briefly in the opening example for getting other information into and out of shaders. Each is discussed in this section.

Variables qualified as attribute, uniform, or varying must be declared at global scope. This is sensible since they are visible outside of shaders and, for a single program, they all share a single name space.

Qualifiers are always specified before the type of a variable, and because there is no default type, the form of a qualified variable declaration always includes a type.

attribute float Temperature;
const int NumLights = 3;
uniform vec4 LightPosition[NumLights];
varying float LightIntensity;


3.5.1. Attribute Qualifiers
Attribute-qualified variables (or attributes) enable an application to pass frequently modified data into a vertex shader. They can be changed as often as once per vertex, either directly or indirectly by the application. Built-in attributes, like gl_Vertex and gl_Normal, read traditional OpenGL state, and user-defined attributes can be named by the coder.

Attributes are limited to floating-point scalars, floating-point vectors, and matrices. Attributes declared as integers or Booleans are not allowed, nor are attributes declared as structures or arrays. This is, in part, a result of encouraging high-performance frequent changing of attributes in hardware implementations of the OpenGL system. Attributes cannot be modified by a shader.

Attributes cannot be declared in fragment shaders.

3.5.2. Uniform Qualifiers
Uniform qualified variables (or uniforms), like attributes, are set only outside a shader and are intended for data that changes less frequently. They can be changed at most once per primitive. All data types and arrays of all data types are supported for uniform qualified variables. All the vertex and fragment shaders forming a single program share a single global name space for uniforms. Hence, uniforms of the same name in a vertex and fragment program will be the same uniform variable.

Uniforms cannot be written to in a shader. This is sensible because an array of processors may be sharing the same resources to hold uniforms and other language semantics break down if uniforms could be modified.

Recall that unless a sampler (e.g., sampler2D) is a function parameter, the uniform qualifier must be used when it is declared. This is because samplers are opaque, and making them uniforms allows the OpenGL driver to validate that the application initializes a sampler with a texture and texture unit consistent with its use in the shader.

3.5.3. Varying Qualifiers
Varying qualified variables (or varyings) are the only way a vertex shader can communicate results to a fragment shader. Such variables form the dynamic interface between vertex and fragment shaders. The intention is that for a particular attribute of a drawing primitive, each vertex might have a different value and these values need to be interpolated across the fragments in the primitive. The vertex shader writes the per-vertex values into a varying variable, and when the fragment shader reads from this variable, it gets back a value interpolated between the vertices. If some attribute were to be the same across a large primitive, not requiring interpolation, the vertex shader need not communicate it to the fragment shader at all. Instead, the application could pass this value directly to the fragment shader through a uniform qualified variable.

The exception to using varying variables only for interpolated values is for any value the application will change often, either per triangle or per some small set of triangles or vertices. It may be faster to pass such values as attribute variables and forward them as varying variables, because changing uniform values frequently may impact performance.
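A hypothetical vertex-shader sketch of this idiom (Shininess is an invented attribute name):

```glsl
// changes too often for a uniform, so it arrives as an attribute
attribute float Shininess;

// forwarded, unmodified, for the fragment shader to read
varying float ShininessVarying;

void main()
{
    ShininessVarying = Shininess;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```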

The automatic interpolation of varying qualified variables is done in a perspective-correct manner. This approach is necessary no matter what type of data is being interpolated. Otherwise, such values would not change smoothly across edges introduced for surface subdivision. The non-perspective-correct interpolated result would be continuous, but its derivative would not be, and this effect can be quite visible.

A varying qualified variable is written in a vertex shader and read in a fragment shader. It is illegal for a fragment shader to write to a varying variable. However, the vertex shader may read a varying variable, getting back what it has just written. Reading a varying qualified variable before writing it returns an undefined value.

3.5.4. Constant Qualifiers
Variables qualified as const (except for formal function parameters) are compile-time constants and are not visible outside the shader that declares them. Both scalar and nonscalar constants are supported. Structure fields may not be qualified with const, but structure variables can be declared as const and initialized with a structure constructor. Initializers for const declarations must be formed from literal values, other const qualified variables (not including function call parameters), or expressions of these.

Some examples:

const int numIterations = 10;
const float pi = 3.14159;
const vec2 v = vec2(1.0, 2.0);
const vec3 u = vec3(v, pi);
const struct light
{
    vec3 position;
    vec3 color;
} fixedLight = light(vec3(1.0, 0.5, 0.5), vec3(0.8, 0.8, 0.5));


All the preceding variables are compile-time constants. The compiler may propagate and fold constants at compile time, using the precision of the processor executing the compiler, and need not allocate any runtime resources to const qualified variables.

3.5.5. Absent Qualifier
If no qualifier is specified when a variable (not a function parameter) is declared, the variable can be both read and written by the shader. Nonqualified variables declared at global scope can be shared between shaders of the same type that are linked in the same program. Vertex shaders and fragment shaders each have their own separate global name space for nonqualified globals. However, nonqualified user-defined variables are not visible outside a program. That privilege is reserved for variables qualified as attribute or uniform and for built-in variables representing OpenGL state.

Unqualified variables have a lifetime limited to a single run of a shader. There is also no concept corresponding to a static variable in a C function that would allow a variable to be set to a value and have its shader retain that value from one execution to the next. Implementation of such variables is made difficult by the parallel processing nature of the execution environment, in which multiple instantiations run in parallel, sharing much of the same memory. In general, writable variables must have unique instances per processor executing a shader and therefore cannot be shared.

Because unqualified global variables have a different name space for vertex shaders than that for fragment shaders, it is not possible to share information through such variables between vertex and fragment shaders. Read-only variables can be shared if declared as uniform, and variables written by a vertex shader can be read by the fragment shader only through the varying mechanism.


Flow Control
Flow control is very much like that in C++. The entry point into a shader is the function main. A program containing both vertex and fragment shaders has two functions named main, one for entering a vertex shader to process each vertex and one to enter a fragment shader to process each fragment. Before main is entered, any initializers for global variable declarations are executed.

Looping can be done with for, while, and do-while, just as in C++. Variables can be declared in for and while statements, and their scope lasts until the end of their substatements. The keywords break and continue also exist and behave as in C.
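A sketch of looping and continue in a shader, accumulating an illustrative diffuse term over a fixed number of lights:

```glsl
const int numLights = 3;   // illustrative compile-time bound

vec3 accumulateDiffuse(vec3 normal)
{
    vec3 color = vec3(0.0);
    for (int i = 0; i < numLights; ++i)
    {
        // treat the light position as a direction (illustrative)
        vec3 toLight = normalize(gl_LightSource[i].position.xyz);
        float nDotL  = dot(normal, toLight);
        if (nDotL <= 0.0)
            continue;      // light is behind the surface; skip it
        color += gl_LightSource[i].diffuse.rgb * nDotL;
    }
    return color;
}
```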

Selection can be done with if and if-else, just as in C++, with the exception that a variable cannot be declared in the if statement. Selection by means of the selection operator (?:) is also available, with the extra constraint that the second and third operands must have exactly the same type.

The type of the expression provided to an if statement or a while statement, or to terminate a for statement, must be a scalar Boolean. As in C, the right-hand operand to logical and (&&) is not evaluated (or at least appears not to be evaluated) if the left-hand operand evaluates to false, and the right-hand operand to logical or (||) is not evaluated if the left-hand operand evaluates to true. Similarly, only one of the second or third operands in the selection operator (?:) will be evaluated. A logical exclusive or (^^) is also provided, for which both sides are always evaluated.

A special branch, discard, can prevent a fragment from updating the frame buffer. When a fragment shader executes the discard keyword, the fragment being processed is marked to be discarded. An implementation might or might not continue executing the shader, but it is guaranteed that there is no effect on the frame buffer.
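For example, a fragment shader might use discard to implement a simple alpha test (the sampler and varying names are illustrative):

```glsl
uniform sampler2D baseMap;   // illustrative texture sampler
varying vec2 texCoord;       // illustrative interpolated coordinate

void main()
{
    vec4 color = texture2D(baseMap, texCoord);
    if (color.a < 0.5)
        discard;             // this fragment will not update the frame buffer
    gl_FragColor = color;
}
```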

A goto keyword or equivalent is not available, nor are labels. Switching with switch is also not provided.

3.6.1. Functions
Function calls operate much as in C++. Function names can be overloaded by parameter type but not solely by return type. Either a function definition (body) or declaration must be in scope before a function is called. Parameter types are always checked. This means an empty parameter list () in a function declaration is not ambiguous, as in C, but rather explicitly means that the function accepts no arguments. Also, parameters must have exact matches since no automatic promotions are done, so selection of overloaded functions is quite straightforward.

Exiting from a function with return operates the same as in C++. Functions returning nonvoid types must return values whose types exactly match the return type of the function.

Functions may not be called recursively, either directly or indirectly.

3.6.2. Calling Conventions
The OpenGL Shading Language uses call by value-return as its calling convention. The call by value part is familiar from C: Parameters qualified as input parameters are copied into the function and not passed as a reference. Because there are no pointers, a function need not worry about its parameters being aliases of some other memory. The return part of call by value-return means parameters qualified as output parameters are returned to the caller by being copied back from the called function to the caller when the function returns.

To specify which parameters are copied when, prefix them with the qualifier keywords in, out, or inout. For something that is just copied into the function but not returned, use in. The in qualifier is also implied when no qualifiers are specified. To say a parameter is not to be copied in but is to be set and copied back on return, use the qualifier out. To say a parameter is copied both in and out, use the qualifier inout.

in      Copy in, but don't copy back out; still writable within the function

out     Copy out only; readable, but undefined at entry to the function

inout   Copy in and copy out

The const qualifier can also be applied to function parameters. Here, it does not mean the variable is a compile-time constant, but rather that the function is not allowed to write it. Note that an ordinary, nonqualified input-only parameter can be written to; it just won't be copied back to the caller. Hence, there is a distinction between a parameter qualified as const in and one qualified only as in (or with no qualifier). Of course, out and inout parameters cannot be declared as const.

Some examples:

void ComputeCoord(in vec3 normal,  // Parameter 'normal' is copied in,
                                   // can be written to, but will not be
                                   // copied back out.
                  vec3 tangent,    // Same behavior as if "in" was used.
                  inout vec3 coord)// Copied in and copied back out.


Or,

vec3 ComputeCoord(const vec3 normal,// normal cannot be written to
                  vec3 tangent,
                  in vec3 coord)    // the function will return the result


The following are not legal:

void ComputeCoord(const out vec3 normal, //not legal; can't write normal
                  const inout vec3 tang, //not legal; can't write tang
                  in out vec3 coord)     //not legal; use inout


Structures and arrays can also be passed as arguments to a function. Keep in mind, though, that these data types are passed by value and there are no references, so it is possible to cause some large copies to occur at function call time. Array parameters must be declared with their size, and only arrays of matching type and size can be passed to an array parameter. The return type of a function is not allowed to be an array.
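A sketch of an array parameter, which must carry its size in its declaration (the function name is illustrative):

```glsl
// Only an argument of type float[4] matches this parameter exactly.
float sum4(float values[4])
{
    float total = 0.0;
    for (int i = 0; i < 4; ++i)
        total += values[i];
    return total;   // the array was copied in by value
}
```

A call such as sum4(weights) copies all four floats at call time, which is why passing large arrays or structures can be expensive.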

Functions can either return a value or return nothing. If a function returns nothing, it must be declared as type void. If a function returns a value, the type can be any type except an array. However, structures can be returned, and structures can contain arrays.

3.6.3. Built-in Functions
A large set of built-in functions is available. Built-in functions are documented in full in Chapter 5.

A shader can override these functions, providing its own definition. To override a function, provide a prototype or definition that is in scope at call time. The compiler or linker then looks for a user-defined version of the function to resolve that call. For example, one of the built-in sine functions is declared as

float sin(float x);


If you want to experiment with performance or accuracy trade-offs in a sine function or specialize it for a particular domain, you can override the built-in function with your own function.

float sin(float x)
{
    return <.. some function of x..>
}

void main()
{
    // call the sin function above, not the built-in sin function
    float s = sin(x);
}


This is similar to the standard language linking techniques of using libraries of functions and to having more locally scoped function definitions satisfy references before the library is checked. If the definition is in a different shader, just make sure a prototype is visible before calling the function. Otherwise, the built-in version is used.

3.7.1. Indexing
Vectors, matrices, and arrays can be indexed with the index operator ([ ]). All indexing is zero based; the first element is at index 0. Indexing an array operates just as in C.

Indexing a vector returns scalar components. This allows giving components numerical names of 0, 1, . . . and also enables variable selection of vector components, should that be needed. For example,

vec4 v = vec4(1.0, 2.0, 3.0, 4.0);
float f = v[2];  // f takes the value 3.0


Here, v[2] is the floating-point scalar 3.0, which is then assigned into f.

Indexing a matrix returns columns of the matrix as vectors. For example,

mat4 m = mat4(3.0);  // initializes the diagonal to all 3.0
vec4 v;
v = m[1];   // places the vector (0.0, 3.0, 0.0, 0.0) into v


Here, the second column of m, m[1], is treated as a vector that is copied into v.

Behavior is undefined if an array, vector, or matrix is accessed with an index that is less than zero or greater than or equal to the size of the object.

3.7.2. Swizzling
The normal structure-member selector (.) is also used to swizzle components of a vector, that is, to select or rearrange components by listing their names after the selector (.). Examples:

vec4 v4;
v4.rgba;   // is a vec4 and the same as just using v4,
v4.rgb;    // is a vec3,
v4.b;      // is a float,
v4.xy;     // is a vec2,
v4.xgba;   // is illegal - the component names do not come from
           // the same set.


The component names can be out of order to rearrange the components, or they can be replicated to duplicate the components:

vec4 pos = vec4(1.0, 2.0, 3.0, 4.0);
vec4 swiz = pos.wzyx; // swiz = (4.0, 3.0, 2.0, 1.0)
vec4 dup = pos.xxyy; // dup = (1.0, 1.0, 2.0, 2.0)


At most, four component names can be listed in a swizzle; otherwise, they would result in a nonexistent type. The rules for swizzling are slightly different for r-values (expressions that are read from) and l-values (expressions that say where to write). R-values can have any combination and repetition of components. L-values must not have any repetition. For example:

vec4 pos = vec4(1.0, 2.0, 3.0, 4.0);
pos.xw = vec2(5.0, 6.0); // pos = (5.0, 2.0, 3.0, 6.0)
pos.wx = vec2(7.0, 8.0); // pos = (8.0, 2.0, 3.0, 7.0)
pos.xx = vec2(3.0, 4.0); // illegal - 'x' used twice


For R-values, this syntax can be used on any expression whose resultant type is a vector. For example, getting a two-component vector from a texture lookup can be done as

vec2 v = texture1D(sampler, coord).xy;


where the built-in function texture1D returns a vec4.

3.7.3. Component-wise Operation
With a few important exceptions, when an operator is applied to a vector, it behaves as if it were applied independently to each component of the vector. We refer to this behavior as component-wise for short.

For example,

vec3 v, u;
float f;
v = u + f;


is equivalent to

v.x = u.x + f;
v.y = u.y + f;
v.z = u.z + f;


And

vec3 v, u, w;
w = v + u;


is equivalent to

w.x = v.x + u.x;
w.y = v.y + u.y;
w.z = v.z + u.z;


If a binary operation operates on a vector and a scalar, the scalar is applied to each component of the vector. If two vectors are operated on, their sizes must match.

Exceptions are multiplication of a vector times a matrix and a matrix times a matrix, which perform standard linear-algebraic multiplies, not component-wise multiplies.

Increment and decrement operators (++ and --) and unary negation (-) behave as in C. When applied to a vector or matrix, they increment or decrement each component. They operate on integer and floating-point-based types.

Arithmetic operators of addition (+), subtraction (-), multiplication (*), and division (/) behave as in C, or component-wise, with the previously described exception of using linear-algebraic multiplication on vectors and matrices:

vec4 v, u;
mat4 m;
v * u;  // This is a component-wise multiply
v * m;  // This is a linear-algebraic row-vector times matrix multiply
m * v;  // This is a linear-algebraic matrix times column-vector multiply
m * m;  // This is a linear-algebraic matrix times matrix multiply


All other operations are performed component by component.

Logical not (!), logical and (&&), logical or (||), and logical exclusive or (^^) operate only on expressions that are typed as scalar Booleans, and they result in a Boolean. These cannot operate on vectors. A built-in function, not, computes the component-wise logical not of a vector of Booleans.

Relational operations (<, >, <=, and >=) operate only on floating-point and integer scalars and result in a scalar Boolean. Certain built-in functions, for instance, lessThanEqual, compute a Boolean vector result of component-wise comparisons of two vectors.

The equality operators (== and !=) operate on all types except arrays. They compare every component or structure member across the operands. This results in a scalar Boolean, indicating whether the two operands were equal. For two operands to be equal, their types must match, and each of their components or members must be equal. To compare two vectors in a component-wise fashion, call the built-in functions equal and notEqual.

Scalar Booleans are produced by the operators equal (==), not equal (!=), relational (<, >, <=, and >=), and logical not (!) because flow-control constructs (if, for, etc.) require a scalar Boolean. If built-in functions like equal are called to compute a vector of Booleans, such a vector can be turned into a scalar Boolean with the built-in functions any or all. For example, to do something if any component of a vector is less than the corresponding component of another vector, the code would be

vec4 u, v;
. . .
if (any(lessThan(u, v)))
    . . .


Assignment (=) requires exact type match between the left- and right-hand side. Any type, except for arrays, can be assigned. Other assignment operators (+=, -=, *=, and /=) are similar to C but must make semantic sense when expanded, as in

a *= b;    // equivalent to: a = a * b;


where the expression a * b must be semantically valid, and the type of the expression a * b must be the same as the type of a. The other assignment operators behave similarly.

The ternary selection operator (?:) operates on three expressions: exp1 ? exp2 : exp3. This operator evaluates the first expression, which must result in a scalar Boolean. If the result is true, the operator selects to evaluate the second expression; otherwise, it selects to evaluate the third expression. Only one of the second and third expressions will appear to be evaluated. The second and third expressions must be the same type, but they can be of any type other than an array. The resulting type is the same as the type of the second and third expressions.

The sequence operator (,) operates on expressions by returning the type and value of the rightmost expression in a comma-separated list of expressions. All expressions are evaluated, in order, from left to right.

Preprocessor
The preprocessor is much like that in C. Support for

#define
#undef
#if
#ifdef
#ifndef
#else
#elif
#endif


as well as the defined operator are exactly as in standard C. This includes macros with arguments and macro expansion. Built-in macros are

__LINE__
__FILE__
__VERSION__


__LINE__ substitutes a decimal integer constant that is one more than the number of preceding new-lines in the current source string.

__FILE__ substitutes a decimal integer constant that says which source string number is currently being processed.

__VERSION__ substitutes a decimal integer reflecting the version number of the OpenGL Shading Language. The version of the shading language described in this document has __VERSION__ defined as the decimal integer 110.

Macro names containing two consecutive underscores (__) are reserved for future use as predefined macro names, as are all macro names prefixed with "GL_".

There is also the usual support for

#error message
#line
#pragma


#error puts message into the shader's information log. The compiler then proceeds as if a semantic error has been encountered.

#line must have, after macro substitution, one of the following two forms:

#line line
#line line source-string-number


where line and source-string-number are constant integer expressions. After processing this directive (including its new-line), the implementation behaves as if it is compiling at line number line+1 and source string number source-string-number. Subsequent source strings are numbered sequentially until another #line directive overrides that numbering.

#pragma is implementation dependent. Tokens following #pragma are not subject to preprocessor macro expansion. If an implementation does not recognize the tokens specified by the pragma, the pragma is ignored. However, the following pragmas are defined as part of the language.

#pragma STDGL


The STDGL pragma reserves pragmas for use by future revisions of the OpenGL Shading Language. No implementation may use a pragma whose first token is STDGL.

Use the optimize pragma

#pragma optimize(on)
#pragma optimize(off)


to turn optimizations on or off as an aid in developing and debugging shaders. The optimize pragma can occur only outside function definitions. By default, optimization is turned on for all shaders.

The debug pragma

#pragma debug(on)
#pragma debug(off)


enables compiling and annotating a shader with debug information so that it can be used with a debugger. The debug pragma can occur only outside function definitions. By default, debug is set to off.

Shaders should declare the version of the language to which they are written by using

#version number


If the targeted version of the language is the version that was approved in conjunction with OpenGL 2.0, then a value of 110 should be used for number. Any value less than 110 causes an error to be generated. Any value greater than the latest version of the language supported by the compiler also causes an error to be generated. Version 110 of the language does not require shaders to include this directive, and shaders without this directive are assumed to target version 110 of the OpenGL Shading Language. This directive, when present, must occur in a shader before anything else except comments and white space.

By default, compilers must issue compile-time syntactic, grammatical, and semantic errors for shaders that do not conform to the OpenGL Shading Language specification. Any extended behavior must first be enabled through a preprocessor directive. The behavior of the compiler with respect to extensions is declared with the #extension directive:

#extension extension_name : behavior
#extension all : behavior


extension_name is the name of an extension. The token all means that the specified behavior should apply to all extensions supported by the compiler. The possible values for behavior and their corresponding effects are shown in Table 3.2. A shader could check for the existence of a built-in function named foo defined by a language extension named GL_ARB_foo in the following way:

Table 3.2. Permitted values for behavior in the #extension directive

require
 Behave as specified by the extension extension_name. The compiler reports an error on the #extension directive if the specified extension is not supported or if the token all is used instead of an extension name.

enable
 Behave as specified by the extension extension_name. The compiler provides a warning on the #extension directive if the specified extension is not supported, and it reports an error if the token all is used instead of an extension name.

warn
 Behave as specified by the extension extension_name, except cause the compiler to issue warnings on any detectable use of the specified extension, unless such use is supported by other enabled or required extensions. If all is specified, then the compiler warns on all detectable uses of any extension used.

disable
 Behave (including errors and warnings) as if the extension specified by extension_name is not part of the language definition. If all is specified, then behavior must revert to that of the nonextended version of the language that is being targeted. The compiler warns if the specified extension is not supported.





#ifdef GL_ARB_foo
    #extension GL_ARB_foo : enable
    myFoo = foo(); // use the built-in foo()
#else
    // use some other method to compute myFoo
#endif


Directives that occur later override those that occur earlier. The all token sets the behavior for all extensions, overriding all previously issued #extension directives, but only for behaviors warn and disable.

The initial state of the compiler is as if the directive

#extension all : disable


were issued, telling the compiler that all error and warning reporting must be done according to the nonextended version of the OpenGL Shading Language that is being targeted (i.e., all extensions are ignored).

Macro expansion is not done on lines containing #extension and #version directives.

The number sign (#) on a line by itself is ignored. Any directive not described in this section causes the compiler to generate an error message. The shader is subsequently treated as ill-formed.



The Vertex Processor
The vertex processor executes a vertex shader and replaces the fixed functionality OpenGL per-vertex operations. Specifically, when the vertex processor is executing a vertex shader, the following fixed functionality operations are affected:

The modelview matrix is not applied to vertex coordinates.

The projection matrix is not applied to vertex coordinates.

The texture matrices are not applied to texture coordinates.

Normals are not transformed to eye coordinates.

Normals are not rescaled or normalized.

Normalization of GL_AUTO_NORMAL evaluated normals is not performed.

Texture coordinates are not generated automatically.

Per-vertex lighting is not performed.

Color material computations are not performed.

Color index lighting is not performed.

Point size distance attenuation is not performed.

All of the preceding apply to setting the current raster position.

The following fixed functionality operations are applied to vertex values that are the result of executing the vertex shader:

Color clamping or masking (for built-in varying variables that deal with color but not for user-defined varying variables)

Perspective division on clip coordinates

Viewport mapping

Depth range scaling

Clipping, including user clipping

Front face determination

Flat-shading

Color, texture coordinate, fog, point size, and user-defined varying clipping

Final color processing

The basic operation of the vertex processor was discussed in Section 2.3.1. As shown in Figure 2.2, data can come into the vertex shader through attribute variables (built in or user defined), uniform variables (built in or user defined), or texture maps (a vertex processing capability that is new with the OpenGL Shading Language). Data exits the vertex processor through built-in varying variables, user-defined varying variables, and special vertex shader output variables. Built-in constants (described in Section 4.4) are also accessible from within a vertex shader. A vertex shader has no knowledge of the primitive type for the vertex it is working on.

OpenGL has a mode that causes color index values to be produced rather than RGBA values. However, this mode is not supported in conjunction with vertex shaders. If the frame buffer is configured as a color index buffer, behavior is undefined when a vertex shader is used.

4.1.1. Vertex Attributes
To draw things with OpenGL, applications must provide vertex information such as normal, color, texture coordinates, and so on. These attributes can be specified one vertex at a time with OpenGL functions such as glNormal, glColor, and glTexCoord. When set with these function calls, the attributes become part of OpenGL's current state.

Geometry can also be drawn with vertex arrays. With this method, applications arrange vertex attributes in separate arrays containing positions, normals, colors, texture coordinates, and so on. By calling glDrawArrays, applications can send a large number of vertex attributes to OpenGL in a single function call. Vertex buffer objects (i.e., server-side storage for vertex arrays) were added in OpenGL 1.5 to provide even better performance for drawing vertex arrays.

Vertex attributes come in two flavors: standard and generic. The standard attributes are the attributes as defined by OpenGL; these are color, secondary color, color index, normal, vertex position, texture coordinates, edge flag, and the fog coordinate. The color index attribute, which sets the current color index, and the edge flag attribute are not available to a vertex shader (but the application is allowed to send the edge flags to OpenGL while using a vertex shader). A vertex shader accesses the standard attributes with the following built-in names. A compiler error is generated if these names are used in a fragment shader.

//
// Vertex Attributes
//
attribute vec4 gl_Color;
attribute vec4 gl_SecondaryColor;
attribute vec3 gl_Normal;
attribute vec4 gl_Vertex;
attribute vec4 gl_MultiTexCoord0;
attribute vec4 gl_MultiTexCoord1;
attribute vec4 gl_MultiTexCoord2;
// . . . up to gl_MultiTexCoordN-1 where N = gl_MaxTextureCoords
attribute float gl_FogCoord;


Details on providing generic vertex attributes to a vertex shader through the OpenGL Shading Language API are provided in Section 7.7.

Both standard attributes and generic attributes are part of the current OpenGL state. That means that they retain their values, once set. An application is free to set values for all generic and all standard attributes and count on OpenGL to store them (except for vertex position, see Section 7.7). However, the number of attributes a vertex shader can use is limited. Typically, this limit is smaller than the sum of all the standard and generic attributes. This limit is implementation specific and can be queried with glGet with the symbolic constant GL_MAX_VERTEX_ATTRIBS. Every OpenGL implementation is required to support at least 16 vertex attributes in a vertex shader.

To signal the end of one vertex, the application can set either the standard vertex attribute gl_Vertex or the generic vertex attribute with index zero. The gl_Vertex attribute is set with the glVertex command or one of the vertex array commands, and the generic attribute with index zero is set with glVertexAttrib with an index of zero. These two commands are equivalent, and either one signals the end of a vertex.

4.1.2. Uniform Variables
Shaders can access current OpenGL state through built-in uniform variables containing the reserved prefix "gl_". For instance, the current modelview matrix can be accessed with the built-in variable name gl_ModelViewMatrix. Various properties of a light source can be accessed through the array containing light parameters as in gl_LightSource[2].spotDirection. Any OpenGL state used by the shader is automatically tracked and made available to the shader. This automatic state-tracking mechanism enables the application to use existing OpenGL state commands for state management and have the current values of such state automatically available for use in the shader.
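For instance, a vertex shader can read tracked state directly through these built-in uniforms, with no additional API calls beyond the application's usual state setting:

```glsl
// Vertex shader sketch using automatically tracked OpenGL state
void main()
{
    // the current modelview-projection matrix, tracked from OpenGL state
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;

    // properties of light source 0, also tracked automatically
    vec3 eyePos   = (gl_ModelViewMatrix * gl_Vertex).xyz;
    vec3 toLight  = normalize(gl_LightSource[0].position.xyz - eyePos);
    vec3 normal   = normalize(gl_NormalMatrix * gl_Normal);
    gl_FrontColor = gl_LightSource[0].diffuse * max(dot(normal, toLight), 0.0);
}
```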

OpenGL state is accessible to both vertex shaders and fragment shaders by means of the built-in uniform variables defined in Section 4.3.

Applications can also define their own uniform variables in a vertex shader and use OpenGL API calls to set their values (see Section 7.8 for a complete description). There is an implementation-dependent limit on the amount of storage allowed for uniform variables in a vertex shader. The limit refers to the storage for the combination of built-in uniform variables and user-defined uniform variables that are actually used in a vertex shader. It is defined in terms of components, where a component is the size of a float. Thus, a vec2 takes up two components, a vec3 takes three, and so on. This value can be queried with glGet with the symbolic constant GL_MAX_VERTEX_UNIFORM_COMPONENTS.

4.1.3. Special Output Variables
Earlier we learned that results from the vertex shader are sent on for additional processing by fixed functionality within OpenGL, including primitive assembly and rasterization. Several built-in variables are defined as part of the OpenGL Shading Language to allow the vertex shader to pass information to these subsequent processing stages. The built-in variables discussed in this section are available only from within a vertex shader.

The variable gl_Position holds the vertex position in clip coordinates after it has been computed in a vertex shader. Every execution of a well-formed vertex shader must write a value into this variable. Compilers may generate an error message if they detect that gl_Position is not written, or is read before being written, but not all such cases are detectable. Results are undefined if a vertex shader is executed and does not store a value into gl_Position.

The built-in variable gl_PointSize holds the size (diameter), in pixels, of a point primitive. This allows a vertex shader to compute a screen size that is related to the distance to the point, for instance. Section 4.5.2 provides more details on using gl_PointSize.

If user clipping is enabled, it occurs as a fixed functionality operation after the vertex shader has been executed. For user clipping to function properly in conjunction with the use of a vertex shader, the vertex shader must compute a vertex position that is relative to the user-defined clipping planes. This value must then be stored in the built-in variable gl_ClipVertex. It is up to the application to ensure that the clip vertex value computed by the vertex shader and the user clipping planes are defined in the same coordinate space. User clip planes work properly only under linear transform. More details on using gl_ClipVertex are contained in Section 4.5.3.

These variables each have global scope. They can be written to at any time during the execution of the vertex shader, and they can be read back after they have been written. Reading them before writing them results in undefined behavior. If they are written more than once, the last value written will be the one that is consumed by the subsequent operations.

These variables can be referenced only from within a vertex shader and are intrinsically declared with the following types:

vec4  gl_Position;   // must be written to
float gl_PointSize;  // may be written to
vec4  gl_ClipVertex; // may be written to

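A minimal vertex shader that satisfies these rules might look like the following sketch (the constant point size is an arbitrary choice for illustration):

```glsl
// minimal well-formed vertex shader: gl_Position must always be written
void main()
{
    gl_Position  = gl_ModelViewProjectionMatrix * gl_Vertex;
    gl_PointSize = 4.0;  // optional; consumed only when vertex shader
                         // point size mode is enabled (Section 4.5.2)
}
```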

4.1.4. Built-in Varying Variables
As explained previously, varying variables describe attributes that vary across a primitive. The vertex shader is responsible for writing values that need to be interpolated into varying variables. The fragment shader reads the interpolated results from varying variables and operates on them to produce a resulting value for each fragment. For each user-defined varying variable actually used by the fragment shader, there must be a matching varying variable declared in the vertex shader; otherwise, a link error occurs.

To properly communicate with the fixed functionality of OpenGL, the OpenGL Shading Language defines a number of built-in varying variables. A vertex shader can write certain varying variables that are not accessible from the fragment shader, and a fragment shader can read certain varying variables that were not accessible from the vertex shader.

The following built-in varying variables can be written in a vertex shader. (Those available from a fragment shader are described in Section 4.2.1.) The vertex shader should write to those that are required for the desired fixed functionality fragment processing (if no fragment shader is to be used), or to those required by the corresponding fragment shader.

varying vec4  gl_FrontColor;
varying vec4  gl_BackColor;
varying vec4  gl_FrontSecondaryColor;
varying vec4  gl_BackSecondaryColor;
varying vec4  gl_TexCoord[]; // at most will be gl_MaxTextureCoords
varying float gl_FogFragCoord;


Values written to gl_FrontColor, gl_BackColor, gl_FrontSecondaryColor, and gl_BackSecondaryColor are clamped to the range [0,1] by fixed functionality when they exit the vertex shader. These four values and the fixed functionality to determine whether the primitive is front facing or back facing are used to compute the two varying variables gl_Color and gl_SecondaryColor that are available in the fragment shader.

One or more sets of texture coordinates can be passed from a vertex shader with the gl_TexCoord array. This makes the coordinates available for fixed functionality processing if no fragment shader is present. Alternatively, they can be accessed from within a fragment shader with the gl_TexCoord varying variable. Index values used to reference this array must be constant integral expressions or this array must be redeclared with a size. The maximum size of the texture coordinate array is implementation specific and can be queried with glGet with the symbolic constant GL_MAX_TEXTURE_COORDS. Using array index values near 0 may aid the implementation in conserving resources consumed by varying variables.
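For instance, a vertex shader along these lines (a sketch, with arbitrary choices of texture units) redeclares gl_TexCoord with an explicit size and writes two sets of texture coordinates:

```glsl
// redeclare the built-in varying array with an explicit size of 2
varying vec4 gl_TexCoord[2];

void main()
{
    gl_TexCoord[0] = gl_MultiTexCoord0;                       // pass unit 0 through
    gl_TexCoord[1] = gl_TextureMatrix[1] * gl_MultiTexCoord1; // transform unit 1
    gl_Position    = ftransform();
}
```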

For gl_FogFragCoord, the value written should be the one required by the current fog coordinate source as set by a previous call to glFog. If the fog coordinate source is set to GL_FRAGMENT_DEPTH, the value written into gl_FogFragCoord should be the distance from the eye to the vertex in eye coordinates (where the eye position is assumed to be (0, 0, 0, 1)). If the fog coordinate source is set to GL_FOG_COORDINATE, the value written into gl_FogFragCoord should be the fog coordinate value that is to be interpolated across the primitive (i.e., the built-in attribute variable gl_FogCoord).
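Both cases might be written as follows in a vertex shader; which assignment is appropriate depends on the current fog coordinate source:

```glsl
void main()
{
    vec3 ecPos  = vec3(gl_ModelViewMatrix * gl_Vertex); // eye-coordinate position
    gl_Position = ftransform();

    // if the fog coordinate source is GL_FRAGMENT_DEPTH:
    gl_FogFragCoord = length(ecPos);  // distance from the eye at (0, 0, 0)

    // if the fog coordinate source is GL_FOG_COORDINATE, write instead:
    // gl_FogFragCoord = gl_FogCoord;
}
```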

4.1.5. User-Defined Varying Variables
Vertex shaders can also define varying variables to pass arbitrary values to the fragment shader. Such values are not clamped, but they are subjected to subsequent fixed functionality processing such as clipping and interpolation. There is an implementation-dependent limit to the number of floating-point values that can be interpolated. This limit can be queried with glGet with the symbolic constant GL_MAX_VARYING_FLOATS.
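A matched pair of shaders using a user-defined varying variable might look like this sketch (modelPos is a hypothetical name; the fragment shader's declaration must match the vertex shader's):

```glsl
// --- vertex shader ---
varying vec3 modelPos;  // hypothetical user-defined varying

void main()
{
    modelPos    = gl_Vertex.xyz;  // interpolated across the primitive
    gl_Position = ftransform();
}

// --- fragment shader ---
varying vec3 modelPos;  // must match the vertex shader declaration

void main()
{
    gl_FragColor = vec4(fract(modelPos), 1.0); // visualize the interpolated value
}
```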


The Fragment Processor
The fragment processor executes a fragment shader and replaces the texturing, color sum, and fog fragment operations. Specifically, when the fragment processor is executing a fragment shader, the following fixed functionality operations are affected:

The texture environments and texture functions are not applied.

Texture application is not performed.

Color sum is not applied.

Fog is not applied.

The behavior of the following operations does not change:

Texture image specification

Alternate texture image specification

Compressed texture image specification

Texture parameters (these behave as specified even when a texture is accessed from within a fragment shader)

Texture state and proxy state

Texture object specification

Texture comparison modes

The basic operation of the fragment processor was discussed in Section 2.3.2. As shown in Figure 2.3, data can come into the fragment shader through varying variables (built in or user defined), uniform variables (built in or user defined), special input variables, or texture maps. Data exits the fragment processor through special fragment shader output variables. Built-in constants (described in Section 4.4) are also accessible from within a fragment shader.

As with vertex shaders, the behavior of a fragment shader is undefined when the frame buffer is configured as a color index buffer rather than as an RGBA buffer (i.e., when OpenGL is in color index mode).

4.2.1. Varying Variables
The following built-in varying variables can be read in a fragment shader. The gl_Color and gl_SecondaryColor names are the same as built-in attribute variable names available in the vertex shader. However, the names do not conflict because attributes are visible only in vertex shaders and the following are only visible in fragment shaders:

varying vec4  gl_Color;
varying vec4  gl_SecondaryColor;
varying vec4  gl_TexCoord[]; // at most will be gl_MaxTextureCoords
varying float gl_FogFragCoord;


The values in gl_Color and gl_SecondaryColor are derived automatically from gl_FrontColor, gl_BackColor, gl_FrontSecondaryColor, and gl_BackSecondaryColor as part of fixed functionality processing that determines whether the fragment belongs to a front facing or a back facing primitive (see Section 4.5.1). If fixed functionality is used for vertex processing, gl_FogFragCoord is either the z-coordinate of the fragment in eye space or the interpolated value of the fog coordinate, depending on whether the fog coordinate source is currently set to GL_FRAGMENT_DEPTH or GL_FOG_COORDINATE. The gl_TexCoord[] array contains either the values of the interpolated gl_TexCoord[] values from a vertex shader or the texture coordinates from the fixed functionality vertex processing. No automatic division of texture coordinates by their q-component is performed.

When the fragment shader is processing fragments resulting from the rasterization of a pixel rectangle or bitmap, results are undefined if the fragment shader uses a varying variable that is not a built-in varying variable. In this case, the values for the built-in varying variables are supplied by the current raster position and the values contained in the pixel rectangle or bitmap because a vertex shader is not executed.

Fragment shaders also obtain input data from user-defined varying variables. Both built-in and user-defined varying variables contain the result of perspective-correct interpolation of values that are defined at each vertex.

4.2.2. Uniform Variables
As described in Section 4.1.2, OpenGL state is available to both vertex shaders and fragment shaders through built-in uniform variables that begin with the reserved prefix "gl_". The list of uniform variables that can be used to access OpenGL state is provided in Section 4.3.

User-defined uniform variables can be defined and used within fragment shaders in the same manner as they are used within vertex shaders. OpenGL API calls are provided to set their values (see Section 7.8 for complete details).

The implementation-dependent limit that defines the amount of storage available for uniform variables in a fragment shader can be queried with glGet with the symbolic constant GL_MAX_FRAGMENT_UNIFORM_COMPONENTS. This limit refers to the storage for the combination of built-in uniform variables and user-defined uniform variables that are actually used in a fragment shader. It is defined in terms of components, where a component is the size of a float.

4.2.3. Special Input Variables
The variable gl_FragCoord is available as a read-only variable from within fragment shaders, and it holds the window relative coordinates x, y, z, and 1/w for the fragment. This window position value is the result of the fixed functionality that interpolates primitives after vertex processing to generate fragments. The z component contains the depth value as modified by the polygon offset calculation. This built-in variable allows implementation of window position-dependent operations such as screen-door transparency (e.g., use discard on any fragment for which gl_FragCoord.x is odd or gl_FragCoord.y is odd, but not both).
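The screen-door example mentioned above might be sketched as the following fragment shader:

```glsl
// screen-door transparency sketch: discard fragments where exactly one
// of the window-space x and y coordinates is odd
void main()
{
    bool xOdd = mod(floor(gl_FragCoord.x), 2.0) == 1.0;
    bool yOdd = mod(floor(gl_FragCoord.y), 2.0) == 1.0;
    if (xOdd != yOdd)
        discard;
    gl_FragColor = gl_Color;
}
```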

The fragment shader also has access to the read-only built-in variable gl_FrontFacing whose value is true if the fragment belongs to a front-facing primitive, and false otherwise. This value can be used to select between two colors calculated by the vertex shader to emulate two-sided lighting, or it can be used to apply completely different shading effects to front and back surfaces. A fragment derives its facing direction from the primitive that generates the fragment. All fragments generated by primitives other than polygons, triangles, or quadrilaterals are considered to be front facing. For all other fragments (including fragments resulting from polygons drawn with a polygon mode of GL_POINT or GL_LINE), the determination is made by examination of the sign of the area of the primitive in window coordinates. This sign can possibly be reversed, depending on the last call to glFrontFace. If the sign is positive, the fragments are front facing; otherwise, they are back facing.
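Emulating two-sided lighting with gl_FrontFacing might be sketched as follows, assuming the vertex shader computes two hypothetical user-defined varying variables, one lit for each face:

```glsl
// fragment shader; frontLit and backLit are hypothetical varyings
// written by a corresponding vertex shader
varying vec4 frontLit;
varying vec4 backLit;

void main()
{
    gl_FragColor = gl_FrontFacing ? frontLit : backLit;
}
```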

These special input variables have global scope and can be referenced only from within a fragment shader. They are intrinsically declared with the following types:

vec4 gl_FragCoord;
bool gl_FrontFacing;


4.2.4. Special Output Variables
The primary purpose of a fragment shader is to compute values that will ultimately be written into the frame buffer. Unless the keyword discard is encountered, the output of the fragment shader goes on to be processed by the fixed function operations at the back end of the OpenGL pipeline. Fragment shaders send their results on to the back end of the OpenGL pipeline by using the built-in variables gl_FragColor, gl_FragData, and gl_FragDepth. These built-in fragment shader variables have global scope, and they may be written more than once by a fragment shader. If they are written more than once, the last value assigned is the one used in the subsequent operations. They can also be read back after they have been written. Reading them before writing them results in undefined behavior.

The color value that is to be written into the frame buffer (assuming that it passes through the various back-end fragment processing stages unscathed) is computed by the fragment shader and stored in the built-in variable gl_FragColor. As it exits the fragment processor, each component of gl_FragColor is clamped to the range [0,1] and converted to fixed point with at least as many bits as are in the corresponding color component in the destination frame buffer. Most shaders compute a value for gl_FragColor, but it is not required that this value be computed by all fragment shaders. It is perfectly legal for a shader to compute values for gl_FragDepth or gl_FragData instead. The shader could also use the discard keyword to mark the fragment as one to be discarded rather than used to update the frame buffer. Note that if subsequent fixed functionality consumes fragment color and an execution of a fragment shader does not write a value to gl_FragColor, the behavior is undefined.

If depth buffering is enabled and a shader does not write gl_FragDepth, the fixed function value for depth is used as the fragment's depth value. Otherwise, writing to gl_FragDepth establishes the depth value for the fragment being processed. As it exits the fragment processor, this value is clamped to the range [0,1] and converted to fixed point with at least as many bits as are in the depth component in the destination frame buffer. Fragment shaders that write to gl_FragDepth should take care to write to it for every execution path through the shader. If it is written in one branch of a conditional statement but not the other, the depth value will be undefined for some execution paths.
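A conditional depth write done safely, covering both execution paths (the condition and offset are arbitrary illustrative values):

```glsl
void main()
{
    gl_FragColor = gl_Color;

    if (gl_Color.a < 0.5)                       // hypothetical condition
        gl_FragDepth = gl_FragCoord.z + 0.001;  // push these fragments back slightly
    else
        gl_FragDepth = gl_FragCoord.z;          // fixed functionality depth value
}
```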

The z component of gl_FragCoord contains the depth value resulting from the preceding fixed function processing. It contains the value that would be used for the fragment's depth if the shader contained no writes to gl_FragDepth. This component can be used to achieve an invariant result if a fragment shader conditionally computes gl_FragDepth but otherwise wants the fixed functionality fragment depth.

The values written to gl_FragColor and gl_FragDepth do not need to be clamped within a shader. The fixed functionality pipeline following the fragment processor clamps these values, if needed, to the range required by the buffer into which the fragment will be written.

gl_FragData is an array that can be assigned values that are written into one or more offscreen buffers. The size of this array is implementation dependent and can be queried with glGet with the symbolic constant GL_MAX_DRAW_BUFFERS. The offscreen buffers that are modified as a result of writing values into gl_FragData within a fragment shader are specified with glDrawBuffers. The value written into gl_FragData[0] updates the first buffer in the list specified in the call to glDrawBuffers, the value written into gl_FragData[1] updates the second buffer in the list, and so on. If subsequent fixed function processing consumes a value for gl_FragData[i] but this value is never written by the fragment shader, then the data consumed by the fixed function processing is undefined.
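A fragment shader writing to two draw buffers might be sketched as follows (the normal varying is hypothetical; which buffers receive the values depends on the preceding glDrawBuffers call):

```glsl
varying vec3 normal;  // hypothetical user-defined varying from the vertex shader

void main()
{
    gl_FragData[0] = gl_Color;                                 // first buffer in the list
    gl_FragData[1] = vec4(normalize(normal) * 0.5 + 0.5, 1.0); // second buffer: packed normal
}
```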

A fragment shader may assign values to gl_FragColor or gl_FragData, but not both. If a shader executes the discard keyword, the fragment is discarded, no update of the frame buffer contents is performed, and the values of gl_FragDepth, gl_FragData, and gl_FragColor become irrelevant.

The fragment shader output variables have global scope, can be referenced only from within a fragment shader, and are intrinsically declared with the following types:

vec4  gl_FragColor;
vec4  gl_FragData[gl_MaxDrawBuffers];
float gl_FragDepth;

Built-in Uniform Variables
OpenGL was designed as a state machine. It has a variety of state that can be set. At the time graphics primitives are provided to OpenGL for rendering, the current state settings affect how the graphics primitives are treated and ultimately how they are rendered into the frame buffer.

Some applications heavily utilize this aspect of OpenGL. Large amounts of application code might be dedicated to manipulating OpenGL state and providing an interface to allow the end user to change state to produce different rendering effects.

The OpenGL Shading Language makes it easy for these types of applications to take advantage of programmable graphics technology. It contains a variety of built-in uniform variables that allow a shader to access current OpenGL state. In this way, an application can continue to use OpenGL as a state management machine and still provide shaders that combine that state in ways that aren't possible with the OpenGL fixed functionality path. Because they are defined as uniform variables, shaders are allowed to read from these built-in variables but not to write to them.


Built-in uniform variables
//
// Matrix state
//
uniform mat4  gl_ModelViewMatrix;
uniform mat4  gl_ProjectionMatrix;
uniform mat4  gl_ModelViewProjectionMatrix;
uniform mat4  gl_TextureMatrix[gl_MaxTextureCoords];

//
// Derived matrix state that provides inverse and transposed versions
// of the matrices above. Poorly conditioned matrices may result
// in unpredictable values in their inverse forms.
//
uniform mat3  gl_NormalMatrix; // transpose of the inverse of the upper
                               // leftmost 3x3 of gl_ModelViewMatrix

uniform mat4  gl_ModelViewMatrixInverse;
uniform mat4  gl_ProjectionMatrixInverse;
uniform mat4  gl_ModelViewProjectionMatrixInverse;
uniform mat4  gl_TextureMatrixInverse[gl_MaxTextureCoords];

uniform mat4  gl_ModelViewMatrixTranspose;
uniform mat4  gl_ProjectionMatrixTranspose;
uniform mat4  gl_ModelViewProjectionMatrixTranspose;
uniform mat4  gl_TextureMatrixTranspose[gl_MaxTextureCoords];

uniform mat4  gl_ModelViewMatrixInverseTranspose;
uniform mat4  gl_ProjectionMatrixInverseTranspose;
uniform mat4  gl_ModelViewProjectionMatrixInverseTranspose;
uniform mat4  gl_TextureMatrixInverseTranspose[gl_MaxTextureCoords];

//
// Normal scaling
//
uniform float gl_NormalScale;

//
// Depth range in window coordinates
//
struct gl_DepthRangeParameters
{
    float near;        // n
    float far;         // f
    float diff;        // f - n
};
uniform gl_DepthRangeParameters gl_DepthRange;

//
// Clip planes
//
uniform vec4  gl_ClipPlane[gl_MaxClipPlanes];

//
// Point Size
//
struct gl_PointParameters
{
    float size;
    float sizeMin;
    float sizeMax;
    float fadeThresholdSize;
    float distanceConstantAttenuation;
    float distanceLinearAttenuation;
    float distanceQuadraticAttenuation;
};

uniform gl_PointParameters gl_Point;

//
// Material State
//
struct gl_MaterialParameters
{
    vec4 emission;     // Ecm
    vec4 ambient;      // Acm
    vec4 diffuse;      // Dcm
    vec4 specular;     // Scm
    float shininess;   // Srm
};
uniform gl_MaterialParameters  gl_FrontMaterial;
uniform gl_MaterialParameters  gl_BackMaterial;
//
// Light State
//

struct gl_LightSourceParameters
{
    vec4  ambient;              // Acli
    vec4  diffuse;              // Dcli
    vec4  specular;             // Scli
    vec4  position;             // Ppli
    vec4  halfVector;           // Derived: Hi
    vec3  spotDirection;        // Sdli
    float spotExponent;         // Srli
    float spotCutoff;           // Crli
                                // (range: [0.0,90.0], 180.0)
    float spotCosCutoff;        // Derived: cos(Crli)
                                // (range: [1.0,0.0],-1.0)
    float constantAttenuation;  // K0
    float linearAttenuation;    // K1
    float quadraticAttenuation; // K2
};

uniform gl_LightSourceParameters   gl_LightSource[gl_MaxLights];

struct gl_LightModelParameters
{
    vec4 ambient;        // Acs
};

uniform gl_LightModelParameters gl_LightModel;

//
// Derived state from products of light and material.
//

struct gl_LightModelProducts
{
    vec4 sceneColor;      // Derived. Ecm + Acm * Acs
};

uniform gl_LightModelProducts gl_FrontLightModelProduct;
uniform gl_LightModelProducts gl_BackLightModelProduct;

struct gl_LightProducts
{
    vec4 ambient;         // Acm * Acli
    vec4 diffuse;         // Dcm * Dcli
    vec4 specular;        // Scm * Scli
};
uniform gl_LightProducts gl_FrontLightProduct[gl_MaxLights];
uniform gl_LightProducts gl_BackLightProduct[gl_MaxLights];

//
// Texture Environment and Generation
//
uniform vec4  gl_TextureEnvColor[gl_MaxTextureUnits];
uniform vec4  gl_EyePlaneS[gl_MaxTextureCoords];
uniform vec4  gl_EyePlaneT[gl_MaxTextureCoords];
uniform vec4  gl_EyePlaneR[gl_MaxTextureCoords];
uniform vec4  gl_EyePlaneQ[gl_MaxTextureCoords];
uniform vec4  gl_ObjectPlaneS[gl_MaxTextureCoords];
uniform vec4  gl_ObjectPlaneT[gl_MaxTextureCoords];
uniform vec4  gl_ObjectPlaneR[gl_MaxTextureCoords];
uniform vec4  gl_ObjectPlaneQ[gl_MaxTextureCoords];

//
// Fog
//
struct gl_FogParameters
{
    vec4 color;
    float density;
    float start;
    float end;
    float scale;   // 1.0 / (gl_Fog.end - gl_Fog.start)
};

uniform gl_FogParameters gl_Fog;




As you can see, these built-in uniform variables have been defined so as to take advantage of the language features whenever possible. Structures are used as containers to group a collection of parameters such as depth range parameters, material parameters, light source parameters, and fog parameters. Arrays define clip planes and light sources. Defining these uniform variables in this way improves code readability and allows shaders to take advantage of language capabilities like looping.

The list of built-in uniform variables also includes some derived state. These state values are things that aren't passed in directly by the application but are derived by the OpenGL implementation from values that are passed. For various reasons, it's convenient to have the OpenGL implementation compute these derived values and allow shaders to access them. The normal matrix (gl_NormalMatrix) is an example of this. It is simply the inverse transpose of the upper-left 3 x 3 subset of the modelview matrix. Because it is used so often, it wouldn't make sense to require shaders to compute this from the modelview matrix whenever it is needed. Instead, the OpenGL implementation is responsible for computing this whenever it is needed, and it is accessible to shaders through a built-in uniform variable.

Here are some examples of how these built-in uniform variables might be used. A vertex shader can transform the incoming vertex position (gl_Vertex) by OpenGL's current modelview-projection matrix (gl_ModelViewProjectionMatrix) with the following line of code:

gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;


Similarly, if a normal is needed, it is transformed by the current normal matrix:

tnorm = gl_NormalMatrix * gl_Normal;


(The transformed normal would typically also need to be normalized in order to be used in lighting computations. This normalization can be done with the built-in function normalize, which is discussed in Section 5.4.)

Here are some other examples of accessing built-in OpenGL state:

gl_FrontMaterial.emission  // emission value for front material
gl_LightSource[0].ambient  // ambient term of light source #0
gl_ClipPlane[3][0]         // first component of user clip plane #3
gl_Fog.color.rgb           // r, g, and b components of fog color
gl_TextureMatrix[1][2][3]  // 3rd column, 4th component of 2nd
                           // texture matrix


The mapping of OpenGL state values to these built-in variables should be straightforward, but if you need more details, see the OpenGL Shading Language Specification document.


Built-in Constants

OpenGL defines minimum values for each implementation-dependent constant. The minimum value informs application writers of the lowest value that is permissible for a conforming OpenGL implementation. The minimum value for each of the built-in constants is shown here.

//
// Implementation dependent constants.  The values below
// are the minimum values allowed for these constants.
//
const int  gl_MaxLights = 8;
const int  gl_MaxClipPlanes = 6;
const int  gl_MaxTextureUnits = 2;
const int  gl_MaxTextureCoords = 2;
const int  gl_MaxVertexAttribs = 16;
const int  gl_MaxVertexUniformComponents = 512;
const int  gl_MaxVaryingFloats = 32;
const int  gl_MaxVertexTextureImageUnits = 0;
const int  gl_MaxTextureImageUnits = 2;
const int  gl_MaxFragmentUniformComponents = 64;
const int  gl_MaxCombinedTextureImageUnits = 2;
const int  gl_MaxDrawBuffers = 1;


These values can occasionally be useful within a shader. For instance, a shader might include a general-purpose lighting function that loops through the available OpenGL lights and adds contributions from each enabled light source. The loop can easily be set up with the built-in constant gl_MaxLights. More likely, however, an application will use OpenGL's glGet function to obtain these implementation-dependent constants and decide, based on those values, whether to even load a shader.
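Such a loop might be sketched as follows. This is a simplified diffuse-only accumulation that assumes positional lights and that the application arranges for unused light slots to contribute black:

```glsl
// diffuse-only sketch: n is the normalized eye-space normal,
// ecPos the eye-space vertex position
vec4 accumulateDiffuse(vec3 n, vec3 ecPos)
{
    vec4 color = gl_FrontLightModelProduct.sceneColor;
    for (int i = 0; i < gl_MaxLights; i++)
    {
        // assumes positional lights (position.w != 0)
        vec3 L = normalize(gl_LightSource[i].position.xyz - ecPos);
        color += gl_FrontLightProduct[i].diffuse * max(dot(n, L), 0.0);
    }
    return color;
}
```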

Interaction with OpenGL Fixed Functionality
This section offers a little more detail to programmers who are intimately familiar with OpenGL operations and need to know precisely how the programmable capabilities introduced by the OpenGL Shading Language interact with the rest of the OpenGL pipeline. This section is more suitable for seasoned OpenGL programmers than for OpenGL novices.

4.5.1. Two-Sided Color Mode
Vertex shaders can operate in two-sided color mode. Front and back colors can be computed by the vertex shader and written to the gl_FrontColor, gl_BackColor, gl_FrontSecondaryColor, and gl_BackSecondaryColor output variables. If two-sided color mode is enabled after vertex processing, OpenGL fixed functionality chooses which of the front or back colors to forward to the rest of the pipeline. Which side OpenGL picks depends on the primitive type being rendered and the sign of the area of the primitive in window coordinates (see the OpenGL specification for details). If two-sided color mode is disabled, OpenGL always selects the front color outputs. Two-sided color mode is enabled and disabled with glEnable or glDisable with the symbolic value GL_VERTEX_PROGRAM_TWO_SIDE.

The colors resulting from this front/back facing selection step are clamped to the range [0,1] and converted to a fixed-point representation before being interpolated across the primitive that is being rendered. This is normal OpenGL behavior. When higher precision and dynamic range colors are required, the application should use its own user-defined varying variables instead of the four built-in color varying variables. The front/back facing selection step is then skipped. However, a built-in variable available in the fragment shader (gl_FrontFacing) indicates whether the current fragment is the result of rasterizing a front or back facing primitive.

4.5.2. Point Size Mode
Vertex shaders can also operate in point size mode. A vertex shader can compute a point size in pixels and assign it to the built-in variable gl_PointSize. If point size mode is enabled, the point size is taken from this variable and used in the rasterization stage; otherwise, it is taken from the value set with the glPointSize command. If gl_PointSize is not written while vertex shader point size mode is enabled, the point size used in the rasterization stage is undefined. Vertex shader point size mode is enabled and disabled with glEnable or glDisable with the symbolic value GL_VERTEX_PROGRAM_POINT_SIZE.

This point size enable is convenient for the majority of applications that do not change the point size within a vertex shader. By default, this mode is disabled, so most vertex shaders for which point size doesn't matter need not write a value to gl_PointSize. The value set by calls to glPointSize is always used by the rasterization stage.

If the primitive is clipped and vertex shader point size mode is enabled, the point size values are also clipped in a manner analogous to color clipping. The potentially clipped point size is used by the fixed functionality part of the pipeline as the derived point size (the distance attenuated point size). Thus, if the application wants points farther away to be smaller, it should compute some kind of distance attenuation in the vertex shader and scale the point size accordingly. If vertex shader point size mode is disabled, the derived point size is taken directly from the value set with the glPointSize command and no distance attenuation is performed. The derived point size is then used, as usual, optionally to alpha-fade the point when multisampling is also enabled. Again, see the OpenGL specification for details.

Distance attenuation should be computed in a vertex shader and cannot be left to the fixed functionality distance attenuation algorithm. This fixed functionality algorithm computes distance attenuation as a function of the distance between the eye at (0, 0, 0, 1) and the vertex position, in eye coordinates. However, the vertex position computed in a vertex shader might not have anything to do with locations in eye coordinates. Therefore, when a vertex shader is active, this fixed functionality algorithm is skipped. A point's alpha-fade, on the other hand, can be computed correctly only with the knowledge of the primitive type. That information is not available to a vertex shader, because it executes before primitive assembly. Consider the case of rendering a triangle and having the back polygon mode set to GL_POINT and the front polygon mode to GL_FILL. The vertex shader should fade only the alpha if the vertex belongs to a back facing triangle. But it cannot do that because it does not know the primitive type.
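A vertex shader that mimics the fixed functionality attenuation formula with the gl_Point built-in uniform might look like this sketch:

```glsl
void main()
{
    vec4 ecPos  = gl_ModelViewMatrix * gl_Vertex;
    gl_Position = gl_ProjectionMatrix * ecPos;

    // distance attenuation mirroring the fixed functionality formula
    float d     = length(ecPos.xyz);
    float atten = inversesqrt(gl_Point.distanceConstantAttenuation +
                              gl_Point.distanceLinearAttenuation    * d +
                              gl_Point.distanceQuadraticAttenuation * d * d);

    gl_PointSize = clamp(gl_Point.size * atten,
                         gl_Point.sizeMin, gl_Point.sizeMax);
}
```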

4.5.3. Clipping
User clipping can be used in conjunction with a vertex shader. The user clip planes are specified as usual with the glClipPlane command. When specified, these clip planes are transformed by the inverse of the current modelview matrix. The vertices resulting from the execution of a vertex shader are evaluated against these transformed clip planes. The vertex shader must provide the position of the vertex in the same space as the user-defined clip planes (typically, eye space). It does that by writing this location to the output variable gl_ClipVertex. If gl_ClipVertex is not specified and user clipping is enabled, the results are undefined.

When a vertex shader that mimics OpenGL's fixed functionality is used, the vertex shader should compute the eye-coordinate position of the vertex and store it in gl_ClipVertex. For example,

gl_ClipVertex = gl_ModelViewMatrix * gl_Vertex;


When you want to do object space clipping instead, keep in mind that the clip planes are transformed with the inverse of the modelview matrix. For correct object clipping, the modelview matrix needs to be set to the identity matrix when the clip planes are specified.

After user clipping, vertices are clipped against the view volume, as usual. In this operation, the value specified by gl_Position (i.e., the homogeneous vertex position in clip space) is evaluated against the view volume.

4.5.4. Raster Position
If a vertex shader is active when glRasterPos is called, it processes the coordinates provided with the glRasterPos command just as if these coordinates were specified with a glVertex command. The vertex shader is responsible for outputting the values necessary to compute the current raster position data.

The OpenGL state for the current raster position consists of the following seven items:

Window coordinates computed from the value written to gl_Position. These coordinates are treated as if they belong to a point and passed to the clipping stage and then projected to window coordinates.

A valid bit indicating if this point was culled.

The raster distance, which is set to the vertex shader varying variable gl_FogFragCoord.

The raster color, which is set to either the vertex shader varying variable gl_FrontColor or gl_BackColor, depending on the front/back facing selection process.

The raster secondary color, which is set to either the vertex shader varying variable gl_FrontSecondaryColor or gl_BackSecondaryColor, depending on the front/back facing selection process.

One or more raster texture coordinates. These are set to the vertex shader varying variable array gl_TexCoord[].

The raster color index. Because the result of a vertex shader is undefined in color index mode, the raster color index is always set to 1.

If any of the outputs necessary to compute the first six items are not provided, the value(s) for the associated item is undefined.

4.5.5. Position Invariance
For multipass rendering, in which a vertex shader performs some passes and other passes use the fixed functionality pipeline, positional invariance is important. This means that both the vertex shader and fixed functionality compute the exact same vertex position in clip coordinates, given the same vertex position in object coordinates and the same modelview and projection matrices. Positional invariance can be achieved with the built-in function ftransform in a vertex shader as follows:

gl_Position = ftransform();


In general, the vertex shader code

gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;



does not result in positional invariance, because of possible compiler optimizations and potential underlying hardware differences.

4.5.6. Texturing
One of the major improvements to OpenGL made by the OpenGL Shading Language is in the area of texturing. For one thing, texturing operations can be performed in a vertex shader. But the fragment side of the pipeline has improved as well. A fragment shader can potentially have access to more texture image units than the fixed functionality pipeline does. This means that more texture lookups can be performed in a single rendering pass. And with programmability, the results of all those texture lookups can be combined in any way the shader writer sees fit.

The changes to the pipeline have resulted in some clarification to the language used to describe texturing capabilities in OpenGL. The term "texture unit" in OpenGL formerly specified more than one thing. It specified the number of texture coordinates that could be attached to a vertex (now called texture coordinate sets) as well as the number of hardware units that could be used simultaneously for accessing texture maps (now called texture image units). A texture coordinate set encompasses vertex texture coordinate attributes, as well as the texture matrix stack and texture generation state. Querying glGet with the symbolic constant GL_MAX_TEXTURE_UNITS returns a single number that indicates the quantity of both of these items.

For the implementations that support the OpenGL Shading Language, these two things might actually have different values, so the number of available texture coordinate sets is now decoupled from the maximum number of texture image units. Typically, the number of available texture coordinate sets is less than the available texture image units. This should not prove to be a limitation because new texture coordinates can easily be derived from the texture coordinate attributes passed in or the coordinates can be retrieved from a texture map.

Five different limits related to texture mapping should be taken into account.

For a vertex shader, the maximum number of available texture image units is given by GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS.

For a fragment shader, the maximum number of available texture image units is given by GL_MAX_TEXTURE_IMAGE_UNITS.

The combined number of texture image units used in the vertex and the fragment processing parts of OpenGL (either a fragment shader or fixed function) cannot exceed the limit GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS. If a vertex shader and the fragment processing part of OpenGL both use the same texture image unit, that counts as two units against this limit. This rule exists because an OpenGL implementation might have only GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS actual texture image units implemented, and it might share those units between the vertex and fragment processing parts of OpenGL.

When a fragment shader is not active, OpenGL can still perform multitexturing. In this case, the maximum number of available multitexture stages is given by the state variable GL_MAX_TEXTURE_UNITS.

The number of supported texture coordinate sets is given by GL_MAX_TEXTURE_COORDS. This limit applies regardless of whether a vertex shader or fixed-function OpenGL performs vertex processing.
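These limits can be queried at run time with glGetIntegerv; a short sketch:

```c
/* Sketch: query the five texture-related implementation limits
   described above. Each glGetIntegerv call writes one integer. */
GLint vertexTexUnits, fragTexUnits, combinedTexUnits;
GLint fixedFuncUnits, texCoordSets;

glGetIntegerv(GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS, &vertexTexUnits);
glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &fragTexUnits);
glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &combinedTexUnits);
glGetIntegerv(GL_MAX_TEXTURE_UNITS, &fixedFuncUnits);
glGetIntegerv(GL_MAX_TEXTURE_COORDS, &texCoordSets);
```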

The fixed function hierarchy of texture-enables (GL_TEXTURE_CUBE_MAP, GL_TEXTURE_3D, GL_TEXTURE_2D, and GL_TEXTURE_1D) is ignored by shaders. For example, even if the texture target GL_TEXTURE_1D is enabled for a texture image unit, a sampler can be used to access the GL_TEXTURE_2D target for that texture image unit.

Samplers of type sampler1DShadow or sampler2DShadow must be used to access depth textures (textures with a base internal format of GL_DEPTH_COMPONENT). The texture comparison mode requires the shader to use one of the variants of the shadow1D or shadow2D built-in functions for accessing the texture (see Section 5.7). If these built-in functions are used to access a texture with a base internal format other than GL_DEPTH_COMPONENT, the result is undefined. Similarly, if a texture access function other than one of the shadow variants is used to access a depth texture, the result is undefined.

If a shader uses a sampler to reference a texture object that is not complete (e.g., one of the textures in a mipmap has a different internal format or border width than the others, see the OpenGL specification for a complete list), the texture image unit returns (R, G, B, A) = (0, 0, 0, 1).



C functions to obtain OpenGL and OpenGL Shading Language version information
void getGlVersion(int *major, int *minor)
{
    const char *verstr = (const char *) glGetString(GL_VERSION);
    if ((verstr == NULL) || (sscanf(verstr,"%d.%d", major, minor) != 2))
    {
        *major = *minor = 0;
        fprintf(stderr, "Invalid GL_VERSION format!!!\n");
    }
}

void getGlslVersion(int *major, int *minor)
{
    int gl_major, gl_minor;
    getGlVersion(&gl_major, &gl_minor);

    *major = *minor = 0;
    if(gl_major == 1)
    {
        /* GL v1.x can only provide GLSL v1.00 as an extension */
        const char *extstr = (const char *) glGetString(GL_EXTENSIONS);
        if ((extstr != NULL) &&
            (strstr(extstr, "GL_ARB_shading_language_100") != NULL))
        {
            *major = 1;
            *minor = 0;
        }
    }
    else if (gl_major >= 2)
    {
        /* GL v2.0 and greater must parse the version string */
        const char *verstr =
            (const char *) glGetString(GL_SHADING_LANGUAGE_VERSION);

        if((verstr == NULL) ||
            (sscanf(verstr, "%d.%d", major, minor) != 2))
        {
            *major = *minor = 0;
            fprintf(stderr,
                "Invalid GL_SHADING_LANGUAGE_VERSION format!!!\n");
        }
    }
}



Creating Shader Objects
The design of the OpenGL Shading Language API mimics the process of developing a C or C++ application. The first step is to create the source code. The source code must then be compiled, the various compiled modules must be linked, and finally the resulting code can be executed by the target processor.

To support the concept of a high-level shading language within OpenGL, the design must provide storage for source code, compiled code, and executable code. The solution to this problem is to define two new OpenGL-managed data structures, or objects. These objects provide the necessary storage, and operations on these objects have been defined to provide functionality for specifying source code and then compiling, linking, and executing the resulting code. When one of these objects is created, OpenGL returns a unique identifier for it. This identifier can be used to manipulate the object and to set or query the parameters of the object.

The first step toward utilizing programmable graphics hardware is to create a shader object. This creates an OpenGL-managed data structure that can store the shader's source code. The command to create a shader is

GLuint glCreateShader(GLenum shaderType)

Creates an empty shader object and returns a non-zero value by which it can be referenced. A shader object maintains the source code strings that define a shader. shaderType specifies the type of shader to be created. Two types of shaders are supported. A shader of type GL_VERTEX_SHADER is a shader that runs on the programmable vertex processor; it replaces the fixed functionality vertex processing in OpenGL. A shader of type GL_FRAGMENT_SHADER is a shader that runs on the programmable fragment processor; it replaces the fixed functionality fragment processing in OpenGL.

When created, a shader object's GL_SHADER_TYPE parameter is set to either GL_VERTEX_SHADER or GL_FRAGMENT_SHADER, depending on the value of shaderType.
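A minimal sketch of shader object creation:

```c
/* Sketch: create one shader object of each supported type.
   A return value of 0 indicates that creation failed. */
GLuint vs = glCreateShader(GL_VERTEX_SHADER);
GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);

if (vs == 0 || fs == 0)
    fprintf(stderr, "Failed to create shader objects\n");
```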





After a shader object is created, strings that define the shader's source code must be provided. The source code for a shader is provided as an array of strings. The command for defining a shader's source code is


void glShaderSource(GLuint shader,
                    GLsizei count,
                    const GLchar **string,
                    const GLint *length)


Sets the source code in shader to the source code in the array of strings specified by string. Any source code previously stored in the shader object is completely replaced. The number of strings in the array is specified by count. If length is NULL, then each string is assumed to be null terminated. If length is a value other than NULL, it points to an array containing a string length for each of the corresponding elements of string. Each element in the length array can contain the length of the corresponding string (the null character is not counted as part of the string length) or a value less than 0 to indicate that the string is null terminated. The source code strings are not scanned or parsed at this time; they are simply copied into the specified shader object. An application can modify or free its copy of the source code strings immediately after the function returns.





The multiple strings interface provides a number of benefits, including

A way to organize common pieces of source code (for instance, the varying variable definitions that are shared between a vertex shader and a fragment shader)

A way to share prefix strings (analogous to header files) between shaders

A way to share #define values to control the compilation process

A way to include user-defined or third-party library functions
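A sketch of the multiple-strings interface, assuming vs names a previously created shader object (both source strings here are hypothetical):

```c
/* Sketch: load shader source as two strings. "prefix" acts like a
   shared header; "body" is the shader proper. */
const GLchar *prefix = "#define MAX_LIGHTS 4\n";
const GLchar *body   = "void main() { gl_Position = ftransform(); }\n";
const GLchar *strings[2];

strings[0] = prefix;
strings[1] = body;

/* Passing NULL for length: each string is null terminated. */
glShaderSource(vs, 2, strings, NULL);
```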


Compiling Shader Objects
After the source code strings have been loaded into a shader object, the source code must be compiled to check its validity. The result of compilation remains as part of the shader object until another compilation operation occurs or until the shader object itself is deleted. The command to compile a shader object is

void glCompileShader(GLuint shader)

Compiles the source code strings that have been stored in the shader object specified by shader.

The compilation status is stored as part of the shader object's state. This value is set to GL_TRUE if the shader was compiled without errors and is ready for use, and GL_FALSE otherwise. It can be queried by calling glGetShader with arguments shader and GL_COMPILE_STATUS.

A shader will fail to compile if it is lexically, grammatically, or semantically incorrect. Whether or not the compilation was successful, information about the compilation can be obtained from the shader object's information log with glGetShaderInfoLog.

The OpenGL Shading Language has compilation rules that are slightly different depending on the type of shader being compiled, and so the compilation takes into consideration whether the shader is a vertex shader or a fragment shader.
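A sketch of compiling a shader and checking the result, assuming vs names a shader object whose source has already been loaded:

```c
/* Sketch: compile and test the compile status. */
GLint compiled = 0;

glCompileShader(vs);
glGetShaderiv(vs, GL_COMPILE_STATUS, &compiled);
if (compiled != GL_TRUE)
    fprintf(stderr, "Shader compilation failed\n");
```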

Linking and Using Shaders
Each shader object is compiled independently. To create a program, applications need a mechanism for specifying a list of shader objects to be linked. You can specify the list of shader objects to be linked by creating a program object and attaching to it all the shader objects needed to create the program.

To create a program object, use the following command:

GLuint glCreateProgram(void)

Creates an empty program object and returns a non-zero value by which it can be referenced. A program object is an object to which shader objects can be attached. This provides a mechanism to specify the shader objects that will be linked to create a program. It also provides a means for checking the compatibility between shaders that will be used to create a program (for instance, checking the compatibility between a vertex shader and a fragment shader). When no longer needed as part of a program object, shader objects can be detached.





After the program object has been defined, shader objects can be attached to it. Attaching simply means creating a reference to the shader object so that it will be included when an attempt to link a program object is made. This is the application's way of describing the recipe for creating a program. The command to attach a shader object to a program object is



void glAttachShader(GLuint program,
                    GLuint shader)


Attaches the shader object specified by shader to the program object specified by program. This indicates that shader will be included in link operations that are performed on program.





There is no inherent limit on the number of shader objects that can be attached to a program object. All operations that can be performed on a shader object are valid whether or not the shader object is attached to a program object. It is permissible to attach a shader object to a program object before source code has been loaded into the shader object or before the shader object has been compiled. It is also permissible to attach a shader object to more than one program object. In other words, glAttachShader simply specifies the set of shader objects to be linked.

To create a valid program, all the shader objects attached to a program object must be compiled and the program object itself must be linked. The link operation assigns locations for uniform variables, initializes user-defined uniform variables, resolves references between independently compiled shader objects, and checks to make sure the vertex and fragment shaders are compatible with one another. To link a program object, use the command

void glLinkProgram(GLuint program)

Links the program object specified by program. If any shader objects of type GL_VERTEX_SHADER are attached to program, they are used to create an executable that will run on the programmable vertex processor. If any shader objects of type GL_FRAGMENT_SHADER are attached to program, they are used to create an executable that will run on the programmable fragment processor.

The status of the link operation is stored as part of the program object's state. This value is set to GL_TRUE if the program object was linked without errors and is ready for use and set to GL_FALSE otherwise. It can be queried by calling glGetProgram with arguments program and GL_LINK_STATUS.

As a result of a successful link operation, all active user-defined uniform variables (see Section 7.8) belonging to program are initialized to 0, and each of the program object's active uniform variables is assigned a location that can be queried with glGetUniformLocation. Also, any active user-defined attribute variables (see Section 7.7) that have not been bound to a generic vertex attribute index are bound to one at this time.

If program contains shader objects of type GL_VERTEX_SHADER but it does not contain shader objects of type GL_FRAGMENT_SHADER, the vertex shader is linked to the implicit interface for fixed functionality fragment processing. Similarly, if program contains shader objects of type GL_FRAGMENT_SHADER but it does not contain shader objects of type GL_VERTEX_SHADER, the fragment shader is linked to the implicit interface for fixed functionality vertex processing.

glLinkProgram also installs the generated executables as part of the current rendering state if the link operation was successful and the specified program object is already currently in use as a result of a previous call to glUseProgram. If the program object currently in use is relinked unsuccessfully, its link status is set to GL_FALSE, but the previously generated executables and associated state remain part of the current state until a subsequent call to glUseProgram removes them. After they are removed, they cannot be made part of current state until the program object has been successfully relinked.





Linking of a program object can fail for a number of reasons.

The number of active attribute variables supported by the implementation has been exceeded.

The number of active uniform variables supported by the implementation has been exceeded.

The main function is missing for the vertex shader or the fragment shader.

A varying variable actually used in the fragment shader is not declared with the same type (or is not declared at all) in the vertex shader.

A reference to a function or variable name is unresolved.

A shared global is declared with two different types or two different initial values.

One or more of the attached shader objects has not been successfully compiled.

Binding a generic attribute matrix caused some rows of the matrix to fall outside the allowed maximum of GL_MAX_VERTEX_ATTRIBS.

Not enough contiguous vertex attribute slots could be found to bind attribute matrices.

The program object's information log is updated at the time of the link operation. If the link operation is successful, a program is generated. It may contain an executable for the vertex processor, an executable for the fragment processor, or both. Whether the link operation succeeds or fails, the information and executables from the previous link operation will be lost. After the link operation, applications are free to modify attached shader objects, compile attached shader objects, detach shader objects, and attach additional shader objects. None of these operations affect the information log or the program that is part of the program object until the next link operation on the program object.

Information about the link operation can be obtained by calling glGetProgramInfoLog (described in Section 7.6) with program. If the program object was linked successfully, the information log is either an empty string or contains information about the link operation. If the program object was not linked successfully, the information log contains information about any link errors that occurred, along with warning messages and any other information the linker chooses to provide.

When the link operation has completed successfully, the program it contains can be installed as part of the current rendering state. The command to install the program as part of the rendering state is

void glUseProgram(GLuint program)

Installs the program object specified by program as part of current rendering state.

A program object contains an executable that will run on the vertex processor if it contains one or more shader objects of type GL_VERTEX_SHADER that have been successfully compiled and linked. Similarly, a program object contains an executable that will run on the fragment processor if it contains one or more shader objects of type GL_FRAGMENT_SHADER that have been successfully compiled and linked.

If program contains shader objects of type GL_VERTEX_SHADER but it does not contain shader objects of type GL_FRAGMENT_SHADER, an executable is installed on the vertex processor but fixed functionality is used for fragment processing. Similarly, if program contains shader objects of type GL_FRAGMENT_SHADER but it does not contain shader objects of type GL_VERTEX_SHADER, an executable is installed on the fragment processor but fixed functionality is used for vertex processing. If program is 0, the programmable processors are disabled, and fixed functionality is used for both vertex and fragment processing.
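Putting the attach, link, and install steps together, assuming vs and fs name successfully compiled shader objects:

```c
/* Sketch: attach shaders to a new program object, link it, and
   install it as part of the current rendering state if linking
   succeeded. */
GLuint prog = glCreateProgram();
GLint linked = 0;

glAttachShader(prog, vs);
glAttachShader(prog, fs);
glLinkProgram(prog);
glGetProgramiv(prog, GL_LINK_STATUS, &linked);

if (linked == GL_TRUE)
    glUseProgram(prog);    /* install as part of current state */
else
    fprintf(stderr, "Program link failed\n");
```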


Cleaning Up
Objects should be deleted when they are no longer needed, and deletion can be accomplished with the following commands

void glDeleteShader(GLuint shader)

Frees the memory and invalidates the name associated with the shader object specified by shader. This command effectively undoes the effects of a call to glCreateShader.

If a shader object to be deleted is attached to a program object, it will be flagged for deletion, but it will not be deleted until it is no longer attached to any program object for any rendering context (i.e., it must be detached from wherever it was attached before it can be deleted). A value of 0 for shader is silently ignored.

To determine whether a shader object has been flagged for deletion, call glGetShader with arguments shader and GL_DELETE_STATUS.





void glDeleteProgram(GLuint program)

Frees the memory and invalidates the name associated with the program object specified by program. This command effectively undoes the effects of a call to glCreateProgram.

If a program object is in use as part of a current rendering state, it will be flagged for deletion, but it will not be deleted until it is no longer part of current state for any rendering context. If a program object to be deleted has shader objects attached to it, those shader objects are automatically detached but not deleted unless they have already been flagged for deletion by a previous call to glDeleteShader.

To determine whether a program object has been flagged for deletion, call glGetProgram with arguments program and GL_DELETE_STATUS.





When a shader object no longer needs to be attached to a program object, it can be detached with the command

void glDetachShader(GLuint program, GLuint shader)

Detaches the shader object specified by shader from the program object specified by program. This command undoes the effect of the command glAttachShader.

If shader has already been flagged for deletion by a call to glDeleteShader and it is not attached to any other program object, it is deleted after it has been detached.
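A typical teardown sequence might look like the following sketch, assuming prog, vs, and fs name the program object and its attached shaders:

```c
/* Sketch: return to fixed functionality, then detach and delete
   the shaders and the program. Objects still in use are only
   flagged for deletion, as described above. */
glUseProgram(0);           /* fixed functionality for both stages */
glDetachShader(prog, vs);
glDetachShader(prog, fs);
glDeleteShader(vs);
glDeleteShader(fs);
glDeleteProgram(prog);
```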


Query Functions
The OpenGL Shading Language API contains several functions for querying object state. To obtain information about a shader object, use the following command:


void glGetShaderiv(GLuint shader,
                   GLenum pname,
                   GLint *params)


Returns in params the value of a parameter for a specific shader object. This function returns information about a shader object.


Queriable shader object parameters. For each parameter, the value for pname is shown first, followed by the operation performed.

GL_SHADER_TYPE
 params returns a value of either GL_VERTEX_SHADER or GL_FRAGMENT_SHADER, depending on whether shader is the name of a vertex shader object or a fragment shader object.

GL_DELETE_STATUS
 params returns GL_TRUE if shader is currently flagged for deletion, and GL_FALSE otherwise.

GL_COMPILE_STATUS
 params returns GL_TRUE if the last compile operation on shader was successful, and GL_FALSE otherwise.

GL_INFO_LOG_LENGTH
 params returns the number of characters in the information log for shader, including the null termination character. If the object has no information log, a value of 0 is returned.

GL_SHADER_SOURCE_LENGTH
 params returns the length of the concatenation of the source strings that make up the shader source for shader, including the null termination character. If no source code exists, 0 is returned.










A similar function is provided for querying the state of a program object: the status of an operation on a program object, the number of attached shader objects, the number of active attributes (see Section 7.7), the number of active uniform variables (see Section 7.8), or the length of any of the strings maintained by a program object. The command to obtain information about a program object is


void glGetProgramiv(GLuint program,
                    GLenum pname,
                    GLint *params)


Returns in params the value of a parameter for a particular program object. This function returns information about a program object. Permitted parameters and their meanings are described in Table 7.2. In this table, the value for pname is shown on the left, and the operation performed is shown on the right.

Table 7.2. Queriable program object parameters

GL_DELETE_STATUS
 params returns GL_TRUE if program is currently flagged for deletion, and GL_FALSE otherwise.

GL_LINK_STATUS
 params returns GL_TRUE if the last link operation on program was successful, and GL_FALSE otherwise.

GL_VALIDATE_STATUS
 params returns GL_TRUE if the last validation operation on program was successful, and GL_FALSE otherwise.

GL_INFO_LOG_LENGTH
 params returns the number of characters in the information log for program, including the null termination character. If the object has no information log, a value of 0 is returned.

GL_ATTACHED_SHADERS
 params returns the number of shader objects attached to program.

GL_ACTIVE_ATTRIBUTES
 params returns the number of active attribute variables for program.

GL_ACTIVE_ATTRIBUTE_MAX_LENGTH
 params returns the length of the longest active attribute variable name for program, including the null termination character. If no active attribute variables exist, 0 is returned.

GL_ACTIVE_UNIFORMS
 params returns the number of active uniform variables for program.

GL_ACTIVE_UNIFORM_MAX_LENGTH
 params returns the length of the longest active uniform variable name for program, including the null termination character. If no active uniform variables exist, 0 is returned.










The command to obtain the current shader string from a shader object is


void glGetShaderSource(GLuint shader,
                       GLsizei bufSize,
                       GLsizei *length,
                       GLchar *source)


Returns a concatenation of the source code strings from the shader object specified by shader. The source code strings for a shader object are the result of a previous call to glShaderSource. The string returned by the function is null terminated.

glGetShaderSource returns in source as much of the source code string as it can, up to a maximum of bufSize characters. The number of characters actually returned, excluding the null termination character, is specified by length. If the length of the returned string is not required, a value of NULL can be passed in the length argument. The size of the buffer required to store the returned source code string can be obtained by calling glGetShader with the value GL_SHADER_SOURCE_LENGTH.
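A sketch of retrieving and printing the source, following the query-then-fetch pattern described above (the function name is hypothetical):

```c
/* Sketch: query the required buffer size first, then fetch the
   concatenated source string from the shader object. */
void printShaderSource(GLuint shader)
{
    GLint len = 0;
    GLchar *source;

    glGetShaderiv(shader, GL_SHADER_SOURCE_LENGTH, &len);
    if (len > 0)
    {
        source = (GLchar *) malloc(len);
        if (source != NULL)
        {
            glGetShaderSource(shader, len, NULL, source);
            printf("Shader source:\n%s\n", source);
            free(source);
        }
    }
}
```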





Information about the compilation operation is stored in the information log for a shader object. Similarly, information about the link and validation operations is stored in the information log for a program object. The information log is a string that contains diagnostic messages and warnings, and it may contain useful information even if the compilation or link operation was successful. The information log is typically useful only during application development, and an application should not expect different OpenGL implementations to produce identical descriptions of errors. To obtain the information log for a shader object, call


void glGetShaderInfoLog(GLuint shader,
                        GLsizei maxLength,
                        GLsizei *length,
                        GLchar *infoLog)


Returns the information log for the specified shader object. The information log for a shader object is modified when the shader is compiled. The string that is returned is null terminated.

glGetShaderInfoLog returns in infoLog as much of the information log as it can, up to a maximum of maxLength characters. The number of characters actually returned, excluding the null termination character, is specified by length. If the length of the returned string is not required, a value of NULL can be passed in the length argument. The size of the buffer required to store the returned information log can be obtained by calling glGetShader with the value GL_INFO_LOG_LENGTH.

The information log for a shader object is a string that may contain diagnostic messages, warning messages, and other information about the last compile operation. When a shader object is created, its information log is a string of length 0.





To obtain the information log for a program object, call


void glGetProgramInfoLog(GLuint program,
                         GLsizei maxLength,
                         GLsizei *length,
                         GLchar *infoLog)


Returns the information log for the specified program object. The information log for a program object is modified when the program object is linked or validated. The string that is returned is null terminated.

glGetProgramInfoLog returns in infoLog as much of the information log as it can, up to a maximum of maxLength characters. The number of characters actually returned, excluding the null termination character, is specified by length. If the length of the returned string is not required, a value of NULL can be passed in the length argument. The size of the buffer required to store the returned information log can be obtained by calling glGetProgram with the value GL_INFO_LOG_LENGTH.

The information log for a program object is an empty string, a string containing information about the last link operation, or a string containing information about the last validation operation. It may contain diagnostic messages, warning messages, and other information. When a program object is created, its information log is a string of length 0.





The way the API is set up, you first need to perform a query to find out the length of the information log (the number of characters in the string). After allocating a buffer of the appropriate size, you can call glGetShaderInfoLog or glGetProgramInfoLog to put the information log string into the allocated buffer. You can then print it if you want to do so. Listing 7.2 shows a C function that does all this for a shader object. The code for obtaining the information log for a program object is almost identical.

Listing 7.2. C function to print the information log for an object
void printShaderInfoLog(GLuint shader)
{
    int infologLen = 0;
    int charsWritten  = 0;
    GLchar *infoLog;

    glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &infologLen);
    printOpenGLError();  // Check for OpenGL errors
 
    if (infologLen > 0)
    {
        infoLog = (GLchar*) malloc(infologLen);
        if (infoLog == NULL)
        {
            printf("ERROR: Could not allocate InfoLog buffer\n");
            exit(1);
        }
        glGetShaderInfoLog(shader, infologLen, &charsWritten, infoLog);
        printf("InfoLog:\n%s\n\n", infoLog);
        free(infoLog);
    }
    printOpenGLError(); // Check for OpenGL errors
}
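The analogous function for a program object follows the same pattern and might look like this sketch:

```c
/* Sketch: print the information log for a program object, using
   glGetProgramiv and glGetProgramInfoLog in place of the shader
   object queries used in Listing 7.2. */
void printProgramInfoLog(GLuint program)
{
    int infologLen = 0;
    int charsWritten = 0;
    GLchar *infoLog;

    glGetProgramiv(program, GL_INFO_LOG_LENGTH, &infologLen);

    if (infologLen > 0)
    {
        infoLog = (GLchar*) malloc(infologLen);
        if (infoLog == NULL)
        {
            printf("ERROR: Could not allocate InfoLog buffer\n");
            exit(1);
        }
        glGetProgramInfoLog(program, infologLen, &charsWritten, infoLog);
        printf("InfoLog:\n%s\n\n", infoLog);
        free(infoLog);
    }
}
```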




You can obtain the program object that is currently in use by calling glGet with the symbolic constant GL_CURRENT_PROGRAM.

The command to query a list of shader objects attached to a particular program object is


void glGetAttachedShaders(GLuint program,
                          GLsizei maxCount,
                          GLsizei *count,
                          GLuint *shaders)


Returns the handles of the shader objects attached to program. It returns in shaders as many of the handles of these shader objects as it can, up to a maximum of maxCount. The number of handles actually returned is specified by count. If the number of handles actually returned is not required (for instance, if it has just been obtained with glGetProgram), a value of NULL may be passed for count. If no shader objects are attached to program, a value of 0 is returned in count. The actual number of attached shaders can be obtained by calling glGetProgram with the value GL_ATTACHED_SHADERS.
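A sketch of enumerating attached shaders, assuming prog names a program object; the fixed-size buffer is an assumption for brevity:

```c
/* Sketch: query the number of attached shaders first, then
   retrieve their handles. Since the count is already known,
   NULL is passed for the count output parameter. */
GLint count = 0;
GLuint shaders[16];          /* assumes at most 16 attached shaders */

glGetProgramiv(prog, GL_ATTACHED_SHADERS, &count);
if (count > 16)
    count = 16;
glGetAttachedShaders(prog, count, NULL, shaders);
```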





Two new functions have been added to determine whether an object is a shader object or a program object. These functions may be useful if you have to process an object (for instance, to print its information log) without knowing whether it is a valid shader or program object. These two functions are defined as

GLboolean glIsShader(GLuint shader)

Returns GL_TRUE if shader is the name of a shader object. If shader is zero or a non-zero value that is not the name of a shader object, glIsShader returns GL_FALSE.





GLboolean glIsProgram(GLuint program)

Returns GL_TRUE if program is the name of a program object. If program is zero or a non-zero value that is not the name of a program object, glIsProgram returns GL_FALSE.
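These two queries make it possible to handle an object of unknown type, for example when printing its information log. A sketch (printShaderInfoLog is the function shown earlier; printProgramInfoLog is assumed to be its program-object counterpart):

```c
void printInfoLog(GLuint object)
{
    if (glIsShader(object))
        printShaderInfoLog(object);
    else if (glIsProgram(object))
        printProgramInfoLog(object);  /* assumed analogous helper */
    else
        printf("%u is neither a shader nor a program object\n", object);
}
```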


Specifying Vertex Attributes
One way you can pass vertex data to OpenGL is by calling glBegin, followed by some sequence of glColor/glNormal/glVertex/etc. A call to glEnd terminates this method of specifying vertex data.

These calls continue to work in the OpenGL programmable environment. As before, a call to glVertex indicates that the data for an individual vertex is complete and should be processed. However, if a valid vertex shader has been installed with glUseProgram, the vertex data is processed by that vertex shader instead of by the usual fixed functionality of OpenGL. A vertex shader can use the following built-in variables to access the standard types of vertex data passed to OpenGL:

attribute vec4 gl_Color;
attribute vec4 gl_SecondaryColor;
attribute vec3 gl_Normal;
attribute vec4 gl_Vertex;
attribute vec4 gl_MultiTexCoord0;
attribute vec4 gl_MultiTexCoord1;
attribute vec4 gl_MultiTexCoord2;
. . .
attribute vec4 gl_FogCoord;


OpenGL's vertex-at-a-time interface is simple and powerful, but on today's systems it is definitely not the highest-performance way of transferring vertex data to the graphics accelerator. Whenever possible, applications should use the vertex array interface instead. This interface allows you to store vertex data in arrays and set pointers to those arrays. Instead of sending one vertex at a time to OpenGL, you can send a whole set of primitives at a time. With vertex buffer objects, it is even possible for vertex arrays to be stored in memory on the graphics board to extract maximum performance.

The vertex array interface also works the same way in the OpenGL programmable environment as it did previously. When a vertex array is sent to OpenGL, the vertex data in the vertex array is processed one vertex at a time, just like the vertex-at-a-time interface. If a vertex shader is active, each vertex is processed by the vertex shader rather than by the fixed functionality of OpenGL.

However, the brave new world of programmability means that applications no longer need to be limited to the standard attributes defined by OpenGL. There are many additional per-vertex attributes that applications might like to pass into a vertex shader. It is easy to imagine that applications will want to specify per-vertex data such as tangents, temperature, pressure, and who knows what else. How do we allow applications to pass nontraditional attributes to OpenGL and operate on them in vertex shaders?

The answer is that OpenGL provides a small number of generic locations for passing in vertex attributes. Each location is numbered and has room to store up to four floating-point components (i.e., it is a vec4). An implementation that supports 16 attribute locations will have them numbered from 0 to 15. An application can pass a vertex attribute into any of the generic numbered slots by using one of the following functions:

void glVertexAttrib{1|2|3|4}{s|f|d}(GLuint index, TYPE v)
void glVertexAttrib{1|2|3}{s|f|d}v(GLuint index, const TYPE *v)
void glVertexAttrib4{b|s|i|f|d|ub|us|ui}v(GLuint index, const TYPE *v)

Sets the generic vertex attribute specified by index to the value specified by v. This command can have up to three suffixes that differentiate variations of the parameters accepted. The first suffix can be 1, 2, 3, or 4 to specify whether v contains 1, 2, 3, or 4 components. If the second and third components are not provided, they are assumed to be 0, and if the fourth component is not provided, it is assumed to be 1. The second suffix indicates the data type of v and may specify byte (b), short (s), int (i), float (f), double (d), unsigned byte (ub), unsigned short (us), or unsigned int (ui). The third suffix is an optional v meaning that v is a pointer to an array of values of the specified data type.





This set of commands has a certain set of rules for converting data to the floating-point internal representation specified by OpenGL. Floats and doubles are mapped into OpenGL internal floating-point values as you would expect, and integer values are converted to floats by a decimal point added to the right of the value provided. Thus, a value of 27 for a byte, int, short, unsigned byte, unsigned int, or unsigned short becomes a value of 27.0 for computation within OpenGL.

Another set of entry points supports the passing of normalized values as generic vertex attributes:

void glVertexAttrib4Nub(GLuint index, TYPE v)
void glVertexAttrib4N{b|s|i|f|d|ub|us|ui}v(GLuint index, const TYPE *v)

Sets the generic vertex attribute specified by index to the normalized value specified by v. In addition to N (to indicate normalized values), this command can have two suffixes that differentiate variations of the parameters accepted. The first suffix indicates the data type of v and specifies byte (b), short (s), int (i), float (f), double (d), unsigned byte (ub), unsigned short (us), or unsigned int (ui). The second suffix is an optional v meaning that v is a pointer to an array of values of the specified data type.





N in a command name indicates that, for data types other than float or double, the arguments will be linearly mapped to a normalized range in the same way as data provided to the integer variants of glColor or glNormal; that is, for signed integer variants of the functions, the most positive representable value maps to 1.0, and the most negative representable value maps to -1.0. For the unsigned integer variants, the largest representable value maps to 1.0, and the smallest representable value maps to 0.

Attribute variables are allowed to be of type mat2, mat3, or mat4. Attributes of these types can be loaded with the glVertexAttrib entry points. Matrices must be loaded into successive generic attribute slots in column major order, with one column of the matrix in each generic attribute slot. Thus, to load a mat4 attribute, you would load the first column in generic attribute slot i, the second in slot i + 1, the third in slot i + 2, and the fourth in slot i + 3.
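Loading a mat4 attribute could therefore be sketched as follows (the starting slot and the identity matrix data are placeholders):

```c
GLuint i = 4;                       /* hypothetical starting slot */
GLfloat m[16] = { 1, 0, 0, 0,       /* column 0 */
                  0, 1, 0, 0,       /* column 1 */
                  0, 0, 1, 0,       /* column 2 */
                  0, 0, 0, 1 };     /* column 3 */

for (int col = 0; col < 4; col++)
    glVertexAttrib4fv(i + col, &m[col * 4]);
```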

With one exception, generic vertex attributes are just that: generic. They can pass additional color values, tangents, binormals, depth values, or anything else. The exception is that the generic vertex attribute with index 0 indicates the completion of a vertex just like a call to glVertex.

A glVertex2, glVertex3, or glVertex4 command is completely equivalent to the corresponding glVertexAttrib command with an index argument of 0. There are no current values for generic vertex attribute 0 (an error is generated if you attempt to query its current value). This is the only generic vertex attribute with this property; calls to set other standard vertex attributes can be freely mixed with calls to set any of the other generic vertex attributes. You are also free to mix calls to glVertex and glVertexAttrib with index 0.

The vertex array API has been similarly extended to allow generic vertex attributes to be specified as vertex arrays. The following call establishes the vertex array pointer for a generic vertex attribute:


void glVertexAttribPointer(GLuint index,
                           GLint size,
                           GLenum type,
                           GLboolean normalized,
                           GLsizei stride,
                           const GLvoid *pointer)


Specifies the location and data format of an array of generic vertex attribute values to use when rendering. The generic vertex attribute array to be specified is indicated by index. size specifies the number of components per attribute and must be 1, 2, 3, or 4. type specifies the data type of each component (GL_BYTE, GL_UNSIGNED_BYTE, GL_SHORT, GL_UNSIGNED_SHORT, GL_INT, GL_UNSIGNED_INT, GL_FLOAT, or GL_DOUBLE). stride specifies the byte stride from one attribute to the next, allowing attribute values to be intermixed with other attribute values or stored in a separate array. A value of 0 for stride means that the values are stored sequentially in memory with no gaps between successive elements. If set to GL_TRUE, normalized specifies that values stored in an integer format are to be mapped to the range [-1.0,1.0] (for signed values) or [0.0,1.0] (for unsigned values) when they are accessed and converted to floating point. Otherwise, values are converted to floats directly without normalization. pointer is the memory address of the first generic vertex attribute in the vertex array.





After the vertex array information has been specified for a generic vertex attribute array, the array needs to be enabled. When enabled, the generic vertex attribute data in the specified array is provided along with other enabled vertex array data when vertex array drawing commands such as glDrawArrays are called. To enable or disable a generic vertex attribute array, use the commands

void glEnableVertexAttribArray(GLuint index)
void glDisableVertexAttribArray(GLuint index)

Enables or disables the generic vertex attribute array specified by index. By default, all client-side capabilities are disabled, including all generic vertex attribute arrays. If enabled, the values in the generic vertex attribute array are accessed and used for rendering when calls are made to vertex array commands such as glArrayElement, glDrawArrays, glDrawElements, glDrawRangeElements, glMultiDrawArrays, or glMultiDrawElements.
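A minimal sketch of the whole sequence, feeding a hypothetical per-vertex temperature through generic attribute slot 1 (position arrays and other setup omitted; requires a valid GL context):

```c
#define TEMP_INDEX 1                         /* hypothetical slot */

GLfloat temps[3] = { 10.0f, 12.5f, 15.0f };  /* one float per vertex */

glVertexAttribPointer(TEMP_INDEX, 1, GL_FLOAT, GL_FALSE, 0, temps);
glEnableVertexAttribArray(TEMP_INDEX);
glDrawArrays(GL_TRIANGLES, 0, 3);
glDisableVertexAttribArray(TEMP_INDEX);
```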





This solves the question of how generic vertex data is passed into OpenGL, but how do we access that data from within a vertex shader? We don't want to refer to these numbered locations in our shader, because this approach is not very descriptive and is prone to errors. The OpenGL Shading Language API provides two ways for associating generic vertex indices with vertex shader attribute variables.

The first way is to let the linker assign the bindings automatically. In this case, the application would need to query OpenGL after linking to determine the generic vertex indices that were assigned and then would use these indices when passing the attributes to OpenGL.

The second way is for the application to choose the index value of the generic vertex attribute to be used and explicitly bind it to a specific attribute variable in the vertex shader by using the following function before linking occurs:


void glBindAttribLocation(GLuint program,
                          GLuint index,
                          const GLchar *name)


Associates a user-defined attribute variable in the program object specified by program with a generic vertex attribute index. The name of the user-defined attribute variable is passed as a null terminated string in name. If name was bound previously, that information is lost. Thus, you cannot bind one user-defined attribute variable to multiple indices, but you can bind multiple user-defined attribute variables to the same index. The generic vertex attribute index to be bound to this variable is specified by index. When program is made part of current state, values provided through the generic vertex attribute index modify the value of the user-defined attribute variable specified by name.

If name refers to a matrix attribute variable, index refers to the first column of the matrix. Other matrix columns are then automatically bound to locations index+1 for a matrix of type mat2; index+1 and index+2 for a matrix of type mat3; and index+1, index+2, and index+3 for a matrix of type mat4.

Applications are not allowed to bind any of the standard OpenGL vertex attributes with this command because they are bound automatically when needed. Any attribute binding that occurs after the program object has been linked does not take effect until the next time the program object is linked.





glBindAttribLocation can be called before any vertex shader objects are attached to the specified program object. It is also permissible to bind an attribute variable name that is never used in a vertex shader to a generic attribute index.

Applications are allowed to bind more than one vertex shader attribute name to the same generic vertex attribute index. This is called ATTRIBUTE ALIASING, and it is allowed only if just one of the aliased attributes is active in the executable program or if no path through the shader consumes more than one attribute of a set of attributes aliased to the same location. Another way of saying this is that more than one attribute name may be bound to a generic attribute index if, in the end, only one name is used to access the generic attribute in the vertex shader. The compiler and linker are allowed to assume that no aliasing is done and are free to employ optimizations that work only in the absence of aliasing. OpenGL implementations are not required to do error checking to detect attribute aliasing. Because there is no way to bind standard attributes, it is not possible to alias generic attributes with conventional ones.

The binding between an attribute variable name and a generic attribute index can be specified at any time with glBindAttribLocation. Attribute bindings do not go into effect until glLinkProgram is called, so any attribute variables that need to be bound explicitly for a particular use of a shader should be bound before the link operation occurs. After a program object has been linked successfully, the index values for attribute variables remain fixed (and their values can be queried) until the next link command occurs. To query the attribute binding for a named vertex shader attribute variable, use glGetAttribLocation. It returns the binding that actually went into effect the last time glLinkProgram was called for the specified program object. Attribute bindings that have been specified since the last link operation are not returned by glGetAttribLocation.


GLint glGetAttribLocation(GLuint program,
                          const GLchar *name)


Queries the previously linked program object specified by program for the attribute variable specified by name and returns the index of the generic vertex attribute that is bound to that attribute variable. If name is a matrix attribute variable, the index of the first column of the matrix is returned. If the named attribute variable is not an active attribute in the specified program object or if name starts with the reserved prefix "gl_", a value of -1 is returned.





Using these functions, we can create a vertex shader that contains a user-defined attribute variable named Opacity that is used directly in the lighting calculations. We can decide that we want to pass per-vertex opacity values in generic attribute location 1 and set up the proper binding with the following line of code:

glBindAttribLocation(myProgram, 1, "Opacity");


Subsequently, we can call glVertexAttrib to pass an opacity value at every vertex:

glVertexAttrib1f(1, opacity);


The glVertexAttrib calls are all designed for use between glBegin and glEnd. As such, they offer replacements for the standard OpenGL calls such as glColor, glNormal, and so on. But as we have already pointed out, vertex arrays should be used if graphics performance is a concern.

As mentioned, each of the generic attribute locations has enough room for four floating-point components. Applications are permitted to store 1, 2, 3, or 4 components in each location. A vertex shader may access a single location by using a user-defined attribute variable that is a float, a vec2, a vec3, or a vec4. It may access two consecutive locations by using a user-defined attribute variable that is a mat2, three using a mat3, and four using a mat4.


The application can provide a different program object and specify different names and mappings for attribute variables in the vertex shader, and if no calls have been made to update the attribute values in the interim, the attribute variables in the new vertex shader get the values left behind by the previous one.

Attribute variables that can be accessed when a vertex shader is executed are called ACTIVE ATTRIBUTES. To obtain information about an active attribute, use the following command:




void glGetActiveAttrib(GLuint program,
                       GLuint index,
                       GLsizei bufSize,
                       GLsizei *length,
                       GLint *size,
                       GLenum *type,
                       GLchar *name)


Returns information about an active attribute variable in the program object specified by program. The number of active attributes in a program object can be obtained by calling glGetProgram with the value GL_ACTIVE_ATTRIBUTES. A value of 0 for index causes information about the first active attribute variable to be returned. Permissible values for index range from 0 to the number of active attributes minus 1.

A vertex shader can use built-in attribute variables, user-defined attribute variables, or both. Built-in attribute variables have a prefix of "gl_" and reference conventional OpenGL vertex attributes (e.g., gl_Vertex, gl_Normal, etc.; see Section 4.1.1 for a complete list.) User-defined attribute variables have arbitrary names and obtain their values through numbered generic vertex attributes. An attribute variable (either built-in or user-defined) is considered active if it is determined during the link operation that it can be accessed during program execution. Therefore, program should have previously been the target of a call to glLinkProgram, but it is not necessary for it to have been linked successfully.

The size of the character buffer needed to store the longest attribute variable name in program can be obtained by calling glGetProgram with the value GL_ACTIVE_ATTRIBUTE_MAX_LENGTH. This value should be used to allocate a buffer of sufficient size to store the returned attribute name. The size of this character buffer is passed in bufSize, and a pointer to this character buffer is passed in name.

glGetActiveAttrib returns the name of the attribute variable indicated by index, storing it in the character buffer specified by name. The string returned is null terminated. The actual number of characters written into this buffer is returned in length, and this count does not include the null termination character. If the length of the returned string is not required, a value of NULL can be passed in the length argument.

The type argument returns a pointer to the attribute variable's data type. The symbolic constants GL_FLOAT, GL_FLOAT_VEC2, GL_FLOAT_VEC3, GL_FLOAT_VEC4, GL_FLOAT_MAT2, GL_FLOAT_MAT3, and GL_FLOAT_MAT4 may be returned. The size argument returns the size of the attribute in units of the type returned in type.

The list of active attribute variables may include both built-in attribute variables (which begin with the prefix "gl_") as well as user-defined attribute variable names.

This function returns as much information as it can about the specified active attribute variable. If no information is available, length is 0 and name is an empty string. This situation could occur if this function is called after a link operation that failed. If an error occurs, the return values length, size, type, and name are unmodified.





The glGetActiveAttrib command can be useful in an environment in which shader development occurs separately from application development. If some attribute-naming conventions are agreed to between the shader writers and the application developers, the latter could query the program object at runtime to determine the attributes that are actually needed and could pass those down. This approach can provide more flexibility in the shader development process.
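Such a runtime query might be sketched as follows (assumes a valid GL context and a program object that has been the target of a link operation):

```c
GLint numAttribs = 0, maxLen = 0;
glGetProgramiv(program, GL_ACTIVE_ATTRIBUTES, &numAttribs);
glGetProgramiv(program, GL_ACTIVE_ATTRIBUTE_MAX_LENGTH, &maxLen);

GLchar *name = (GLchar *) malloc(maxLen);
if (name != NULL)
{
    for (GLint i = 0; i < numAttribs; i++)
    {
        GLint size;
        GLenum type;
        glGetActiveAttrib(program, i, maxLen, NULL, &size, &type, name);
        printf("Attribute %d: %s, bound to index %d\n",
               i, name, glGetAttribLocation(program, name));
    }
    free(name);
}
```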

To query the state of a particular generic vertex attribute, call one of the following commands:


void glGetVertexAttribfv(GLuint index,
                         GLenum pname,
                         GLfloat *params)
void glGetVertexAttribiv(GLuint index,
                         GLenum pname,
                         GLint *params)
void glGetVertexAttribdv(GLuint index,
                         GLenum pname,
                         GLdouble *params)


Returns in params the value of a generic vertex attribute parameter. The generic vertex attribute to be queried is specified by index, and the parameter to be queried is specified by pname. Parameters and return values are summarized in Table 7.3. All the parameters except GL_CURRENT_VERTEX_ATTRIB represent client-side state.

Table 7.3. Generic vertex attribute parameters

GL_VERTEX_ATTRIB_ARRAY_ENABLED
 params returns a single value that is non-zero (true) if the vertex attribute array for index is enabled and 0 (false) if it is disabled. The initial value is GL_FALSE.

GL_VERTEX_ATTRIB_ARRAY_SIZE
 params returns a single value, the size of the vertex attribute array for index. The size is the number of values for each element of the vertex attribute array, and it is 1, 2, 3, or 4. The initial value is 4.

GL_VERTEX_ATTRIB_ARRAY_STRIDE
 params returns a single value, the array stride (number of bytes between successive elements) for the vertex attribute array for index. A value of 0 indicates that the array elements are stored sequentially in memory. The initial value is 0.

GL_VERTEX_ATTRIB_ARRAY_TYPE
 params returns a single value, a symbolic constant indicating the array type for the vertex attribute array for index. Possible values are GL_BYTE, GL_UNSIGNED_BYTE, GL_SHORT, GL_UNSIGNED_SHORT, GL_INT, GL_UNSIGNED_INT, GL_FLOAT, and GL_DOUBLE. The initial value is GL_FLOAT.

GL_VERTEX_ATTRIB_ARRAY_NORMALIZED
 params returns a single value that is nonzero (true) if fixed-point data types for the vertex attribute array indicated by index are normalized when they are converted to floating point and 0 (false) otherwise. The initial value is GL_FALSE.

GL_CURRENT_VERTEX_ATTRIB
 params returns four values that represent the current value for the generic vertex attribute specified by index. Generic vertex attribute 0 is unique in that it has no current state, so an error is generated if index is 0. The initial value for all other generic vertex attributes is (0, 0, 0, 1).


void glGetVertexAttribPointerv(GLuint index,
                               GLenum pname,
                               GLvoid **pointer)


Returns pointer information. index is the generic vertex attribute to be queried, pname is a symbolic constant specifying the pointer to be returned, and pointer is a location in which to place the returned data. The only accepted value for pname is GL_VERTEX_ATTRIB_ARRAY_POINTER. This causes a single value to be returned in pointer: the address of the vertex attribute array for the generic vertex attribute specified by index.



Specifying Uniform Variables
As described in the previous section, attribute variables provide frequently modified data to the vertex shader. Less frequently changing data can be specified using uniform variables. Uniform variables are declared within a shader and can be loaded directly by the application. This lets applications provide any type of arbitrary data to a shader. Applications can modify these values as often as every primitive in order to modify the behavior of the shader (although performance may suffer if this is done). Typically, uniform variables are used to supply state that stays constant for many primitives.

The OpenGL Shading Language also defines a number of built-in variables that track OpenGL state. Applications can continue using OpenGL to manage state through existing OpenGL calls and can use these built-in uniform variables in custom shaders. Of course, if you want something that isn't already supported directly by OpenGL, it is a simple matter to define your own uniform variable and supply the value to your shader.

When a program object is made current, built-in uniform variables that track OpenGL state are initialized to the current value of that OpenGL state. Subsequent calls that modify an OpenGL state value cause the built-in uniform variable that tracks that state value to be updated as well.

The basic model for specifying uniform variables is different from the model for specifying attribute variables. As discussed in the preceding section, for attribute variables, the application can specify the attribute location before linking occurs. In contrast, the locations of uniform variables cannot be specified by the application. Instead, they are always determined by OpenGL at link time. As a result, applications always need to query the uniform location after linking occurs.

To update the value of a user-defined uniform variable, an application needs to determine its location and then specify its value. The locations of uniform variables are assigned at link time and do not change until the next link operation occurs. Each time linking occurs, the locations of uniform variables may change, and so the application must query them again before setting them. The locations of the user-defined uniform variables in a program object can be queried with the following command:



GLint glGetUniformLocation(GLuint program,
                           const GLchar *name)


Returns an integer that represents the location of a specific uniform variable within a program object. name must be a null terminated string that contains no white space. name must be an active uniform variable name in program that is not a structure, an array of structures, or a subcomponent of a vector or a matrix. This function returns -1 if name does not correspond to an active uniform variable in program or if name starts with the reserved prefix "gl_".

Uniform variables that are structures or arrays of structures can be queried with glGetUniformLocation for each field within the structure. The array element operator "[]" and the structure field operator "." can be used in name to select elements within an array or fields within a structure. The result of using these operators is not allowed to be another structure, an array of structures, or a subcomponent of a vector or a matrix. The location of the first element of a uniform variable array can be retrieved with the name of the array itself or with the name appended by "[0]".

The actual locations assigned to uniform variables are not known until the program object is linked successfully. After linking has occurred, the command glGetUniformLocation can obtain the location of a uniform variable. This location value can then be passed to glUniform to set the value of the uniform variable or to glGetUniform in order to query the current value of the uniform variable. After a program object has been linked successfully, the index values for uniform variables remain fixed until the next link command occurs. Uniform variable locations and values can only be queried after a link if the link was successful.





Loading of user-defined uniform values is only possible for the program object that is currently in use. All user-defined uniform variables are initialized to 0 when a program object is successfully linked. User-defined uniform values are part of the state of a program object. Their values can be modified only when the program object is part of current rendering state, but the values of uniform variables are preserved as the program object is swapped in and out of current state. The following commands load uniform variables into the program object that is currently in use:

void glUniform{1|2|3|4}{f|i}(GLint location, TYPE v)

Sets the user-defined uniform variable or uniform variable array specified by location to the value specified by v. The suffix 1, 2, 3, or 4 indicates whether v contains 1, 2, 3, or 4 components. This value should match the number of components in the data type of the specified uniform variable (e.g., 1 for float, int, bool; 2 for vec2, ivec2, bvec2, etc.). The suffix f indicates that floating-point values are being passed, and the suffix i indicates that integer values are being passed; this type should also match the data type of the specified uniform variable. The i variants of this function should be used to provide values for uniform variables defined as int, ivec2, ivec3, and ivec4, or arrays of these. The f variants should be used to provide values for uniform variables of type float, vec2, vec3, or vec4, or arrays of these. Either the i or the f variants can be used to provide values for uniform variables of type bool, bvec2, bvec3, and bvec4 or arrays of these. The uniform variable is set to false if the input value is 0 or 0.0f, and it is set to true otherwise.






void glUniform{1|2|3|4}{f|i}v(GLint location,
                              GLsizei count,
                              const TYPE *v)


Sets the user-defined uniform variable or uniform variable array specified by location to the values specified by v. These commands pass a count and a pointer to the values to be loaded into a uniform variable or a uniform variable array. A count of 1 should be used for modifying the value of a single uniform variable, and a count of 1 or greater can be used to modify an array. The number specified in the name of the command specifies the number of components for each element in v, and it should match the number of components in the data type of the specified uniform variable (e.g., 1 for float, int, bool; 2 for vec2, ivec2, bvec2, etc.). The v in the command name indicates that a pointer to a vector of values is being passed. The f and i suffixes are defined in the same way as for the nonvector variants of glUniform.

For uniform variable arrays, each element of the array is considered to be of the type indicated in the name of the command (e.g., glUniform3f or glUniform3fv can be used to load a uniform variable array of type vec3). The number of elements of the uniform variable array to be modified is specified by count.
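For example, a uniform variable array declared in the shader as uniform vec3 colors[3]; (a hypothetical name) could be loaded with a single call to the vector variant:

```c
/* Nine floats: three vec3 elements packed consecutively. */
GLfloat data[9] = { 1.0f, 0.0f, 0.0f,     /* colors[0] */
                    0.0f, 1.0f, 0.0f,     /* colors[1] */
                    0.0f, 0.0f, 1.0f };   /* colors[2] */
glUniform3fv(glGetUniformLocation(prog, "colors"), 3, data);
```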






void glUniformMatrix{2|3|4}fv(GLint location,
                              GLsizei count,
                              GLboolean transpose,
                              const GLfloat *v)


Sets the user-defined uniform matrix variable or uniform matrix array variable specified by location to the values specified by v. The number in the command name is interpreted as the dimensionality of the matrix. The number 2 indicates a 2 x 2 matrix (i.e., 4 values), the number 3 indicates a 3 x 3 matrix (i.e., 9 values), and the number 4 indicates a 4 x 4 matrix (i.e., 16 values). If transpose is GL_FALSE, each matrix is assumed to be supplied in column major order. If transpose is GL_TRUE, each matrix is assumed to be supplied in row major order. The count argument specifies the number of matrices to be passed. A count of 1 should be used for modifying the value of a single matrix, and a count greater than 1 can be used to modify an array of matrices.
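For instance, a mat4 uniform (here called "MVPMatrix", a hypothetical name) could be loaded from an array laid out in column major order, so transpose is GL_FALSE:

```c
/* Column major 4 x 4 translation matrix: the translation vector
   (2, 3, 4) occupies the last column, i.e., elements 12-14. */
GLfloat m[16] = { 1.0f, 0.0f, 0.0f, 0.0f,     /* column 0 */
                  0.0f, 1.0f, 0.0f, 0.0f,     /* column 1 */
                  0.0f, 0.0f, 1.0f, 0.0f,     /* column 2 */
                  2.0f, 3.0f, 4.0f, 1.0f };   /* column 3 */
glUniformMatrix4fv(glGetUniformLocation(prog, "MVPMatrix"),
                   1, GL_FALSE, m);
```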





glUniform1i and glUniform1iv are the only two functions that can be used to load uniform variables defined as sampler types (see Section 7.9). Attempting to load a sampler with any other function results in an error.

Errors can also be generated by glUniform for any of the following reasons:

If there is no current program object

If location is an invalid uniform variable location for the current program object

If the number of values specified by count would exceed the declared extent of the indicated uniform variable or uniform variable array

Except in the cases noted previously, if the type and size of the uniform variable as defined in the shader do not match the type and size specified in the name of the command used to load its value

In all of these cases, the indicated uniform variable will not be modified.

When the location of a user-defined uniform variable has been determined, the following command can be used to query its current value:


void glGetUniformfv(GLuint program,
                                      GLint location,
                                      GLfloat *params)
void glGetUniformiv(GLuint program,
                                      GLint location,
                                      GLint *params)


Return in params the value(s) of the specified uniform variable. The type of the uniform variable specified by location determines the number of values returned. If the uniform variable is defined in the shader as a bool, int, or float, a single value is returned. If it is defined as a vec2, ivec2, or bvec2, two values are returned. If it is defined as a vec3, ivec3, or bvec3, three values are returned, and so on. To query values stored in uniform variables declared as arrays, call glGetUniform for each element of the array. To query values stored in uniform variables declared as structures, call glGetUniform for each field in the structure. The values for uniform variables declared as a matrix are returned in column major order.

The locations assigned to uniform variables are not known until the program object is linked. After linking has occurred, the command glGetUniformLocation can obtain the location of a uniform variable. This location value can then be passed to glGetUniform to query the current value of the uniform variable. After a program object has been linked successfully, the index values for uniform variables remain fixed until the next link command occurs. The uniform variable values can only be queried after a link if the link was successful.
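As a sketch (the uniform name is hypothetical), querying a vec3 uniform after a successful link might look like this:

```c
/* Three values are returned because LightPosition is a vec3. */
GLfloat pos[3];
GLint loc = glGetUniformLocation(prog, "LightPosition");
glGetUniformfv(prog, loc, pos);
```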





The location of a uniform variable cannot be used for anything other than specifying or querying that particular uniform variable. Say you declare a uniform variable as a structure that has three fields in succession that are defined as floats. If you call glGetUniformLocation to determine that the first of those three floats is at location n, do not assume that the next one is at location n + 1. It is possible to query the location of the ith element in an array. That value can then be passed to glUniform to load one or more values into the array, starting at the ith element of the array. It is not possible to take i and add an integer N and use the result to try to modify element i + N in the array. The location of array element i + N should be queried specifically before any attempt to set its value. These location values do not necessarily represent real memory locations. Applications that assume otherwise will not work.

For example, consider the following structure defined within a shader:

uniform struct
{
    struct
    {
        float a;
        float b[10];
    } c[2];
    vec2 d;
} e;


and consider the API calls that attempt to determine locations within that structure:

loc1 = glGetUniformLocation(progObj, "e.d");         // is valid
loc2 = glGetUniformLocation(progObj, "e.c[0]");      // is not valid
loc3 = glGetUniformLocation(progObj, "e.c[0].b");    // is valid
loc4 = glGetUniformLocation(progObj, "e.c[0].b[2]"); // is valid


The location loc2 cannot be retrieved because e.c[0] references a structure.

Now consider the commands to set parts of the uniform variable:

glUniform2f(loc1, 1.0f, 2.0f);     // is valid
glUniform2i(loc1, 1, 2);           // is not valid
glUniform1f(loc1, 1.0f);           // is not valid
glUniform1fv(loc3, 10, floatPtr);  // is valid
glUniform1fv(loc4, 10, floatPtr);  // is not valid
glUniform1fv(loc4, 8, floatPtr);   // is valid


The second command in the preceding list is invalid because loc1 references a uniform variable of type vec2, not ivec2. The third command is invalid because loc1 references a vec2, not a float. The fifth command in the preceding list is invalid because it attempts to set values that will exceed the length of the array.

Uniform variables (either built in or user defined) that can be accessed when a shader is executed are called ACTIVE UNIFORMS. You can think of this as though the process of compiling and linking is capable of deactivating uniform variables that are declared but never used. This provides more flexibility in coding style: modular code can define lots of uniform variables, and those that can be determined to be unused are typically optimized away.

To obtain the list of active uniform variables from a program object, use glGetActiveUniform. This command can be used by an application to query the uniform variables in a program object and set up user interface elements to allow direct manipulation of all the user-defined uniform values.


void glGetActiveUniform(GLuint program,
                                              GLuint index,
                                              GLsizei bufSize,
                                              GLsizei *length,
                                              GLint *size,
                                              GLenum *type,
                                              GLchar *name)


Returns information about an active uniform variable in the program object specified by program. The number of active uniform variables can be obtained by calling glGetProgram with the value GL_ACTIVE_UNIFORMS. A value of 0 for index selects the first active uniform variable. Permissible values for index range from 0 to the number of active uniform variables minus 1.

Shaders may use built-in uniform variables, user-defined uniform variables, or both. Built-in uniform variables have a prefix of "gl_" and reference existing OpenGL state or values derived from such state (e.g., gl_Fog, gl_ModelViewMatrix, etc., see Section 4.3 for a complete list.) User-defined uniform variables have arbitrary names and obtain their values from the application through calls to glUniform. A uniform variable (either built in or user defined) is considered active if it is determined during the link operation that it can be accessed during program execution. Therefore, program should have previously been the target of a call to glLinkProgram, but it is not necessary for it to have been linked successfully.

The size of the character buffer required to store the longest uniform variable name in program can be obtained by calling glGetProgram with the value GL_ACTIVE_UNIFORM_MAX_LENGTH. This value should be used to allocate a buffer of sufficient size to store the returned uniform variable name. The size of this character buffer is passed in bufSize, and a pointer to this character buffer is passed in name.

glGetActiveUniform returns the name of the uniform variable indicated by index, storing it in the character buffer specified by name. The string returned is null terminated. The actual number of characters written into this buffer is returned in length, and this count does not include the null termination character. If the length of the returned string is not required, a value of NULL can be passed in the length argument.

The type argument returns a pointer to the uniform variable's data type. One of the following symbolic constants may be returned: GL_FLOAT, GL_FLOAT_VEC2, GL_FLOAT_VEC3, GL_FLOAT_VEC4, GL_INT, GL_INT_VEC2, GL_INT_VEC3, GL_INT_VEC4, GL_BOOL, GL_BOOL_VEC2, GL_BOOL_VEC3, GL_BOOL_VEC4, GL_FLOAT_MAT2, GL_FLOAT_MAT3, GL_FLOAT_MAT4, GL_SAMPLER_1D, GL_SAMPLER_2D, GL_SAMPLER_3D, GL_SAMPLER_CUBE, GL_SAMPLER_1D_SHADOW, or GL_SAMPLER_2D_SHADOW.

If one or more elements of an array are active, the name of the array is returned in name, the type is returned in type, and the size parameter returns the highest array element index used, plus one, as determined by the compiler and linker. Only one active uniform variable will be reported for a uniform array.

Uniform variables that are declared as structures or arrays of structures are not returned directly by this function. Instead, each of these uniform variables is reduced to its fundamental components containing the "." and "[]" operators such that each of the names is valid as an argument to glGetUniformLocation. Each of these reduced uniform variables is counted as one active uniform variable and is assigned an index. A valid name cannot be a structure, an array of structures, or a subcomponent of a vector or matrix.

The size of the uniform variable is returned in size. Uniform variables other than arrays have a size of 1. Structures and arrays of structures are reduced as described earlier, such that each of the names returned will be a data type in the earlier list. If this reduction results in an array, the size returned is as described for uniform arrays; otherwise, the size returned is 1.

The list of active uniform variables may include both built-in uniform variables (which begin with the prefix "gl_") as well as user-defined uniform variable names.

This function returns as much information as it can about the specified active uniform variable. If no information is available, length is 0, and name is an empty string. This situation could occur if this function is called after a link operation that failed. If an error occurs, the return values length, size, type, and name are unmodified.
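A typical enumeration loop, sketched below, queries the number of active uniforms and the longest uniform name, then retrieves each active uniform in turn:

```c
/* Sketch: list every active uniform in a linked program object. */
GLint numUniforms, maxLen, size, i;
GLenum type;
GLchar *name;

glGetProgramiv(prog, GL_ACTIVE_UNIFORMS, &numUniforms);
glGetProgramiv(prog, GL_ACTIVE_UNIFORM_MAX_LENGTH, &maxLen);

name = (GLchar *) malloc(maxLen);
for (i = 0; i < numUniforms; i++)
{
    glGetActiveUniform(prog, i, maxLen, NULL, &size, &type, name);
    printf("uniform %d: %s (size %d, type 0x%04X)\n", i, name, size, type);
}
free(name);
```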





Using glGetActiveUniform, the application developer can programmatically query the uniform variables actually used in a shader and automatically create a user interface that allows the end user to modify those uniform variables. If the shader writers follow a naming convention for uniform variables, the user interface can be even more specific. For instance, any uniform variable whose name ends in "Color" could be edited with a color selection tool. This function is also useful when mixing and matching a set of vertex and fragment shaders designed to work well together through a subset of known uniform variables. It can be much safer and less tedious to determine programmatically which uniform variables to send down than to hardcode all the combinations.




Samplers
glUniform1i and glUniform1iv load uniform variables defined as sampler types (i.e., uniform variables of type sampler1D, sampler2D, sampler3D, samplerCube, sampler1DShadow, or sampler2DShadow). They may be declared within either vertex shaders or fragment shaders.

The value contained in a sampler is used within a shader to access a particular texture map. The value loaded into the sampler by the application should be the number of the texture unit to be used to access the texture. For vertex shaders, this value should be less than the implementation-dependent constant GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS, which can be queried with glGet. For fragment shaders, this value should be less than the implementation-dependent constant GL_MAX_TEXTURE_IMAGE_UNITS.
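For example (the texture object and uniform names are hypothetical), to sample a 2D texture through texture unit 2:

```c
/* Bind the texture object to texture unit 2, then load the unit
   number (not the texture object name) into the sampler uniform. */
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, myTexture);
glUniform1i(glGetUniformLocation(prog, "BaseTexture"), 2);
```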

The suffix on the sampler type indicates the texture type to be accessed: 1D, 2D, 3D, cube map, 1D shadow, or 2D shadow. In OpenGL, a texture object of each of the first four texture types can be bound to a single texture unit, and this suffix allows the desired texture object to be chosen. A 1D shadow sampler is used to access the 1D texture when depth comparisons are enabled, and a 2D shadow sampler is used to access the 2D texture when depth comparisons are enabled. If two uniform variables of different sampler types contain the same value, an error is generated when the next rendering command is issued. Attempting to load a sampler with any command other than glUniform1i or glUniform1iv results in an error being generated.

From within a shader, samplers should be considered an opaque data type. The current API provides a way of specifying an integer representing the texture image unit to be used. In the future, the API may be extended to allow a sampler to refer directly to a texture object.

Samplers that can be accessed when a program is executed are called ACTIVE SAMPLERS. The link operation fails if it determines that the number of active samplers exceeds the maximum allowable limits. The number of active samplers permitted on the vertex processor is specified by GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS, the number of active samplers permitted on the fragment processor is specified by GL_MAX_TEXTURE_IMAGE_UNITS, and the number of active samplers permitted on both processors combined is specified by GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS.

Multiple Render Targets
Another feature added to OpenGL in version 2.0 was the ability to render into multiple buffers simultaneously. The OpenGL Shading Language makes provisions for this capability by including a fragment shader output variable defined as an array called gl_FragData. The size of this array is implementation dependent, but must be at least 1. The elements of this array are defined to be of type vec4.

With this capability, applications can develop fragment shaders that compute multiple values for each fragment and store them in offscreen memory. These values can be accessed during a future rendering pass. Among other things, this lets applications implement complex multipass algorithms and use the graphics hardware for general-purpose computation.

To set up OpenGL for rendering into multiple target buffers, use


void glDrawBuffers(GLsizei n,
                                     const GLenum *bufs)


Defines an array of buffers into which fragment color values or fragment data will be written. If no fragment shader is active, rendering operations generate only one fragment color per fragment and it is written into each of the buffers specified by bufs. If a fragment shader is active and it writes a value to the output variable gl_FragColor, then that value is written into each of the buffers specified by bufs. If a fragment shader is active and it writes a value to one or more elements of the output array variable gl_FragData[], then the value of gl_FragData[0] is written into the first buffer specified by bufs, the value of gl_FragData[1] is written into the second buffer specified by bufs, and so on up to gl_FragData[n-1]. The draw buffer used for gl_FragData[n] and beyond is implicitly set to be GL_NONE.



Buffer names for use with the glDrawBuffers call:

GL_NONE           The fragment color/data value is not written into any color buffer.
GL_FRONT_LEFT     The fragment color/data value is written into the front-left color buffer.
GL_FRONT_RIGHT    The fragment color/data value is written into the front-right color buffer.
GL_BACK_LEFT      The fragment color/data value is written into the back-left color buffer.
GL_BACK_RIGHT     The fragment color/data value is written into the back-right color buffer.
GL_AUXi           The fragment color/data value is written into auxiliary buffer i.


An error is generated if glDrawBuffers specifies a buffer that does not exist in the current GL context. If more than one buffer is selected for drawing, blending and logical operations are computed and applied independently for each element of gl_FragData and its corresponding buffer. Furthermore, the alpha value (i.e., the fourth component) of gl_FragData[0] is used to determine the result of the alpha test. Operations such as scissor, depth, and stencil tests (if enabled) may cause the entire fragment (including all of the values in the gl_FragData array) to be discarded without any updates to the framebuffer.
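A minimal sketch of setting up two render targets so that gl_FragData[0] and gl_FragData[1] go to separate buffers:

```c
/* gl_FragData[0] -> GL_AUX0, gl_FragData[1] -> GL_AUX1 */
GLenum bufs[2] = { GL_AUX0, GL_AUX1 };
glDrawBuffers(2, bufs);
```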



Development Aids
A situation that can be difficult to diagnose is one in which a program may fail to execute because of the value of a sampler variable. These variables can be changed anytime between linking and program execution. To ensure robust behavior, OpenGL implementations must do some runtime checking just before the shader is executed (i.e., when a rendering operation is about to occur). At this point, the only way to report an error is to set the OpenGL error flag, and this is not usually something that applications check at this performance-critical juncture.

To provide more information when these situations occur, the OpenGL Shading Language API defines a new function that can be called to perform this runtime check explicitly and provide diagnostic information.

void glValidateProgram(GLuint program)

Checks whether the executables contained in program can execute given the current OpenGL state. The information generated by the validation process is stored in program's information log. The validation information may consist of an empty string, or it may be a string containing information about how the current program object interacts with the rest of current OpenGL state. This function provides a way for OpenGL implementors to convey more information about why the current program is inefficient, suboptimal, failing to execute, and so on.

The status of the validation operation is stored as part of the program object's state. This value is set to GL_TRUE if the validation succeeded and GL_FALSE otherwise. It can be queried by calling glGetProgram with arguments program and GL_VALIDATE_STATUS. If validation is successful, program is guaranteed to execute given the current state. Otherwise, program is guaranteed to not execute.
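During development, the check might be performed as sketched below, reusing the printProgramInfoLog helper defined previously:

```c
/* Sketch: explicit runtime validation during development. */
GLint validated;
glValidateProgram(brickProg);
glGetProgramiv(brickProg, GL_VALIDATE_STATUS, &validated);
if (!validated)
    printProgramInfoLog(brickProg);   /* see why validation failed */
```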

This function is typically useful only during application development. The informational string stored in the information log is completely implementation-dependent. Therefore, an application should not expect different OpenGL implementations to produce identical information strings.

Because the operations described in this section can severely hinder performance, they should be used only during application development and removed before shipment of the production version of the application.





Implementation-Dependent API Values
Some of the features we've described in previous sections have implementation-dependent limits. All of the implementation-dependent values in the OpenGL Shading Language API are defined in the list that follows, and all of them can be queried with glGet.


GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS
 Defines the total number of hardware units that can be used to access texture maps from the vertex processor and the fragment processor combined. The minimum legal value is 2.

GL_MAX_DRAW_BUFFERS
 Defines the maximum number of buffers that can be simultaneously written into from within a fragment shader using the special output variable gl_FragData. This constant effectively defines the size of the gl_FragData array. The minimum legal value is 1.

GL_MAX_FRAGMENT_UNIFORM_COMPONENTS
 Defines the number of components (i.e., floating-point values) that are available for fragment shader uniform variables. The minimum legal value is 64.

GL_MAX_TEXTURE_COORDS
 Defines the number of texture coordinate sets that are available. The minimum legal value is 2.

GL_MAX_TEXTURE_IMAGE_UNITS
 Defines the total number of hardware units that can be used to access texture maps from the fragment processor. The minimum legal value is 2.

GL_MAX_VARYING_FLOATS
 Defines the number of floating-point variables available for varying variables. The minimum legal value is 32.

GL_MAX_VERTEX_ATTRIBS
 Defines the number of active vertex attributes that are available. The minimum legal value is 16.

GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS
 Defines the number of hardware units that can be used to access texture maps from the vertex processor. The minimum legal value is 0.

GL_MAX_VERTEX_UNIFORM_COMPONENTS
 Defines the number of components (i.e., floating-point values) that are available for vertex shader uniform variables. The minimum legal value is 512.
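All of these limits can be retrieved with glGetIntegerv; for example:

```c
/* Query a few implementation-dependent limits. */
GLint maxAttribs, maxDrawBufs, maxVertUniforms;
glGetIntegerv(GL_MAX_VERTEX_ATTRIBS, &maxAttribs);
glGetIntegerv(GL_MAX_DRAW_BUFFERS, &maxDrawBufs);
glGetIntegerv(GL_MAX_VERTEX_UNIFORM_COMPONENTS, &maxVertUniforms);
printf("attribs: %d, draw buffers: %d, vertex uniform components: %d\n",
       maxAttribs, maxDrawBufs, maxVertUniforms);
```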

Application Code for Brick Shaders
Each shader is going to be a little bit different. Each vertex shader may use a different set of attribute variables or different uniform variables, attribute variables may be bound to different generic vertex attribute index values, and so on. One of the demo programs whose source code is available for download from the 3Dlabs Web site is called ogl2brick. It is a small, clear example of how to create and use a vertex shader and a fragment shader. The code in ogl2brick was derived from an earlier demo program called ogl2demo, written primarily by Barthold Lichtenbelt with contributions from several others.



The following helper function looks up the location of a uniform variable by name and prints a diagnostic message if the name is not found. We use it later to set the values of uniform variables.

GLint getUniLoc(GLuint program, const GLchar *name)
{
    GLint loc;
   
    loc = glGetUniformLocation(program, name);

    if (loc == -1)
        printf("No such uniform named \"%s\"\n", name);

    printOpenGLError();  // Check for OpenGL errors
    return loc;
}


Shaders are passed to OpenGL as strings. For our shader installation function, we assume that each of the shaders has been defined as a single string, and pointers to those strings are passed to the following function. This function does all the work to load, compile, link, and install our brick shaders. The function definition and local variables for this function are declared as follows:

int installBrickShaders(const GLchar *brickVertex,
                        const GLchar *brickFragment)
{
    GLuint brickVS, brickFS, brickProg;   // handles to objects
    GLint  vertCompiled, fragCompiled;    // status values
    GLint  linked;


The argument brickVertex contains a pointer to the string containing the source code for the brick vertex shader, and the argument brickFragment contains a pointer to the source code for the brick fragment shader. Next, we declare variables to refer to three OpenGL objects: a shader object that stores and compiles the brick vertex shader, a second shader object that stores and compiles the brick fragment shader, and a program object to which the shader objects will be attached. Flags to indicate the status of the compile and link operations are defined next.

The first step is to create two empty shader objects, one for the vertex shader and one for the fragment shader:

brickVS = glCreateShader(GL_VERTEX_SHADER);
brickFS = glCreateShader(GL_FRAGMENT_SHADER);


Source code can be loaded into the shader objects after they have been created. The shader objects are empty, and we have a single null terminated string containing the source code for each shader, so we can call glShaderSource as follows:

glShaderSource(brickVS, 1, &brickVertex, NULL);
glShaderSource(brickFS, 1, &brickFragment, NULL);


The shaders are now ready to be compiled. For each shader, we call glCompileShader and then call glGetShader to see what transpired. glCompileShader sets the shader object's GL_COMPILE_STATUS parameter to GL_TRUE if it succeeded and GL_FALSE otherwise. Regardless of whether the compilation succeeded or failed, we print the information log for the shader. If the compilation was unsuccessful, this log will have information about the compilation errors. If the compilation was successful, this log may still have useful information that would help us improve the shader in some way. You would typically check the info log only during application development or after running a shader for the first time on a new platform. The function exits if the compilation of either shader fails.

glCompileShader(brickVS);
printOpenGLError();  // Check for OpenGL errors
glGetShaderiv(brickVS, GL_COMPILE_STATUS, &vertCompiled);
printShaderInfoLog(brickVS);

glCompileShader(brickFS);
printOpenGLError();  // Check for OpenGL errors
glGetShaderiv(brickFS, GL_COMPILE_STATUS, &fragCompiled);
printShaderInfoLog(brickFS);

if (!vertCompiled || !fragCompiled)
    return 0;


This section of code uses the printShaderInfoLog function that we defined previously.

At this point, the shaders have been compiled successfully, and we're almost ready to try them out. First, the shader objects need to be attached to a program object so that they can be linked.

brickProg = glCreateProgram();
glAttachShader(brickProg, brickVS);
glAttachShader(brickProg, brickFS);


The program object is linked with glLinkProgram. Again, we look at the information log of the program object regardless of whether the link succeeded or failed. There may be useful information for us if we've never tried this shader before.

glLinkProgram(brickProg);
printOpenGLError();  // Check for OpenGL errors
glGetProgramiv(brickProg, GL_LINK_STATUS, &linked);
printProgramInfoLog(brickProg);

if (!linked)
    return 0;


If we make it to the end of this code, we have a valid program that can become part of current state simply by calling glUseProgram:

glUseProgram(brickProg);


Before returning from this function, we also want to initialize the values of the uniform variables used in the two shaders. To obtain the location that was assigned by the linker, we query the uniform variable by name, using the getUniLoc function defined previously. Then we use that location to immediately set the initial value of the uniform variable.

glUniform3f(getUniLoc(brickProg, "BrickColor"), 1.0, 0.3, 0.2);
glUniform3f(getUniLoc(brickProg, "MortarColor"), 0.85, 0.86, 0.84);
glUniform2f(getUniLoc(brickProg, "BrickSize"), 0.30, 0.15);
glUniform2f(getUniLoc(brickProg, "BrickPct"), 0.90, 0.85);
glUniform3f(getUniLoc(brickProg, "LightPosition"), 0.0, 0.0, 4.0);

return 1;
}


When this function returns, the application is ready to draw geometry that will be rendered with our brick shaders. The result of rendering some simple objects with this application code and the shaders described in Chapter 6 is shown in Figure 6.6. The complete C function is shown in Listing 7.3.

Listing 7.3. C function for installing brick shaders
int installBrickShaders(const GLchar *brickVertex,
                        const GLchar *brickFragment)
{
    GLuint brickVS, brickFS, brickProg;   // handles to objects
    GLint  vertCompiled, fragCompiled;    // status values
    GLint  linked;

// Create a vertex shader object and a fragment shader object

brickVS = glCreateShader(GL_VERTEX_SHADER);
brickFS = glCreateShader(GL_FRAGMENT_SHADER);

// Load source code strings into shaders

glShaderSource(brickVS, 1, &brickVertex, NULL);
glShaderSource(brickFS, 1, &brickFragment, NULL);

// Compile the brick vertex shader and print out
// the compiler log file.

glCompileShader(brickVS);
printOpenGLError();  // Check for OpenGL errors
glGetShaderiv(brickVS, GL_COMPILE_STATUS, &vertCompiled);
printShaderInfoLog(brickVS);

// Compile the brick fragment shader and print out
// the compiler log file.

glCompileShader(brickFS);
printOpenGLError();  // Check for OpenGL errors
glGetShaderiv(brickFS, GL_COMPILE_STATUS, &fragCompiled);
printShaderInfoLog(brickFS);

if (!vertCompiled || !fragCompiled)
    return 0;

// Create a program object and attach the two compiled shaders

brickProg = glCreateProgram();
glAttachShader(brickProg, brickVS);
glAttachShader(brickProg, brickFS);

// Link the program object and print out the info log

glLinkProgram(brickProg);
printOpenGLError();  // Check for OpenGL errors
glGetProgramiv(brickProg, GL_LINK_STATUS, &linked);
printProgramInfoLog(brickProg);

if (!linked)
    return 0;

// Install program object as part of current state

glUseProgram(brickProg);

// Set up initial uniform values

glUniform3f(getUniLoc(brickProg, "BrickColor"), 1.0, 0.3, 0.2);
glUniform3f(getUniLoc(brickProg, "MortarColor"), 0.85, 0.86, 0.84);
glUniform2f(getUniLoc(brickProg, "BrickSize"), 0.30, 0.15);
glUniform2f(getUniLoc(brickProg, "BrickPct"), 0.90, 0.85);
glUniform3f(getUniLoc(brickProg, "LightPosition"), 0.0, 0.0, 4.0);

return 1;
}
