A WebGL triangle
This blog post will cover how to draw the first shape in WebGL, a triangle.
We will briefly cover the graphics pipeline defined by WebGL. Afterwards we will define the vertices of a triangle and create buffers for them. Next we will cover the basics for shaders so we can enter vertices and a color into the pipeline. Lastly we will connect all the steps and draw the triangle.
The WebGL graphics pipeline
The WebGL graphics pipeline effectively consists of 2 steps we can fully write ourselves, and 4 steps that are performed almost automatically.
The 2 steps that we have to define ourselves are called shaders. The shaders we have to create are the vertex shader and the fragment shader. Unlike OpenGL, WebGL does not support the geometry shader, so that will be out of scope for now.
The 4 other steps happen automatically between and after the 2 shaders. For these steps we can only adjust some settings in WebGL with calls to alter their behavior, but unlike the shaders we cannot write their entire behavior ourselves. Below is a small diagram to show the order of the 6 steps in the WebGL graphics pipeline.
We will go through the steps very briefly.
The full explanation will follow naturally when we get to the point where we interact with a step in the pipeline or implement the shaders.
1. Vertex shader
2. Backface culling and clipping
3. Projection to screen space
4. Rasterization
5. Fragment shader
6. Blending and output
- The vertex shader is responsible for calculating and mapping the vertices we send in to coordinates.
- Clipping occurs after the coordinates are calculated. Everything that falls outside of the view space is clipped at this step.
- Backface culling removes all surfaces that are not facing the camera.
- In the projection to screen space step, the 3D vertices are mapped to 2D screen coordinates.
- The rasterization step finally translates the floating point coordinate values into pixels on the screen or canvas.
- The fragment shader is responsible for the colors of the pixels. Textures are applied in this step too.
- Lastly, the blending and output step has a few substeps known as the depth test, the alpha test and alpha blending. These are the last evaluations of the pixels before they get sent to the screen to be drawn. This step involves transparency, opacity and depth, which might cause pixels to be re-evaluated.
A triangle in normalized device coordinates
We will draw a triangle because it is the easiest shape to draw in modern graphics APIs.
This has to do with the fact that all more complex shapes are made up of smaller triangles.
If we want to show anything in the canvas at all, we need to know how WebGL understands coordinates.
The coordinates that WebGL uses to draw on the canvas are called Normalized Device Coordinates (NDC).
This is a small space where the x, y and z of the screen range from -1.0 to 1.0.
To map the triangle on the canvas, we need to figure out where we want our triangle to be.
In the example image we can make out 3 points that our triangle consists of.
These are the far ends of our triangle, and unsurprisingly these will also be the vertices we will need to supply to the vertex shader.
A vertex in a 3D space consists of 3 elements: x, y and z.
The 3 vertices we will need will also have 3 elements each.
From the image we can carefully plot out all elements of each vertex.
[Figure: the triangle plotted in normalized device coordinates, with the x and y axes running from -1 to 1]
- -0.5, -0.5, 0.0 (lower left)
- 0.5, -0.5, 0.0 (lower right)
- 0.0, 0.5, 0.0 (upper)
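Read from the image, these three corners can be written down as one flat list of floats. The variable name `positions` is just a choice for this sketch:

```javascript
// The triangle corners in normalized device coordinates, as a flat
// list of floats: x, y and z for each of the 3 vertices.
const positions = new Float32Array([
  -0.5, -0.5, 0.0, // lower left
   0.5, -0.5, 0.0, // lower right
   0.0,  0.5, 0.0, // upper
]);
```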
Our next order of business in supplying the vertices to the graphics pipeline is telling WebGL in what order the positions should be drawn. For now it seems obvious that our 3 points form a triangle in the order that we have declared them, but once shapes get more complicated the order becomes a necessity. We will order the vertices just the way that we defined them in our code. Do keep in mind that the order of corners does matter when using backface culling. Backface culling is disabled by default in WebGL, but we will definitely visit the subject at a later date.

To keep in line with OpenGL, we will define all triangles in a counter-clockwise order. Luckily the points are already defined in this order, so we can keep the indices very simple. The choice for a Uint16Array (an array of unsigned 16-bit integers) is due to the way the indices are expected by WebGL. As indices will always be positive, the integer being unsigned is a given.
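With the counter-clockwise ordering settled, the index list itself stays as simple as it gets. The variable name `indices` is again our own choice:

```javascript
// Draw the vertices in the order they were defined: lower left,
// lower right, upper. This is a counter-clockwise winding order.
const indices = new Uint16Array([0, 1, 2]);
```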
Creating buffers for vertices and indices
Our next challenge is creating something from which the graphics pipeline can read our input vertices and indices.
This can be done with the array buffer object.
We will need 2 array buffer objects, because we have both a vertices array and an indices array.
Let's start with our first buffer object, the Vertex Buffer Object (VBO). Our positions consist of 3 vertices, each with its own x, y and z position. The vertex buffer has no need to know the vertices at all, or even which coordinate is x, y or z. All it needs is the list of floats in the right order.

The very first step is creating a buffer. After creating a new buffer object, we have to bind it to a target buffer using the bindBuffer() function. In our case we will bind it to an array buffer. This buffer is perfect because it's specifically made for arrays holding positions, colors and other data regarding vertices. The final piece of information we need to get the array buffer all set up is the size of the data we will supply, and whether we expect the data to change often. We will achieve this by calling the bufferData() function.

The data we want the buffer to read into the graphics pipeline is a Float32Array, as we determined a few steps back when finding the positions. Since the size of a Float32Array cannot be changed after initialization, supplying the positions array will be enough for WebGL to find both the size and the data.

We also need to indicate whether we expect the positions to change much. We will only be drawing a static image of a triangle, so we will be using the static drawing mode. Don't worry about this choice too much, as the built-in optimizers will make the difference minimal.

Creating the Element Buffer Object (EBO) will seem surprisingly similar to the vertex buffer object. The only big difference is that we will be using a different buffer target, namely the element array buffer. Since the code is so similar, we won't go through it line by line.

After specifying the positions and indices, and creating buffer objects for them as input, we can move on to the steps of the graphics pipeline we can write ourselves. These steps are the vertex shader and the fragment shader.
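Putting all the buffer steps from this section together, a sketch of both buffer objects could look like the helper below. The function name `createBuffers` and the `gl` parameter (a WebGL2 rendering context obtained from the canvas elsewhere) are assumptions of this sketch:

```javascript
// Sketch: create and fill the VBO and EBO for the triangle.
// `gl` is assumed to be a WebGL2 rendering context.
function createBuffers(gl, positions, indices) {
  // Vertex Buffer Object: the raw floats of the vertex positions.
  const vbo = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, vbo);
  gl.bufferData(gl.ARRAY_BUFFER, positions, gl.STATIC_DRAW);

  // Element Buffer Object: the order in which to draw the vertices.
  const ebo = gl.createBuffer();
  gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, ebo);
  gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, indices, gl.STATIC_DRAW);

  return { vbo, ebo };
}
```

Both bufferData() calls use gl.STATIC_DRAW, since the triangle never changes after the upload.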
Minimal shader code
Before writing a shader ourselves, we should know about the shader language used in WebGL.
The shader language supported by WebGL is the OpenGL Shading Language, or simply GLSL.
There are various other shader languages, such as HLSL (High Level Shading Language), which is used by DirectX and Unity.
For the simple reason that all browsers supporting WebGL also support GLSL, we will be using GLSL.
More specifically, we will be using GLSL ES (OpenGL Shading Language for Embedded Systems) version 3.00 (300).
The versioning is important, because if your browser only supports WebGL1, you can only use GLSL ES version 1.00 (100).
This brings a few implications to the shaders, which we will not cover this time as we will be focusing on WebGL2.
If you really want to develop for WebGL1, you can look at the source of the WebGL scripts on this blog.
I have made sure to make my WebGL scripts WebGL1 and WebGL2 compatible on this blog.
With that said, we can finally talk about the vertex and fragment shaders themselves. The first shader in the graphics pipeline is the vertex shader. A shader is a small program that will be executed on every element we feed it. The elements in the vertex shader will be the positions we defined above.

The first thing to start with when writing a GLSL shader is defining the version at the top of the shader. The next thing we will do is define the parameters we expect coming in and going out of the shader program. For this first shader in the graphics pipeline, we want it to accept the positions as 3 floats. We will be using the vector data type for this. In a future blog post I will go further into vectors and the math driving them. For now, just see them as a convenient way to store our positions and do calculations with them.

Because a shader is basically a small program in itself, it will also contain a main method. All we want the vertex shader to do is supply our positions to the next part of the graphics pipeline. We can achieve this by putting our position in a predefined variable called gl_Position. This variable will then be used by the other steps in the graphics pipeline. The gl_Position variable is a vec4 however, so we still need to add an arbitrary 1.0 float value. The reason this predefined variable is a vec4 is that it can be used without conversions for matrix calculations in the other steps of the pipeline, but those are out of scope for this blog post.

There is actually a catch to the shader source code that we didn't discuss yet. The shader really is a separate program. This means we will have to compile it before we can use it in WebGL. Luckily WebGL has a function which will compile it for us, as long as we can supply the source code.
To prepare the shader for this compilation, we will put the entire shader source code into a string. The final result of our shader will look like this:
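Assuming the attribute name `coordinates` (the same name we will search for later when connecting the buffers), the vertex shader source stored as a string could look like this:

```javascript
// GLSL ES 3.00 vertex shader as a JavaScript string. The #version
// directive has to be the very first line of the shader source.
const vertexShaderSource = `#version 300 es

// Incoming vertex position, read from the bound array buffer.
in vec3 coordinates;

void main() {
  // gl_Position is a vec4, so append an arbitrary 1.0 as the
  // fourth component.
  gl_Position = vec4(coordinates, 1.0);
}`;
```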
The second and last shader we can define in our graphics pipeline is the fragment shader. This shader is responsible for the color values of everything on screen. That means that this is where we will use our colors and textures. For now we will go with a static black color so the triangle will be easily visible. Aside from the color value we want to go with, there are 2 other things we need to define or provide in the fragment shader.
- First we need to provide a precision to use on float calculations in the fragment shader. These precisions can be lowp, mediump and highp. Of course using a higher precision level will impact the performance. Because we are just using colors, we will stick to the low precision for now.
- Unlike the vertex shader, the fragment shader does not have the outgoing variable set by default, so we will have to define it ourselves.
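Putting the precision, the self-defined output variable and a black color together, a matching fragment shader could look like this (the output name `fragColor` is our own choice):

```javascript
// GLSL ES 3.00 fragment shader as a JavaScript string.
const fragmentShaderSource = `#version 300 es

// Precision used for float calculations; lowp is enough for colors.
precision lowp float;

// The fragment shader has no predefined output, so we declare our own.
out vec4 fragColor;

void main() {
  // A static opaque black: r, g and b at 0.0, alpha at 1.0.
  fragColor = vec4(0.0, 0.0, 0.0, 1.0);
}`;
```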
Compiling and using the shaders
Now that we have the source of the shaders, we have to compile them and attach them to a shader program.
To compile the shaders, we first need a place to store each shader.
Just like the buffers we had WebGL create for us before, we will have WebGL create a place to store the vertex shader too.
The next steps are to attach our own shader source to the vertex shader object created by WebGL and finally compile it.
This is enough to prepare the shader for linking to our program, but because the shaders will be prone to mistakes, it's good to check the compilation status for errors.
These next few lines are to read the error if the compilation of the shader failed.
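Those steps put together could be sketched as the helper below, assuming a `gl` WebGL2 context and the `vertexShaderSource` string from before (the helper name `compileVertexShader` is our own):

```javascript
// Sketch: create a vertex shader object, attach our source,
// compile it and check the compilation status for errors.
function compileVertexShader(gl, source) {
  const vertexShader = gl.createShader(gl.VERTEX_SHADER);
  gl.shaderSource(vertexShader, source);
  gl.compileShader(vertexShader);

  // Shaders are prone to mistakes, so read the log on failure.
  if (!gl.getShaderParameter(vertexShader, gl.COMPILE_STATUS)) {
    throw new Error(gl.getShaderInfoLog(vertexShader));
  }
  return vertexShader;
}
```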
Next we will prepare the fragment shader for the shader program linking. Probably unsurprising at this point, this will look exactly like what we did for the vertex shader. The only changes are the variable name and the kind of shader object we ask WebGL to create for us. Here is the code to get an object to store the shader, attach the source to the object, compile the shader and do error checks on it:
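The same steps for the fragment shader, again as a sketch assuming a `gl` context and the `fragmentShaderSource` string from the previous section:

```javascript
// Sketch: identical to the vertex shader setup, except for the
// shader kind and the variable names.
function compileFragmentShader(gl, source) {
  const fragmentShader = gl.createShader(gl.FRAGMENT_SHADER);
  gl.shaderSource(fragmentShader, source);
  gl.compileShader(fragmentShader);

  // Same error check as before, but for the fragment shader.
  if (!gl.getShaderParameter(fragmentShader, gl.COMPILE_STATUS)) {
    throw new Error(gl.getShaderInfoLog(fragmentShader));
  }
  return fragmentShader;
}
```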
The last thing we have to do to be able to use the shaders is to create a shader program. First we will ask WebGL to create a shader program object for us. For the shader program to work, we need to attach the shader objects to the program and link them together. Lastly we need to tell WebGL that we want to use this shader program.

We have now created a shader program able to handle vec3 positions once they are inserted. We have also created a buffer earlier containing the positions we defined to draw a triangle. What we still need to do to finish drawing the triangle is connecting the buffers and shaders, and giving WebGL the command to draw.
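Those steps (create, attach, link, use) could be sketched like this; checking the link status is an extra safeguard that mirrors the compile checks:

```javascript
// Sketch: combine the two compiled shaders into one shader program
// and tell WebGL to use it.
function createShaderProgram(gl, vertexShader, fragmentShader) {
  const shaderProgram = gl.createProgram();
  gl.attachShader(shaderProgram, vertexShader);
  gl.attachShader(shaderProgram, fragmentShader);
  gl.linkProgram(shaderProgram);

  // Linking can fail too, so check the status just like compilation.
  if (!gl.getProgramParameter(shaderProgram, gl.LINK_STATUS)) {
    throw new Error(gl.getProgramInfoLog(shaderProgram));
  }

  gl.useProgram(shaderProgram);
  return shaderProgram;
}
```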
Attaching the buffer objects to the shaders
It is very important that the shader code and the shader program are created before the buffer objects.
This is because we need a reference to the shader program so we can attach the vertex array to it.
The first thing we should do to connect the buffer we created to the shader is to find the part of the code where we bound the ARRAY_BUFFER.
Make sure all the buffer code is placed below the shader code before continuing.
With the buffer object bound, we should first find the point in our shader where we can insert the buffer objects.
The first shader in the pipeline is the vertex shader, so we want to supply the buffer objects to that shader.
When we created the shaders, we also created an "in" attribute (or variable) called "coordinates" inside the vertex shader.
This "in" attribute expects input from outside of the shader program.
We will find this attribute by searching for it by name in the compiled shaders in the shader program.
Once the "in" attribute is found, we can store its location in a variable.
The next call will be the most difficult in this blog post.
We will be telling the shader how to read the data from our bound array buffer into the attribute position we just found inside the shader.
The data we have supplied looks something like this:

Vertex 0: x, y, z | Vertex 1: x, y, z | Vertex 2: x, y, z

To set the vertex attribute pointer, we need to fill in a total of 6 parameters.
1. The position of the vertex attribute that will be modified, which is the "in" attribute in the vertex shader for us.
2. The number of components per vertex attribute. This will be 3, as we supply the x, y and z for each vertex.
3. The data type of the components, which is the 32-bit float as specified at the creation of the position array.
4. A boolean indicating whether the inserted data should be normalized. This has no effect on float data types, so we leave it as false.
5. The stride between vertex attributes. This will be 0 because the next vertex directly follows the previous vertex in our buffer object.
6. The offset at the start of the vertex attribute array. This will also be 0 because we have inserted nothing before our first vertex.
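As a sketch, looking up the "coordinates" attribute and setting the pointer with those 6 parameters could look like this. Note the extra enableVertexAttribArray() call at the end: the attribute also has to be enabled explicitly before WebGL will read from the buffer.

```javascript
// Sketch: describe the layout of the bound ARRAY_BUFFER to the
// "coordinates" attribute of the linked shader program.
function connectPositionAttribute(gl, shaderProgram) {
  // Find the "in" attribute by name in the shader program.
  const coordinates = gl.getAttribLocation(shaderProgram, 'coordinates');

  gl.vertexAttribPointer(
    coordinates, // 1. the attribute location we just looked up
    3,           // 2. three components per vertex: x, y and z
    gl.FLOAT,    // 3. the components are 32-bit floats
    false,       // 4. no normalization; it has no effect on floats
    0,           // 5. stride 0: the vertices are tightly packed
    0,           // 6. offset 0: nothing precedes the first vertex
  );

  // The attribute has to be enabled before it is actually used.
  gl.enableVertexAttribArray(coordinates);
  return coordinates;
}
```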
Draw the elements
The final piece to get the triangle drawn over the background is making a call to draw what we have specified so far.
This call has 4 parameters, which we will also briefly go through.
1. The drawing mode. We will use triangles, but there are other fun modes to play with.
2. The number of positions we will draw. We have 3, but supplying the length of the indices array is future-proof.
3. The data type of the values in the element array buffer. We have specified the elements as uint16, which is an unsigned short.
4. The offset at the start of the element array buffer. This will be 0, as we have nothing in front of our first element.
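The draw call with those 4 parameters, sketched as a small helper (assuming `gl` and the `indices` Uint16Array from earlier):

```javascript
// Sketch: draw the triangle from the bound element array buffer.
function drawTriangle(gl, indices) {
  gl.drawElements(
    gl.TRIANGLES,      // 1. drawing mode: plain triangles
    indices.length,    // 2. number of indices to draw (3 here)
    gl.UNSIGNED_SHORT, // 3. matches our Uint16Array of indices
    0,                 // 4. offset: start at the first element
  );
}
```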
The Result
* CODE *