
OpenGL Basics and Triangle Rendering

Previous tutorial: Rendering pipeline in OpenGL

In the last tutorial we learned how to create a window with an OpenGL context. A context is the whole state of OpenGL, and it is bound to a specific window during creation. OpenGL itself doesn't know anything about windows: it outputs images to a framebuffer, and SDL/GLFW connects that framebuffer to the window.

In this tutorial we'll draw a triangle in a window with the help of OpenGL and learn several of its basic concepts.

OpenGL outputs (renders) an image to a framebuffer. The default framebuffer is created automatically by the SDL2/GLFW libraries, and we can create framebuffers of our own too. Keep in mind that a framebuffer is just a chunk of memory where we can write any values. The process of creating a 2D image from a 3D scene is called rendering.

The task of OpenGL is to pass instructions from ordinary programs to the video card and execute them there. Our C++ program is a client of OpenGL. OpenGL (the video card) is a server that processes commands received from the client. The server can even be on another computer, i.e. commands can be passed over a network. The server can keep several contexts at the same time, and the client renders an image to the current context.

In the following tutorials I'll use the words video card, GPU (Graphics Processing Unit), and server interchangeably.

After the context is created we can change the OpenGL state. In this tutorial we'll set up the OpenGL context to draw a triangle in the framebuffer (and, accordingly, in the window).

OpenGL can draw different primitives: points, lines, and triangles. In this tutorial, we'll look at how to render one triangle.

Triangle Coordinates

The first thing we need to do is set the vertex coordinates. These vertices form the triangle. The coordinates will be in normalized form:

float vertices[] = {
    -0.5f, -0.5f, 0.0f, 1.0f,
     0.5f, -0.5f, 0.0f, 1.0f,
     0.0f,  0.5f, 0.0f, 1.0f
};

As you can see, the individual vertex components have type float. Each vertex has four components (x, y, z, w); we need all four for simpler math. These are homogeneous coordinates, which will later allow us to encode both translation and rotation in a single matrix. We'll have several tutorials where we'll learn all the needed math. For now we set the fourth component to 1. We also don't need z in this example, so we set it to 0, i.e. the triangle lies in the XY plane.

OpenGL Objects

There are different types of objects in OpenGL: vertex array, buffer, shader, program, texture, renderbuffer, and some others.

We can create, change, and delete instances of these objects. To use an object we need to bind it to the OpenGL context (make it active).

Each object type has its own namespace. A name is just a number. Usually you request names by calling a function whose name starts with Gen. "Generated" names are marked by OpenGL as used, so the next call to a Gen function returns new values. After a name is generated you need to bind the object to the context.
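
As a small illustration (a sketch, not taken from the tutorial's source code; the variable name is mine), this is how names are requested and later released:

// Ask OpenGL for two unused buffer names (they are just numbers).
GLuint bufferNames[2];
glGenBuffers(2, bufferNames);

// The names are now marked as used; the actual objects are created
// when each name is first bound with glBindBuffer.

// When the buffers are no longer needed, the names are returned to OpenGL.
glDeleteBuffers(2, bufferNames);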

In some cases the name and the object are created at the same time, as with the function glCreateProgram.

Here are the OpenGL objects you need to understand first of all.

A buffer object is used to store data in GPU memory. For now we are interested in one kind of buffer object - the vertex buffer object (VBO), which stores vertices.

A vertex array object (VAO) is a container that stores references to buffer objects and defines the format of vertex attributes. The attributes of the currently bound vertex array object are used as input data for the vertex shader during rendering commands.

It's important to understand how a VBO and a VAO are connected. The VBO stores data. The VAO stores state and references to the data in the VBO, and it describes the format of that data. Any buffer object is just an array stored on the server. Their purpose will become clearer in the next tutorials, when our VAOs become more complex.

A shader object is a compiled GLSL program that is executed on the GPU.

A program object is a set of shader objects linked together.

A framebuffer object contains the state of a framebuffer and references to its color, depth, and stencil buffers. Each of these buffers is a renderbuffer or a texture, i.e. a framebuffer object is a container for renderbuffer (or texture) objects.

A renderbuffer object contains a single image. We can change the contents of renderbuffer objects with specific commands.

For example, a buffer object is actually created by the glBindBuffer function. We pass a name to it; during creation the name is bound to the object, and the GPU allocates resources for the buffer and its state.

But let's return to our vertices.

Vertex Array Object (VAO) and Vertex Buffer Object (VBO)

Vertices can have different attributes. In our example vertices have only one attribute - coordinates. In the following tutorials we'll add other attributes: color, normals, and so on. It's important to distinguish attributes from components: an attribute belongs to a vertex, a component belongs to an attribute. In our example each vertex has one attribute (coordinates), and this attribute has four components - x, y, z, w.

On the GPU a vertex array is represented by a vertex array object. This object contains various data: how many components each attribute has, the type of the components, how primitives are assembled. A vertex array object has a name, and the name of any OpenGL object is just a number greater than zero. We can get available names by calling glGenVertexArrays. The name of the function doesn't quite match what it does: it simply returns available names (numbers) and doesn't really generate anything.

The process of defining vertex attributes and passing them to a shader is called vertex transferring. Let's look at the code that passes the vertices to the GPU:

GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

First, we generate a name for the vertex array object: we ask OpenGL for one name (the first argument of glGenVertexArrays) and store it in the variable vao. Then we bind the VAO to the OpenGL context with the function glBindVertexArray. It's important to bind vao before we set the vertex format: first the VAO is bound to the context, and only after that the vertex format is set.

We do the same for the vertex buffer object: generate a name and bind the received name to the context. The GL_ARRAY_BUFFER target says that we bind the buffer with the name vbo as a vertex array. In the next tutorials we'll learn what other kinds of buffers we can create.

glBufferData copies the vertices into video card memory. The first argument says which buffer target we copy into, the second is the size of the array in bytes, and the third is the address to copy from. The fourth one is interesting: we pass GL_STATIC_DRAW, which tells the GPU that this data will not be changed and will be drawn many times. We'll look at the other values in the next tutorials.

glBindBuffer makes vbo active (binds it to the context).

Once more I want to point out that vao and vbo are just numbers. You can think of the names of OpenGL objects as pointers; we simply don't have direct access to GPU memory.

Now we have the vertex array in server memory, but the GPU doesn't know anything about the format of the data. We tell OpenGL about the format with this code:

glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(0);

The first argument is the index of the vertex attribute we want to configure. It should correspond to the attribute in the shader; this is how we bind data from the vertex array to the shader. Our vertices define only one attribute (the position), so for now the index will always be zero; in more complex examples we'll query attribute locations from the shader program. The second argument is the number of values in the attribute: we use all four components of the position, so we pass four. The third is the type of the values. The fourth defines whether the values should be normalized; we'll skip it for now. The fifth argument is the stride between consecutive attributes in the array; it will become important when we have more attributes. The last one is the offset at which the attribute starts. Our array contains only one attribute (coordinates) and its first element coincides with the beginning of the array, so we set the offset to zero.
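
To make the stride and offset arguments less abstract, here is a hypothetical sketch (not part of this tutorial's code) of what the calls could look like if each vertex also carried a color attribute interleaved with the position:

// Hypothetical interleaved layout: x, y, z, w, r, g, b, a per vertex,
// i.e. each vertex occupies 8 floats.
GLsizei stride = 8 * sizeof(float);

// Attribute 0: position, 4 floats, starts at the beginning of each vertex.
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, stride, (void*)0);
glEnableVertexAttribArray(0);

// Attribute 1: color, 4 floats, starts 4 floats after the position.
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, stride, (void*)(4 * sizeof(float)));
glEnableVertexAttribArray(1);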

The information we pass in the call to glVertexAttribPointer is stored in vao (the VAO object that is bound to the context at the moment).

The function glEnableVertexAttribArray enables the vertex attribute array. The argument says which attribute is enabled; it coincides with the first argument of glVertexAttribPointer.

Ok, now we've created the vertex array and copied it to OpenGL. It's time to learn what happens to the vertices next.

Graphics Pipeline in OpenGL

Rendering happens in several stages. Each stage is either programmable (you can replace it with your own code) or configurable. Note that all of this happens on the processors of the video card.

At the beginning of the graphics pipeline we feed in separate primitives (points, lines, and, most importantly, triangles). Primitives consist of vertices, and each vertex has coordinates in 3D space.

The mandatory pipeline stages are vertex and fragment shaders.

Vertex Shader

At the vertex shader stage vertex coordinates are transformed into their final form. A game may contain a huge number of objects, and each triangle has its own coordinates. Moreover, the player can be located at different points of 3D space, i.e. the virtual camera may have different coordinates too. The situation is not simple. Matrices and transformations help manage all these coordinates and make positioning objects simple. The transformations are done in the vertex shader. Each vertex enters the vertex shader with its initial coordinates; at the end of the vertex shader it has final coordinates in normalized device form. The user will see everything that lies in the range between -1 and 1. The center of the normalized device coordinates corresponds to the center of the window, and the y-axis goes from bottom to top.

In our example the coordinates of the triangle vertices are already given in normalized form. Later we'll learn how to do the transformations.

Vertices are passed to the vertex shader one by one.

Later in the rendering pipeline the coordinates in normalized form are transformed into the actual coordinates of the program window. OpenGL does this automatically.
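
This mapping from normalized device coordinates to window pixels is controlled by the viewport. A minimal sketch (the window size here is an assumption for illustration):

// Map normalized device coordinates (-1..1) to a pixel rectangle:
// lower-left corner at (0, 0), width x height pixels.
int width = 800;    // assumed window size
int height = 600;
glViewport(0, 0, width, height);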

Fragment Shader

The second mandatory stage in the graphics pipeline is the fragment shader. Between the vertex and fragment shaders there is a rasterization stage, so the fragment shader receives not vertices but individual fragments (pixels) of the framebuffer. The output of the fragment shader is the final color for the current pixel. At this stage we'll fill the triangle with a single color.

There are other stages of the graphics pipeline, but they are not mandatory and we'll leave them for now.

Now let's look at the code. We need two additional files for the shaders; put them in the project folder. Let's begin with the vertex shader:

Shader code in GLSL

// vs.glsl
#version 460 core

layout (location = 0) in vec4 position;

void main()
{
    gl_Position = position;
}

OpenGL uses GLSL language for shaders. It's very similar to C.

The line layout (location = 0) in vec4 position; takes the vertex attribute with index 0 (it corresponds to the same attribute index in the glVertexAttribPointer call) and puts it into the variable position of type vec4. There are many vector and matrix types defined in GLSL. In a nutshell, vec4 is a float array with four elements; the elements are the individual components of a vertex.

Then there is the main function. Inside, we assign the variable position to gl_Position. gl_Position is a built-in variable that stores the output position of the vertex. Here we simply pass the initial input value on to the next stages of the pipeline.

Now, the fragment shader:

#version 460 core

layout(location = 0) out vec4 color;

void main()
{
    color = vec4(0.0f, 0.0f, 0.0f, 1.0f);
}

In previous versions of OpenGL there was an analog of the vertex shader's gl_Position variable: it defined the final color of the fragment (pixel) and was called gl_FragColor. Now we declare output variables ourselves and bind them to output locations. We have only one output for now - color - and we assign it to location zero. Moreover, since we have only one output we could omit the layout(location = 0) part, and OpenGL would bind the variable color to location 0 automatically. Inside main we assign a black color to all fragments (any pixel inside the triangle will be black): the first three channels (red, green, blue) are equal to zero and the last one (alpha, i.e. transparency, though it won't have an effect in this example) is 1.

At this point we have two shader files. Now we need to read them in the main program, compile them, and bind the resulting program to the context.

First, let's read the files into strings and get char* pointers to their contents (this needs the <fstream>, <string>, and <iterator> headers):

std::ifstream ivs("vs.glsl");
std::string vs((std::istreambuf_iterator<char>(ivs)), std::istreambuf_iterator<char>());
const char* vsc = vs.c_str();

std::ifstream ifs("fs.glsl");
std::string fs((std::istreambuf_iterator<char>(ifs)), std::istreambuf_iterator<char>());
const char* fsc = fs.c_str();

We open an ifstream and read the file into a string, then save a char* pointer to the text for each shader. The files vs.glsl and fs.glsl are located in the project folder with the other source files.

It's better to use ifstream::read for reading files (it's faster for large files), but the stream buffer iterator lets us do the reading in one line, so I use it here for brevity.
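
If you prefer the ifstream::read approach, a small helper along these lines would also work (a sketch; the function name readFile is hypothetical, and error handling is omitted):

// Hypothetical helper: read a whole file into a string with ifstream::read.
std::string readFile(const char* path)
{
    std::ifstream file(path, std::ios::binary);
    file.seekg(0, std::ios::end);                  // find the file size
    std::streamsize size = file.tellg();
    std::string text(static_cast<size_t>(size), '\0');
    file.seekg(0, std::ios::beg);                  // rewind to the start
    file.read(&text[0], size);                     // read everything at once
    return text;
}

Now let's create and compile the shader objects: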

GLuint vertexShader = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vertexShader, 1, &vsc, NULL);
glCompileShader(vertexShader);

GLuint fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fragmentShader, 1, &fsc, NULL);
glCompileShader(fragmentShader);

glCreateShader creates a shader object and returns its name. In the argument we pass the type of shader we want to create.

glShaderSource sets the shader source code. The first argument is the name of the shader, the second is the number of strings in the source code, the third is the address of the string (or array of strings) with the source code, and the fourth is an array containing the length of each string. If we pass NULL, each string is assumed to be null-terminated.

glCompileShader compiles the shader. We pass the name of the shader object we want to compile.
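
In a real program it's worth checking whether compilation succeeded. A minimal sketch using glGetShaderiv and glGetShaderInfoLog (the buffer size is arbitrary, and std::cerr needs the <iostream> header):

GLint compiled = 0;
glGetShaderiv(vertexShader, GL_COMPILE_STATUS, &compiled);
if (compiled == GL_FALSE)
{
    char log[1024];                                // arbitrary buffer size
    glGetShaderInfoLog(vertexShader, sizeof(log), NULL, log);
    std::cerr << "Vertex shader compilation failed:\n" << log << std::endl;
}
// The same check applies to fragmentShader.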

Now we have two compiled shaders. Next, the shaders are attached to a program:

GLuint shaderProgram = glCreateProgram();
glAttachShader(shaderProgram, vertexShader);
glAttachShader(shaderProgram, fragmentShader);
glLinkProgram(shaderProgram);
glUseProgram(shaderProgram);

In the first line we create a program object with the name shaderProgram. Then we attach both shaders to the program.

Then we link the program by calling glLinkProgram. At this stage the shader object of type GL_VERTEX_SHADER is used to create executable code that will run on the vertex processors of the GPU. The same goes for GL_FRAGMENT_SHADER - its executable code will run on the fragment processors.

Finally, we bind the program to the OpenGL context with the function glUseProgram.
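
Linking can also fail, and after a successful link the compiled shader objects are no longer needed. A sketch of how this is usually handled (not part of the downloadable code):

GLint linked = 0;
glGetProgramiv(shaderProgram, GL_LINK_STATUS, &linked);
if (linked == GL_FALSE)
{
    char log[1024];                                // arbitrary buffer size
    glGetProgramInfoLog(shaderProgram, sizeof(log), NULL, log);
    std::cerr << "Program linking failed:\n" << log << std::endl;
}

// The shader objects can be deleted once the program is linked.
glDeleteShader(vertexShader);
glDeleteShader(fragmentShader);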

Before the main loop we set the background color:

glClearColor(1.0f, 1.0f, 1.0f, 1.0f);

glClearColor sets the color that will be used to clear the color buffer - in this case, white. Channel values should be in the range between 0 and 1.

In the main loop we need to redraw the triangle each frame:

glClear(GL_COLOR_BUFFER_BIT);
glDrawArrays(GL_TRIANGLES, 0, 3);
// SDL_GL_SwapWindow(window); // frame change in SDL2
// glfwSwapBuffers(window); // frame change in GLFW

glClear clears the render buffers; in this case only the color buffer is cleared. glDrawArrays draws primitives by sending a given number of vertices to the vertex shader. The first argument is the type of primitives we want to draw: points, or lines/triangles assembled in a specific order. The second argument is the vertex index from which to start drawing. The third argument is the number of vertices that will be used by this command.
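
Putting it together, the SDL2 variant of the main loop might look roughly like this (a sketch; it assumes the window and context were created as in the previous tutorial and that the window variable is called window):

bool running = true;
while (running)
{
    SDL_Event event;
    while (SDL_PollEvent(&event))         // handle window events
    {
        if (event.type == SDL_QUIT)
            running = false;
    }

    glClear(GL_COLOR_BUFFER_BIT);         // fill the framebuffer with the clear color
    glDrawArrays(GL_TRIANGLES, 0, 3);     // draw our single triangle

    SDL_GL_SwapWindow(window);            // show the finished frame
}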

Conclusion - First Triangle in OpenGL

When you run the app you should see a black triangle (the color we set in the fragment shader) in the center of the window on a white background. You can download the source code from the attachments (in the menu on the right side of the page). The archive contains a Visual Studio 2019 solution with two projects: SDL2 and GLFW.

Before we start drawing more complex geometry we need to solve the most important task in 3D graphics - moving and rotating objects. There will be several tutorials about vectors, matrices, and transformations in the math section. Transformations are a complex subject, but once you understand them, mastering the other topics in 3D graphics is just a matter of time.
