
Rasterization


Introduction

The CompactStar engine contains a small and very basic software rasterizer, mainly based on the excellent Rasterization: a Practical Implementation article published on the scratchapixel website.

I also based my work on the tinyrenderer project provided by ssloy.

As the above articles describe the rasterization process far better than I could, I won't try to explain how a rasterizer works here; please read the provided documentation if you want that background. Instead, I'll explain how the rasterizer is implemented in CompactStar.

The software rasterizer

You will find the software raster code in the CSR_SoftwareRaster.c and CSR_SoftwareRaster.h files.

A demo showing how to use it is also available here.

Basic overview

The software rasterizer provided in CompactStar is able to render a small 3D model, provided as input, into an RGBA pixel buffer, provided as output. It supports surface culling and z-buffering, and provides callback functions for the rendering stages, which act as a substitute for shader programs.

To draw a model, the csrRasterDraw() function should be called every time a new image is required, and the resulting bitmap should be drawn on the target viewport. This function iterates through all the polygons contained in the model (the so-called "outer loop" in the above-mentioned documentation) and applies the rasterization to each polygon that was not rejected by the visibility tests (the so-called "inner loop").
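The way the resulting pixel buffer is presented on screen is platform-specific. The minimal sketch below shows the general idea of converting the rasterizer output to a 32-bit BGRA bitmap (as expected, for example, by a Windows DIB). The struct declarations are simplified stand-ins that only mirror the field names used by the drawing code further down this page; the real types are declared in CSR_SoftwareRaster.h.

#include <stddef.h>

/* simplified stand-ins mirroring the fields used by the drawing code on this
   page; the real declarations live in CSR_SoftwareRaster.h */
typedef struct { unsigned char m_R, m_G, m_B, m_A; } SketchPixel;
typedef struct { size_t m_Width, m_Height; SketchPixel* m_pPixel; } SketchFrameBuffer;

/* copy the rasterizer output to a 32-bit BGRA destination buffer, e.g. before
   blitting it onto the target viewport */
void CopyFrameBufferToBGRA(const SketchFrameBuffer* pFB, unsigned char* pDest)
{
    size_t x, y, offset;

    for (y = 0; y < pFB->m_Height; ++y)
        for (x = 0; x < pFB->m_Width; ++x)
        {
            offset = (y * pFB->m_Width + x) * 4;

            pDest[offset]     = pFB->m_pPixel[y * pFB->m_Width + x].m_B;
            pDest[offset + 1] = pFB->m_pPixel[y * pFB->m_Width + x].m_G;
            pDest[offset + 2] = pFB->m_pPixel[y * pFB->m_Width + x].m_R;
            pDest[offset + 3] = pFB->m_pPixel[y * pFB->m_Width + x].m_A;
        }
}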

The transformation to the raster space

For each polygon composing the model, the first step consists of transforming the polygon vertices into raster space, which is, as mentioned above, a 2D image. This task is performed by the following function:

void csrRasterRasterizeVertex(const CSR_Vector3* pInVertex,
                              const CSR_Matrix4* pMatrix,
                              const CSR_Rect*    pScreenRect,
                                    float        zNear,
                                    float        imageWidth,
                                    float        imageHeight,
                                    CSR_Vector3* pOutVertex)
{
    CSR_Vector3 vertexCamera;
    CSR_Vector2 vertexScreen;
    CSR_Vector2 vertexNDC;
    float       subRightLeft;
    float       addRightLeft;
    float       subTopBottom;
    float       addTopBottom;

    // validate the input
    if (!pInVertex || !pMatrix || !pScreenRect || !pOutVertex)
        return;

    // transform the input vertex into the camera space
    csrMat4Transform(pMatrix, pInVertex, &vertexCamera);

    // transform the camera vertex to a point in the screen space
    vertexScreen.m_X = (zNear * vertexCamera.m_X) / -vertexCamera.m_Z;
    vertexScreen.m_Y = (zNear * vertexCamera.m_Y) / -vertexCamera.m_Z;

    subRightLeft = pScreenRect->m_Max.m_X - pScreenRect->m_Min.m_X;
    addRightLeft = pScreenRect->m_Max.m_X + pScreenRect->m_Min.m_X;
    subTopBottom = pScreenRect->m_Min.m_Y - pScreenRect->m_Max.m_Y;
    addTopBottom = pScreenRect->m_Min.m_Y + pScreenRect->m_Max.m_Y;

    // convert point from screen space to Normalized Device Coordinates (NDC) space (in range [-1, 1])
    vertexNDC.m_X = ((2.0f * vertexScreen.m_X) / subRightLeft) - (addRightLeft / subRightLeft);
    vertexNDC.m_Y = ((2.0f * vertexScreen.m_Y) / subTopBottom) - (addTopBottom / subTopBottom);

    // convert to raster space. NOTE in raster space y is down, so the direction is inverted
    pOutVertex->m_X = (vertexNDC.m_X + 1.0f) / 2.0f * imageWidth;
    pOutVertex->m_Y = (1.0f - vertexNDC.m_Y) / 2.0f * imageHeight;
    pOutVertex->m_Z = -vertexCamera.m_Z;
}
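To make the mapping concrete, the small stand-alone snippet below reproduces the last two steps of csrRasterRasterizeVertex() (screen space to NDC, then NDC to raster space) for a single point, without depending on the engine types. The input values are arbitrary samples, assuming a symmetric [-1, 1] screen rect and a 640x480 image.

#include <stdio.h>

int main(void)
{
    /* arbitrary sample: a point already projected to screen space */
    const float screenX = 0.5f, screenY = -0.25f;

    /* a symmetric screen rect [-1, 1] x [-1, 1] and a 640x480 image */
    const float minX = -1.0f, maxX = 1.0f, minY = -1.0f, maxY = 1.0f;
    const float imageWidth = 640.0f, imageHeight = 480.0f;

    const float subRightLeft = maxX - minX; /*  2.0 */
    const float addRightLeft = maxX + minX; /*  0.0 */
    const float subTopBottom = minY - maxY; /* -2.0 */
    const float addTopBottom = minY + maxY; /*  0.0 */

    /* screen space -> NDC, as in csrRasterRasterizeVertex() */
    const float ndcX = ((2.0f * screenX) / subRightLeft) - (addRightLeft / subRightLeft); /* 0.5  */
    const float ndcY = ((2.0f * screenY) / subTopBottom) - (addTopBottom / subTopBottom); /* 0.25 */

    /* NDC -> raster space, the y direction is inverted */
    const float rasterX = (ndcX + 1.0f) / 2.0f * imageWidth;  /* 480.0 */
    const float rasterY = (1.0f - ndcY) / 2.0f * imageHeight; /* 180.0 */

    printf("raster = (%.1f, %.1f)\n", rasterX, rasterY);
    return 0;
}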

The visibility tests

Once rasterized, the polygon visibility is tested. Two tests are performed:

  • The vertex culling, to determine whether the polygon should be rejected due to its orientation
  • The bounding test, to determine whether the polygon is outside the raster space

The vertex culling

This test is performed in the csrRasterDrawPolygon() function. The main idea is to calculate the dot product between the camera direction and the polygon normal. Depending on the sign of the result and on the type of culling to apply, the polygon is kept or rejected. The code performing this task is the following:

// check if the polygon is culled and determine the culling mode to use (0 = CW, 1 = CCW, 2 = both)
switch (cullingType)
{
    case CSR_CT_None:
        // both faces are accepted
        cullingType = 2;
        cullingMode = 0;
        break;

    case CSR_CT_Front:
    case CSR_CT_Back:
    {
        CSR_Plane   polygonPlane;
        CSR_Vector3 polygonNormal;
        CSR_Vector3 cullingNormal;
        float       cullingDot;

        // calculate the rasterized polygon plane
        csrPlaneFromPoints(&rasterPoly.m_Vertex[0],
                           &rasterPoly.m_Vertex[1],
                           &rasterPoly.m_Vertex[2],
                           &polygonPlane);

        // calculate the rasterized polygon surface normal
        polygonNormal.m_X = polygonPlane.m_A;
        polygonNormal.m_Y = polygonPlane.m_B;
        polygonNormal.m_Z = polygonPlane.m_C;

        // get the culling normal
        cullingNormal.m_X =  0.0f;
        cullingNormal.m_Y =  0.0f;
        cullingNormal.m_Z = -1.0f;

        // calculate the dot product between the culling and the polygon normal
        csrVec3Dot(&polygonNormal, &cullingNormal, &cullingDot);

        switch (cullingFace)
        {
            case CSR_CF_CW:
                // is polygon rejected?
                if (cullingDot <= 0.0f)
                    return 1;

                // apply a clockwise culling
                cullingMode = 0;
                break;

            case CSR_CF_CCW:
                // is polygon rejected?
                if (cullingDot >= 0.0f)
                    return 1;

                // apply a counter-clockwise culling
                cullingMode = 1;
                break;

            // error
            default:
                return 1;
        }

        break;
    }

    case CSR_CT_Both:
    default:
        // both faces are rejected
        return 1;
}
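Because the culling normal is (0, 0, -1), the dot product effectively reduces to checking the sign of the z component of the rasterized polygon's normal, which encodes the winding of the triangle in raster space. The stand-alone helper below, which is not part of the engine, shows the same idea with a plain 2D cross product.

/* signed z component of the cross product (b - a) x (c - a); its sign tells
   whether the screen-space triangle a, b, c is wound clockwise or
   counter-clockwise (illustrative helper, not part of CompactStar).
   NOTE in raster space the y axis points down, which flips the usual
   mathematical sign convention */
float TriangleWindingZ(float ax, float ay, float bx, float by, float cx, float cy)
{
    return ((bx - ax) * (cy - ay)) - ((by - ay) * (cx - ax));
}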

The bounding test

This test is performed in the csrRasterDrawPolygon() function. It simply consists of calculating the polygon bounding box and using it to test whether the polygon is still inside the raster space after the rasterization. If the polygon is out of bounds, it is rejected. This is achieved by the following code:

// calculate the polygon bounding rect
csrRasterFindMin(rasterPoly.m_Vertex[0].m_X, rasterPoly.m_Vertex[1].m_X, rasterPoly.m_Vertex[2].m_X, &xMin);
csrRasterFindMin(rasterPoly.m_Vertex[0].m_Y, rasterPoly.m_Vertex[1].m_Y, rasterPoly.m_Vertex[2].m_Y, &yMin);
csrRasterFindMax(rasterPoly.m_Vertex[0].m_X, rasterPoly.m_Vertex[1].m_X, rasterPoly.m_Vertex[2].m_X, &xMax);
csrRasterFindMax(rasterPoly.m_Vertex[0].m_Y, rasterPoly.m_Vertex[1].m_Y, rasterPoly.m_Vertex[2].m_Y, &yMax);

// is the polygon out of screen?
if (xMin > (float)(pFB->m_Width  - 1) || xMax < 0.0f ||
    yMin > (float)(pFB->m_Height - 1) || yMax < 0.0f)
    return 1;
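csrRasterFindMin() and csrRasterFindMax() are simple three-value helpers. The sketch below shows what an equivalent minimum helper may look like; the engine's own implementation may differ.

/* illustrative three-value minimum helper, equivalent in spirit to
   csrRasterFindMin() (the engine's own implementation may differ) */
void FindMin3(float a, float b, float c, float* pR)
{
    *pR = a;

    if (b < *pR)
        *pR = b;

    if (c < *pR)
        *pR = c;
}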

The drawing

Once the polygon has been transformed to raster space and its visibility has been tested, it is ready to be drawn. This task is performed in the csrRasterDrawPolygon() function by the following code:

// calculate the area to draw
csrMathMax(0.0f,                       xMin, &xStart);
csrMathMin((float)(pFB->m_Width  - 1), xMax, &xEnd);
csrMathMax(0.0f,                       yMin, &yStart);
csrMathMin((float)(pFB->m_Height - 1), yMax, &yEnd);

x0 = (size_t)floor(xStart);
x1 = (size_t)floor(xEnd);
y0 = (size_t)floor(yStart);
y1 = (size_t)floor(yEnd);

// calculate the triangle area (multiplied by 2)
csrRasterFindEdge(&rasterPoly.m_Vertex[0], &rasterPoly.m_Vertex[1], &rasterPoly.m_Vertex[2], &area);

// iterate through pixels to draw
for (y = y0; y <= y1; ++y)
    for (x = x0; x <= x1; ++x)
    {
        pixelSample.m_X = x + 0.5f;
        pixelSample.m_Y = y + 0.5f;
        pixelSample.m_Z =     0.0f;

        // calculate the sub-triangle areas (multiplied by 2)
        csrRasterFindEdge(&rasterPoly.m_Vertex[1], &rasterPoly.m_Vertex[2], &pixelSample, &w0);
        csrRasterFindEdge(&rasterPoly.m_Vertex[2], &rasterPoly.m_Vertex[0], &pixelSample, &w1);
        csrRasterFindEdge(&rasterPoly.m_Vertex[0], &rasterPoly.m_Vertex[1], &pixelSample, &w2);

        pixelVisible = 0;

        // check if the pixel is visible. The culling mode is important to determine the sign
        switch (cullingMode)
        {
            // clockwise
            case 0:
                if (w0 >= 0 && w1 >= 0 && w2 >= 0)
                    pixelVisible = 1;

                break;

            // counter-clockwise
            case 1:
                if (w0 <= 0 && w1 <= 0 && w2 <= 0)
                {
                    pixelVisible = 1;

                    // invert the sampler values
                    w0 = -w0;
                    w1 = -w1;
                    w2 = -w2;
                }

                break;

            // both
            case 2:
                if (w0 >= 0 && w1 >= 0 && w2 >= 0)
                    pixelVisible = 1;
                else
                if (w0 <= 0 && w1 <= 0 && w2 <= 0)
                {
                    pixelVisible = 1;

                    // invert the sampler values
                    w0 = -w0;
                    w1 = -w1;
                    w2 = -w2;
                }

                break;

            // error
            default:
                return 0;
        }

        // is pixel visible?
        if (pixelVisible)
        {
            // calculate the barycentric coordinates, which are the areas of the sub-triangles
            // divided by the area of the main triangle
            w0 /= area;
            w1 /= area;
            w2 /= area;

            // calculate the pixel depth
            invZ = (rasterPoly.m_Vertex[0].m_Z * w0) +
                   (rasterPoly.m_Vertex[1].m_Z * w1) +
                   (rasterPoly.m_Vertex[2].m_Z * w2);
            z    = 1.0f / invZ;

            // test the pixel against the depth buffer
            if (z < pDB->m_pData[y * pFB->m_Width + x])
            {
                // test passed, update the depth buffer
                pDB->m_pData[y * pFB->m_Width + x] = z;

                // calculate the default pixel color, based on the per-vertex color
                color.m_R = w0 * pColor[0].m_R + w1 * pColor[1].m_R + w2 * pColor[2].m_R;
                color.m_G = w0 * pColor[0].m_G + w1 * pColor[1].m_G + w2 * pColor[2].m_G;
                color.m_B = w0 * pColor[0].m_B + w1 * pColor[1].m_B + w2 * pColor[2].m_B;

                // calculate the texture coordinate
                stCoord.m_X = ((st[0].m_X * w0) + (st[1].m_X * w1) + (st[2].m_X * w2)) * z;
                stCoord.m_Y = ((st[0].m_Y * w0) + (st[1].m_Y * w1) + (st[2].m_Y * w2)) * z;

                // for each pixel, apply the fragment shader
                if (fOnApplyFragmentShader)
                {
                    // set the sampler items
                    sampler.m_X = w0;
                    sampler.m_Y = w1;
                    sampler.m_Z = w2;

                    fOnApplyFragmentShader(pMatrix,
                                           pPolygon,
                                           &stCoord,
                                           &sampler,
                                           z,
                                           &color);
                }

                // limit the color components between 0.0 and 1.0
                csrMathClamp(color.m_R, 0.0, 1.0, &color.m_R);
                csrMathClamp(color.m_G, 0.0, 1.0, &color.m_G);
                csrMathClamp(color.m_B, 0.0, 1.0, &color.m_B);

                // write the final pixel inside the frame buffer
                pFB->m_pPixel[y * pFB->m_Width + x].m_R = (unsigned char)(color.m_R * 255.0f);
                pFB->m_pPixel[y * pFB->m_Width + x].m_G = (unsigned char)(color.m_G * 255.0f);
                pFB->m_pPixel[y * pFB->m_Width + x].m_B = (unsigned char)(color.m_B * 255.0f);
                pFB->m_pPixel[y * pFB->m_Width + x].m_A = (unsigned char)(color.m_A * 255.0f);
            }
        }
    }

In the above code, the following tasks are performed:

  • The barycentric coordinates of the pixel sample are calculated, by taking advantage of the edge function (a sketch is provided after this list)
  • The pixel visibility is tested; if the sample lies outside the polygon, it is rejected
  • The pixel is tested against the z-buffer, to determine whether another pixel hides it
  • The fragment shader callback function is called
  • The final pixel is written into the image buffer
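The edge function used above (csrRasterFindEdge()) returns twice the signed area of the triangle formed by its three points. The stand-alone snippet below shows such an edge function and how it produces the barycentric weights for a sample point; the engine's own formula and sign convention may differ slightly.

#include <stdio.h>

/* twice the signed area of the 2D triangle (ax,ay), (bx,by), (cx,cy); this is
   the "edge function" used by the rasterizer (illustrative version, the
   engine's csrRasterFindEdge() may use a different sign convention) */
static float Edge(float ax, float ay, float bx, float by, float cx, float cy)
{
    return ((cx - ax) * (by - ay)) - ((cy - ay) * (bx - ax));
}

int main(void)
{
    /* arbitrary sample triangle and pixel sample, in raster coordinates */
    const float x0 = 10.0f, y0 = 10.0f;
    const float x1 = 50.0f, y1 = 10.0f;
    const float x2 = 10.0f, y2 = 50.0f;
    const float px = 20.0f, py = 20.0f;

    /* the full triangle area and the sub-triangle areas (all multiplied by 2) */
    const float area = Edge(x0, y0, x1, y1, x2, y2);
    const float w0   = Edge(x1, y1, x2, y2, px, py);
    const float w1   = Edge(x2, y2, x0, y0, px, py);
    const float w2   = Edge(x0, y0, x1, y1, px, py);

    /* the sample is inside the triangle when w0, w1 and w2 share the same
       sign; dividing them by the full area yields the barycentric weights,
       which always sum to 1 */
    printf("w0=%.2f w1=%.2f w2=%.2f (sum=%.2f)\n",
           w0 / area, w1 / area, w2 / area, (w0 + w1 + w2) / area);
    return 0;
}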

The Shader functions

The software rasterizer provides two shaders in the form of two callback functions (an illustrative sketch of a possible fragment shader callback is shown after the list):

  • The vertex shader is called from the csrRasterGetPolygon() function, during the rasterization stage
  • The fragment shader is called from the csrRasterDrawPolygon() function, during the drawing stage
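As an illustration, a fragment shader callback could look like the sketch below. The parameter types are inferred from the call site shown in the drawing code above; CSR_Polygon3 and CSR_Color in particular are assumptions, so check CSR_SoftwareRaster.h for the exact callback typedef. This example simply attenuates the interpolated color with the pixel depth.

#include "CSR_SoftwareRaster.h" /* engine types; header name taken from this page */

/* illustrative fragment shader callback; the parameter types are inferred from
   the call site above and may not match the exact typedef declared in
   CSR_SoftwareRaster.h */
void OnApplyFragmentShader(const CSR_Matrix4*  pMatrix,  /* the model-view matrix */
                           const CSR_Polygon3* pPolygon, /* the source polygon (assumed type) */
                           const CSR_Vector2*  pST,      /* the interpolated texture coordinate */
                           const CSR_Vector3*  pSampler, /* the barycentric weights (w0, w1, w2) */
                                 float         z,        /* the pixel depth */
                                 CSR_Color*    pColor)   /* in/out: the pixel color (assumed type) */
{
    /* simple depth-based attenuation, just to show where per-pixel logic goes */
    const float attenuation = 1.0f / (1.0f + (0.05f * z));

    pColor->m_R *= attenuation;
    pColor->m_G *= attenuation;
    pColor->m_B *= attenuation;
}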
