Lightmaps from Unity

Our game group’s main graphics programmer, Morris Gustavsson, and I collaborated on bringing baked lighting from Unity into our engine. Our engine uses tiled forward rendering, which may make some steps slightly different from a deferred renderer. It was no easy feat, but it was very much possible, and it was definitely worth the effort. I was responsible for everything on the Unity side: handling the baking, getting hold of all the required data, and converting the textures to a proper format. I also did a lot of trial and error in the shader department to figure out how we could use dynamic lights with static shadows, which I eventually did. We also experimented with light probes, where I got to learn about spherical harmonics and export those from Unity as well. Unfortunately, Unity doesn’t expose the tetrahedral structure that is used to interpolate between the light probes, so the task of creating that structure fell to Morris, after I had tried and failed to generate it manually in Unity.
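
For reference, Unity does expose the baked probe data itself through LightmapSettings.lightProbes. Here is a minimal sketch of reading the probe positions and spherical harmonics coefficients for export (Export stands for the same serialization helper used in the export code later in this post):

// Each probe is a third-order (L2) spherical harmonic: 9 coefficients per
// color channel, 27 floats in total. SphericalHarmonicsL2 lives in
// UnityEngine.Rendering.
LightProbes probes = LightmapSettings.lightProbes;
Vector3[] positions = probes.positions;
SphericalHarmonicsL2[] coefficients = probes.bakedProbes;

for (int i = 0; i < positions.Length; i++)
{
    Export(positions[i]);
    for (int channel = 0; channel < 3; channel++) // r, g, b
        for (int coeff = 0; coeff < 9; coeff++)   // 9 SH coefficients per channel
            Export(coefficients[i][channel, coeff]);
}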

The first task was to make sure all meshes have a UV set where no UVs share the same texture space. This can easily be done using an automatic mapping in Maya during export with my Exporter tool. Unity automatically uses this UV set to bake to, so no additional setup is required. Next, we need to understand how the lightmaps work and which textures we want to export from Unity.

Shadowmask, lightmap and directional map, respectively.

These textures are densely packed, and each object has its own tiny space on them. Each object’s position and size on the texture are stored in its MeshRenderer in the lightmapScaleOffset variable, where X and Y are the scale, and Z and W are the position. Exporting these values to the engine allows us to sample the textures.
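
In the shader, applying these values is a single multiply-add on the lightmap UV set. A minimal sketch (the function name here is ours, illustrative only):

// Transform the mesh's lightmap UVs into this object's region of the atlas.
// scaleOffset is the exported lightmapScaleOffset value.
float2 GetLightmapUV(float2 lightmapUV, float4 scaleOffset)
{
    return lightmapUV * scaleOffset.xy + scaleOffset.zw;
}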

The left texture is a shadowmask, and it stores the shadows cast by each light in one of its channels. In practice, this means that an object can be affected by at most four different baked lights. The reason this texture exists is that it can be used together with stationary (or mixed) light sources in the engine, which means the light can change color or intensity, as long as it doesn’t move. This was the most important texture for us to get into our game, and due to memory limits, it was the only texture we could actually use in the final game. Sampling this texture correctly isn’t as easy as it seems, however, as I will explain in detail a bit later.

If there are more lights, they get baked into the next texture, the one in the middle, which is the main lightmap. This one mainly stores the indirect lighting: the light that bounces off surfaces. As mentioned, fully static lighting can also get baked into it. The right texture is a directional map, and it stores the dominant direction of the incoming baked light per texel. It is used to allow normal maps to work together with the baked lighting.
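
Unity’s own built-in shaders decode the directional map with a half-Lambert term, roughly like this (a sketch based on DecodeDirectionalLightmap in UnityCG.cginc; the w component is a rebalancing factor so that the overall intensity is preserved on average):

// Modulate the baked lightmap color by the per-pixel normal, using the
// dominant light direction stored in the directional map.
float3 DecodeDirectionalLightmap(float3 lightmapColor, float4 directional, float3 normalWS)
{
    // directional.xyz is the dominant direction, packed into [0, 1].
    float halfLambert = dot(normalWS, directional.xyz - 0.5) + 0.5;
    return lightmapColor * halfLambert / max(1e-4, directional.w);
}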


Bake settings

To get the necessary data into our engine, I had to do a lot of testing of different settings and setups. The light settings I settled on were to bake all lights in the Mixed mode, with soft shadows. This applies to all light types: directional, spot and point lights. In the Lighting tab, the important settings to note are that the lighting mode should be set to Shadowmask and the directional mode should be set to Directional. The rest are quality settings that I simply experimented a lot with. The final textures we used were baked at a very low resolution with low quality settings, and it was hardly noticeable.
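
These settings can also be applied from an editor script instead of by hand, which helps against the human error I mention later. A rough sketch using the LightingSettings API from Unity 2020+ (it assumes a Lighting Settings asset is already assigned to the scene; the menu entry is hypothetical):

using UnityEditor;
using UnityEngine;

public static class BakeSetup
{
    [MenuItem("Tools/Apply Bake Settings")] // hypothetical menu entry
    static void Apply()
    {
        // Bake every light as Mixed with soft shadows.
        foreach (Light light in Object.FindObjectsOfType<Light>())
        {
            light.lightmapBakeType = LightmapBakeType.Mixed;
            light.shadows = LightShadows.Soft;
        }

        // Shadowmask lighting mode, Directional directional mode.
        LightingSettings settings = Lightmapping.lightingSettings;
        settings.mixedBakeMode = MixedLightingMode.Shadowmask;
        settings.directionalityMode = LightmapsMode.CombinedDirectional;
    }
}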

Exporting the data

Now to the fun part! The required data is spread across three different types of assets: the textures, which need to be converted to DDS and exported to the engine; the scale and offset, along with which lightmap each object uses; and finally a light index, exported per light. This light index stores which channel of the shadowmask texture the light’s shadows are stored in. Before I found that out, I was tearing my hair out trying to sample the shadowmask correctly, to no avail; no matter what I did, I either got a bright spot where lights overlapped or some other artifact. Here is some code showing what data we export from Unity.

foreach (GameObject go in allBakedMeshes)
{
    MeshRenderer mr = go.GetComponent<MeshRenderer>();
    Export(mr.lightmapIndex);       // Which lightmap texture the object uses
    Export(mr.lightmapScaleOffset); // Scale (xy) and offset (zw) of the lightmap UVs
}

foreach (Light light in allLights)
{
    Export(light.bakingOutput.occlusionMaskChannel); // The shadowmask channel to sample for this light
}

I have used Microsoft’s Texconv for all my texture conversion needs, but none of the pre-built binaries on their GitHub page support EXR; a shame, since the lightmap texture is exported in that format. The utility does support it, though; you just have to build it from source yourself and jump through some extra hoops. After that, the setup for converting the textures is as simple as gathering the lightmaps, running them through Texconv, and moving the resulting DDS files to their final destination.
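
That conversion step can be scripted as part of the same pipeline. A minimal sketch (the paths and the BC6H output format are my assumptions here; -f picks the output format, -o the output directory, and -y overwrites existing files):

using System.Diagnostics;
using System.IO;

// Convert every baked EXR lightmap to DDS. Requires a texconv build with
// OpenEXR support somewhere on the path.
static void ConvertLightmaps(string bakedFolder, string outputFolder)
{
    foreach (string exr in Directory.GetFiles(bakedFolder, "*.exr"))
    {
        Process texconv = Process.Start(
            "texconv.exe",
            $"-f BC6H_UF16 -y -o \"{outputFolder}\" \"{exr}\"");
        texconv.WaitForExit();
    }
}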

The shaders

With all the data available in the engine during the lighting pass, we could move on and actually implement the baked lighting. Below is the important line of code that retrieves the correct shadowmap value for each light. It takes the dot product between the shadowmask sample for that fragment and the component mask, a float4 with a 1 in the channel that the light’s shadow is stored in, so that only the shadows from the matching light are retrieved. But then we ran into a problem with lights that weren’t baked; their shadow value was always 0. Morris showed me the solution and added the second term: an unbaked light has an all-zero component mask, so the comparison yields 1 and the light is left fully unshadowed.

// For each light: isolate this light's channel of the shadowmask sample.
// The second term evaluates to 1 for lights with an all-zero component mask
// (i.e. lights that were not baked), leaving them fully unshadowed.
float shadowmap = dot(material.myShadowMaps, light.myComponentMask) + (length(light.myComponentMask) == 0.0);

Masking the shadowmap texture using a component mask that is exported from Unity.


Final Thoughts

This was a really enjoyable project, since it results in much higher quality lighting. Unfortunately, it was also very time consuming, both because of Unity and because there was a lot of trial and error to get the shader to work correctly. Unity caused a lot of problems, mainly in the baking process; sometimes objects weren’t static when they should have been, sometimes lights had the wrong settings, and so on. It is very prone to human error, which is a shame. Then we had the shader side, which doesn’t seem that complicated, but when you factor in dynamic versus static objects and the fact that the lighting on every object needs to fit together, it immediately gets more difficult. We experimented with light probes and got a basic implementation working, but chose not to use it in the final product.