In two dimensions curl noise is fairly easy to understand and implement.
For a thorough description of 2D curl noise see Keith Peters’ article “Curl noise, demystified”.
Basically one starts with a scalar potential field such as multiple octaves of Worley noise.
One then extracts the 2D gradient vectors and rotates them by 90°.
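As a minimal sketch (Python rather than GLSL, and the function name is illustrative), rotating the 2D gradient by 90° amounts to swapping the components and negating one of them:

```python
def curl2d(gradient):
    # rotate the 2D potential gradient by 90 degrees: (gx, gy) -> (gy, -gx)
    gx, gy = gradient
    return (gy, -gx)
```

The resulting vector field is divergence-free, which is what makes curl noise suitable for fluid-like motion.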
To generate curl vectors for a spherical surface one can use 3D Worley noise and sample the gradients on the surface of the sphere.
The gradient vectors then need to be projected onto the sphere.
This can be achieved by computing the vector projection of the gradient onto the local normal vector of the sphere.
Subtracting this projection from the gradient vector yields its tangential component.
The resulting vector p needs to be rotated around the normal n by 90°.
This can be achieved by transforming the vector p into a tangent-bitangent-normal (TBN) basis, rotating it by 90° around N, and then transforming it back.
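The projection and rotation can be sketched in Python (NumPy stands in for GLSL here; note that for a tangential vector p and a unit normal n, rotating p by 90° around n reduces to a single cross product, which is equivalent to the TBN round trip):

```python
import numpy as np

def tangential_curl(gradient, normal):
    # tangential component of the gradient: subtract the projection onto the normal
    p = gradient - np.dot(gradient, normal) * normal
    # rotating p by 90 degrees around the unit normal n is the cross product n x p
    return np.cross(normal, p)
```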
The GLSL functions for the rotation (without OpenGL tests) are shown below:
In OpenGL one can create a cubemap where each pixel on each surface contains a 3D warp vector.
Using a fragment shader the cubemap is initialised to be an identity transform for unit vectors.
A second fragment shader is used to initialise a cubemap with the curl vectors which are tangential to the sphere.
A third fragment shader is called multiple times to renormalise and increment the identity transform to become a warp field.
A final fragment shader uses the cubemap warp field to perform lookups in a 3D Worley noise field to generate a cubemap of the global cloud cover.
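One possible reading of the iterative warp pass, sketched in Python (this update rule is an assumption based on the description above, not code from the article; `curl_at` is a hypothetical sampler for the tangential curl field):

```python
import numpy as np

def warp_step(warp, curl_at, dt):
    # move the warp vector a small step along the curl field and renormalise
    # it so that it stays on the surface of the unit sphere
    v = warp + dt * curl_at(warp)
    return v / np.linalg.norm(v)
```

Iterating this step over every texel of the cubemap turns the initial identity transform into a warp field.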
If one uses octaves of Worley noise one obtains vortices rotating in one direction.
To obtain prevailing winds and vortices whose direction of rotation depends on the latitude, one can mix positive and negative Worley noise using the function (1 + sin(2.5 * latitude)) / 2.
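As a sketch (Python; taking the latitude in radians is an assumption), the mixing weight oscillates between 0 and 1 across the latitudes:

```python
import math

def mix_weight(latitude):
    # weight for blending positive and negative Worley noise;
    # oscillates between 0 and 1 as the latitude varies
    return (1.0 + math.sin(2.5 * latitude)) / 2.0
```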
Below is a result obtained using the method described in this article.
Another detail I forgot to mention is that the fragment shaders and the cubemap texture lookups use modified vectors to avoid performing lookups in the texture clamping regions, which would lead to seams in the cloud cover.
I.e. when converting fragment coordinates, one increases the range of the index by half a pixel on both ends:
Furthermore when performing lookups, two coordinates of the lookup vector are scaled down by half a pixel:
The following picture illustrates the two related conversions.
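The two conversions might look as follows in Python (the exact formulas are an assumption based on the description above; `n` is the size of a cubemap face in pixels):

```python
def index_range(frag, n):
    # expand the index range by half a pixel on both ends:
    # fragment coordinates [0, n - 1] map to indices [-0.5, n - 0.5]
    return frag * n / (n - 1) - 0.5

def shrink_lookup(coordinate, n):
    # scale a lookup coordinate down by half a pixel so that the
    # lookup stays out of the texture clamping regions
    return coordinate * (n - 1) / n
```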
Initially a binary pattern BP is created where N pixels (about 10 percent) are set to 1 and the rest to 0.
The binary pattern then is convolved with the following filter function to generate the density array DA:
You can use a sigma value of 1.5.
The convolution is wrapped around to facilitate a tileable result:
Maxima of the density array are called clusters and minima are called voids.
The 1 value in BP with the highest density value DA (tightest cluster) is set to 0 and DA is updated accordingly.
Now the 0 value in BP with the lowest density value DA (largest void) is set to 1 (and DA is updated).
This is repeated until dissolving the tightest cluster creates the largest void.
This is done to spread the 1 values evenly.
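The seed-pattern generation can be sketched in Python with NumPy (illustrative code, not from the article; the wrapped convolution is implemented via FFT, and the density is recomputed each step instead of being updated incrementally):

```python
import numpy as np

def wrapped_density(bp, sigma=1.5):
    # density array DA: binary pattern convolved with a Gaussian filter,
    # wrapped around (circular convolution via FFT) for a tileable result
    m = bp.shape[0]
    d = np.minimum(np.arange(m), m - np.arange(m))
    f = np.exp(-d.astype(float) ** 2 / (2 * sigma ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(bp) * np.fft.fft2(np.outer(f, f))))

def spread_ones(bp):
    # repeatedly move the 1 of the tightest cluster into the largest void
    bp = bp.copy()
    while True:
        da = wrapped_density(bp)
        cluster = np.unravel_index(np.argmax(np.where(bp == 1, da, -np.inf)), bp.shape)
        bp[cluster] = 0
        da = wrapped_density(bp)
        void = np.unravel_index(np.argmin(np.where(bp == 0, da, np.inf)), bp.shape)
        bp[void] = 1
        if void == cluster:  # dissolving the tightest cluster created the largest void
            return bp
```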
In phase 1 of the dithering algorithm the 1 values of a copy of the seed pattern are removed one by one starting where the density DA is the highest.
A copy of the density array DA is updated accordingly.
The corresponding positions in the resulting dither array are set to N-1, N-2, …, 0.
In phase 2 starting with the seed pattern a mask is filled with 1 values where the density DA is the lowest.
The density array DA is updated while filling in 1 values.
Phase 2 stops when half of the values in the mask are 1.
The corresponding positions in the dither array are set to N, N+1, …, (M * M) / 2 - 1.
In phase 3 the density array DA is recomputed using the Boolean negation of the mask from the previous phase (0 becomes 1 and 1 becomes 0).
Now the mask is filled with 1 values where the density DA is the highest (clusters of 0s) always updating DA.
Phase 3 stops when all the values in the mask are 1.
The corresponding positions in the dither array are set to (M * M) / 2, …, M * M - 1.
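The three phases can be sketched end to end in Python (again illustrative NumPy code; M is the side length of the pattern, the wrapped Gaussian density is computed via FFT, and for simplicity it is recomputed at each step rather than updated incrementally):

```python
import numpy as np

def wrapped_density(bp, sigma=1.5):
    # density DA: Gaussian filter, wrapped around (circular convolution via FFT)
    m = bp.shape[0]
    d = np.minimum(np.arange(m), m - np.arange(m))
    f = np.exp(-d.astype(float) ** 2 / (2 * sigma ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(bp) * np.fft.fft2(np.outer(f, f))))

def dither_array(seed):
    m = seed.shape[0]
    n = int(seed.sum())
    dither = np.zeros(seed.shape, dtype=int)
    # phase 1: remove the 1s of a copy of the seed, tightest cluster first,
    # assigning ranks N-1, N-2, ..., 0
    bp = seed.copy()
    for rank in range(n - 1, -1, -1):
        da = wrapped_density(bp)
        pos = np.unravel_index(np.argmax(np.where(bp == 1, da, -np.inf)), bp.shape)
        bp[pos] = 0
        dither[pos] = rank
    # phase 2: starting from the seed, fill the largest voids with 1s until
    # half of the mask is 1, assigning ranks N, ..., M*M/2 - 1
    mask = seed.copy()
    for rank in range(n, m * m // 2):
        da = wrapped_density(mask)
        pos = np.unravel_index(np.argmin(np.where(mask == 0, da, np.inf)), mask.shape)
        mask[pos] = 1
        dither[pos] = rank
    # phase 3: density of the negated mask; fill the tightest clusters of 0s,
    # assigning ranks M*M/2, ..., M*M - 1
    for rank in range(m * m // 2, m * m):
        da = wrapped_density(1 - mask)
        pos = np.unravel_index(np.argmax(np.where(mask == 0, da, -np.inf)), mask.shape)
        mask[pos] = 1
        dither[pos] = rank
    return dither
```

The result is a permutation of 0, …, M * M - 1, i.e. every threshold occurs exactly once.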
The result can be normalised to the range 0–255 in order to inspect it.
The blue noise dither array looks as follows:
Here is an example with constant offsets when sampling 3D clouds without dithering.
Here is the same scene using dithering to set the sampling offsets.
One can apply a blur filter to reduce the noise.
Note how the blurred image shows more detail than the image with constant offsets even though the sampling rate is the same.
Let me know your comments and suggestions in the comment section below.
Test driven development (TDD) undoubtedly helps a great deal in preventing development from grinding to a halt once a project’s size surpasses a few lines of code.
The reason for first writing a failing test is to ensure that the test is actually failing and testing the next code change.
A minimal change to the code is performed to pass the new test while also still passing all previously written tests.
If necessary the code is refactored/simplified. The reason to do this after passing the test is so that one does not have to worry about passing the test and writing clean code at the same time.
Testing rendering output
One can test OpenGL programs by rendering test images and comparing them with a saved image (a test fixture).
In order to automate this, one can perform offscreen rendering and do a pixel-wise image comparison with the saved image.
Using the Clojure programming language and the Lightweight Java Game Library (LWJGL) one can perform offscreen rendering with a Pbuffer object using the following macro (of course this approach is not limited to Clojure and LWJGL):
The image is recorded initially by using the checker record-image instead of is-image and verifying the result manually.
One can use this approach (and maybe only this approach) to test code for handling vertex array objects, textures, and for loading shaders.
Testing shader code
The above approach has the drawback that it can only test complete rendering programs.
Also the output is limited to 24-bit RGB images.
The tests are therefore more like integration tests and they are not suitable for unit testing shader functions.
However it is possible to use a Pbuffer just as a rendering context and perform rendering to a floating-point texture.
One can use a texture with a single pixel as a framebuffer.
A single pixel of a uniformly colored quad is drawn.
The floating point channels of the texture’s RGB pixel then can be compared with the expected value.
Furthermore it is possible to compose the fragment shader by linking the shader function under test with a main function implemented just for probing the shader.
The shader-test function defines a test function using the probing shader and the shader under test.
The new test function can then be used with the Midje tabular environment.
In the following example the GLSL function phase is tested.
Note that parameters in the probing shaders are set using the weavejester/comb templating library.
Note that mget is used to extract the red channel of the pixel.
Sometimes it might be more desirable to check all channels of the RGB pixel.
Here is the actual implementation of the tested function:
The empty function (fn [program]) is specified as a setup function.
In general the setup function is used to initialise uniforms used in the shader under test.
Here is an example of tests using uniform values:
Here a setup function initialising 5 uniform values is specified.
Mocking shader functions
If each shader function is implemented as a separate string (loaded from a separate file), one can easily link with mock functions when testing shaders.
Here is an example of a probing shader which also contains mocks to allow the shader to be unit tested in isolation:
Let me know if you have any comments or suggestions.
Rhawk187 pointed out that exact image comparisons are also problematic because updates to graphics drivers can cause subtle changes. This can be addressed by allowing a small average difference between the expected and actual image.
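Such a tolerant comparison could look as follows (a Python/NumPy sketch rather than the Clojure code used in the article; the tolerance value is an arbitrary assumption):

```python
import numpy as np

def images_match(expected, actual, tolerance=2.0):
    # tolerate small driver-dependent differences by comparing the
    # mean absolute per-channel difference against a threshold
    difference = np.abs(expected.astype(float) - actual.astype(float))
    return bool(difference.mean() <= tolerance)
```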
The code of make-vertex-array-object and render-quads is added here for reference.
It is recommended when choosing the RAM to use a multi-channel configuration for better performance.
Also make sure to order enough RAM because some of it is used by the integrated graphics card.
Finally doing parallelized builds on many cores requires more memory.
Here is an in-depth review in German (and here is a shorter review).
There is also a YouTube video review in English below:
Installing Debian 11
On Reddit I got helpful information from Ferdinand from Tuxedo Computers on how to install Debian 11 on the Aura 15 Gen1.
Also see Debian Wiki for some information.
Debian is not supported by Tuxedo Computers but it works nonetheless on a Tuxedo Aura 15 Gen1.
I performed the following steps (no warranty) to get Debian 11 running: