Basically, for each rigid object the following steps are executed:

1. The 3D cursor is set to a point at the center of the joint axis.
2. The origin of the rigid object is set to the 3D cursor.
3. The parent of the rigid object is set to the other object it is connected to via the joint (while maintaining the transformation).
4. All forbidden translation and rotation axes are locked.
The resulting object can then be animated by defining keyframes for the joint angles, as shown in part 11 of the Blender Guru donut tutorial.
Rendering the video using the Cycles engine on my 8-core CPU took 3.5 hours. If you have a GPU supported by Blender, it takes much less time.
I finally did the Blender Guru donut tutorial for beginners.
Blender is a powerful 3D editor for creating and animating models.
Here is a series of images made while progressing through the tutorial.
Initially one creates a torus and distorts it.
Furthermore, the upper half of the torus is copied, solidified, and sculpted to create the icing of the donut.
One can set the material properties of the donut to make the icing more reflective and add subsurface scattering.
Here is another example using white icing.
One can use texture painting to add a bright ring at the middle height of the torus.
Using geometry nodes one can distribute cylindrical sprinkles on the surface and rotate them randomly around the surface normal.
By doing weight painting one can ensure that the sprinkles are less frequent on the sides of the donut.
Furthermore, one can select from a set of differently shaped sprinkles and color them randomly using a color map with a few distinct colors.
Using key, fill, and rim lighting, one can illuminate the donut.
Compositing is used to add a background with a blurred white circle, creating a color gradient.
Also, one can add lens distortion and color dispersion to make the image look more like a photo.
See below for the resulting video:
I really enjoyed the tutorial and I hope I will remember the right tool at the right moment when creating models in the future.
People keep finishing the tutorial and posting their results on r/BlenderDoughnuts.
Note that LWJGL version 3 consists of several packages.
Furthermore, most packages have native bindings for the different platforms. See the LWJGL customization page for more information.
There is a Leiningen LWJGL project example by Roger Allen which shows how to install all LWJGL libraries using Leiningen.
However, here we are going to use the more modern approach with a deps.edn file, and we are going to use the more efficient OpenGL indirect rendering. The deps.edn file uses symbols of the form <groupId>/<artifactId>$<classifier> to specify libraries.
For LWJGL version 3 the classifier is used to install native extensions.
A deps.edn file for installing LWJGL with OpenGL and GLFW under GNU/Linux could look as follows (the version number is only an example):
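```clojure
{:deps {org.lwjgl/lwjgl                      {:mvn/version "3.3.2"}
        org.lwjgl/lwjgl$natives-linux        {:mvn/version "3.3.2"}
        org.lwjgl/lwjgl-opengl               {:mvn/version "3.3.2"}
        org.lwjgl/lwjgl-opengl$natives-linux {:mvn/version "3.3.2"}
        org.lwjgl/lwjgl-glfw                 {:mvn/version "3.3.2"}
        org.lwjgl/lwjgl-glfw$natives-linux   {:mvn/version "3.3.2"}}}
```

For other platforms one would swap the natives-linux classifier for e.g. natives-windows or natives-macos.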
Displaying a window with GLFW is a bit different than with LWJGL 2. The updated code uses lwjgl-glfw and lwjgl-opengl to display a triangle with a small texture.
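The full listing is in the two files mentioned below; a minimal sketch of just the GLFW window and render loop (shaders, vertex data, and the texture are omitted here) could look like this:

```clojure
(import '[org.lwjgl.glfw GLFW]
        '[org.lwjgl.opengl GL GL11])

(GLFW/glfwInit)

(def window (GLFW/glfwCreateWindow 640 480 "Triangle" 0 0))

(GLFW/glfwMakeContextCurrent window)
(GLFW/glfwShowWindow window)
(GL/createCapabilities)

(while (not (GLFW/glfwWindowShouldClose window))
  (GL11/glClearColor 0.0 0.0 0.0 1.0)
  (GL11/glClear (bit-or GL11/GL_COLOR_BUFFER_BIT GL11/GL_DEPTH_BUFFER_BIT))
  ;; compile shaders, set up vertex buffers and the texture, and draw here
  (GLFW/glfwSwapBuffers window)
  (GLFW/glfwPollEvents))

(GLFW/glfwDestroyWindow window)
(GLFW/glfwTerminate)
```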
If you download the two files, you can run the code as follows:
`clj -M raw-opengl-lwjgl3.clj`
It should show a triangle like this:
Any feedback, comments, and suggestions are welcome.
This is a quick performance comparison of the Clojure core.matrix library and the Efficient Java Matrix Library.
Because core.matrix uses the VectorZ Java library as a backend, direct calls to VectorZ were also included in the comparison.
Finally, I added fastmath to the comparison after its developer pointed it out to me.
The criterium 0.4.6 benchmark library was used to measure the performance of common matrix expressions.
The Clojure version was 1.11.1 and OpenJDK runtime version was 17.0.6.
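To give an idea of the methodology, a single measurement with criterium could look like this (an illustrative sketch rather than the actual benchmark script; the matrix values are arbitrary):

```clojure
(require '[criterium.core :refer [quick-bench]]
         '[clojure.core.matrix :as m])

;; use the VectorZ backend of core.matrix (vectorz-clj must be on the classpath)
(m/set-current-implementation :vectorz)

(def mat (m/matrix [[4 1 0 0] [1 4 1 0] [0 1 4 1] [0 0 1 4]]))

;; benchmark the 4x4 matrix inverse
(quick-bench (m/inverse mat))
```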
Here are the results running it on an AMD Ryzen 7 4700U with a turbo speed of 4.1 GHz:
| op | core.matrix 0.63.0 | ejml-all 0.43 | vectorz-clj 0.48.0 | fastmath 2.2.1 |
|---|---|---|---|---|
| make 4x4 matrix | 675 ns | 135 ns | 50.5 ns | 13.1 ns |
| make 4D vector | 299 ns | 47.6 ns | 9.27 ns | 3.67 ns |
| add 4D vectors | 13.5 ns | 18.2 ns | 9.02 ns | 4.29 ns |
| inverse matrix | 439 ns | 81.4 ns | 440 ns | 43.6 ns |
| elementwise matrix multiplication | 64.9 ns | 29.0 ns | 29.1 ns | 13.7 ns |
| matrix multiplication | 102 ns | 74.7 ns | 100 ns | 22.4 ns |
| matrix-vector multiplication | 20.9 ns | 31.2 ns | 19.1 ns | 6.46 ns |
| vector dot product | 6.56 ns | 6.90 ns | 4.46 ns | 6.36 ns |
| vector norm | 10.1 ns | 11.4 ns | no support? | 3.74 ns |
| matrix determinant | 170 ns | 7.35 ns | 166 ns | 7.67 ns |
| matrix element access | 4.14 ns | 3.35 ns | 3.26 ns | 3.53 ns¹ |
| get raw data array | 12.0 ns | 3.00 ns | 11.9 ns | 13.2 ns¹ |

¹ requires fastmath 2.2.2-SNAPSHOT or later
See matperf.clj for the source code of the benchmark script.
Comparing EJML with a mix of core.matrix and direct calls to vectorz:
* EJML has support for both single and double precision floating point numbers
* it uses single-column matrices to represent vectors, leading to slower matrix-vector multiplication
* it has a fast 4x4 matrix inverse
* it does not come with a Clojure wrapper
* it offers fast access to raw data
* it does not support multi-dimensional arrays
Comparing EJML with fastmath:
* EJML has support for matrices larger than 4x4
* EJML gives you access to the matrix as a flat floating point array (fastmath will add support for this in the future)
* EJML is mostly slower
The implementations of the libraries are all quite impressive with custom optimisations for small matrices and vectors.
Note that I didn’t include Neanderthal in the comparison because it is more suitable for large matrices.
I hope you find this comparison useful.
Update:
The large performance difference for matrix inversion is probably because EJML has custom 4x4 matrix classes while VectorZ stops at 3x3.
Here is a performance comparison of matrix inverse for 3x3, 4x4, and 5x5 matrices:
Volumetric clouds use 3D density functions to represent clouds in a realistic way.
Ray marching is used to generate photorealistic renderings.
With modern graphics cards it is possible to do this in realtime.
Sebastian Lague’s video on cloud rendering shows how to generate Worley noise which can be used to generate realistic looking clouds.
Worley noise is basically a function which, for each location, returns the distance to the nearest point from a random set of points.
Usually the space is divided into cells with each cell containing one random point.
This improves the performance of determining the distance to the nearest point.
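A minimal 2D sketch of this idea (one random feature point per unit grid cell; the names cells, points, and worley are chosen for this illustration):

```clojure
;; One random feature point per unit cell of an 8x8 grid.
(def cells 8)

(def points
  (into {} (for [i (range cells) j (range cells)]
             [[i j] [(+ i (rand)) (+ j (rand))]])))

;; Worley noise at (x, y): distance to the nearest feature point, searching
;; only the 3x3 block of cells around the sample position.
(defn worley [x y]
  (let [ci (int x)
        cj (int y)]
    (apply min
           (for [di [-1 0 1] dj [-1 0 1]
                 :let [[px py] (points [(+ ci di) (+ cj dj)] [##Inf ##Inf])]]
             (Math/hypot (- px x) (- py y))))))
```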
The following image shows a slice through inverted 3D Worley noise.
Ray marching works by starting a view ray for each render pixel and sampling the cloud volume, which is a cube in this example.
This ray tracing program can be implemented in OpenGL by rendering a dummy background quad.
The transmittance for a small segment of the cloud is basically the exponential of the negative density times the step size:
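$$T = e^{-\rho\,\Delta s}$$

where $\rho$ is the sampled density and $\Delta s$ is the step size; the transmittance along the entire ray is the product of these per-segment factors. As a sketch of this accumulation (with `density-at` standing in for a hypothetical function sampling the noise volume):

```clojure
;; Multiply per-segment transmittance factors exp(-density * step-size)
;; along a ray starting at origin with unit direction.
(defn ray-transmittance [origin direction step-size steps density-at]
  (reduce (fn [t i]
            (let [p (mapv + origin (mapv #(* % i step-size) direction))]
              (* t (Math/exp (- (* (density-at p) step-size))))))
          1.0
          (range steps)))
```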
The resulting sampled cube of Worley noise looks like this:
The amount of scattered light can be changed by using a mix of isotropic scattering and a phase function for approximating Mie scattering.
I.e. the amount of scattered light is computed as follows:
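One common formulation (a sketch; the exact weights are a design choice) mixes the isotropic phase function $\frac{1}{4\pi}$ with the Henyey-Greenstein phase function, which approximates Mie scattering, using a mixing factor $w$ and anisotropy $g$:

$$p(\theta) = (1 - w)\,\frac{1}{4\pi} + w\,\frac{1 - g^2}{4\pi\left(1 + g^2 - 2g\cos\theta\right)^{3/2}}$$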
The resulting rendering of the Worley noise now shows a bright halo around the sun:
The rendering does not yet include self-shadowing.
Shadows are usually computed by sampling light rays towards the light source for each sample of the view ray.
However, a more efficient way is to use deep opacity maps (see also Pixar's work on deep shadow maps).
In a similar fashion to shadow maps, a depth map of the start of the cloud is computed as seen from the light source.
While rendering the depth map, several samples of the opacity (or transmittance) behind the depth map are taken with a constant step size.
I.e. the opacity map consists of a depth (or offset) image and a 3D array of opacity (or transmittance) images.
As with shadow mapping, one can perform lookups in the opacity map to determine the amount of shading at each sample in the cloud.
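To make this concrete, here is a sketch of how one texel of such an opacity map could be computed (point-at, density-at, and the parameter names are illustrative, not the actual implementation):

```clojure
;; Evaluate a point on the light ray at distance d from the origin.
(defn point-at [origin direction d]
  (mapv + origin (mapv #(* % d) direction)))

;; March the light ray to find the depth where the cloud starts, then store
;; transmittance samples at a constant step size behind that depth.
(defn opacity-texel [origin direction density-at max-depth step-size n]
  (let [offset (or (first (filter #(pos? (density-at (point-at origin direction %)))
                                  (range 0.0 max-depth step-size)))
                   max-depth)
        layers (rest (reductions
                       (fn [t k]
                         (let [p (point-at origin direction (+ offset (* k step-size)))]
                           (* t (Math/exp (- (* (density-at p) step-size))))))
                       1.0
                       (range 1 (inc n))))]
    {:offset offset :transmittance (vec layers)}))
```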
To make the cloud look more realistic, one can add multiple octaves of Worley noise with decreasing amplitude.
This is also sometimes called fractal Brownian motion.
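A sketch of this summation, assuming a base noise function `noise` that takes a 3D vector:

```clojure
;; Sum octaves with doubling frequency and halving amplitude
;; (fractal Brownian motion).
(defn fbm [noise octaves p]
  (reduce + (for [o (range octaves)
                  :let [f (Math/pow 2.0 o)]]
              (* (/ 1.0 f) (noise (mapv #(* f %) p))))))
```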
To reduce sampling artifacts without loss of performance, one can use blue noise offsets for the sample positions when computing shadows as well as when creating the final rendering.
In a previous article I demonstrated how to generate global cloud cover using curl noise. One can add octaves of mixed Perlin and Worley noise to the global cloud cover and subtract a threshold.
Clamping the resulting value creates 2D cloud patterns on a spherical surface.
By restricting the clouds to be between a bottom and top height, one obtains prism-like objects as shown below:
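A sketch of the density function described above (cover-at, noise-at, and the parameters are hypothetical names for this illustration):

```clojure
;; Global cover plus noise octaves minus a threshold, clamped to be
;; non-negative and restricted to lie between a bottom and top height.
(defn cloud-density [p cover-at noise-at threshold bottom top]
  (let [height (p 1)] ; assume the y coordinate is "up" in this sketch
    (if (<= bottom height top)
      (max 0.0 (- (+ (cover-at p) (noise-at p)) threshold))
      0.0)))
```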
Note that at this point it is recommended to use cascaded deep opacity maps instead of a single opacity map.
Like cascaded shadow maps, cascaded deep opacity maps are a series of cuboids covering different splits of the view frustum.
Guerrilla Games uses a remapping function to introduce high-frequency noise on the surfaces of the clouds. The high-frequency noise value is remapped using a range defined by the low-frequency noise value.
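The remapping function itself is a linear mapping of a value from one range to another; a sketch (the example call is illustrative, not the exact parameters used by Guerrilla Games):

```clojure
;; Linearly map value from the range [low1, high1] to [low2, high2].
(defn remap [value low1 high1 low2 high2]
  (+ low2 (* (- value low1) (/ (- high2 low2) (- high1 low1)))))

;; e.g. remap the high-frequency noise using a lower bound derived from
;; the low-frequency noise value:
;; (remap high-freq (* 0.8 low-freq) 1.0 0.0 1.0)
```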