This is an informational post on how to simulate the physics of atmospheric flight of a VentureStar-style single-stage-to-orbit spacecraft.
My dad, Gerhard Wedekind, is an experienced aerodynamics engineer, and I asked him to help make the aerodynamics of the sfsim space flight simulator reasonably realistic.
This post is a write-up of the relevant formulas, together with approximate data he obtained from numerical simulation and estimates based on his aerodynamics experience.
The information provided in this article is for general informational purposes only and comes without any warranty, express or implied.
Coordinate systems
The following drawing shows the body coordinate system (xb, yb, zb) and the wind coordinate system (xw, yw, zw).
The wind system is rotated against the body system so that the speed vector (in a stationary atmosphere) points in positive xw.
Note that lift, drag, and side force are defined in the wind system and not in the body system.
A positive lift force points upwards (negative zw) in the wind system.
The drag force points backwards (negative xw) in the wind system.
A positive side force points starboard (positive yw) in the wind system.
Yaw, pitch, and roll moments, on the other hand, are specified in the body system.
A coordinate system transformation from body system to wind system can be performed using two angles:
α is the angle of attack
β is the sideslip angle
When transforming coordinates from the body system to the wind system, one first rotates by β (the sideslip angle) about the body z-axis (zb).
Then one rotates by α (the angle of attack) about the new y-axis.
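Written as a composition of elementary rotation matrices (only a sketch here, since the signs of the angles depend on the exact convention used), the transformation from body to wind coordinates is:

$$T_{wb} = R_y(\alpha) \, R_z(\beta)$$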
Dynamic pressure
The dynamic pressure q depends on air density ρ and speed V:
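$$q = \frac{1}{2} \rho V^2$$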
Zero-lift drag is computed using the zero-lift drag coefficient CD0 as well as dynamic pressure q and the reference area Sref:
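$$D_0 = C_{D0} \, q \, S_{ref}$$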
The zero-lift drag coefficient depends on the speed of the aircraft.
Induced drag is determined using the lift coefficient CL, the Oswald factor e, the aspect ratio Λ, as well as q and the reference area Sref:
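$$D_i = \frac{C_L^2}{\pi \, e \, \Lambda} \, q \, S_{ref}$$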
The Oswald factor e depends on the speed of the aircraft.
The lift coefficient depends on the angle of attack α.
The aspect ratio Λ depends on wing span b and wing area S:
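$$\Lambda = \frac{b^2}{S}$$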
The lift L is computed using the lift coefficient CL, dynamic pressure q, and the reference area Sref:
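$$L = C_L \, q \, S_{ref}$$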
The side force Y (and the corresponding coefficient) is usually not important, but we will look at it later.
Moments
The pitching moment M is computed using the pitching moment coefficient Cm, the dynamic pressure q, the reference area Sref, and the aerodynamic chord cbar:
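$$M = C_m \, q \, S_{ref} \, \bar{c}$$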
The pitching moment coefficient depends on the lift coefficient CL, the position of the neutral point XN, the center of gravity xref, and the aerodynamic chord cbar:
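A common form (neglecting the zero-lift pitching moment; the sign convention is an assumption here) is:

$$C_m = C_L \, \frac{x_{ref} - X_N}{\bar{c}}$$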
The yawing moment N is the product of the yawing moment coefficient Cn, the dynamic pressure q, the reference area Sref, and half the wing span b:
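$$N = C_n \, q \, S_{ref} \, \frac{b}{2}$$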
The yawing moment coefficient depends on the side slip angle β.
The rolling moment L (which, somewhat confusingly, uses the same symbol as lift) is the product of the rolling moment coefficient Cl, the dynamic pressure q, the reference area Sref, and half the wing span b:
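$$L = C_l \, q \, S_{ref} \, \frac{b}{2}$$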
The rolling moment coefficient depends on the angle of attack α and the side slip angle β.
Data Sheet
Here are the parameters for the flight model above:
Note that xref is defined in a coordinate system where x=0 is at the intersection of the inner leading edges (wing apex).
The following picture also shows the position of the aerodynamic chord with length cbar.
The center of gravity is at 25% of the aerodynamic chord.
Tables
Here is a data table with information for determining the remaining coefficients as a function of airspeed in Mach numbers (Ma).
The table shows for each speed:
a factor to determine the lift coefficient CL
the position XN of the neutral point relative to the aerodynamic chord (note that the center of gravity xref is at the 25% mark of the aerodynamic chord)
the Oswald factor e
a factor to determine the rolling moment coefficient Cl
a factor to determine the yawing moment coefficient Cn
the zero-lift drag coefficient CD0
For small values of α, the lift coefficient increases linearly with α (where α is specified in radians):
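Writing the tabulated lift factor as $C_{L\alpha}$ (the symbol is chosen here for convenience):

$$C_L = C_{L\alpha} \, \alpha$$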
For small values of α and β, the rolling moment coefficient increases linearly with the product of α and β (where α and β are specified in radians):
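With the tabulated rolling moment factor written as $C_{l\alpha\beta}$:

$$C_l = C_{l\alpha\beta} \, \alpha \, \beta$$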
For small values of β, the yawing moment coefficient increases linearly with β (where β is specified in radians):
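With the tabulated yawing moment factor written as $C_{n\beta}$:

$$C_n = C_{n\beta} \, \beta$$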
The following table shows for each speed:
the value for α at which the linear relation of CL and α breaks down
the maximum value of CL
the angle of attack where CL reaches its maximum
the drag coefficient for 90° angle of attack
Near α=90°, the lift and drag coefficients behave as follows:
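A common flat-plate approximation in this regime, based on the 90° drag coefficient $C_{D90}$ (an assumption here, not necessarily the relation behind the data above), is:

$$C_L = C_{D90} \, \sin^2\alpha \, \cos\alpha \qquad C_D = C_{D90} \, \sin^3\alpha$$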
At hypersonic speeds (Ma = 10.0), lift and induced drag coefficients behave as follows:
I.e. the coefficients stabilise at hypersonic speeds!
Control surfaces
The following table shows parameters to determine different moments generated by control surfaces:
The side force coefficient for a given rudder angle ζ is:
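The relation is presumably linear in the deflection, with the derivative $C_{Y\zeta}$ taken from the table above (the symbol names in this and the following relations are chosen here for convenience):

$$C_Y = C_{Y\zeta} \, \zeta$$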
The yawing moment coefficient for the rudder is:
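$$C_n = C_{n\zeta} \, \zeta$$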
The pitching moment coefficient for flaps δF (down is positive) is:
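$$C_m = C_{m\delta_F} \, \delta_F$$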
The rolling moment coefficient for ailerons with angle ξ (positive: port aileron up, starboard aileron down) is:
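$$C_l = C_{l\xi} \, \xi$$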
The corresponding yawing moment coefficient for the ailerons is:
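$$C_n = C_{n\xi} \, \xi$$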
Next steps
Using this information, the curves still need to be fitted over the full range of angles and speeds, with values guessed in some places.
Also the damping moment formulas and coefficients still need to be added.
The following presentation is a short introduction to the connascence software design metric.
Connascence is a useful tool to understand and reduce the degree of coupling in software systems and make software more modular.
Click the image above to watch the 10-minute presentation.
You can get the slides of the presentation here: connascence.pdf
I finally managed to get enough of a prototype together to make a page about sfsim (space flight simulator).
I was inspired by realistic space flight simulators like Orbiter 2016 and I decided to start a cross-platform project based on LWJGL/OpenGL.
The plan is to support vertical launch, space station docking, moon landing, reentry, and landing on a runway in the future.
The page features a new trailer demonstrating the aerodynamics prototype:
Also my application as a Steam developer was accepted and the Steam app page for sfsim was approved.
You can find the page here.
If you are interested in a space flight simulator with realistic physics, just add sfsim to your Steam wishlist to get notified when it is released:
Let me know in the comments if there is anything you would like to see in a realistic space flight simulator.
Another approach is to train the model in Python, export it to the ONNX format and then use the ONNX runtime directly to perform inference in Clojure.
There is a recent tutorial on using ONNX models from Clojure.
However it only deals with tabular data.
Training
The following example uses PyTorch to train a traditional CNN classifier on the well-known MNIST dataset (the dataset can be obtained here).
The implementation performs the following steps:
A class for reading MNIST images and labels is implemented.
A CNN model using two convolutional layers and two fully connected layers is implemented and dropout regularization is applied.
The training and test data is loaded as batches.
The cross entropy loss function and an Adam optimizer are instantiated. Note that learning rate and dropout are hyperparameters which need to be tuned.
The training loop performs prediction, loss computation, backpropagation, and optimization step.
The test loop accumulates and displays the prediction accuracy on the test set.
After 25 epochs, the model is exported to the ONNX format.
import numpy as np
import torch
from torch import nn
from torch import onnx
from torch.nn import functional as F
from torch.utils.data import DataLoader, Dataset


class MNISTData(Dataset):
    def __init__(self, images_file_name, labels_file_name):
        """Read MNIST images and labels from specified files"""
        super(MNISTData, self).__init__()
        # Read images (skip magic, length, height, and width integers)
        self.images = np.fromfile(images_file_name, dtype=np.uint8)[16:].reshape(-1, 28, 28)
        # Read labels (skip magic and length integer)
        self.labels = np.fromfile(labels_file_name, dtype=np.uint8)[8:]

    def __len__(self):
        """Return the number of images (or labels) in the dataset"""
        return len(self.labels)

    def __getitem__(self, idx):
        """Return the image and label at the specified index"""
        image = torch.from_numpy(self.images[idx]).to(torch.float) / 255.0
        label = torch.zeros(10)
        label[self.labels[idx]] = 1
        return image, label


class MNISTNet(nn.Module):
    def __init__(self):
        """Construct network with 2 convolutional layers and 2 fully connected layers"""
        super(MNISTNet, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
        self.conv2_drop = nn.Dropout2d(p=0.2)
        self.fc1 = nn.Linear(320, 50)
        self.fc2 = nn.Linear(50, 10)

    def forward(self, x):
        """Perform forward pass of network"""
        x = x.view(-1, 1, 28, 28)
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
        x = x.view(-1, 320)
        x = F.relu(self.fc1(x))
        x = F.dropout(x, p=0.2, training=self.training)
        x = self.fc2(x)
        return F.softmax(x, dim=1)


def main():
    train_data = MNISTData('data/train-images-idx3-ubyte', 'data/train-labels-idx1-ubyte')
    test_data = MNISTData('data/t10k-images-idx3-ubyte', 'data/t10k-labels-idx1-ubyte')
    train_loader = DataLoader(train_data, batch_size=64)
    test_loader = DataLoader(test_data, batch_size=64)
    model = MNISTNet()
    loss = nn.CrossEntropyLoss()
    # Adam optimizer
    optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
    for epoch in range(25):
        # Training loop: prediction, loss computation, backpropagation, and optimization step
        for x, y in train_loader:
            pred = model(x)
            l = loss(pred, y)
            optimizer.zero_grad()
            l.backward()
            optimizer.step()
        # Test loop: accumulate and display the prediction accuracy on the test set
        correct = 0
        total = 0
        for x, y in test_loader:
            pred = model(x).argmax(dim=1)
            correct += (pred == y.argmax(dim=1)).sum().item()
            total += len(y)
        print('Accuracy: {}'.format(correct / total))
    # Save model as ONNX
    torch.onnx.export(model, (torch.randn((1, 1, 28, 28), dtype=torch.float),), 'mnist.onnx',
                      input_names=['input'], output_names=['output'])


if __name__ == '__main__':
    main()
Inference
The model file mnist.onnx can now be used for inference in Clojure.
The deps.edn file specifies the ONNX runtime and the cljfx library:
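A minimal sketch of such a deps.edn (the version numbers are assumptions; use whatever versions are current):

{:deps {com.microsoft.onnxruntime/onnxruntime {:mvn/version "1.16.3"}
        cljfx/cljfx {:mvn/version "1.7.24"}}}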
The infer.clj file contains the code to run the inference on the model.
The code contains the following functions for inference:
read-digit - Read a 28*28 gray-scale byte block from the MNIST dataset
feature-scaling - Scale byte features to [0, 1] floating-point range. Note that Clojure byte arrays contain signed values which need to be converted to unsigned values!
argmax - Return the index of the maximum value of a one-dimensional probability vector.
inference - Convert a byte array to an ONNX tensor with batch size and number of channels being 1, run inference, and return the argmax of the probability vector.
Furthermore the digit->image function uses the idea shown in James Thompson’s Gist to convert a byte array to a JavaFX image in order to display it.
The remaining code displays a small JavaFX GUI showing random images from the MNIST test data and the inference result.
(ns infer.core
    (:require [clojure.java.io :as io]
              [cljfx.api :as fx])
    (:import [java.io ByteArrayOutputStream ByteArrayInputStream]
             [java.nio FloatBuffer]
             [javafx.application Platform]
             [ai.onnxruntime OrtEnvironment OrtSession OnnxTensor]))

(def environment (OrtEnvironment/getEnvironment))

(def mnist (-> environment (.createSession "mnist.onnx")))

(defn read-digit [n]
  "Read a 28*28 gray-scale byte block from the MNIST dataset."
  (with-open [in (io/input-stream "data/t10k-images-idx3-ubyte")]
    (.skip in (+ 16 (* n 28 28)))
    (.readNBytes in (* 28 28))))

(defn byte->ubyte [b]
  "Convert byte to unsigned byte"
  (if (>= b 0) b (+ b 256)))

(defn feature-scaling [digit]
  "Scale features to [0, 1] range"
  (float-array (map #(/ (byte->ubyte %) 255.0) digit)))

(defn argmax [arr]
  "Return the index of the maximum value in the array"
  (first
    (reduce
      (fn [[result maximum] [index value]] (if (> value maximum) [index value] [result maximum]))
      [0 (first arr)]
      (map vector (range) arr))))

(defn inference [digit]
  "Run inference on a digit image"
  (let [scaled        (feature-scaling digit)
        input-buffer  (FloatBuffer/wrap scaled)
        inputs        {"input" (OnnxTensor/createTensor environment input-buffer (long-array [1 1 28 28]))}
        outputs       (.run mnist inputs)
        output-tensor (.get (.get outputs "output"))
        output-buffer (.getFloatBuffer output-tensor)
        result        (float-array 10)]
    (.get output-buffer result)
    (argmax result)))

(defn digit->image [data]
  "Convert a 28*28 byte array to JavaFX image"
  (let [image  (java.awt.image.BufferedImage. 28 28 java.awt.image.BufferedImage/TYPE_BYTE_GRAY)
        raster (.getRaster image)
        out    (ByteArrayOutputStream.)]
    (.setDataElements raster 0 0 28 28 data)
    (javax.imageio.ImageIO/write image "png" out)
    (.flush out)
    (javafx.scene.image.Image. (ByteArrayInputStream. (.toByteArray out)))))

(def app-state (atom {:index (rand-int 10000)}))

(defn event-handler [& args]
  "Update application state with random index"
  (swap! app-state update :index (fn [_] (rand-int 10000))))

(defn display-image [{:keys [image]}]
  "Image display for cljfx GUI"
  {:fx/type :image-view
   :fit-width 256
   :fit-height 256
   :image image})

(defn next-button [_]
  "Next button for cljfx GUI"
  {:fx/type :button
   :text "Next"
   :on-action event-handler})

(defn root [{:keys [index]}]
  "Main window for cljfx GUI"
  (let [digit  (read-digit index)
        result (inference digit)]
    {:fx/type :stage
     :showing true
     :title "MNIST"
     :scene {:fx/type :scene
             :root {:fx/type :v-box
                    :padding 3
                    :spacing 5
                    :children [{:fx/type display-image
                                :image (digit->image digit)}
                               {:fx/type :h-box
                                :padding 3
                                :spacing 5
                                :children [{:fx/type next-button}
                                           {:fx/type :label
                                            :text (str "result = " result)}]}]}}}))

(def renderer
  "Renderer for cljfx GUI"
  (fx/create-renderer :middleware (fx/wrap-map-desc assoc :fx/type root)))

(defn -main [& args]
  (Platform/setImplicitExit true)
  (fx/mount-renderer app-state renderer))
Here is a screenshot of the inference GUI:
GPU usage
For the MNIST example a CPU is sufficient for training and inference.
For larger models one needs to use a GPU.
In PyTorch one can use the .to method to move models and tensors to the GPU.
For inference in Clojure, one needs to install onnxruntime_gpu instead of onnxruntime.
Furthermore one needs to select a GPU device when creating a session:
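A minimal sketch, assuming the CUDA execution provider and GPU device 0 (OrtSession$SessionOptions and its addCUDA method come from the ONNX runtime Java API):

(import '[ai.onnxruntime OrtEnvironment OrtSession$SessionOptions])

(def environment (OrtEnvironment/getEnvironment))

; Session options selecting GPU device 0 via the CUDA execution provider
(def options (doto (OrtSession$SessionOptions.) (.addCUDA 0)))

; Create the session with the GPU-enabled options
(def mnist (.createSession environment "mnist.onnx" options))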
The ONNX runtime allows you to train models using PyTorch and deploy them in Clojure applications.
Furthermore, there are TensorFlow Java bindings; however, they are more verbose.
Hopefully the Clojure Tribuo bindings will eventually provide a more concise API for implementing and training ML models.
When using byte arrays in Clojure to represent images, one needs to convert the values to unsigned bytes in order to obtain correct results.
In the example we also used feature scaling for faster convergence during training.