Calculating the curl of our simulated velocity field requires an additional
compute shader step. Buffer and shader switching depending on the display
mode is only handled rudimentarily for now.
Most of this commit is scaffolding; the actual computation is more or less
trivial:
```
// central difference approximations of the two partial derivatives
const float dxvy = (getFluidVelocity(x+1,y).y - getFluidVelocity(x-1,y).y)
                 / (2*convLength);
const float dyvx = (getFluidVelocity(x,y+1).x - getFluidVelocity(x,y-1).x)
                 / (2*convLength);
// z-component of the curl
setFluidExtra(x, y, dxvy - dyvx);
```
This implements the following discretization of the 2d curl operator:
Let $V : \mathbb{N}^2 \to \mathbb{R}^2$ be the simulated velocity field at
discrete lattice points spaced by $\Delta x \in \mathbb{R}_{>0}$.
We want to approximate the $z$-component of the curl for visualization:
$$\omega := \partial_x V_y - \partial_y V_x$$
As we do not possess the actual continuous field $V$ but only its values at a
set of discrete points, we approximate the two partial derivatives using
a second order central difference scheme:
$$\overline{\omega}(i,j) := \frac{V_y(i+1,j) - V_y(i-1,j)}{2 \Delta x} - \frac{V_x(i,j+1) - V_x(i,j-1)}{2 \Delta x}$$
Note that the scene shader does some further rescaling of the curl to better
fit the color palette. One issue that irks me is the emergence of some
artefacts near boundaries as well as isolated "single-cell vortices".
This might be caused by running the simulation too close to divergence,
but as I am currently mostly interested in building an interactive fluid
playground it could be worth trying an additional smoothing shader pass
to straighten things out.
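If it comes to that, the smoothing pass could be as simple as the following sketch: a 3x3 box filter over the curl values, assuming a `getFluidExtra` accessor analogous to the setter above and a separate `setSmoothedFluidExtra` target (both hypothetical names):
```
// hypothetical 3x3 box filter over the curl field; it has to write
// to a separate buffer because neighbouring invocations read the
// same values concurrently
float sum = 0.0;
for (int i = -1; i <= 1; ++i) {
    for (int j = -1; j <= 1; ++j) {
        sum += getFluidExtra(x+i, y+j);
    }
}
setSmoothedFluidExtra(x, y, sum / 9.0);
```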
---
i.e. restarting the simulation without clearing the geometry
---
e.g. check out `./compustream --size 512 128 --open --lups 300 --quality`
---
The paper "Automatic grid refinement criterion for lattice Boltzmann method" [2015]
by Lagrava et al. describes a criterion for measuring the local simulation quality
by comparing the theoretical Knudsen number to the quotient of a cell's
non-equilibrium and equilibrium functions.
While this criterion was developed to enable automatic selection of areas to be refined,
it also offers an interesting and unique perspective on the fluid structure.
As the criterion requires calculation of the modeled Reynolds, Mach and Knudsen numbers
I took the time to set up the basics for scaling the simulation to actually model a physical
system, or rather for calculating which physical model is represented by the chosen resolution
and relaxation time.
[2015]: https://arxiv.org/abs/1507.06767
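For illustration, evaluating the criterion per cell might look like the following sketch, assuming hypothetical D2Q9 helpers `getFluidPopulation` and `equilibrium` as well as the cell's moments `density` and `velocity`; `machNumber` and `reynoldsNumber` stand for the modeled values mentioned above:
```
// Kn is proportional to Ma/Re in the LBM scaling (constant factors omitted)
const float knudsen = machNumber / reynoldsNumber;

// average relative deviation of the populations from equilibrium
float deviation = 0.0;
for (int q = 0; q < 9; ++q) {
    const float feq = equilibrium(q, density, velocity);
    deviation += abs(getFluidPopulation(x, y, q) - feq) / feq;
}

// ratio of the measured non-equilibrium share to the Knudsen number
// predicted by the model; larger values suggest insufficient resolution
setFluidExtra(x, y, (deviation / 9.0) / knudsen);
```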
---
Seems to be more stable when drawing around.
Not that any of this aims to be accurate in any real-world sense.
---
The GLFW window rendering loop used to dispatch the compute shaders was
restricted to 60 FPS. I did not notice this because I only tried to push
the GPU to its limits and never actually measured the computed lattice
updates per second. Turns out the lattice sizes I commonly use can
comfortably be updated 500 times per second… Now this looks more like
the performance gains promised by GPU computation.
---
i.e. implement the A-B pattern.
Dispatching only one compute shader per interaction-less simulation step
already yields very noticeable performance gains. All cell types are now
fully handled by the collide shader, which further simplifies the code.
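On the shader side the pattern amounts to reading the previous populations from one buffer while writing the updated ones to the other, with the host swapping the two bindings every step. A minimal skeleton (names illustrative, actual update omitted):
```
#version 430
layout (local_size_x = 1, local_size_y = 1) in;

// two copies of the lattice; the host swaps these bindings every step
layout (std430, binding = 0) readonly  buffer LatticeA { float fPrev[]; };
layout (std430, binding = 1) writeonly buffer LatticeB { float fNext[]; };

void main() {
    const uint cell = gl_GlobalInvocationID.y * gl_NumWorkGroups.x
                    + gl_GlobalInvocationID.x;
    // stream and collide in one pass: read only from the previous
    // step's populations, write only into the next step's buffer
    for (uint q = 0u; q < 9u; ++q) {
        fNext[9u*cell + q] = fPrev[9u*cell + q]; // actual update omitted
    }
}
```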
---
The collide shader became too crowded for my taste.
As a nice side benefit we can now execute interaction processing only
when actual interaction is taking place.
---
Replaces the density value, which is actually not that useful for visualization.
Encoding integer values as floats by casting and comparing them using
exact floating point comparison is not very safe but works out for now.
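For context: integers up to $2^{24}$ are exactly representable as 32-bit floats, so the comparison holds as long as the IDs stay small and are only ever copied, never computed with. A small sketch with hypothetical IDs and accessor:
```
// small integers survive the int-to-float cast exactly, so the exact
// comparison is safe while the IDs are never subject to arithmetic
const float MATERIAL_FLUID = 1.0; // hypothetical material IDs
const float MATERIAL_WALL  = 2.0;

if (getMaterial(x, y) == MATERIAL_WALL) {
    // wall handling
}
```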
---
Internal wall cells need to be disabled to prevent delayed propagation
of the reflected populations.
This is just quickly thrown together; both the visual drawing and the backend's
material handling remain to be improved.
---
Introduce an inactive receive-only outer boundary to simplify streaming.
Extract and generalize bounce back handling. Further work will require
tracking cell _material_ to enable both easier definition and dynamic
updating of the geometry.
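For reference, generalized bounce back boils down to reflecting every population a wall cell received back into its opposite direction. A sketch assuming hypothetical D2Q9 helpers `getFluidPopulation`, `setFluidPopulation` and `opposite`:
```
// buffer the received populations first so the pairwise reflection
// does not overwrite values it still needs to read
float f[9];
for (int q = 0; q < 9; ++q) {
    f[q] = getFluidPopulation(x, y, q);
}
// write each population back into its opposite direction
for (int q = 0; q < 9; ++q) {
    setFluidPopulation(x, y, q, f[opposite(q)]);
}
```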
---
Increases consistency and should help to avoid confusion.
---
…seems to be correctly unrolled during compilation. Or at least no
performance impact is visible.
---
i.e. move fluid vertex placement to the appropriate vertex shader.
Do not amplify or shift fluid moments in any way prior to
passing them to the display pipeline.
---
The same thing occurs in computicle. I suspect some initialization /
compute shader invocation problem. On the other hand: Why would that
happen for the origin vertex and not e.g. the first or last vertex
in memory? To be investigated further.
---
This should provide much more flexibility.
For our purpose it would be useful if the vertex shader were executed
after the geometry shader (to apply the projection matrix), but alas
this is not the case. Thus the MVP matrix is applied during geometry
construction and the vertex shader only provides density extraction.
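The resulting geometry stage then looks roughly like the following sketch (names illustrative): each lattice point is expanded into a quad and the projection is applied during expansion:
```
#version 430
layout (points) in;
layout (triangle_strip, max_vertices = 4) out;

// applied here because the vertex stage runs before, not after, this one
uniform mat4 MVP;

void main() {
    vec4 origin = gl_in[0].gl_Position;
    // expand the lattice point into a unit quad
    for (int i = 0; i < 2; ++i) {
        for (int j = 0; j < 2; ++j) {
            gl_Position = MVP * (origin + vec4(i, j, 0.0, 0.0));
            EmitVertex();
        }
    }
    EndPrimitive();
}
```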
---
Improvised on top of computicle's scaffolding.
Works in a world where _works_ is defined as "displays stuff on screen that evokes thoughts of fluid movement".