An attempt to produce a minimal LBM implementation to benchmark various
memory and vectorization schemes on the CPU.
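
A minimal sketch of the two memory layouts such a benchmark would typically
compare; the names and sizes here are illustrative assumptions, not taken
from the actual code:

```python
import numpy as np

nX, nY, q = 1024, 1024, 9  # assumed D2Q9 lattice dimensions

# Array-of-Structures: the q populations of each cell are adjacent in
# memory, which favors cell-local access but scatters same-direction loads.
aos = np.zeros((nX * nY, q))

# Structure-of-Arrays: population i of all cells is contiguous, so a
# vectorized collision sweep streams linearly through memory per direction.
soa = np.zeros((q, nX * nY))
```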
There are at least two distinct areas where padding can be beneficial on a GPU:
1. Padding the global thread sizes to support specific thread layouts,
e.g. (32,1) layouts require the global lattice width to be a multiple of 32.
2. Padding the memory layout at the lowest level to align memory accesses,
i.e. some GPUs read memory in 128-byte chunks, so it is beneficial to align
the operations accordingly.
For lattice and thread layout sizes that are powers of two, these two padding
concerns coincide. However, when one operates on e.g. a (300,300) lattice using
a (30,1) layout, padding to 128 bytes yields a performance improvement of about
10 MLUPS on a K2200 (see the sketch below).
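
The following sketch works through both paddings on the numbers quoted above;
the helper and its parameter names are assumptions for illustration, not
taken from the code:

```python
def pad_to(value, multiple):
    # round value up to the next multiple
    return value + (multiple - value % multiple) % multiple

lattice_width    = 300         # (300,300) lattice
layout_width     = 30          # (30,1) thread layout
chunk_in_doubles = 128 // 8    # 128-byte chunks of 8-byte doubles

# 1. global thread size padded to the thread layout width
global_width = pad_to(lattice_width, layout_width)    # = 300, already fits

# 2. row pitch padded so each row starts on a 128-byte boundary
row_pitch = pad_to(lattice_width, chunk_in_doubles)   # = 304 cells per row
```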
Note that I am growing quite dissatisfied with how the Lattice class and
its surroundings continue to accumulate parameters. The naming distinction
between Geometry, Grid, Memory and Lattice is also not very intuitive.
i.e. return unshifted moments in an implicitly ordered float4 array.
Cell positions are reconstructed by a vertex shader analogously to
how it is done in compustream.
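
A hedged Python sketch of the index-to-position reconstruction such a vertex
shader performs; the real shader operates on gl_VertexID in GLSL, and nX is
an assumed name:

```python
def cell_position(vertex_id, nX):
    # the float4 moments are stored in lattice order, so the linear
    # vertex index already encodes the cell's (x, y) position
    return (vertex_id % nX, vertex_id // nX)
```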
Note how this basically required no changes besides generalizing the cell
indexing and adding the symbolic formulation of a D3Q19 BGK collision step.
Increasing the neighborhood communication from 9 to 19 cells leads to a
significant performance "regression": the 3D kernel yields ~360 MLUPS
compared to the 2D version's ~820 MLUPS.
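
A sketch of what such a symbolic D3Q19 BGK collision step could look like in
SymPy; this only illustrates the structure under assumed names and is not the
project's actual code generator:

```python
from sympy import symbols, Rational, Matrix

# D3Q19 velocity set: rest + 6 axis-aligned + 12 edge-diagonal directions
c = [(x, y, z)
     for x in (-1, 0, 1) for y in (-1, 0, 1) for z in (-1, 0, 1)
     if abs(x) + abs(y) + abs(z) <= 2]
assert len(c) == 19

def weight(ci):
    # standard D3Q19 lattice weights, selected by velocity norm
    return {0: Rational(1, 3), 1: Rational(1, 18), 2: Rational(1, 36)}[
        sum(abs(d) for d in ci)]

f   = symbols('f0:19')   # pre-collision populations
tau = symbols('tau')     # BGK relaxation time

rho = sum(f)             # density moment
u   = Matrix([sum(ci[d] * fi for ci, fi in zip(c, f))
              for d in range(3)]) / rho   # velocity moment

def equilibrium(ci):
    cu   = sum(ci[d] * u[d] for d in range(3))
    usqr = sum(ud * ud for ud in u)
    return weight(ci) * rho * (1 + 3*cu + Rational(9, 2)*cu**2
                               - Rational(3, 2)*usqr)

# relax each population towards its equilibrium
f_post = [fi + (equilibrium(ci) - fi) / tau for ci, fi in zip(c, f)]
```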