Age | Commit message | Author
---|---|---
2019-06-08 | Performance optimizations | Adrian Kummerlaender
 | Starting point: ~200 MLUPS on an NVIDIA K2200. Changes that did not noticeably impact performance: memory layout (AoS vs. SoA; weird, probably highly platform-dependent), propagating on read, tagging pointers as read- or write-only, and manual code inlining. A change that made things worse: bad thread block sizes. The actual issue: hidden double-precision computations. The code now yields ~600 MLUPS. | 
2019-06-04 | Check whether hand-unrolling makes a difference | Adrian Kummerlaender
 | …it doesn't in this case. | 
2019-05-31 | Try out various OpenCL work group sizes using a Jupyter notebook | Adrian Kummerlaender
 | This is actually quite nice for this kind of experimentation! | 
2019-05-30 | Collapse SoA into single array | Adrian Kummerlaender
 | Weirdly, the expected performance gains from better coalescing of memory accesses are not achieved. | 
2019-05-29 | Move to structure of arrays | Adrian Kummerlaender
2019-05-28 | Add const qualifiers for pointers | Adrian Kummerlaender
2019-05-28 | Pull streaming for local writes | Adrian Kummerlaender
2019-05-28 | Remove branch to enable vectorization on Intel | Adrian Kummerlaender
 | Twice the MLUPS! | 
2019-05-27 | Add material numbers | Adrian Kummerlaender
2019-05-27 | Print some performance statistics | Adrian Kummerlaender
2019-05-26 | Add basic D2Q9 LBM | Adrian Kummerlaender
 | Ported the basic compustream structure. | 