
How well openpiv-python deals with the upcoming huge PIV cameras? #209

Open
alexlib opened this issue Aug 18, 2021 · 7 comments

Comments


alexlib commented Aug 18, 2021

Is your feature request related to a problem? Please describe.
Here is a link to the issue on PIVLab: Shrediquette/PIVlab#68 (comment)

The message is about differences in RAM and disk usage when 25-megapixel images are analysed for PIV. I wonder how openpiv-python performs and where the bottlenecks are.

Describe the solution you'd like
A volunteer is needed to set up a good test environment based on debuggers and memory profilers, create PIV test images of increasing size up to 100 Mp, and test performance. The main goal is to find the bottlenecks and understand them, looking for possible future enhancements of the code.
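A minimal sketch of such a harness, assuming openpiv's `pyprocess.extended_search_area_piv` with its usual signature (the image sizes, window size, and the three-tuple return are my assumptions, not measured facts):

```python
# Sketch of a speed/memory harness for increasingly large synthetic frames.
# Assumes openpiv.pyprocess.extended_search_area_piv returns (u, v, s2n);
# tracemalloc only sees allocations routed through Python's allocator.
import time
import tracemalloc

import numpy as np
from openpiv import pyprocess

for side in (1024, 2048, 4096, 8192):  # roughly 1 Mp up to ~67 Mp
    rng = np.random.default_rng(0)
    frame_a = rng.random((side, side), dtype=np.float32)
    frame_b = rng.random((side, side), dtype=np.float32)

    tracemalloc.start()
    t0 = time.perf_counter()
    u, v, s2n = pyprocess.extended_search_area_piv(
        frame_a, frame_b, window_size=32, overlap=16, dt=1.0,
        search_area_size=32,
    )
    elapsed = time.perf_counter() - t0
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    print(f"{side}x{side}: {elapsed:.1f} s, peak {peak / 1e9:.2f} GB")
```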


ErichZimmer commented Oct 18, 2021

FYI, SciPy FFTs perform better with a dtype of float32 than with other dtypes, because SciPy computes in float64 for every input type except float32. With float32, the FFT is 10-112% faster and takes half as much RAM (good for multiprocessing).
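This is easy to verify: `scipy.fft` keeps float32 input in single precision (complex64 output) and promotes everything else to double. A quick check:

```python
# scipy.fft dtype handling: float32 stays single precision (complex64,
# half the RAM of complex128); any other input is computed in float64.
import numpy as np
from scipy import fft

a32 = np.random.rand(2048, 2048).astype(np.float32)
a64 = a32.astype(np.float64)

print(fft.rfft2(a32).dtype)  # complex64
print(fft.rfft2(a64).dtype)  # complex128

# timing the two calls (e.g. with %timeit) shows the speed gap
```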


alexlib commented Oct 18, 2021

Good point. We need to check which dtype is used everywhere.

ErichZimmer commented

On possible bottlenecks, scipy/numpy's vectorized FFT algorithms could be one due to their high memory usage. As a rule of thumb, 1 GB of free RAM per core in use mostly eliminates the problem, but it can still be an issue for images >3 MP with window sizes <32 pixels under multi/parallel processing. This seems to be a Python/MATLAB issue, as nearly all C++ packages using FFTW + OpenMP use very little memory and are orders of magnitude faster on large images. Another possible bottleneck is the local median validation and second-peak search algorithms, which can take some time when there is a large number of vectors (>50,000).
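As a back-of-the-envelope illustration (my arithmetic, not a measurement) of why the vectorized approach is memory-hungry, here is roughly what the window stack costs for a single ~25 Mp frame at a 32-pixel window with 50% overlap:

```python
# Rough size of the vectorized window stack for one frame (illustrative
# arithmetic only; assumes float32 windows and 50% overlap).
width, height = 5120, 4880          # ~25 Mp image
window, overlap = 32, 16
step = window - overlap
n_windows = ((width - window) // step + 1) * ((height - window) // step + 1)
bytes_per_window = window * window * 4      # float32
stack_gb = n_windows * bytes_per_window / 1e9
print(f"{n_windows} windows -> {stack_gb:.1f} GB per frame")
```

And that stack is only part of it: it is held for both frames, and the complex FFT buffers and correlation planes multiply the footprint several times over.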


alexlib commented Oct 22, 2021

You are probably right. We need some profiling tools to study the issue and prepare some heavy-load tests. Meanwhile, it is only a wishful future extension.

ErichZimmer commented

Have we tried chunking the 3D stack of arrays and using multiprocessing.Process for parallel processing? This is what Fluere does, and it is as fast as some commercial packages. It also lowers the RAM requirements.
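Something along these lines (a hypothetical sketch of my own, not Fluere's actual code, and using a `Pool` rather than raw `Process` objects): split the frames into horizontal strips, pad each strip by one window so no vectors are lost at the seams, and correlate each strip in its own process.

```python
# Hypothetical chunked-processing sketch: horizontal strips, one process
# each. Strips overlap by one window; vectors in the overlap are duplicated
# at the seams, which a real implementation would have to reconcile.
import multiprocessing as mp

import numpy as np
from openpiv import pyprocess

def process_strip(args):
    strip_a, strip_b = args
    # Assumes the (u, v, s2n) return of extended_search_area_piv.
    return pyprocess.extended_search_area_piv(
        strip_a, strip_b, window_size=32, overlap=16, dt=1.0,
        search_area_size=32,
    )

def piv_chunked(frame_a, frame_b, n_chunks=4, window_size=32):
    bounds = np.linspace(0, frame_a.shape[0], n_chunks + 1, dtype=int)
    tasks = [
        (frame_a[max(lo - window_size, 0):hi + window_size],
         frame_b[max(lo - window_size, 0):hi + window_size])
        for lo, hi in zip(bounds[:-1], bounds[1:])
    ]
    with mp.Pool(n_chunks) as pool:
        return pool.map(process_strip, tasks)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.random((4096, 4096), dtype=np.float32)
    b = rng.random((4096, 4096), dtype=np.float32)
    results = piv_chunked(a, b)  # list of (u, v, s2n), one per strip
```

Besides the speedup, only one strip per worker is resident at a time, which is where the lower RAM requirement comes from.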


alexlib commented Nov 29, 2021 via email


ErichZimmer commented Dec 1, 2021

For testing on large images, my personal GUI now has a synthetic image generator based on synimagegen and PIVlab. It should give us a good idea of the bottlenecks.
Additionally, if you like, I could make it a standalone GUI.
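For anyone without that GUI, a crude stand-in (a sketch of my own, far simpler than synimagegen or PIVlab) is enough for bottleneck hunting: Gaussian particles at random positions, with the second frame shifted by a known uniform displacement.

```python
# Minimal synthetic PIV pair: Gaussian particles, known uniform shift.
# A crude stand-in sketch, not the synimagegen/PIVlab generator.
import numpy as np

def synthetic_pair(size=1024, n_particles=5000, diameter=3.0,
                   shift=(4.0, 2.0), seed=0):
    """Return two frames whose particles are displaced by `shift` pixels."""
    rng = np.random.default_rng(seed)
    xs = rng.uniform(0, size, n_particles)
    ys = rng.uniform(0, size, n_particles)

    def render(px, py):
        img = np.zeros((size, size), dtype=np.float32)
        r = int(3 * diameter)  # stamp only a small patch per particle
        for x, y in zip(px, py):
            x0, y0 = int(x) - r, int(y) - r
            xx, yy = np.meshgrid(np.arange(x0, x0 + 2 * r + 1),
                                 np.arange(y0, y0 + 2 * r + 1))
            patch = np.exp(-((xx - x) ** 2 + (yy - y) ** 2)
                           / (0.5 * diameter ** 2))
            ok = (xx >= 0) & (xx < size) & (yy >= 0) & (yy < size)
            img[yy[ok], xx[ok]] += patch[ok]
        return img

    frame_a = render(xs, ys)
    frame_b = render(xs + shift[0], ys + shift[1])
    return frame_a, frame_b

frame_a, frame_b = synthetic_pair()  # feed into the PIV pipeline under test
```

Since the true displacement is known, the same pair doubles as an accuracy check while profiling.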
