Propositions for optimizations #6
The skeletonization code stores polygon and hole edges in the class LineSegment2, constructed in the method from_polygon() of the class _LAV and collected in the class _LAVertex. Whenever a unit vector of an edge is required, it is computed by applying the method normalized() to the direction vector v in LineSegment2, which is very inefficient. We could provide the line segments of the polygon and its holes directly as input to skeletonize(), compute the unit vector only once per segment, and store it in LineSegment2. It would then be available within the computation of the skeleton and also in the output of polygonize(). What should the output of polygonize() look like to be convenient for use in blender-osm?
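To illustrate the idea, here is a minimal sketch of a segment class that computes its unit vector once at construction time rather than on every access. The attribute names (`p1`, `p2`, `unit`) are illustrative assumptions; the actual LineSegment2 in the euclid replacement may use different names and store full vector objects instead of tuples.

```python
import math

class LineSegment2:
    """Sketch of an edge segment that caches its unit vector.

    NOTE: illustrative only; the real class in bpypolyskel differs.
    """
    def __init__(self, p1, p2):
        self.p1 = p1  # start point as an (x, y) tuple
        self.p2 = p2  # end point as an (x, y) tuple
        dx = p2[0] - p1[0]
        dy = p2[1] - p1[1]
        length = math.hypot(dx, dy)
        # Compute the unit vector exactly once, instead of calling
        # normalized() every time the direction is needed.
        self.unit = (dx / length, dy / length)
```

With this, every consumer of the edge direction reads `segment.unit` instead of re-normalizing, and the cached value can also be handed back to the caller of polygonize().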
I suggest adding additional input parameters:
I don't think it makes any sense to return unit vectors for the skeleton edges when calling polyskel.polygonize(..), as their direction differs for different faces.
Ok, agreed
Ok. No edge lengths as inputs.
Unit vectors for the skeleton edges aren't needed by the addon. Only those for the polygon edges (including holes) are needed.
OK, then the new signature will be
I will update the new code soon.
I updated the first message at #2.
This has been done now, the new code is committed to the repository.
Thank you! I'll test it in the coming days. |
Do we need logging in bpypolyskel.py? In the original code of skeletonize(), we actually import a logger and use many instructions like
Additionally, every class, including those in the euclid replacement, needs to implement __str__() and __repr__(), which provide string representations of the object. During my 'exercise' in computational geometry I preferred to debug using matplotlib in order to see what's going on. I know it's not very much code, but we could save some memory.
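For reference, a typical implementation of these string-representation methods for a small geometry class looks like the sketch below. The class name Point2 and its layout are assumptions for illustration; they are not the exact euclid-replacement code.

```python
class Point2:
    """Illustrative 2D point showing the string-representation methods
    discussed above (not the actual euclid-replacement class)."""

    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __repr__(self):
        # Unambiguous representation, useful for logging and debugging
        return "Point2(%r, %r)" % (self.x, self.y)

    # Reuse the same text for str(); only needed while logging is kept
    __str__ = __repr__
```

If the logging code is removed, these methods become dead weight, which is the memory-saving argument above.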
Hi @polarkernel, I think it's ok to remove the logging code.
In #5 (comment) I mentioned that using double-precision floating-point computations in cross and dot products increased the quality of the skeleton. At that time it was unknown how much the performance decreases with this change. I have now done some tests to get an impression of this subject. As an example, I used the data of potsdam_grosse_weinmeisterstrasse_49d, which is an average-sized building. On my computer, one execution of polygonize() with this data requires 10.8 ms (average of 500 runs). Note that on a Windows computer the timing is not very accurate, but I think it is enough to get an idea. In this example, 436 cross products and 293 dot products were computed in one run. The setup for my measurements of cross and dot products was as follows:
Then I used three different test snippets to measure the execution time (shown here for the cross product; the dot product is similar):
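The original test snippets are not preserved in this thread. A minimal sketch of how such a per-call timing could be set up with timeit (assuming plain Python floats, which are double precision) might look like this; the vector values and the repetition count are arbitrary assumptions:

```python
import timeit

# Two 2D vectors as plain tuples; Python floats are IEEE 754 doubles
a = (1.234567, 7.654321)
b = (2.345678, 8.765432)

def cross_double(u, v):
    # z-component of the 2D cross product, in double precision
    return u[0] * v[1] - u[1] * v[0]

# Average over many repetitions to smooth out timer jitter,
# which matters especially on Windows
N = 100_000
t = timeit.timeit(lambda: cross_double(a, b), number=N)
print("cross product, double precision: %.1f ns/call" % (t / N * 1e9))
```

The single-precision variant would be measured the same way, swapping in the 32-bit implementation under test.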
The results were as follows:
Conclusions:
Note: Using double precision will not solve the issues of floating-point precision in computational geometry, it will only shift them to another level. EDIT: It is not true that prague_wenzigova_458 looks better than with single-precision arithmetic; I forgot to remove a decreased mergeRange in my test code of bpypolyskel.py with double-precision cross and dot products. I'd like to postpone this proposition for now.
Anyway, it would be good to have several variants of the calculations in the bpypolyskel library if there turns out to be more than one of them.
This issue shall be used to discuss propositions for optimizations of the project.