
Commit

Updated index
wi-re committed May 9, 2024
1 parent 5adfc1e commit 6df58d7
Showing 2 changed files with 34 additions and 0 deletions.
14 changes: 14 additions & 0 deletions index.html
@@ -29,6 +29,20 @@ <h1 id="symmetric-fourier-basis-convolutions-for-learning-lagrangian-fluid-simul
<p>Accepted at: International Conference on Learning Representations (ICLR) 2024 - Vienna (as a Poster)</p>
<p>If you have any questions on the paper itself or Continuous Convolutions in general, feel free to reach out to me via <a href="mailto:[email protected]">[email protected]</a></p>
<p>Frequently asked questions about the paper will appear here as they are asked.</p>
<p>Q: Can the method handle oversampled or undersampled data during inference relative to training?<br>
A: No. The internal formulation of the Graph Convolution does not account for the volume of the contributing nodes. This works for our purposes because we only deal with uniformly sized particles, so the volume is a constant term that can be ignored. Supporting this would require changing the Graph Convolution formulation to include volume.</p>
<p>Q: What about scaling to other resolutions?<br>
A: Our method cannot handle this, as scalar quantities in our SPH data scale with $h^{-d}$ and gradients with $h^{-d-1}$; learning resolution-varying quantities would therefore require building this scaling much more tightly into the training. While this could potentially work, we did not investigate it here.</p>
<p>Q: What about adaptive resolutions?<br>
A: Similar to the prior question, this could conceptually be added to the network, but it makes the training significantly more complicated, as there are many possible configurations of relative sizes and distributions a particle can encounter. This would make the dataset generation significantly harder and was beyond our scope.</p>
<p>Q: What is the relation to Fourier Neural Operators? Do you use FFTs?<br>
A: No. Our method works in normal coordinate space, not in a global frequency space. Instead, it uses a local, compactly supported convolution in which Fourier terms represent the filter function.</p>
<p>Q: What about other applications?<br>
A: We did try our method on some general pattern recognition tasks, but including those results was beyond our scope. Our codebase can be applied to general graph tasks, similar to how PyTorch Geometric works.</p>
<p>Q: What about larger simulations?<br>
A: Our method can readily handle this due to its local Graph Convolution architecture: we can simply expand the simulation domain. Furthermore, the memory consumption during inference is relatively small, so it would be possible to use multiple orders of magnitude more particles during inference if desired.</p>
<p>Q: How do you encode boundaries?<br>
A: Periodic boundaries are modeled by connecting particles across the periodic boundary with appropriate modular distances. Rigid boundaries are represented by rigid particles handled with a separate CConv on the first network layer.</p>
<p>Repository: <a href="https://github.com/tum-pbs/SFBC">https://github.com/tum-pbs/SFBC</a><br>
arXiv Paper: <a href="https://arxiv.org/abs/2403.16680">https://arxiv.org/abs/2403.16680</a></p>

20 changes: 20 additions & 0 deletions index.md
@@ -8,6 +8,26 @@ If you have any questions on the paper itself or Continuous Convolutions in gene

Frequently asked questions about the paper will appear here as they are asked.

Q: Can the method handle oversampled or undersampled data during inference relative to training?
A: No. The internal formulation of the Graph Convolution does not account for the volume of the contributing nodes. This works for our purposes because we only deal with uniformly sized particles, so the volume is a constant term that can be ignored. Supporting this would require changing the Graph Convolution formulation to include volume.
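
As an illustration, here is a minimal sketch of how a per-particle volume term could enter a continuous-convolution step. This is a hypothetical, simplified formulation for explanation only, not our actual implementation; `filter_fn` stands in for any learned continuous filter.

```python
import torch

def cconv(x, pos, edge_index, filter_fn, volume=None):
    """Continuous convolution over a particle neighborhood graph.

    x:          [N, F] node features
    pos:        [N, d] particle positions
    edge_index: [2, E] (source j, target i) pairs within the support radius
    filter_fn:  maps relative positions [E, d] to filter weights [E, F]
    volume:     optional [N] per-particle volumes; constant (and thus
                omitted) in our uniformly sampled setting
    """
    j, i = edge_index
    w = filter_fn(pos[j] - pos[i])           # evaluate the continuous filter per edge
    msg = w * x[j]                           # contribution of each neighbor j
    if volume is not None:                   # hypothetical volume-aware variant
        msg = msg * volume[j].unsqueeze(-1)  # re-weight by contributing volume
    return torch.zeros_like(x).index_add_(0, i, msg)  # sum contributions per node i
```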

Q: What about scaling to other resolutions?
A: Our method cannot handle this, as scalar quantities in our SPH data scale with $h^{-d}$ and gradients with $h^{-d-1}$; learning resolution-varying quantities would therefore require building this scaling much more tightly into the training. While this could potentially work, we did not investigate it here.
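
For context, these exponents come from the usual normalization of an SPH smoothing kernel (a generic textbook statement, not specific to our datasets): in $d$ dimensions a kernel with support radius $h$ is written as

$$W(r, h) = \frac{1}{h^{d}}\, w\!\left(\frac{r}{h}\right), \qquad \nabla W(r, h) = \frac{1}{h^{d+1}}\, w'\!\left(\frac{r}{h}\right) \frac{\mathbf{r}}{r},$$

so kernel sums carry a factor of $h^{-d}$ and kernel gradients a factor of $h^{-d-1}$ as the resolution, and hence $h$, changes.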

Q: What about adaptive resolutions?
A: Similar to the prior question, this could conceptually be added to the network, but it makes the training significantly more complicated, as there are many possible configurations of relative sizes and distributions a particle can encounter. This would make the dataset generation significantly harder and was beyond our scope.

Q: What is the relation to Fourier Neural Operators? Do you use FFTs?
A: No. Our method works in normal coordinate space, not in a global frequency space. Instead, it uses a local, compactly supported convolution in which Fourier terms represent the filter function.
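
To make the distinction concrete, here is a simplified 1D sketch of a filter parameterized by Fourier coefficients (hypothetical names and a reduced basis, not the exact construction from the paper):

```python
import math
import torch

def fourier_filter_1d(q, coeffs):
    """Evaluate a learned filter at normalized distances q in [-1, 1].

    q:      [E] edge distances, normalized by the support radius
    coeffs: [2K+1] learnable weights for 1 constant, K cosine, K sine terms
    """
    K = (coeffs.numel() - 1) // 2
    k = torch.arange(1, K + 1, dtype=q.dtype)
    basis = [torch.ones_like(q)]                         # constant term
    basis += [torch.cos(math.pi * kk * q) for kk in k]   # symmetric terms
    basis += [torch.sin(math.pi * kk * q) for kk in k]   # antisymmetric terms
    return torch.stack(basis, dim=-1) @ coeffs           # filter value per edge
```

Note that no FFT appears anywhere: the Fourier terms only parameterize a filter that is evaluated pointwise on each local edge, whereas a Fourier Neural Operator transforms the entire field into a global frequency space.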

Q: What about other applications?
A: We did try our method on some general pattern recognition tasks, but including those results was beyond our scope. Our codebase can be applied to general graph tasks, similar to how PyTorch Geometric works.

Q: What about larger simulations?
A: Our method can readily handle this due to its local Graph Convolution architecture: we can simply expand the simulation domain. Furthermore, the memory consumption during inference is relatively small, so it would be possible to use multiple orders of magnitude more particles during inference if desired.

Q: How do you encode boundaries?
A: Periodic boundaries are modeled by connecting particles across the periodic boundary with appropriate modular distances. Rigid boundaries are represented by rigid particles handled with a separate CConv on the first network layer.
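
The modular distance is the standard minimum-image convention; a minimal sketch for a periodic box of side length `L` (a hypothetical helper, not our exact code):

```python
import torch

def periodic_offset(pos_i, pos_j, L):
    """Shortest relative position from particle j to particle i in a
    periodic box of side length L (minimum-image convention)."""
    d = pos_i - pos_j
    return d - L * torch.round(d / L)  # wrap each component into [-L/2, L/2]
```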



