Commit
Merge pull request #97 from tum-pbs/develop
2.2.6
holl- authored Dec 19, 2022
2 parents 7c587c7 + f6520e4 commit 1f152c1
Showing 30 changed files with 389 additions and 165 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/unit-tests.yml
@@ -8,7 +8,7 @@ jobs:
    runs-on: ubuntu-latest
    strategy:
      matrix:
-        python-version: ['3.6', '3.8', '3.10']
+        python-version: ['3.7', '3.8', '3.10']

    steps:
      - name: Checkout 🛎️
16 changes: 12 additions & 4 deletions README.md
@@ -50,9 +50,10 @@ See the [detailed installation instructions](https://tum-pbs.github.io/PhiFlow/I

To get started, check out our YouTube tutorial series and the following Jupyter notebooks:

-* [<img src="https://www.tensorflow.org/images/colab_logo_32px.png" height=16> Fluids](https://colab.research.google.com/github/tum-pbs/PhiFlow/blob/develop/docs/Fluids_Tutorial.ipynb): Introduction to core classes and fluid-related functions.
-* [<img src="https://www.tensorflow.org/images/colab_logo_32px.png" height=16> Solar System](https://colab.research.google.com/github/tum-pbs/PhiFlow/blob/develop/docs/Planets_Tutorial.ipynb): Visualize a many-body system with Newtonian gravity.
-* [<img src="https://www.tensorflow.org/images/colab_logo_32px.png" height=16> Learning to Throw](https://colab.research.google.com/github/tum-pbs/PhiFlow/blob/develop/docs/Learn_to_Throw_Tutorial.ipynb): Train a neural network to hit a target, comparing supervised and differentiable physics losses.
+* [<img src="https://www.tensorflow.org/images/colab_logo_32px.png" height=16>](https://colab.research.google.com/github/tum-pbs/PhiFlow/blob/develop/docs/Math_Introduction.ipynb) [Tensors](https://tum-pbs.github.io/PhiFlow/Math_Introduction.html): Introduction to tensors.
+* [<img src="https://www.tensorflow.org/images/colab_logo_32px.png" height=16>](https://colab.research.google.com/github/tum-pbs/PhiFlow/blob/develop/docs/Fluids_Tutorial.ipynb) [Fluids](https://tum-pbs.github.io/PhiFlow/Fluids_Tutorial.html): Introduction to core classes and fluid-related functions.
+* [<img src="https://www.tensorflow.org/images/colab_logo_32px.png" height=16>](https://colab.research.google.com/github/tum-pbs/PhiFlow/blob/develop/docs/Planets_Tutorial.ipynb) [Solar System](https://tum-pbs.github.io/PhiFlow/Planets_Tutorial.html): Visualize a many-body system with Newtonian gravity.
+* [<img src="https://www.tensorflow.org/images/colab_logo_32px.png" height=16>](https://colab.research.google.com/github/tum-pbs/PhiFlow/blob/develop/docs/Learn_to_Throw_Tutorial.ipynb) [Learn to Throw](https://tum-pbs.github.io/PhiFlow/Learn_to_Throw_Tutorial.html): Train a neural network to hit a target, comparing supervised and differentiable physics losses.

If you like to work with an IDE, such as PyCharm or VS Code, the following demos will also be helpful:

@@ -61,13 +62,20 @@ If you like to work with an IDE, such as PyCharm or VS Code, the following demos will also be helpful:

## Publications

-We have recently submitted a whitepaper.
+We will upload a whitepaper soon.
In the meantime, please cite the ICLR 2020 paper.

* [Learning to Control PDEs with Differentiable Physics](https://ge.in.tum.de/publications/2020-iclr-holl/), *Philipp Holl, Vladlen Koltun, Nils Thuerey*, ICLR 2020.
* [Solver-in-the-Loop: Learning from Differentiable Physics to Interact with Iterative PDE-Solvers](https://arxiv.org/abs/2007.00016), *Kiwon Um, Raymond Fei, Philipp Holl, Robert Brand, Nils Thuerey*, NeurIPS 2020.
* [Φ<sub>Flow</sub>: A Differentiable PDE Solving Framework for Deep Learning via Physical Simulations](https://montrealrobotics.ca/diffcvgp/), *Nils Thuerey, Kiwon Um, Philipp Holl*, DiffCVGP workshop at NeurIPS 2020.
* [Physics-based Deep Learning](https://physicsbaseddeeplearning.org/intro.html) (book), *Nils Thuerey, Philipp Holl, Maximilian Mueller, Patrick Schnell, Felix Trost, Kiwon Um*.
* [Half-Inverse Gradients for Physical Deep Learning](https://arxiv.org/abs/2203.10131), *Patrick Schnell, Philipp Holl, Nils Thuerey*, ICLR 2022.
* [Scale-invariant Learning by Physics Inversion](https://arxiv.org/abs/2109.15048), *Philipp Holl, Vladlen Koltun, Nils Thuerey*, NeurIPS 2022.

+Φ<sub>Flow</sub> has been used in the following data sets:
+
+* [PDEBench](https://github.com/pdebench/PDEBench)
+* [PDEarena](https://microsoft.github.io/pdearena/)

## Version History

3 changes: 1 addition & 2 deletions demos/pipe.py
@@ -4,8 +4,7 @@
from phi.flow import *

DT = 1.
-INFLOW_BC = extrapolation.combine_by_direction(normal=1, tangential=0)
-velocity = StaggeredGrid(0, extrapolation.combine_sides(x=(INFLOW_BC, extrapolation.BOUNDARY), y=0), x=50, y=32)
+velocity = StaggeredGrid(0, extrapolation.combine_sides(x=(vec(x=1, y=0), extrapolation.BOUNDARY), y=0), x=50, y=32)
pressure = None

for _ in view('velocity, pressure', namespace=globals()).range():
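The new one-liner passes the inflow velocity directly as a vector instead of building a combined extrapolation. A minimal sketch of the simplified boundary setup, assuming the `phi.flow` API of this release (the grid size matches the demo; the final `print` is illustrative):

```python
from phi.flow import *

# Boundary values per side: constant inflow velocity (1, 0) on the lower x face,
# open boundary on the upper x face, zero velocity (no-slip walls) in y.
inflow = vec(x=1, y=0)  # replaces extrapolation.combine_by_direction(normal=1, tangential=0)
boundary = extrapolation.combine_sides(x=(inflow, extrapolation.BOUNDARY), y=0)
velocity = StaggeredGrid(0, boundary, x=50, y=32)
print(velocity)
```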
2 changes: 1 addition & 1 deletion phi/VERSION
@@ -1 +1 @@
-2.2.5
+2.2.6
19 changes: 16 additions & 3 deletions phi/field/_grid.py
@@ -1,4 +1,4 @@
-from typing import TypeVar, Any
+from typing import TypeVar, Any, Tuple

from phi import math, geom
from phi.geom import Box, Geometry, GridCell
@@ -70,6 +70,19 @@ def __value_attrs__(self):
    def __variable_attrs__(self):
        return '_values',

+    def __expand__(self, dims: Shape, **kwargs) -> 'Grid':
+        return self.with_values(math.expand(self.values, dims, **kwargs))
+
+    def __replace_dims__(self, dims: Tuple[str, ...], new_dims: Shape, **kwargs) -> 'Grid':
+        for dim in dims:
+            if dim in self._resolution:
+                return NotImplemented
+        values = math.rename_dims(self.values, dims, new_dims)
+        extrapolation = math.rename_dims(self.extrapolation, dims, new_dims, **kwargs)
+        bounds = math.rename_dims(self.bounds, dims, new_dims, **kwargs)
+        return type(self)(values, extrapolation=extrapolation, bounds=bounds, resolution=self._resolution)

    def __eq__(self, other):
        if not type(self) == type(other):
            return False
@@ -487,10 +500,10 @@ def expand_staggered(values: Tensor, resolution: Shape, extrapolation: Extrapola

def resolution_from_staggered_tensor(values: Tensor, extrapolation: Extrapolation):
    any_dim = values.shape.spatial.names[0]
-    x = values.vector[any_dim]
+    x_shape = values.shape.after_gather({'vector': any_dim})
    ext_lower, ext_upper = extrapolation.valid_outer_faces(any_dim)
    delta = int(ext_lower) + int(ext_upper) - 1
-    resolution = x.shape.spatial._replace_single_size(any_dim, x.shape.get_size(any_dim) - delta)
+    resolution = x_shape.spatial._replace_single_size(any_dim, x_shape.get_size(any_dim) - delta)
    return resolution


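The new `__expand__` and `__replace_dims__` hooks let the generic shape functions from `phi.math` act on grids directly. A short sketch (the dimension names `b` and `time` are illustrative); note that `__replace_dims__` deliberately returns `NotImplemented` for resolution (spatial) dimensions, leaving those to the generic fallback:

```python
from phi.flow import *

grid = CenteredGrid(0, extrapolation.ZERO, x=32, y=32)
grid_b = math.expand(grid, batch(b=4))                 # dispatches to Grid.__expand__
grid_t = math.rename_dims(grid_b, 'b', batch(time=4))  # dispatches to Grid.__replace_dims__
print(grid_t.shape)  # expected: (time=4, x=32, y=32)
```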
11 changes: 10 additions & 1 deletion phi/field/_point_cloud.py
@@ -1,5 +1,5 @@
import warnings
-from typing import Any
+from typing import Any, Tuple

from phi import math
from phi.geom import Geometry, GridCell, Box, Point
@@ -82,6 +82,15 @@ def __value_attrs__(self):
    def __variable_attrs__(self):
        return '_values', '_elements'

+    def __expand__(self, dims: Shape, **kwargs) -> 'PointCloud':
+        return self.with_values(math.expand(self.values, dims, **kwargs))
+
+    def __replace_dims__(self, dims: Tuple[str, ...], new_dims: Shape, **kwargs) -> 'PointCloud':
+        elements = math.rename_dims(self.elements, dims, new_dims)
+        values = math.rename_dims(self.values, dims, new_dims)
+        extrapolation = math.rename_dims(self.extrapolation, dims, new_dims, **kwargs)
+        return PointCloud(elements, values, extrapolation, self._add_overlapping, self._bounds, self._color)

    def __eq__(self, other):
        if not type(self) == type(other):
            return False
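`PointCloud` gets the same treatment, renaming both elements and values together. A small sketch (the `points`/`particles` dimension names are illustrative):

```python
from phi.flow import *

positions = math.random_uniform(instance(points=8), channel(vector='x,y'))
cloud = PointCloud(Sphere(positions, radius=0.1))
renamed = math.rename_dims(cloud, 'points', instance(particles=8))  # __replace_dims__
expanded = math.expand(renamed, batch(b=2))                         # __expand__
print(expanded.shape)
```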
14 changes: 7 additions & 7 deletions phi/field/_scene.py
@@ -199,7 +199,7 @@ def at(directory: str or tuple or list or math.Tensor or 'Scene', id: int or mat
        id = math.wrap(id)
        paths = math.map(lambda d, i: join(d, f"sim_{i:06d}"), directory, id)
        # test all exist
-        for path in math.flatten(paths):
+        for path in math.flatten(paths, flatten_batch=True):
            if not isdir(path):
                raise IOError(f"There is no scene at '{path}'")
        return Scene(paths)
@@ -266,7 +266,7 @@ def exist_properties(self):
        if self._properties is not None:
            return True  # must have been written or read
        else:
-            json_file = join(next(iter(math.flatten(self._paths))), "description.json")
+            json_file = join(next(iter(math.flatten(self._paths, flatten_batch=True))), "description.json")
            return isfile(json_file)

def exists_config(self):
@@ -454,31 +454,31 @@ def copy_calling_script(self, full_trace=False, include_context_information=True
        text = "\n\n".join(blocks)
        self.copy_src_text('ipython.py', text)
        if include_context_information:
-            for path in math.flatten(self._paths):
+            for path in math.flatten(self._paths, flatten_batch=True):
                with open(join(path, 'src', 'context.json'), 'w') as context_file:
                    json.dump({
                        'phi_version': phi_version,
                        'argv': sys.argv
                    }, context_file)

    def copy_src(self, script_path, only_external=True):
-        for path in math.flatten(self._paths):
+        for path in math.flatten(self._paths, flatten_batch=True):
            if not only_external or not _is_phi_file(script_path):
                shutil.copy(script_path, join(path, 'src', basename(script_path)))

    def copy_src_text(self, filename, text):
-        for path in math.flatten(self._paths):
+        for path in math.flatten(self._paths, flatten_batch=True):
            target = join(path, 'src', filename)
            with open(target, "w") as file:
                file.writelines(text)

    def mkdir(self):
-        for path in math.flatten(self._paths):
+        for path in math.flatten(self._paths, flatten_batch=True):
            isdir(path) or os.mkdir(path)

    def remove(self):
        """ Deletes the scene directory and all contained files. """
-        for p in math.flatten(self._paths):
+        for p in math.flatten(self._paths, flatten_batch=True):
            p = abspath(p)
            if isdir(p):
                shutil.rmtree(p)
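All path loops now pass `flatten_batch=True` so that batch dimensions of `self._paths` are enumerated as well. A sketch of the difference, assuming the signature `math.flatten(value, flat_dim=instance('flat'), flatten_batch=False)` used in this release (tensor and dimension names illustrative):

```python
from phi import math
from phi.math import batch, spatial

t = math.random_normal(batch(scenes=2), spatial(x=3))
print(math.flatten(t).shape)                      # batch dim 'scenes' is preserved
print(math.flatten(t, flatten_batch=True).shape)  # 'scenes' is flattened into 'flat' too
```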
25 changes: 25 additions & 0 deletions phi/geom/_geom.py
@@ -1,4 +1,5 @@
from numbers import Number
+from typing import Tuple

import numpy as np

@@ -515,6 +516,30 @@ def __stack__(self, values: tuple, dim: Shape, **kwargs) -> 'Geometry':
        else:
            return Geometry.__stack__(self, values, dim, **kwargs)

+    def __concat__(self, values: tuple, dim: str, **kwargs) -> 'Point':
+        if all(isinstance(v, Point) for v in values):
+            return Point(math.concat([v.center for v in values], dim, **kwargs))
+        else:
+            return NotImplemented
+
+    def __replace_dims__(self, dims: Tuple[str, ...], new_dims: Shape, **kwargs) -> 'Point':
+        return Point(math.rename_dims(self._location, dims, new_dims, **kwargs))
+
+    def __expand__(self, dims: Shape, **kwargs) -> 'Point':
+        return Point(math.expand(self._location, dims, **kwargs))
+
+    def __pack_dims__(self, dims: Tuple[str, ...], packed_dim: Shape, pos: int or None, **kwargs) -> 'Point':
+        return Point(math.pack_dims(self._location, dims, packed_dim, pos, **kwargs))
+
+    def __unpack_dim__(self, dim: str, unpacked_dims: Shape, **kwargs) -> 'Point':
+        return Point(math.unpack_dim(self._location, dim, unpacked_dims, **kwargs))
+
+    def __flatten__(self, flat_dim: Shape, flatten_batch: bool, **kwargs) -> 'Point':
+        dims = self.shape.without('vector')
+        if not flatten_batch:
+            dims = dims.non_batch
+        return Point(math.pack_dims(self._location, dims, flat_dim, **kwargs))


def assert_same_rank(rank1, rank2, error_message):
    """ Tests that two objects have the same spatial rank. Objects can be of types: `int`, `None` (no check), `Geometry`, `Shape`, `Tensor` """
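With these magic methods, `Point` geometries now participate in the full set of generic shape operations. A brief sketch (the dimension name `points` is illustrative):

```python
from phi import math
from phi.geom import Point
from phi.math import instance, vec

a = Point(vec(x=0, y=0))
b = Point(vec(x=1, y=2))
pair = math.stack([a, b], instance('points'))  # existing __stack__
both = math.concat([pair, pair], 'points')     # new __concat__
print(math.flatten(both).shape)                # new __flatten__
```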
29 changes: 25 additions & 4 deletions phi/geom/_sphere.py
@@ -1,11 +1,8 @@
-import warnings
-from typing import Dict
+from typing import Tuple

from phi import math

from ._geom import Geometry, _keep_vector
from ..math import wrap, Tensor, Shape
-from ..math.backend import PHI_LOGGER
from ..math.magic import slicing_dict


@@ -120,6 +117,30 @@ def __stack__(self, values: tuple, dim: Shape, **kwargs) -> 'Geometry':
        else:
            return Geometry.__stack__(self, values, dim, **kwargs)

+    def __concat__(self, values: tuple, dim: str, **kwargs) -> 'Sphere':
+        if all(isinstance(v, Sphere) for v in values):
+            return Sphere(math.concat([v.center for v in values], dim, **kwargs), radius=math.concat([v.radius for v in values], dim, **kwargs))
+        else:
+            return NotImplemented
+
+    def __replace_dims__(self, dims: Tuple[str, ...], new_dims: Shape, **kwargs) -> 'Sphere':
+        return Sphere(math.rename_dims(self._center, dims, new_dims, **kwargs), radius=math.rename_dims(self._radius, dims, new_dims, **kwargs))
+
+    def __expand__(self, dims: Shape, **kwargs) -> 'Sphere':
+        return Sphere(math.expand(self._center, dims, **kwargs), radius=self._radius)
+
+    def __pack_dims__(self, dims: Tuple[str, ...], packed_dim: Shape, pos: int or None, **kwargs) -> 'Sphere':
+        return Sphere(math.pack_dims(self._center, dims, packed_dim, pos, **kwargs), radius=math.pack_dims(self._radius, dims, packed_dim, pos, **kwargs))
+
+    def __unpack_dim__(self, dim: str, unpacked_dims: Shape, **kwargs) -> 'Sphere':
+        return Sphere(math.unpack_dim(self._center, dim, unpacked_dims, **kwargs), radius=math.unpack_dim(self._radius, dim, unpacked_dims, **kwargs))
+
+    def __flatten__(self, flat_dim: Shape, flatten_batch: bool, **kwargs) -> 'Shapable':
+        dims = self.shape.without('vector')
+        if not flatten_batch:
+            dims = dims.non_batch
+        return Sphere(math.pack_dims(self._center, dims, flat_dim, **kwargs), radius=math.pack_dims(self._radius, dims, flat_dim, **kwargs))

    def push(self, positions: Tensor, outward: bool = True, shift_amount: float = 0) -> Tensor:
        raise NotImplementedError()
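`Sphere` implements the same protocol, transforming center and radius together. A sketch (names and sizes are illustrative; the radius is given per sphere here so that both tensors carry the packed dimensions):

```python
from phi import math
from phi.geom import Sphere
from phi.math import batch, channel, instance

centers = math.random_uniform(batch(b=2), instance(spheres=3), channel(vector='x,y'))
radii = math.random_uniform(batch(b=2), instance(spheres=3))
spheres = Sphere(centers, radius=radii)
packed = math.pack_dims(spheres, ['b', 'spheres'], instance(flat=6))  # __pack_dims__
print(packed.shape)  # expected: (flat=6, vector=2)
```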
18 changes: 12 additions & 6 deletions phi/math/_functional.py
@@ -1522,9 +1522,12 @@ def native_function(x_flat):
        else:
            y = f(x)
        _, y_tensors = disassemble_tree(y)
-        assert not non_batch(
-            y_tensors[0]), f"Failed to minimize '{f.__name__}' because it returned a non-scalar output {shape(y_tensors[0])}. Reduce all non-batch dimensions, e.g. using math.l2_loss()"
-        return y_tensors[0].sum, (reshaped_native(y_tensors[0], [batch_dims]),)
+        assert not non_batch(y_tensors[0]), f"Failed to minimize '{f.__name__}' because it returned a non-scalar output {shape(y_tensors[0])}. Reduce all non-batch dimensions, e.g. using math.l2_loss()"
+        try:
+            loss_native = reshaped_native(y_tensors[0], [batch_dims])
+        except AssertionError:
+            raise AssertionError(f"Failed to minimize '{f.__name__}' because its output loss {shape(y_tensors[0])} has more batch dimensions than the initial guess {batch_dims}.")
+        return y_tensors[0].sum, (loss_native,)

    atol = backend.to_float(reshaped_native(solve.absolute_tolerance, [batch_dims], force_expand=True))
    maxi = backend.to_int32(reshaped_native(solve.max_iterations, [batch_dims], force_expand=True))
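The single-line assertion and the new `try`/`except` give clearer errors when the objective is not scalar per batch entry. A usage sketch of the contract being enforced (objective and initial guess are illustrative), assuming the `math.minimize`/`math.Solve` API of this release:

```python
from phi import math
from phi.math import batch, spatial

def objective(x):
    return math.l2_loss(x - 1)  # reduces all non-batch dims: one loss value per batch entry

x0 = math.zeros(batch(b=2), spatial(x=4))
solution = math.minimize(objective, math.Solve('L-BFGS-B', 0, 1e-5, x0=x0))
```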
@@ -1766,13 +1769,16 @@ def f(x):
    `identity(value)` which when differentiated, prints the gradient vector.
    """

-    def print_grad(_x, _y, dx):
-        if all_available(_x, dx):
+    def print_grad(params: dict, _y, dx):
+        param_name, x = next(iter(params.items()))
+        if all_available(x, dx):
            if detailed:
                print_(dx, name=name)
            else:
                print(f"{name}: \t{dx}")
-        return dx,
+        else:
+            print(f"Cannot print gradient for {param_name}, data not available.")
+        return {param_name: dx}

    identity = custom_gradient(lambda x: x, print_grad)
    return identity(value)
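This adapts the gradient-printing helper to the dict-based parameter passing of `custom_gradient`. Assuming the enclosing function is exposed as `math.print_gradient` (as its docstring above suggests) and that `math.functional_gradient` is available in this release, usage might look like the following; note that an autodiff backend such as PyTorch, TensorFlow or Jax is required:

```python
from phi import math
from phi.math import spatial

def loss(x):
    x = math.print_gradient(x, name='dL/dx')  # identity in the forward pass
    return math.l2_loss(math.sin(x))

grad_fn = math.functional_gradient(loss)
grads = grad_fn(math.ones(spatial(x=3)))  # prints 'dL/dx: ...' during the backward pass
```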
(Diffs for the remaining 20 changed files are not shown.)
