Merge branch 'main' of https://github.com/philippwulff/D-NeRF into main
philippwulff committed Jul 13, 2022
2 parents f8c816c + fa0bb9c commit a28abda
Showing 1 changed file with 6 additions and 3 deletions.
README.md: 9 changes (6 additions & 3 deletions)
@@ -27,9 +27,13 @@ If you want to directly explore the models or use our training data, you can dow
```

*Download Datasets*

**DeepDeform**. This is an RGB-D dataset of dynamic scenes with fixed camera poses. You can request access on the project's [GitHub page](https://github.com/AljazBozic/DeepDeform).

**Own Data**. Download from [here](https://drive.google.com/drive/folders/1hUv1UZfxtmqVtushTH2_obexMv7mVu8L?usp=sharing).

**Generate Own Scenes**. Own scenes can be easily generated and integrated. We used an iPad with a LiDAR sensor (app: Record3D; export videos as EXR + RGB). Convert a recording to the correct dataset format by running `load_owndataset.py` (specify the correct args in `main` and create a scene configuration entry).
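
The exact dataset format that `load_owndataset.py` writes is not shown in this diff. As a rough sketch, assuming the fork keeps the upstream D-NeRF convention of a `transforms_*.json` file with a normalized `time` value per frame, a converted scene could be assembled roughly like this (the helper, paths, and field values below are illustrative only):

```
import json
from pathlib import Path

# Sketch of a D-NeRF-style transforms file (assumed format): each frame stores
# an image path, a 4x4 camera-to-world matrix, and a timestamp normalized to
# [0, 1]. Field names follow the upstream D-NeRF datasets; this fork's RGB-D
# loader may expect additional entries (e.g. depth maps).
def write_transforms(scene_dir, camera_angle_x, poses):
    frames = [
        {
            "file_path": f"./train/r_{i:03d}",     # RGB frame, extension omitted
            "time": i / max(len(poses) - 1, 1),    # normalized timestamp
            "transform_matrix": pose,              # 4x4 matrix as nested lists
        }
        for i, pose in enumerate(poses)
    ]
    out = {"camera_angle_x": camera_angle_x, "frames": frames}
    Path(scene_dir).mkdir(parents=True, exist_ok=True)
    with open(Path(scene_dir) / "transforms_train.json", "w") as f:
        json.dump(out, f, indent=2)

# Example with two dummy identity poses:
identity = [[1.0, 0.0, 0.0, 0.0],
            [0.0, 1.0, 0.0, 0.0],
            [0.0, 0.0, 1.0, 0.0],
            [0.0, 0.0, 0.0, 1.0]]
write_transforms("data/johannes", camera_angle_x=0.69, poses=[identity, identity])
```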

## How to Use It

If you have downloaded the pre-trained weights, you can test the models without training them. Otherwise, download our data or use your own to train a model.
@@ -41,7 +45,6 @@ You can use these jupyter notebooks to explore the model.
| ----------- | ----------- |
| Synthesize novel views at an arbitrary point in time. (Requires trained model) | render.ipynb|
| Reconstruct the mesh at an arbitrary point in time. (Requires trained model) | reconstruct.ipynb|
-| Quantitatively evaluate trained models. | metrics.ipynb|
| See the camera trajectory of the training frames or novel views. | eda_virtual_camera.ipynb|
| Visualize the sampling along camera rays. (Requires training logs) | eda_ray_sampling.ipynb|

@@ -51,12 +54,12 @@ First download the dataset. Then,
```
conda activate dnerf
export PYTHONPATH='path/to/D-NeRF'
export CUDA_VISIBLE_DEVICES=0
-python run_dnerf.py --config configs/mutant.txt
+python run_dnerf.py --config configs/johannes.txt
```
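
The scene configuration file passed via `--config` (e.g. `configs/johannes.txt`) is not included in this diff. As a rough sketch, the upstream D-NeRF example configs (such as `configs/mutant.txt`) use key/value pairs along the following lines; the keys and values below are assumptions based on that upstream layout, and this fork's configs may add or rename options (for instance, `dataset_type = blender` refers to the synthetic-scene loader, so own RGB-D scenes may use a different type):

```
expname = johannes
basedir = ./logs
datadir = ./data/johannes
dataset_type = blender

half_res = True
white_bkgd = False
use_viewdirs = True
N_rand = 500
N_samples = 64
N_importance = 128
lrate_decay = 500
```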

### Test
First download the pre-trained weights and the dataset. Then,
```
python run_dnerf.py --config configs/johannes.txt --render_only --render_test
```
-This command will run the `johannes` experiment. When finished, results are saved to `./logs/johannes/renderonly_test_799999`. To quantitatively evaluate the model, run the `metrics.ipynb` notebook.
+This command will run the `johannes` experiment. When finished, results are saved to `./logs/johannes/renderonly_test_799999`. The quantitative results are stored in the test run's folder as `metrics.txt`.
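
For reference, the headline number in such NeRF evaluations is usually PSNR, which follows directly from the mean squared error between rendered and ground-truth frames (whether `metrics.txt` also lists SSIM or LPIPS is not shown in this diff). A minimal sketch of the PSNR computation, assuming images as float arrays scaled to [0, 1]:

```
import numpy as np

def psnr(rendered, ground_truth):
    # Peak signal-to-noise ratio for images in [0, 1]: PSNR = -10 * log10(MSE).
    mse = np.mean((rendered - ground_truth) ** 2)
    return -10.0 * np.log10(mse)

# Example with a random image and a noisy copy of it:
clean = np.random.rand(400, 400, 3)
noisy = np.clip(clean + np.random.normal(0.0, 0.05, clean.shape), 0.0, 1.0)
print(f"PSNR: {psnr(clean, noisy):.2f} dB")
```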
