A versatile simulator for advancing research in event-based and RGB-event data fusion.
Automatically generated results in which objects are randomly selected from a pool, then placed and moved (together with the camera) according to pre-defined rules:
V1 (i.e. BlinkFlow):
V2 (requires checking out the v2 branch):
- Event simulation: event data simulated from high-frequency rendering data (see the sketch after these lists for a typical way to consume the events)
- Simulation of low dynamic range, motion blur, defocus blur and atmospheric effects
- Dense point tracking: tracking ground truth for every pixel of every object at any frame
- Forward/backward optical flow
- Depth maps
Data that are not shown in the demo but are also accessible:
- Normal maps
- Instance segmentation
- Camera poses and intrinsics
- Object poses
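The simulated events are the core output; the snippet below is a minimal sketch (not part of this repository, and the (x, y, t, p) array layout is an assumption) showing how raw events could be accumulated into a single frame for quick visualization.

```python
import numpy as np

def events_to_frame(events, height, width):
    """Scatter-add event polarities into a 2D frame (sketch; assumed column order: x, y, t, p)."""
    frame = np.zeros((height, width), dtype=np.float32)
    x = events[:, 0].astype(np.int64)
    y = events[:, 1].astype(np.int64)
    p = events[:, 3].astype(np.float32)   # polarity, assumed to be -1 / +1
    np.add.at(frame, (y, x), p)           # handles repeated pixel coordinates correctly
    return frame

# Example with synthetic events on a 480x640 sensor:
# ev = np.stack([np.random.randint(0, 640, 1000),          # x
#                np.random.randint(0, 480, 1000),          # y
#                np.sort(np.random.rand(1000)),            # t
#                np.random.choice([-1.0, 1.0], 1000)], 1)  # p
# frame = events_to_frame(ev, 480, 640)
```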
- Install Blender (version 3.3 recommended): https://www.blender.org/download/lts/3-3/
- Install Python dependencies
conda env create -f environment.yml
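After creating the environment, a quick check that the Blender binary is reachable can save a failed render later; this is only a sketch (the repository does not ship such a check).

```python
import shutil
import subprocess

# Sketch: verify that Blender (ideally the 3.3 LTS release) is on PATH.
blender = shutil.which("blender")
if blender is None:
    raise SystemExit("Blender not found on PATH; install the 3.3 LTS release first.")
version_line = subprocess.run([blender, "--version"], capture_output=True, text=True).stdout.splitlines()[0]
print(version_line)  # e.g. "Blender 3.3.x"
```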
- Prepare data and put it under data/. The data includes:
1. ADE20K dataset, or another image dataset that can be used as textures
2. ShapeNetCore.v2 dataset, or other 3D model dataset
3. HDRI dataset; we provide a download script in scripts/download_hdri.py
We provide sample data for fast testing. You can download them using the following command:
python scripts/download_hf_data.py
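Before rendering, it can be worth confirming that the assets landed where you expect; the subfolder names in this sketch are illustrative assumptions, not a layout required by the scripts.

```python
from pathlib import Path

# Sketch: report whether the asset folders exist and roughly how much is inside.
data_root = Path("data")
for sub in ["ADE20K", "ShapeNetCore.v2", "hdri"]:   # illustrative names
    path = data_root / sub
    count = sum(1 for _ in path.rglob("*")) if path.exists() else 0
    print(f"{sub:>18}: {'found' if path.exists() else 'MISSING'} ({count} entries)")
```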
- (Optional) If you are running rendering on a headless machine, you will need to start an X server. To do this, run:
sudo apt-get install xserver-xorg
sudo python3 scripts/start_xserver.py start
export DISPLAY=:0.{id} # for example, to use GPU card 0, set DISPLAY=:0.0
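On a multi-GPU machine each card maps to its own X screen, so the screen can also be selected from Python before launching a render; the sketch below assumes screen :0.<id> corresponds to GPU card <id> as in the example above, and reuses the sample config from the next step.

```python
import os
import subprocess

# Sketch: point the renderer's OpenGL context at the X screen of a chosen GPU,
# then launch the main script (gpu_id is illustrative).
gpu_id = 0
os.environ["DISPLAY"] = f":0.{gpu_id}"
subprocess.run(["python", "main.py", "--config", "configs/blinkflow_v1_example.yaml"], check=True)
```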
- Run the main script
If you want to use the default config (this requires preparing the full dataset), run:
python main.py
Otherwise, to use the sample data, run:
python main.py --config configs/blinkflow_v1_example.yaml
If it runs successfully, you will see results similar to the following under the output folder:
output/train/000000
├── events_left
├── forward_flow
├── hdr
└── hdr.mp4
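The per-sequence folders can be inspected programmatically; this sketch only assumes the subfolder names shown in the tree above and makes no assumption about file formats.

```python
from pathlib import Path

# Sketch: summarize what a rendered sequence contains.
seq = Path("output/train/000000")
for sub in sorted(p for p in seq.iterdir() if p.is_dir()):
    print(f"{sub.name:>14}: {sum(1 for _ in sub.iterdir())} files")
```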
If you find this code useful for your research, please use the following BibTeX entries.
@inproceedings{blinkflow_iros2023,
  title={BlinkFlow: A Dataset to Push the Limits of Event-based Optical Flow Estimation},
  author={Yijin Li and Zhaoyang Huang and Shuo Chen and Xiaoyu Shi and Hongsheng Li and Hujun Bao and Zhaopeng Cui and Guofeng Zhang},
  booktitle={IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  month={October},
  year={2023},
}
@inproceedings{blinkvision_eccv2024,
  title={BlinkVision: A Benchmark for Optical Flow, Scene Flow and Point Tracking Estimation using RGB Frames and Events},
  author={Yijin Li and Yichen Shen and Zhaoyang Huang and Shuo Chen and Weikang Bian and Xiaoyu Shi and Fu-Yun Wang and Keqiang Sun and Hujun Bao and Zhaopeng Cui and Guofeng Zhang and Hongsheng Li},
  booktitle={European Conference on Computer Vision (ECCV)},
  year={2024},
}