This repo contains all our code for applying Vision Transformer models to the imaging multi-regression task of parameter and uncertainty estimation for strong lensing systems. The paper is published in the ECCV 2022 Workshops, and this is the arXiv link.
Authors: Kuan-Wei Huang, Geoff Chih-Fan Chen, Po-Wen Chang, Sheng-Chieh Lin, ChiaJung Hsu, Vishal Thengane, and Joshua Yao-Yu Lin
Tag v3.0.0 is the code version when the paper was submitted.
- This notebook is used to generate the images (data) and parameters (targets) that make up the dataset of strong lensing systems.
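The actual simulation notebook is not reproduced here. As a toy stand-in (not the repo's simulation pipeline), the sketch below shows the general shape of such a dataset: each sample is an image paired with the physical parameters used to generate it. The "parameters" here (a source position and width of a Gaussian blob) are placeholders for the real lens-model parameters.

```python
import numpy as np

def make_mock_sample(rng, size=64):
    """Toy stand-in for a simulated lensing image: a 2-D Gaussian blob
    whose position and width play the role of physical parameters."""
    # Hypothetical "parameters" (regression targets) drawn from simple priors.
    x0, y0 = rng.uniform(-1, 1, size=2)   # source position
    sigma = rng.uniform(0.1, 0.5)         # effective radius
    yy, xx = np.meshgrid(np.linspace(-2, 2, size),
                         np.linspace(-2, 2, size), indexing="ij")
    image = np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sigma ** 2))
    image += rng.normal(scale=0.01, size=image.shape)  # observational noise
    return image.astype(np.float32), np.array([x0, y0, sigma], np.float32)

rng = np.random.default_rng(0)
images, targets = zip(*(make_mock_sample(rng) for _ in range(8)))
images, targets = np.stack(images), np.stack(targets)
# images: (8, 64, 64) data array; targets: (8, 3) parameter array
```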
- This notebook is used to process the dataset: data splitting and target normalization.
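The processing notebook itself is not shown here; a minimal sketch of the two steps it names (a train/val/test split and z-scoring of the targets) might look like the following. The split fractions and the `1e-8` epsilon are illustrative choices, not values taken from the repo; statistics are computed on the training split only so the validation and test sets stay untouched.

```python
import numpy as np

def split_and_normalize(images, targets, rng, fracs=(0.8, 0.1, 0.1)):
    """Shuffle, split into train/val/test, and z-score the targets
    using statistics computed on the training split only."""
    idx = rng.permutation(len(images))
    n_train = int(fracs[0] * len(images))
    n_val = int(fracs[1] * len(images))
    parts = np.split(idx, [n_train, n_train + n_val])
    splits = {name: (images[i], targets[i])
              for name, i in zip(("train", "val", "test"), parts)}
    mean = splits["train"][1].mean(axis=0)
    std = splits["train"][1].std(axis=0) + 1e-8  # avoid division by zero
    normalized = {name: (imgs, (tgts - mean) / std)
                  for name, (imgs, tgts) in splits.items()}
    return normalized, mean, std

rng = np.random.default_rng(0)
images = rng.normal(size=(100, 64, 64)).astype(np.float32)
targets = rng.normal(loc=2.0, scale=3.0, size=(100, 4)).astype(np.float32)
splits, mean, std = split_and_normalize(images, targets, rng)
```

The returned `mean` and `std` must be saved alongside the model, since predictions have to be mapped back to physical units later.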
- `train_model.py` is the main code to train models (ViT and ResNet).
- The `src` folder contains scripts for helper functions.
- This notebook is used to train a ViT model.
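The repo's actual training code lives in `train_model.py` and the notebooks; as a rough sketch of what "training a ViT for multi-regression" means here, the toy model below (an assumed architecture, much smaller than the real one) patch-embeds an image, runs a transformer encoder, and regresses all parameters from the class token with an MSE loss.

```python
import torch
import torch.nn as nn

class TinyViTRegressor(nn.Module):
    """Minimal ViT-style multi-regression model: patch embedding,
    transformer encoder, and a linear head with one output per
    lens parameter. A sketch, not the repo's architecture."""
    def __init__(self, image_size=64, patch=8, dim=64, n_params=4):
        super().__init__()
        n_patches = (image_size // patch) ** 2
        self.embed = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos = nn.Parameter(torch.zeros(1, n_patches + 1, dim))
        layer = nn.TransformerEncoderLayer(dim, nhead=4, dim_feedforward=128,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, n_params)

    def forward(self, x):
        tokens = self.embed(x).flatten(2).transpose(1, 2)  # (B, N, dim)
        tokens = torch.cat([self.cls.expand(len(x), -1, -1), tokens], dim=1)
        out = self.encoder(tokens + self.pos)
        return self.head(out[:, 0])  # regress from the class token

model = TinyViTRegressor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.randn(8, 1, 64, 64)   # placeholder batch of lens images
targets = torch.randn(8, 4)          # normalized parameter targets
loss = nn.functional.mse_loss(model(images), targets)  # one training step
opt.zero_grad(); loss.backward(); opt.step()
```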
- The `training_eccv` folder contains the notebooks used to train models for our ECCV paper.
- The `training_icml` folder contains the notebooks used to train models for our ICML paper.
- `predict.py` is the source code to make predictions using a trained model.
- This notebook uses `predict.py` to make predictions for our ECCV paper.
- `visualization.py` contains objects and functions for visualization.
- This notebook uses `visualization.py` to make figures for our ECCV paper.
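A sketch of the prediction/figure-making stage, under stated assumptions: the normalization statistics, parameter names, and file name below are placeholders (the real ones come from the repo's processing step and `visualization.py`). The key steps are undoing the target normalization and plotting predicted vs. true values with a perfect-recovery line.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend, suitable for scripts
import matplotlib.pyplot as plt

# Hypothetical normalization statistics saved at training time.
mean = np.array([0.5, 0.0, 0.0, 1.0])
std = np.array([0.2, 0.3, 0.3, 0.5])

rng = np.random.default_rng(0)
true_params = rng.normal(size=(200, 4)) * std + mean
# Stand-in for model outputs in normalized units, with a little scatter.
pred_norm = (true_params - mean) / std + rng.normal(scale=0.1, size=(200, 4))
pred_params = pred_norm * std + mean  # undo the target normalization

names = ["param_1", "param_2", "param_3", "param_4"]  # placeholder labels
fig, axes = plt.subplots(1, 4, figsize=(16, 4))
for ax, name, t, p in zip(axes, names, true_params.T, pred_params.T):
    ax.scatter(t, p, s=5, alpha=0.5)
    lim = (min(t.min(), p.min()), max(t.max(), p.max()))
    ax.plot(lim, lim, "k--", lw=1)  # perfect-recovery line
    ax.set_xlabel(f"true {name}")
    ax.set_ylabel(f"predicted {name}")
fig.tight_layout()
fig.savefig("pred_vs_true.png")
```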