ECVR-MVS: Enhancing Cost Volume Representation with Window Binary Search and Ratio Laplacian Measure for Multi-View Stereo
- We design the Window Binary Search (WBS) method, which selects the binary windows with the highest probability along the sampled depth planes as the sampling regions for the next stage, ensuring the cost volume is built on more beneficial depth planes and improving its availability.
- We encode ground truth (GT) with the proposed Two-hot method to supervise the probability volumes associated with WBS. This approach ensures that depth planes closer to the actual depth have higher probabilities, aligning better with actual matching patterns.
- We introduce the Ratio Laplacian Measure for feature volume aggregation, achieving a view-weighted point similarity measure to improve the matching results of the multi-view features.
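The Two-hot encoding above is left implicit; as a rough illustration (not necessarily the paper's exact formulation), the two depth planes bracketing the ground-truth depth can be weighted by linear interpolation, so the plane nearer the true depth receives the higher probability:

```python
import numpy as np

def two_hot_encode(gt_depth, depth_planes):
    """Encode a ground-truth depth as a 'two-hot' probability vector.

    Only the two depth planes bracketing gt_depth receive non-zero
    probability, weighted by linear-interpolation distance, so the
    plane closer to the true depth gets the larger weight.
    """
    probs = np.zeros(len(depth_planes))
    hi = np.searchsorted(depth_planes, gt_depth)  # first plane >= gt_depth
    if hi == 0:                        # gt below the first plane
        probs[0] = 1.0
    elif hi == len(depth_planes):      # gt above the last plane
        probs[-1] = 1.0
    else:
        lo = hi - 1
        span = depth_planes[hi] - depth_planes[lo]
        w_hi = (gt_depth - depth_planes[lo]) / span
        probs[lo] = 1.0 - w_hi         # closer plane -> higher probability
        probs[hi] = w_hi
    return probs

planes = np.array([425.0, 450.0, 475.0, 500.0])
p = two_hot_encode(430.0, planes)      # gt lies between the first two planes
```

Supervising the probability volume against such a target, rather than a one-hot plane index, keeps sub-plane depth information in the loss.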
Our work mainly uses the DTU, BlendedMVS, and Tanks and Temples datasets to train and evaluate our models.
- DTU training set (640×512): Aliyun Netdisk
- DTU test set (1600×1200): Baidu Netdisk, code: 6au3
- BlendedMVS dataset (768×576): Official Website
- Tanks and Temples dataset: Baidu Netdisk, code: taj1
- ETH3D dataset
Check the configuration:
args/base.py
root_dir: root directory of all datasets.
args/dtu.py
DTUTrain.dataset_path: DTU training set directory.
DTUTrain.pair_path: DTU "pair.txt" file path.
DTUVal.dataset_path: DTU validation set directory.
DTUVal.pair_path: DTU "pair.txt" file path.
args/bld.py
BlendedMVSTrain.dataset_path: BlendedMVS training set directory.
BlendedMVSVal.dataset_path: BlendedMVS validation set directory.
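The entries above suggest simple attribute-style config classes. A hypothetical sketch of what such an `args` module might look like (the class layout and paths here are assumptions for illustration, not the repo's actual code):

```python
# Hypothetical sketch of an args/bld.py-style config module.
# Real field names follow the README; the paths are placeholders.

class BlendedMVSTrain:
    dataset_path = "/data/mvs/BlendedMVS/train"   # assumed path

class BlendedMVSVal:
    dataset_path = "/data/mvs/BlendedMVS/val"     # assumed path
```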
Run the script for training.
# for DTU
python train.py -d dtu
# for BlendedMVS
python train.py -d bld
# fine-tuned on the BlendedMVS
python train.py -d bld -p pth/dtu_11_136100.pth
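`train.py` presumably reads the `-d` and `-p` flags shown above with `argparse`; a hypothetical sketch of just those two options (option names beyond the short flags are assumptions):

```python
import argparse

# Minimal sketch of the two flags used in the commands above:
# -d selects the training dataset, -p optionally resumes from a checkpoint.
parser = argparse.ArgumentParser(description="Train ECVR-MVS")
parser.add_argument("-d", "--dataset", choices=["dtu", "bld"], required=True,
                    help="training dataset: DTU or BlendedMVS")
parser.add_argument("-p", "--pretrained", default=None,
                    help="optional checkpoint (.pth) to fine-tune from")

# Example: fine-tuning on BlendedMVS from a DTU checkpoint.
args = parser.parse_args(["-d", "bld", "-p", "pth/dtu_11_136100.pth"])
```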
Check the configuration:
args/base.py
root_dir: root directory of all datasets.
output_path: output directory.
args/dtu.py
DTUTest.dataset_path: DTU test set directory.
DTUTest.pair_path: DTU "pair.txt" file path.
args/tanks.py
TanksTest.dataset_path: Tanks and Temples dataset directory.
TanksTest.scence_list: list of all scenes in the Tanks and Temples dataset.
args/eth3d.py
Eth3dTest.dataset_path: ETH3D dataset directory.
Eth3dTest.scence_list: list of all scenes in the ETH3D dataset.
args/custom.py
CustomTest.dataset_path: custom dataset directory.
CustomTest.scene_list: list of all scenes in the custom dataset.
Run the script for testing.
# DTU
python test.py -p pth/dtu_11_136100.pth -d dtu
# Tanks and Temples
python test.py -p pth/bld_9_74100.pth -d tanks
# Eth3d
python test.py -p pth/bld_9_74100.pth -d eth3d
# Custom dataset
python test.py -p pth/bld_9_74100.pth -d custom
Check the configuration.
tools/filter/conf.py
dataset: dataset to filter, such as dtu, tanks-inter, tanks-adv, custom.
dataset_root: current dataset root directory.
test_folder: the root path where test.py outputs depth maps, confidence maps, etc.
outply_folder: output point cloud save path.
scenes: scenes included in the current dataset.
Run.
cd tools/filter
python dypcd.py
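`dypcd.py` performs the depth-map filtering and fusion; its exact logic is repo-specific, but a typical geometric-consistency check reprojects each reference pixel into a source view and back, keeping pixels whose reprojection error and relative depth difference are small. A simplified NumPy sketch under those assumptions (function name and thresholds are illustrative, not the repo's):

```python
import numpy as np

def check_geometric_consistency(depth_ref, K_ref, T_ref,
                                depth_src, K_src, T_src,
                                pix_thr=1.0, rel_depth_thr=0.01):
    """Boolean mask of reference pixels consistent with one source view.

    T_* are 4x4 world-to-camera extrinsics.  Each reference pixel is
    lifted to 3D, projected into the source view, the source depth is
    sampled (nearest neighbour), lifted back, and reprojected into the
    reference; pixels with small reprojection error and small relative
    depth difference pass the check.
    """
    h, w = depth_ref.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    pix = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])        # 3 x N

    # reference pixel -> world point
    cam_ref = np.linalg.inv(K_ref) @ (pix * depth_ref.ravel())
    world = np.linalg.inv(T_ref) @ np.vstack([cam_ref, np.ones(h * w)])

    # world point -> source pixel (nearest-neighbour lookup)
    cam_src = (T_src @ world)[:3]
    uv_src = K_src @ cam_src
    u = np.clip(np.round(uv_src[0] / uv_src[2]).astype(int), 0, w - 1)
    v = np.clip(np.round(uv_src[1] / uv_src[2]).astype(int), 0, h - 1)

    # sample the source depth and lift back to a world point
    d_src = depth_src[v, u]
    cam_back = np.linalg.inv(K_src) @ (np.stack([u, v, np.ones(h * w)]) * d_src)
    world_back = np.linalg.inv(T_src) @ np.vstack([cam_back, np.ones(h * w)])

    # reproject into the reference view and compare
    uv_ref = K_ref @ (T_ref @ world_back)[:3]
    z_back = uv_ref[2]
    err_pix = np.hypot(uv_ref[0] / z_back - pix[0], uv_ref[1] / z_back - pix[1])
    err_depth = np.abs(z_back - depth_ref.ravel()) / depth_ref.ravel()

    mask = (err_pix < pix_thr) & (err_depth < rel_depth_thr)
    return mask.reshape(h, w)
```

In a fusion pipeline, a pixel is usually kept only if it passes this check against several source views and its confidence exceeds a threshold; surviving pixels are then back-projected into a merged point cloud.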
Quantitative results on the DTU evaluation set (lower is better for all columns):
| Acc. (mm) | Comp. (mm) | Overall (mm) | Time (s/view) | Memory (GB) |
|---|---|---|---|---|
| 0.339 | 0.245 | 0.292 | 0.255 | 2.97 |
F-scores on the Tanks and Temples intermediate set:
| Fam. | Fra. | Hor. | Lig. | M60 | Pan. | Pla. | Tra. | Mean↑ |
|---|---|---|---|---|---|---|---|---|
| 82.28 | 69.48 | 62.92 | 64.48 | 66.06 | 62.13 | 62.58 | 60.07 | 66.25 |
F-scores on the Tanks and Temples advanced set:
| Aud. | Bal. | Cou. | Mus. | Pal. | Tem. | Mean↑ |
|---|---|---|---|---|---|---|
| 33.17 | 46.23 | 41.11 | 53.40 | 36.39 | 39.71 | 41.67 |
Our work is partially based on these open-source works: MVSNet, MVSNet-pytorch, CasMVSNet, and D2HC-RMVSNet. We appreciate their contributions to the MVS community.
If you find this project useful for your research, please cite:
@article{
}