Imaging and Vision Lab

Dynamic Fluid Surface Reconstruction Using Deep Neural Network

Simron Thapa, Nianyi Li, and Jinwei Ye

Abstract

Recovering the dynamic fluid surface is a long-standing challenge in computer vision. Most existing image-based methods require multiple views or a dedicated imaging system. Here we present a learning-based single-image approach for 3D fluid surface reconstruction. Specifically, we design a deep neural network that estimates the depth and normal maps of a fluid surface by analyzing the refractive distortion of a reference background pattern. Because fluid surfaces are dynamic, our network uses recurrent layers that carry temporal information from previous frames to achieve spatio-temporally consistent reconstruction given a video input. Since real fluid data with ground truth are scarce, we synthesize a large fluid dataset using physics-based fluid modeling and rendering techniques for network training and validation. Through experiments on simulated and real captured fluid images, we demonstrate that our proposed deep neural network trained on our fluid dataset can recover dynamic 3D fluid surfaces with high accuracy.

Network Architecture

Our fluid surface reconstruction network (FSRN) consists of two sub-nets:

  1. an encoder-decoder based convolutional neural network (FSRN-CNN) for per-frame depth and normal estimation;
  2. a recurrent neural network (FSRN-RNN) for enforcing the temporal consistency across multiple frames.
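The two-stage design above can be sketched in PyTorch. This is a minimal illustration, not the authors' implementation: the layer counts, channel widths, class names (`FSRNCNN`, `FSRNRNN`), and the ConvGRU-style recurrence are all assumptions chosen only to show how a per-frame encoder-decoder feeds a temporal refinement stage.

```python
import torch
import torch.nn as nn

class FSRNCNN(nn.Module):
    """Hypothetical encoder-decoder: refraction image -> depth + normal maps."""
    def __init__(self, feat=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat, 2 * feat, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(2 * feat, feat, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(feat, 4, 4, stride=2, padding=1),  # 1 depth + 3 normal channels
        )

    def forward(self, x):
        out = self.decoder(self.encoder(x))
        return out[:, :1], out[:, 1:]  # depth map, normal map

class FSRNRNN(nn.Module):
    """Hypothetical ConvGRU-style recurrence that refines per-frame predictions
    using a hidden state carried across frames."""
    def __init__(self, ch=4, hidden=8):
        super().__init__()
        self.hidden = hidden
        self.update = nn.Conv2d(ch + hidden, hidden, 3, padding=1)
        self.out = nn.Conv2d(hidden, ch, 3, padding=1)

    def forward(self, frames):
        # frames: (T, B, 4, H, W) stacked per-frame depth+normal predictions
        h = torch.zeros(frames.shape[1], self.hidden, *frames.shape[3:])
        refined = []
        for f in frames:
            h = torch.tanh(self.update(torch.cat([f, h], dim=1)))
            refined.append(self.out(h))
        return torch.stack(refined)

# Usage: per-frame estimation followed by temporal refinement.
cnn, rnn = FSRNCNN(), FSRNRNN()
video = torch.randn(5, 1, 3, 64, 64)  # 5 frames, batch 1
per_frame = torch.stack([torch.cat(cnn(img), dim=1) for img in video])
refined = rnn(per_frame)
```

The key design point this sketch illustrates is the split of responsibilities: the CNN maps each refraction image to geometry independently, while the recurrent stage only enforces consistency across frames, so the temporal module stays lightweight.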


Physics-based fluid image dataset

It is challenging to acquire a fluid dataset with ground truth surface depths and normals using physical devices. We resort to physics-based modeling and rendering to synthesize a large fluid dataset for our network training. We use fluid equations derived from the Navier-Stokes equations to model realistic fluid surfaces and implement a physics-based renderer to simulate refraction images. Our dataset contains over 45,000 refraction images (75 fluid sequences) with ground truth depth and normal maps. We also use a variety of reference patterns to enrich our dataset, including noise patterns (e.g., Perlin, Simplex, and Worley), checkerboards of different sizes, and miscellaneous textures (e.g., bricks, tiles, etc.). Sample images from our dataset are shown in the figure below.
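The two ingredients of this pipeline, a wave simulation and a refraction renderer, can be sketched in NumPy. This is an illustrative toy, not the paper's simulator: it uses an explicit 2D wave-equation step (a common small-amplitude simplification of Navier-Stokes surface flow) and a paraxial small-angle approximation of Snell's law; the function names, step size, damping, and depth parameters are all assumptions.

```python
import numpy as np

def step_wave(h, h_prev, c=0.5, damp=0.999):
    """One explicit time step of the 2D wave equation on a periodic grid."""
    lap = (np.roll(h, 1, 0) + np.roll(h, -1, 0) +
           np.roll(h, 1, 1) + np.roll(h, -1, 1) - 4 * h)
    h_next = damp * (2 * h - h_prev + (c ** 2) * lap)
    return h_next, h

def render_refraction(pattern, h, eta=1.33, depth=20.0):
    """Warp a background pattern by the surface slopes, using a small-angle
    (paraxial) approximation of refraction through water (eta = 1.33)."""
    gy, gx = np.gradient(h)                    # surface slopes
    shift = depth * (1.0 - 1.0 / eta)          # displacement scale at the tank bottom
    ys, xs = np.meshgrid(np.arange(h.shape[0]),
                         np.arange(h.shape[1]), indexing="ij")
    yi = np.clip(np.round(ys + shift * gy).astype(int), 0, h.shape[0] - 1)
    xi = np.clip(np.round(xs + shift * gx).astype(int), 0, h.shape[1] - 1)
    return pattern[yi, xi]

# Example: a checkerboard reference pattern viewed through a spreading ripple.
n = 64
pattern = ((np.arange(n)[:, None] // 8 + np.arange(n)[None, :] // 8) % 2).astype(float)
y, x = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
h = 0.5 * np.exp(-((y - n / 2) ** 2 + (x - n / 2) ** 2) / 50.0)  # initial bump
h_prev = h.copy()
for _ in range(10):
    h, h_prev = step_wave(h, h_prev)
img = render_refraction(pattern, h)
```

Because the simulator produces the height field `h` directly, the ground truth depth (and the normals, via the gradient of `h`) come for free with every rendered frame, which is exactly why synthetic data sidesteps the capture problem described above.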


Results

Synthetic Results

We first evaluate our approach on our synthetic fluid dataset. Our validation set contains 5,000 refraction images (20 unique dynamic fluid videos) that do not overlap with the training set.


Real Results

We also perform real experiments to evaluate our network. We show our real experiment setup, the results on real fluid images, and re-rendered refraction images in comparison with the real captured ones.

More Details

  • Technical paper. [PDF]
  • Supplementary material. [PDF]
  • Source code & dataset. [GitHub]
  • Spotlight video. [YouTube]
  • Talk video. [YouTube]

Citation

  • Simron Thapa, Nianyi Li, and Jinwei Ye, "Dynamic Fluid Surface Reconstruction Using Deep Neural Network", in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
  • @InProceedings{Thapa_2020_CVPR,
    author = {Thapa, Simron and Li, Nianyi and Ye, Jinwei},
    title = {Dynamic Fluid Surface Reconstruction Using Deep Neural Network},
    booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month = {June},
    year = {2020}
    }