MIP-plicits: Level of Detail Factorization of Neural Implicits Sphere Tracing

1PUC-Rio
2IMPA
3University of Coimbra

[Teaser figure, left to right: Coarse, Neural Implicit Normal Mapping, Baseline]

MIP-plicits are sets of Neural Implicits trained on the same data and sorted by capacity. They support Level of Detail and animation without the need for spatial data structures. This is achieved by letting Sphere Tracing infer distances from the MIP-plicit, a procedure we call Multiscale Sphere Tracing. The approach also enables normals to be transferred between Neural Implicits, which we call Neural Implicit Normal Mapping.

News


[Jan 25th 2022] Page online.
[Jan 26th 2022] Results video is available.
[Jan 26th 2022] Live video is available.

Abstract


We introduce MIP-plicits, a novel approach for rendering 3D and 4D Neural Implicits that divides the problem into macro and meso components. We rely on the iterative nature of the sphere tracing algorithm, the spatial continuity of the Neural Implicit representation, and the association between a network architecture's complexity and the detail it can represent. This approach does not rely on spatial data structures, and can mix Neural Implicits trained previously and separately as detail levels.

We also introduce Neural Implicit Normal Mapping, a core component of the problem factorization. This concept is analogous to classic normal mapping on meshes, broadly used in Computer Graphics.

Finally, we derive an analytic equation and an algorithm that simplify the normal calculation of Neural Implicits, adapted for evaluation by the General Matrix Multiply algorithm (GEMM). Current approaches rely on finite differences, which impose additional inferences at auxiliary points and introduce discretization error.
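The idea of a GEMM-friendly analytic gradient can be illustrated with forward-mode differentiation through a small sine-activated MLP: the Jacobian is propagated alongside the activations, so every step is a matrix multiply. This is a minimal NumPy sketch of the general idea, not the paper's exact derivation; the network shapes and names are ours.

```python
import numpy as np

def mlp_sdf_and_grad(x, weights, biases):
    """Evaluate a sine-activated MLP SDF and its analytic gradient.

    Forward-mode differentiation: alongside each activation we carry its
    Jacobian with respect to the input point, so the whole computation
    reduces to batched matrix multiplies (GEMM-friendly). Illustrative
    sketch only; the paper derives the analytic equation in full.
    """
    n, d = x.shape
    a = x                                   # activations, shape (n, d)
    J = np.broadcast_to(np.eye(d), (n, d, d)).copy()  # da/dx, starts as identity
    for W, b in zip(weights[:-1], biases[:-1]):
        z = a @ W.T + b                     # pre-activation, shape (n, out)
        a = np.sin(z)
        # chain rule: dz/dx = W @ J, then da/dx = cos(z) * dz/dx
        J = np.cos(z)[:, :, None] * np.einsum('oh,nhi->noi', W, J)
    W, b = weights[-1], biases[-1]          # linear output layer
    sdf = (a @ W.T + b)[:, 0]
    grad = np.einsum('oh,nhi->noi', W, J)[:, 0, :]
    return sdf, grad
```

The gradient agrees with central finite differences up to discretization error, but requires no auxiliary evaluation points.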

Overview


The key idea behind MIP-plicits is to exploit the iterative nature of Sphere Tracing and the fact that it outputs 3D points at each iteration. Those points are defined in the underlying space of the SDF of a Neural Implicit, but also in the space of any slightly different SDF of another Neural Implicit trained on the same data (i.e. with a different capacity). We can use those 3D points to transition between the underlying spaces of two SDFs, provided that the zero level-set of the finer Neural Implicit lies in a neighborhood of the zero level-set of the coarser Neural Implicit. We call this the Level of Detail (LOD) condition. The same points can also be used to map normals between the Neural Implicits, which we call Neural Implicit Normal Mapping. Unlike classic normal mapping, this approach is volumetric: it needs no parametrization, nor does it have to deal with distortions from projection. The LOD condition can be used with both 3D and 4D (3D plus time) Neural Implicits.
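The normal-transfer idea can be sketched in a few lines: a point found on the coarse surface is handed to the finer SDF, whose gradient becomes the shading normal. The helper below is ours, for illustration; it uses central differences for brevity, whereas the paper derives an analytic gradient precisely to avoid this.

```python
import math

def normal_from_fine(p, fine_sdf, h=1e-4):
    """Volumetric normal mapping sketch (hypothetical helper).

    p is a hit point obtained by tracing the *coarse* Neural Implicit;
    the shading normal is the (normalized) gradient of the *finer* SDF
    at that same point. No surface parametrization is required, since
    both SDFs share the ambient 3D space.
    """
    g = []
    for i in range(3):  # central differences per coordinate (for brevity)
        dp, dm = list(p), list(p)
        dp[i] += h
        dm[i] -= h
        g.append((fine_sdf(dp) - fine_sdf(dm)) / (2.0 * h))
    norm = math.sqrt(sum(c * c for c in g))
    return [c / norm for c in g]
```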

Suppose we want to find the intersection of a ray with the zero level-set of a finer Neural Implicit Surface Sj+1, using the intersection with a coarser Neural Implicit Surface Sj as acceleration. To ensure that the ray does not miss any meaningful intersection, it suffices to intersect the neighborhood of Sj that contains Sj+1. This is the LOD condition.
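The two-stage tracing this implies can be sketched as follows. The function and parameter names are ours; delta stands for the radius of the neighborhood of Sj assumed to contain Sj+1 (the LOD condition), and the SDFs are plain callables standing in for network inferences.

```python
def multiscale_sphere_trace(origin, direction, coarse_sdf, fine_sdf,
                            delta=0.05, eps=1e-4, max_steps=128):
    """Sketch of Multiscale Sphere Tracing under the LOD condition.

    Stage 1: sphere trace the coarse SDF offset by delta, i.e. the
    boundary of the delta-neighborhood of Sj, which is assumed to
    contain the finer zero level-set Sj+1.
    Stage 2: hand the current ray parameter over to the fine SDF and
    finish the march there. Returns the hit parameter t, or None.
    """
    t = 0.0
    for _ in range(max_steps):              # stage 1: coarse network
        p = [origin[i] + t * direction[i] for i in range(3)]
        d = coarse_sdf(p) - delta           # distance to the neighborhood boundary
        if d < eps:
            break                           # entered the delta-neighborhood of Sj
        t += d
    for _ in range(max_steps):              # stage 2: fine network takes over
        p = [origin[i] + t * direction[i] for i in range(3)]
        d = fine_sdf(p)
        if d < eps:
            return t                        # hit on the finer level-set
        t += d
    return None                             # miss
```

For example, with a unit sphere as the coarse SDF and a sphere of radius 0.98 as the fine SDF (well inside a delta = 0.05 band), a ray from (0, 0, -3) along +z hits at t close to 2.02.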

Results


Armadillo

[Figure, left to right: Coarse, Neural Implicit Normal Mapping, Baseline]

Happy Buddha

[Figure, left to right: Coarse, Neural Implicit Normal Mapping, Baseline]

Lucy

[Figure, left to right: Coarse, Neural Implicit Normal Mapping, Baseline]

Neural Implicit Normal Mapping using a MIP-plicit (center). The normals of the detailed Neural Implicit on the right are mapped onto the coarse version on the left.

Paper


MIP-plicits: Level of Detail Factorization of Neural Implicits Sphere Tracing

Vinícius da Silva, Tiago Novello, Guilherme Schardong, Luiz Schirmer, Hélio Lopes and Luiz Velho

Paper preprint (PDF, 4.2 MB)
arXiv version
BibTeX
Video
Live

Please send feedback and questions to Vinícius da Silva.

Citation


@article{silva2022mip-plicits,
	title = {MIP-plicits: Level of Detail Factorization of Neural Implicits Sphere Tracing},
	author = {da Silva, Vin\'icius and Novello, Tiago and Schardong, Guilherme and Schirmer,
		Luiz and Lopes, H\'elio and Velho, Luiz},
	journal = {arXiv:2201.09147},
	year = {2022},
	month = jan
}

Acknowledgements


We would like to thank Towaki Takikawa, Joey Litalien, Kangxue Yin, Karsten Kreis, Charles Loop, Derek Nowrouzezahrai, Alec Jacobson, Morgan McGuire and Sanja Fidler for licensing the code of the paper Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D Surfaces and project page under the MIT License. This website is based on that page.

We also thank the Stanford Computer Graphics Laboratory for the Bunny, Dragon, Armadillo, Happy Buddha, and Lucy models, acquired through the Stanford 3D scan repository.