A Bag of Tricks for Efficient Implicit Neural Point Clouds
Vision, Modeling, and Visualization 2025

TU Braunschweig · FAU Erlangen-Nürnberg · University of New Mexico

TL;DR: Faster INPC training and rendering with less VRAM and better quality.

Learn more about the original INPC method here.

Abstract


Implicit Neural Point Cloud (INPC) is a recent hybrid representation that combines the expressiveness of neural fields with the efficiency of point-based rendering, achieving state-of-the-art image quality in novel view synthesis. However, as with other high-quality approaches that query neural networks during rendering, the practical usability of INPC is limited by comparatively slow rendering.
In this work, we present a collection of optimizations that significantly improve both the training and inference performance of INPC without sacrificing visual fidelity. The key modifications are an improved rasterizer implementation, more effective sampling techniques, and pre-training for the convolutional neural network used for hole-filling. Furthermore, we demonstrate that points can be modeled as small Gaussians during inference to further improve quality in extrapolated views, e.g., close-ups of the scene.
We design our implementations to be broadly applicable beyond INPC and systematically evaluate each modification in a series of experiments. Our optimized INPC pipeline achieves up to 25% faster training, 2× faster rendering, and 20% reduced VRAM usage paired with slight image quality improvements.

INPC Rendering with Bilinear Point Splatting and Small Gaussian Splats


Replacing the bilinear splatting used in INPC with a Gaussian splatting approach only slightly impacts quality metrics but makes a major difference when exploring a scene in a graphical user interface. This is especially noticeable in close-up views and is independent of the sampling approach used.
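
To make the idea concrete, below is a minimal Python sketch of how sampled points could be handed to a 3DGS-style rasterizer as small isotropic Gaussians. The function, the tensor layout, and the fixed world-space scale are illustrative assumptions, not the exact interface of our implementation.

import torch

def points_to_small_gaussians(positions, colors, opacities, scale=2e-3):
    """Sketch: wrap sampled INPC points as tiny isotropic Gaussians.

    Assumes a 3DGS-style rasterizer that consumes per-primitive means,
    colors, opacities, scales, and rotations. `scale` is a hypothetical
    uniform world-space radius; a reasonable value depends on scene
    units and sampling density.
    """
    n = positions.shape[0]
    scales = torch.full((n, 3), scale, device=positions.device)  # isotropic footprint
    rotations = torch.zeros((n, 4), device=positions.device)     # quaternion (w, x, y, z)
    rotations[:, 0] = 1.0                                        # identity rotation
    return positions, colors, opacities, scales, rotations

Since the Gaussians are isotropic, rotation is irrelevant and the conversion is trivially cheap; the benefit in close-up views plausibly comes from the Gaussian footprints covering the gaps that fixed-size bilinear splats leave when points are magnified.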

View-Specific Multisampling

[Comparison slider: Bilinear Splatting vs. Small Gaussians]

Our ring buffer-based approach for view-specific multisampling reuses points from previous frames to speed up rendering by more than 2×.
Values for INPC are copied from the original publication.
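
One way to structure such a buffer is sketched below; the class, its methods, and the dict-of-tensors point format are hypothetical, chosen only to illustrate the reuse scheme.

from collections import deque
import torch

class PointRingBuffer:
    """Sketch of frame-to-frame point reuse for view-specific sampling.

    Each frame contributes only 1/k of the full point budget; rendering
    uses the union of the last k frames' samples, so per-frame sampling
    cost drops while the effective point count stays the same.
    """

    def __init__(self, num_slots=4):
        self.slots = deque(maxlen=num_slots)  # oldest samples are evicted automatically

    def update(self, new_points):
        """Insert this frame's fresh samples (dict of per-point tensors)."""
        self.slots.append(new_points)

    def gather(self):
        """Concatenate all buffered samples for rasterization."""
        assert self.slots, "sample at least one frame before rendering"
        keys = self.slots[0].keys()
        return {k: torch.cat([s[k] for s in self.slots], dim=0) for k in keys}

Each frame would then sample only budget // num_slots fresh points, call update(), and rasterize the result of gather(); samples from strongly divergent past views could additionally be filtered by visibility, which this sketch omits.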

Global Pre-Extraction

[Comparison slider: Bilinear Splatting vs. Small Gaussians]

Our proposed algorithm for global pre-extraction leads to significantly improved quality while using only half as many points (33M vs. 67M).
Values for INPC are copied from the original publication.
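
A rough sketch of the idea, assuming INPC's octree-backed probability field: importance-sample a fixed point set once from the per-cell probabilities and reuse it for every view. All tensor names, shapes, and the uniform jitter below are assumptions for illustration.

import torch

def pre_extract_global_points(cell_probs, cell_corners, cell_sizes,
                              num_points=33_000_000):
    """Sketch: extract one global point set from a probability field.

    cell_probs:   (C,)   sampling probability per octree cell
    cell_corners: (C, 3) minimum corner of each cell
    cell_sizes:   (C, 1) edge length of each cell

    A real implementation would chunk the sampling for budgets this
    large; the extracted points can be shaded once and cached.
    """
    probs = cell_probs / cell_probs.sum()
    idx = torch.multinomial(probs, num_points, replacement=True)  # cells may repeat
    jitter = torch.rand(num_points, 3, device=cell_probs.device)  # uniform inside cell
    return cell_corners[idx] + jitter * cell_sizes[idx]

Because the point set is fixed after extraction, it can be rendered like a conventional point cloud without per-frame sampling.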

Citation


@inproceedings{hahlbohm2025inpcv2,
  title     = {A Bag of Tricks for Efficient Implicit Neural Point Clouds},
  author    = {Hahlbohm, Florian and Franke, Linus and Overkämping, Leon and Wespe, Paula and Castillo, Susana and Eisemann, Martin and Magnor, Marcus},
  booktitle = {Vision, Modeling, and Visualization},
  year      = {2025},
  doi       = {tbd}
}

Acknowledgements


We would like to thank Timon Scholz for his help with the evaluation, as well as Brent Zoomers and Moritz Kappel for their valuable suggestions and early discussions.
This work was partially funded by the DFG (“Real-Action VR”, ID 523421583) and the L3S Research Center, Hanover, Germany.

The bicycle scene shown above is from the Mip-NeRF360 dataset. The website template was adapted from Zip-NeRF, who borrowed from Michaël Gharbi and Ref-NeRF. For the comparison sliders we follow RadSplat and use img-comparison-slider.