Chenliang Zhou†, Zheyuan Hu†, Cengiz Öztireli.
Department of Computer Science and Technology,
University of Cambridge.
† denotes equal contribution.
[Project page] [Paper] [MERL dataset]
Overview of our FreNBRDF architecture.
Accurate material modeling is crucial for achieving photorealistic rendering, bridging the gap between computer-generated imagery and real-world photographs. While traditional approaches rely on tabulated BRDF data, recent work has shifted towards implicit neural representations, which offer compact and flexible frameworks for a range of tasks. However, their behavior in the frequency domain remains poorly understood.
To address this, we introduce FreNBRDF, a frequency-rectified neural material representation. By leveraging spherical harmonics, we integrate frequency-domain considerations into neural BRDF modeling. We propose a novel frequency-rectified loss, derived from a frequency analysis of neural materials, and incorporate it into a generalizable and adaptive reconstruction and editing pipeline. This framework enhances fidelity, adaptability, and efficiency.
Extensive experiments demonstrate that FreNBRDF improves the accuracy and robustness of material appearance reconstruction and editing compared to state-of-the-art baselines, enabling more structured and interpretable downstream tasks and applications.
For the network, we adopt a set encoder [21], which is permutation-invariant and flexible with respect to input size. It takes as input an arbitrary set of samples, each formed by concatenating a BRDF value with its coordinates, and consists of four fully connected layers, with two hidden layers of 128 neurons and ReLU activations.
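The idea can be sketched in a few lines of NumPy. This is a minimal illustration, not the released implementation: the latent size (40), the mean-pooling operator, and the reading of "four layers" as input → 128 → 128 → latent are all assumptions.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

class SetEncoder:
    """Permutation-invariant set encoder (illustrative sketch).

    Each input sample concatenates a 3-D Rusinkiewicz coordinate with a
    3-D BRDF value (6 features). A per-sample MLP is followed by mean
    pooling over the set dimension, which makes the output independent
    of both the ordering and the number of input samples.
    """

    def __init__(self, in_dim=6, hidden=128, latent=40, seed=0):
        rng = np.random.default_rng(seed)
        sizes = [in_dim, hidden, hidden, latent]  # two hidden layers of 128
        self.weights = [rng.standard_normal((a, b)) * np.sqrt(2.0 / a)
                        for a, b in zip(sizes[:-1], sizes[1:])]
        self.biases = [np.zeros(b) for b in sizes[1:]]

    def __call__(self, samples):
        h = samples                                   # (n_samples, 6)
        for w, b in zip(self.weights[:-1], self.biases[:-1]):
            h = relu(h @ w + b)                       # per-sample MLP
        h = h @ self.weights[-1] + self.biases[-1]
        return h.mean(axis=0)                         # pool over the set
```

Because pooling happens only at the end, shuffling the input set or changing its size leaves the architecture valid, which is what lets the encoder consume an arbitrary set of BRDF samples.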
The reconstruction loss between two NBRDFs is defined as the L1 loss between samples of the two underlying BRDFs, plus two regularization terms on the NBRDF weights w and the latent embeddings z.
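As a concrete sketch, the loss can be written as below; the regularization weights `lambda_w` and `lambda_z` are hypothetical placeholders, not the values used in the paper.

```python
import numpy as np

def reconstruction_loss(brdf_pred, brdf_true, w, z,
                        lambda_w=1e-2, lambda_z=1e-3):
    """Sketch of the NBRDF reconstruction loss.

    L1 distance between sampled BRDF values, plus L2 regularizers on
    the NBRDF weights w (a list of arrays) and the latent embedding z.
    The lambda_* weights are illustrative assumptions.
    """
    l1 = np.abs(brdf_pred - brdf_true).mean()
    reg_w = lambda_w * sum((wi ** 2).sum() for wi in w)
    reg_z = lambda_z * (z ** 2).sum()
    return l1 + reg_w + reg_z
```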
A summary of the main methodology: how Frequency-Rectified Neural BRDFs are constructed and optimized, building on prior work.
Frequency Rectification The key insight is that these frequency coefficients contain the extracted frequency information at each degree l and order m. We can therefore define a frequency-rectified loss on BRDFs as the mean squared error between frequency coefficients, and incorporate this loss into the reconstruction loss Eq. (1).
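The following sketch illustrates the idea: project a function sampled on the sphere onto spherical harmonics by numerical quadrature, then take the MSE of the coefficients. This is a simplified illustration of the principle, not the paper's exact extraction procedure; the grid resolution and maximum degree `l_max` are arbitrary choices here.

```python
import numpy as np
from math import factorial
from scipy.special import lpmv  # associated Legendre, Condon-Shortley phase

def sph_harm_lm(m, l, phi, theta):
    """Complex spherical harmonic Y_l^m(theta, phi).

    theta: polar angle in [0, pi], phi: azimuth in [0, 2*pi).
    Negative orders via Y_{l,-m} = (-1)^m conj(Y_{l,m}).
    """
    m_abs = abs(m)
    norm = np.sqrt((2 * l + 1) / (4 * np.pi)
                   * factorial(l - m_abs) / factorial(l + m_abs))
    y = norm * lpmv(m_abs, l, np.cos(theta)) * np.exp(1j * m_abs * phi)
    return (-1) ** m_abs * np.conj(y) if m < 0 else y

def sh_coefficients(f_grid, theta, phi, l_max=4):
    """Project a (theta, phi)-sampled function onto SH up to l_max
    by midpoint quadrature over the sphere."""
    dtheta, dphi = theta[1] - theta[0], phi[1] - phi[0]
    T, P = np.meshgrid(theta, phi, indexing="ij")
    area = np.sin(T) * dtheta * dphi            # surface measure
    return np.array([np.sum(f_grid * np.conj(sph_harm_lm(m, l, P, T)) * area)
                     for l in range(l_max + 1)
                     for m in range(-l, l + 1)])

def frequency_rectified_loss(f1, f2, theta, phi, l_max=4):
    """MSE between the SH coefficients of two spherical functions."""
    c1 = sh_coefficients(f1, theta, phi, l_max)
    c2 = sh_coefficients(f2, theta, phi, l_max)
    return np.mean(np.abs(c1 - c2) ** 2)
```

A constant function concentrates all its energy in the l = 0 coefficient, while sharper angular detail spreads into higher degrees, which is exactly the per-degree information the loss compares.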
Inspired by prior work, FrePolad (ECCV'24), HyperBRDF (ECCV'24), and NeuMaDiff, we adopt the MERL dataset (2003), available here, as our main dataset. It contains measured reflectance functions of 100 real-world materials; this diversity and data-driven nature make it suitable for both statistical and neural-network-based methods.
Each BRDF is represented as a 90 × 90 × 180 × 3 floating-point array, mapping sampled input angles (θ_H, θ_D, ϕ_D) under the Rusinkiewicz reparametrization to reflectance values in R^3.
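To make the tabulation concrete, here is a sketch of the angle-to-index mapping used by the standard MERL lookup code. This is a reimplementation for illustration only; treat the official MERL C code as the reference.

```python
import numpy as np

# Resolution of the MERL tabulation along (theta_H, theta_D, phi_D).
N_TH, N_TD, N_PD = 90, 90, 180

def merl_indices(theta_h, theta_d, phi_d):
    """Map Rusinkiewicz angles (radians) to MERL table indices.

    Following the standard MERL lookup code: theta_H is warped by a
    square root so that small half-angles (sharp highlights) get finer
    resolution, while theta_D and phi_D are indexed linearly. phi_D is
    folded into [0, pi) using BRDF reciprocity.
    """
    if phi_d < 0.0:
        phi_d += np.pi                          # reciprocity fold
    i_th = int(np.sqrt(theta_h / (np.pi / 2)) * N_TH)
    i_td = int(theta_d / (np.pi / 2) * N_TD)
    i_pd = int(phi_d / np.pi * N_PD)
    clip = lambda i, n: min(max(i, 0), n - 1)
    return clip(i_th, N_TH), clip(i_td, N_TD), clip(i_pd, N_PD)
```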
For demo code, please refer to src/fre_loss.py; to enable frequency-aware material reconstruction, add its main components to HyperBRDF's fre_hypernetwork_loss().
Please be aware that the code attached is just one viable method for extracting frequency information. There are other approaches that have been explored, as detailed in the paper. Meanwhile, we welcome any innovative ideas you may have.
For more details, please refer to our paper or project page. Thanks~
Please feel free to contact us if you have any questions or suggestions.
If you found the paper or code useful, please consider citing,
@misc{zhou2025FreNBRDF,
title={FreNBRDF: A Frequency-Rectified Neural Material Representation},
author={Chenliang Zhou and Zheyuan Hu and Cengiz Oztireli},
year={2025},
eprint={2507.00476},
archivePrefix={arXiv},
primaryClass={cs.GR},
url={https://arxiv.org/abs/2507.00476},
}

