I would like to write a customized BSDF using a PyTorch model, such as employing an MLP to map incoming and outgoing directions to BRDF values. Furthermore, I want to use the inverse rendering pipeline to optimize the PyTorch model.
Following the "Inverse Rendering Tutorial," I implemented a customized BSDF class as shown below:
Code
Customized BSDF Class
```python
class CookTorranceBRDF(mi.BSDF):
    def __init__(self, props):
        mi.BSDF.__init__(self, props)
        self.roughness = mi.Float(props.get('roughness', 0.5))
        # Fresnel IOR (eta), SH coefficients, and tint from the original code
        self.eta = mi.Float(props.get('eta', 1.33))
        self.m_flags = (mi.BSDFFlags.GlossyReflection
                        | mi.BSDFFlags.FrontSide
                        | mi.BSDFFlags.BackSide)

    # Other methods: sample, eval, cook_torrance, etc.
```
Optimization Code for Mitsuba Framework
```python
def inverse_optimization(scene, params, param_ref, keys, args, ref_image, output_path):
    """Perform inverse optimization on multiple parameters and compute PSNR along with MSE."""
    # Initialize optimizer and losses
    opt = mi.ad.Adam(lr=args.optimizer_lr)
    losses = []
    psnrs = []
    for key in keys:
        opt[key] = params[key]
    params.update(opt)

    # Optimization loop
    for it in range(args.iteration_count):
        image = mi.render(scene, params, spp=args.train_spp)
        loss = mse(image, ref_image)
        dr.backward(loss)
        opt.step()
        params.update(opt)
        # Compute errors and metrics
        # ...

    # Save and plot results
    # ...
    return losses, final_psnr
```
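The `mse` helper called in the loop above isn't defined in the snippet; in Dr.Jit it could be as simple as `dr.mean(dr.square(image - ref_image))` (an assumption on my part). For reference, a NumPy equivalent together with the PSNR metric mentioned in the docstring might look like:

```python
import numpy as np

def mse(image, ref_image):
    """Mean squared error between two equal-shaped array-likes."""
    a = np.asarray(image, dtype=np.float64)
    b = np.asarray(ref_image, dtype=np.float64)
    return float(np.mean((a - b) ** 2))

def psnr(image, ref_image, peak=1.0):
    """Peak signal-to-noise ratio in dB for images with values in [0, peak]."""
    return 10.0 * np.log10(peak ** 2 / mse(image, ref_image))
```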
After successfully optimizing the `roughness` and `eta` parameters, I followed the guide on "Mitsuba and PyTorch Compatibility" to implement a PyTorch-based model within the customized BSDF class.
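My integration snippet didn't survive the copy above, but to give an idea of the kind of model I mean, here is a minimal torch-only sketch (all names hypothetical; the BSDF-side bridging via `dr.wrap` is omitted):

```python
import torch
import torch.nn as nn

class BRDFMLP(nn.Module):
    """Hypothetical MLP mapping concatenated incoming/outgoing directions
    (6 inputs) to RGB BRDF values (3 outputs)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Softplus(),  # keep BRDF values non-negative
        )

    def forward(self, wi, wo):
        # wi, wo: (..., 3) direction tensors
        return self.net(torch.cat([wi, wo], dim=-1))

model = BRDFMLP()
vals = model(torch.randn(128, 3), torch.randn(128, 3))
```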
The forward pass works as expected, but I am unsure how to optimize the PyTorch model parameters (e.g., MLP weights). Specifically, I don't know:
1. How to register the learnable parameters (e.g., `mapped_eta`) for optimization, either within the Mitsuba optimization framework or by using a decorator for the PyTorch framework.
2. How to combine Mitsuba's `traverse` method for registering parameters with PyTorch's optimization pipeline.
There's a community-created tutorial on implementing neural representations of spatially varying BRDF parameters in Mitsuba 3, which I think aligns with what you're after. Just be aware that it was created before the Mitsuba 3.6 release, so parts may be out of date and some of the code may have to be ported (e.g., `dr.wrap_ad` to `dr.wrap`), but it should nonetheless be a useful starting point.
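To make the porting note concrete: on the PyTorch side, the MLP weights are registered simply by building a `torch.optim` optimizer over `model.parameters()`; the Mitsuba side then only has to supply a differentiable loss, e.g. via a render function decorated with `@dr.wrap(source='torch', target='drjit')` (spelled `dr.wrap_ad` before the 3.6 release). Below is a torch-only sketch of that training loop, with the wrapped render-and-compare step replaced by a hypothetical stand-in so it runs without Mitsuba:

```python
import torch

torch.manual_seed(0)

# Hypothetical MLP whose weights we want to optimize through rendering.
model = torch.nn.Sequential(
    torch.nn.Linear(6, 32), torch.nn.ReLU(),
    torch.nn.Linear(32, 3), torch.nn.Softplus(),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

def render_loss(brdf_vals):
    # Stand-in for the real pipeline: there, a function decorated with
    # @dr.wrap(source='torch', target='drjit') would call mi.render using
    # these values and compare the resulting image against a reference.
    target = torch.full_like(brdf_vals, 0.5)
    return torch.nn.functional.mse_loss(brdf_vals, target)

dirs = torch.randn(256, 6)
losses = []
for it in range(200):
    opt.zero_grad()
    loss = render_loss(model(dirs))
    loss.backward()   # gradients flow back into the MLP weights
    opt.step()
    losses.append(loss.item())
```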
Any guidance on properly integrating and optimizing the PyTorch model parameters within this framework would be greatly appreciated. Thank you!