Optimizing a PyTorch Model Within a Customized BSDF Class #1458

Closed · colinzhenli opened this issue Jan 13, 2025 · 1 comment

@colinzhenli

Description

I would like to write a customized BSDF using a PyTorch model, such as employing an MLP to map incoming and outgoing directions to BRDF values. Furthermore, I want to use the inverse rendering pipeline to optimize the PyTorch model.

Following the "Inverse Rendering Tutorial," I implemented a customized BSDF class as shown below:


Customized BSDF Class

import drjit as dr
import mitsuba as mi

mi.set_variant('cuda_ad_rgb')  # an AD-enabled variant is required for differentiation

class CookTorranceBRDF(mi.BSDF):
    def __init__(self, props):
        mi.BSDF.__init__(self, props)

        # Microfacet roughness
        self.roughness = mi.Float(props.get('roughness', 0.5))

        # Fresnel IOR (eta)
        self.eta = mi.Float(props.get('eta', 1.33))

        self.m_flags = mi.BSDFFlags.GlossyReflection | mi.BSDFFlags.FrontSide | mi.BSDFFlags.BackSide

    # Other methods: sample, eval, cook_torrance, etc.
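Not shown in the post, but for context: a Python BSDF like this is typically registered under a plugin name so a scene can reference it (the name below is illustrative):

# Register the custom BSDF under an illustrative plugin name
mi.register_bsdf('cook_torrance_brdf', lambda props: CookTorranceBRDF(props))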

Optimization Code for Mitsuba Framework

def inverse_optimization(scene, params, param_ref, keys, args, ref_image, output_path):
    """
    Perform inverse optimization on multiple parameters and compute PSNR along with MSE.
    """
    # Initialize optimizer and losses
    opt = mi.ad.Adam(lr=args.optimizer_lr)
    losses = []
    psnrs = []

    for key in keys:
        opt[key] = params[key]
    params.update(opt)

    # Optimization loop
    for it in range(args.iteration_count):
        image = mi.render(scene, params, spp=args.train_spp)
        loss = mse(image, ref_image)
        dr.backward(loss)
        opt.step()
        params.update(opt)

        # Compute errors and metrics
        # ...
        
    # Save and plot results
    # ...
    return losses, final_psnr
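The mse helper above is not shown in the post; a minimal version in the style of the Mitsuba tutorials (a sketch, not the author's exact code) would be:

def mse(image, ref_image):
    # Mean squared error between the rendered and reference images
    return dr.mean((image - ref_image) ** 2)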

After successfully optimizing the roughness and eta parameters, I followed the guide on "Mitsuba and PyTorch Compatibility" to implement a PyTorch-based model within the customized BSDF class:

PyTorch Model Integration

import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.eta = nn.Parameter(torch.randn(1))
        self.mlp = nn.Linear(1, 1)

    def forward(self, x):
        return self.mlp(x.unsqueeze(0))

@dr.wrap_ad(source='drjit', target='torch')
def pass_mlp(eta):
    # Note: a fresh Model (with new random weights) is constructed on
    # every call, so its parameters do not persist between evaluations.
    model = Model().cuda()
    return model(eta)
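A quick, hypothetical check (not part of the original post) that the bridge evaluates and propagates gradients back to the Dr.Jit side might look like:

eta = dr.cuda.ad.TensorXf([1.33])
dr.enable_grad(eta)
out = pass_mlp(eta)      # Dr.Jit -> PyTorch -> Dr.Jit round trip
dr.backward(out)
print(dr.grad(eta))      # gradient of the MLP output w.r.t. eta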

Updated BSDF with PyTorch Model

class CookTorranceBRDF(mi.BSDF):
    def __init__(self, props):
        mi.BSDF.__init__(self, props)

        self.roughness = mi.Float(props.get('roughness', 0.5))
        self.eta = mi.Float(props.get('eta', 1.33))

        self.m_flags = mi.BSDFFlags.GlossyReflection | mi.BSDFFlags.FrontSide | mi.BSDFFlags.BackSide

    def sample(self, ctx, si, sample1, sample2, active):
        # Map eta through the PyTorch MLP via the Dr.Jit <-> PyTorch bridge
        mapped_eta = pass_mlp(dr.cuda.ad.TensorXf(self.eta))
        # Sampling logic with mapped_eta
        # ...

Issue

The forward pass works as expected, but I am unsure how to optimize the PyTorch model parameters (e.g., MLP weights). Specifically, I don't know:

  1. How to register the learnable parameters (e.g., the MLP weights behind mapped_eta) for optimization, either within the Mitsuba optimization framework or via a decorator for the PyTorch framework.
  2. How to combine Mitsuba's traverse method for registering parameters with PyTorch's optimization pipeline. My current traverse method only exposes the two scalar parameters:

def traverse(self, callback):
    callback.put_parameter('roughness', self.roughness, mi.ParamFlags.Differentiable)
    callback.put_parameter('eta', self.eta, mi.ParamFlags.Differentiable)

System Information

  • OS: Ubuntu 22.04
  • GPU: Nvidia RTX 3090
  • Python version: 3.8.20
  • CUDA version: ...
  • Dr.Jit version: ...
  • Mitsuba version: ...
  • Compiled with: ...
  • Variants compiled: ...

Any guidance on properly integrating and optimizing the PyTorch model parameters within this framework would be greatly appreciated. Thank you!

@rtabbara (Contributor)

Hi @colinzhenli,

There's a community-created tutorial on implementing neural representations of spatially-varying BRDF parameters in Mitsuba 3, which I think aligns with what you're after. Just be aware that it was created before the Mitsuba 3.6 release, so parts may be out of date and some of the code may have to be ported (e.g. dr.wrap_ad to dr.wrap, as sketched below), but it should nonetheless be a useful starting point.
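For illustration, the porting mentioned above is largely a rename of the decorator (dr.wrap is the Dr.Jit 1.x name for the same bridge):

@dr.wrap_ad(source='drjit', target='torch')   # Dr.Jit 0.x (pre-Mitsuba 3.6)
def pass_mlp(eta):
    ...

@dr.wrap(source='drjit', target='torch')      # Dr.Jit 1.x (Mitsuba 3.6+)
def pass_mlp(eta):
    ...

And a rough sketch of the overall pattern the tutorial builds on (assumptions: the model is created once so its weights persist, a torch optimizer drives the MLP weights, and names like iteration_count / train_spp mirror the args in the original post):

model = Model().cuda()                        # created once; weights persist
torch_opt = torch.optim.Adam(model.parameters(), lr=1e-3)

@dr.wrap(source='drjit', target='torch')
def pass_mlp(eta):
    return model(eta)                         # closure over the persistent model

for it in range(iteration_count):
    torch_opt.zero_grad()
    image = mi.render(scene, params, spp=train_spp)
    loss = mse(image, ref_image)
    dr.backward(loss)   # gradients flow through dr.wrap into model.parameters()
    torch_opt.step()    # update the MLP weights on the PyTorch side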

@mitsuba-renderer mitsuba-renderer locked and limited conversation to collaborators Jan 23, 2025
@rtabbara rtabbara converted this issue into discussion #1466 Jan 23, 2025

This issue was moved to a discussion.

You can continue the conversation there. Go to discussion →
