2DGS batched rendering of normals seems to have a bug #528

Open

nlml opened this issue Jan 5, 2025 · 2 comments

nlml commented Jan 5, 2025

Hello,

I am currently running gsplat==1.4.0

I have noticed that when I render a batch of cameras with rasterization_2dgs and plot the resulting normals, the output appears to be corrupted. Here is the result of plotting a single Gaussian from two cameras in a batch (the left, red image is the Gaussian rendering; the right shows the normals):

[image: two-camera batch render: RGB (left), corrupted normals (right)]

Whereas, here is the result I get if I render only the first camera in the batch:
[image: single-camera render: RGB (left), correct normals (right)]

If the normals look good when you plot a single camera render of a scene or object, try plotting a batch of cameras instead; the issue should reproduce easily. You can also use the script below to reproduce it:

import torch
from gsplat import rasterization_2dgs
import matplotlib.pyplot as plt

# world-to-camera matrices for two views of the origin
cam0 = torch.tensor([
    [-1.,  0.,  0., -0.],
    [ 0.,  1.,  0., -0.],
    [ 0.,  0., -1.,  2.],
    [ 0.,  0.,  0.,  1.]
])
cam1 = torch.tensor([
    [-1.0000,  0.0000,  0.0000, -0.0000],
    [ 0.0000,  0.7071, -0.7071, -0.0000],
    [-0.0000, -0.7071, -0.7071,  2.0000],
    [ 0.0000,  0.0000,  0.0000,  1.0000]
])
w2cs = torch.stack([cam0, cam1])
# uncomment this next line to see usual single-camera behaviour
# w2cs = w2cs[:1]
num_cams = len(w2cs)
quats = torch.zeros(1, 4)
quats[:, 0] = 1  # identity rotation
width = height = 512
means = torch.zeros(1, 3)  # a single Gaussian at the origin
num_points = means.shape[0]
scales = torch.ones(num_points, 3) * 0.1
opacities = torch.ones(num_points)
colors = torch.tensor([1, 0, 0]).unsqueeze(0).expand(num_points, -1).float()  # red

# shared pinhole intrinsics
Ks = torch.tensor([
    [width * 3, 0, width / 2],
    [0, height * 3, height / 2],
    [0, 0, 1],
]).float()

with torch.no_grad():
    (
        renders_color,
        render_alphas,
        render_normals,
        render_normals_from_depth,
        render_distort,
        render_median,
        render_info,
    ) = rasterization_2dgs(
        means=means.cuda(),
        quats=quats.cuda(),
        scales=scales.cuda(),
        opacities=opacities.cuda(),
        colors=colors.cuda(),
        viewmats=w2cs.cuda(),
        Ks=Ks.cuda().unsqueeze(0).expand(num_cams, -1, -1),
        backgrounds=torch.zeros(num_cams, 3).cuda(),
        width=width,
        height=height,
        render_mode="RGB",
    )

for i in range(len(w2cs)):
    toshow = renders_color[i]
    a = render_alphas[i]
    # map normals from [-1, 1] to [0, 1] and mask by alpha for display
    b = (render_normals[i] / 2 + 0.5) * a
    toshow = torch.cat([toshow, b], 1).cpu()

    plt.figure(figsize=(10, 10))
    plt.imshow(toshow)
    plt.axis("off")
    plt.show()

Looking at the sorts of results I have gotten, my suspicion is that there is a bug in the ordering of axes assumed when the CUDA backend converts flattened data back into a tensor (or something along those lines). It could be something else, but that is what it looks like to me. I have not had time to investigate more deeply myself, so I am opening this issue in the hope that someone more versed in the backend code can fix it more easily!
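
To illustrate the kind of mistake I mean, here is a minimal, purely hypothetical sketch (the shapes and layout are invented for illustration, not taken from the gsplat source): if a buffer written out in (camera, row, column) order is reshaped as if the camera axis were innermost, the per-camera images come back with their pixels interleaved, much like in the screenshots above.

import torch

# hypothetical layout: 2 cameras, tiny 4x4 single-channel images
C, H, W = 2, 4, 4

# per-camera images written out in (camera, row, col) order, then flattened
imgs = torch.arange(C * H * W, dtype=torch.float32).reshape(C, H, W)
flat = imgs.reshape(-1)

# correct recovery of the per-camera images
ok = flat.reshape(C, H, W)

# hypothetical bug: reshaping as if the camera axis were innermost
bad = flat.reshape(H, W, C).permute(2, 0, 1)

print(torch.equal(ok, imgs))   # True
print(torch.equal(bad, imgs))  # False: each "camera" now mixes pixels from both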

Thanks a lot for this library by the way!
Liam

nlml commented Jan 5, 2025

PS: here are some more real-world illustrations of the bug

Normals from Gaussians fit to a human head, with a single camera in the rendering batch:
[image: head normals, single-camera batch: correct]

Versus rendering a 4-camera batch:
[image: head normals, 4-camera batch: interleaved/corrupted]

Note that the rendered RGB from the 4-camera batch looks fine; it is only the normals that appear to suffer from this. It looks to me like the normal renderings are still there, just with the pixels from the different batch elements interleaved somehow. A sketch of a possible workaround is below.
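
For anyone hitting this before a fix lands, here is a minimal workaround sketch, assuming (as the screenshots suggest) that the single-camera path is correct: render each camera separately and stack the normals. It reuses the variables from the reproduction script above and is untested beyond gsplat==1.4.0.

# workaround sketch: one camera per call, then stack the per-camera normals
per_cam_normals = []
with torch.no_grad():
    for i in range(num_cams):
        outputs = rasterization_2dgs(
            means=means.cuda(),
            quats=quats.cuda(),
            scales=scales.cuda(),
            opacities=opacities.cuda(),
            colors=colors.cuda(),
            viewmats=w2cs[i : i + 1].cuda(),
            Ks=Ks.cuda().unsqueeze(0),
            backgrounds=torch.zeros(1, 3).cuda(),
            width=width,
            height=height,
            render_mode="RGB",
        )
        per_cam_normals.append(outputs[2][0])  # render_normals for camera i
render_normals_fixed = torch.stack(per_cam_normals)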

@saikrn112

I haven't looked at this issue, but it is possibly related to #425.
