
Sharded multiresolution meshes index file size error #291

Closed
SridharJagannathan opened this issue Mar 18, 2021 · 4 comments
@SridharJagannathan (Contributor)

Hi @jbms,
I'm currently trying to implement sharded multi-level meshes.
As per the documentation, I write the fragment data first, append the manifest data after it, and then compute the shards from there. However, when I view them in Neuroglancer I get an error in the developer tools like:

Error retrieving chunk 200: Error: Invalid index file size for 1842801453 lods: 35636

I think I'm making a simple mistake with the offsets here; could you please help out?
I have attached the sharded multi-level mesh here.
The unsharded multi-level implementation that works fine is attached here.
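For reference, the size Neuroglancer expects for a multiresolution mesh manifest (the "index") can be computed from the header fields, and a mismatch between this and the byte range actually served produces this kind of error. A minimal sketch of that computation, assuming the precomputed multiresolution mesh manifest layout (`expected_manifest_size` is my own name, not a library function):

```python
def expected_manifest_size(num_lods, num_fragments_per_lod):
    """Expected byte size of a multiresolution mesh manifest:
    fixed header + per-LOD arrays + 16 bytes per fragment
    (3 x uint32 position + 1 x uint32 size)."""
    size = 12                 # chunk_shape: 3 x float32
    size += 12                # grid_origin: 3 x float32
    size += 4                 # num_lods: uint32
    size += 4 * num_lods      # lod_scales: one float32 per LOD
    size += 12 * num_lods     # vertex_offsets: 3 x float32 per LOD
    size += 4 * num_lods      # num_fragments_per_lod: uint32 per LOD
    size += 16 * sum(num_fragments_per_lod)  # positions + sizes
    return size

# e.g. 3 LODs with 64, 8 and 1 fragments respectively
print(expected_manifest_size(3, [64, 8, 1]))
```

Comparing this number against the size of the byte range the viewer receives is a quick sanity check before digging into the shard layout itself.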

@SridharJagannathan (Contributor, Author)

Some additional details:
The sample code I use was adapted from #272.

```python
import struct
from io import BytesIO

import numpy as np
import trimesh

# start with some settings..
lods = np.array([0, 1, 2])  # number of levels of detail, keep 3 for now..
# set the chunk shape and grid origin according to the vertex data,
# so there is no need for offsets later on..
chunk_shape = (vertices.max(axis=0) - vertices.min(axis=0)) / 2**lods.max()
grid_origin = vertices.min(axis=0)
lod_scales = np.array([2**lod for lod in lods])
num_lods = len(lod_scales)
vertex_offsets = np.zeros((num_lods, 3))

fragment_offsets = []
fragment_positions = []

fragmentdata = BytesIO()

# write the mesh fragment data first..
# for each level, decompose the mesh into submeshes of lower resolution..
for scale in lod_scales[::-1]:
    # start with lod 0, which is the highest resolution possible..
    lod_offsets = []
    nodes, submeshes = decompose_meshes(vertices.copy(), faces.copy(),
                                        scale, quantization_bits)
    # encode each submesh in Google Draco format and append it..
    for mesh in submeshes:
        draco = trimesh.exchange.ply.export_draco(mesh, bits=quantization_bits)
        fragmentdata.write(draco)
        lod_offsets.append(len(draco))

    fragment_positions.append(np.array(nodes))
    fragment_offsets.append(np.array(lod_offsets))

num_fragments_per_lod = np.array([len(nodes) for nodes in fragment_positions])

offsetvalue = len(fragmentdata.getvalue())  # byte offset where the manifest begins

# write the mesh manifest (index) data now..
fragmentdata.write(chunk_shape.astype('<f').tobytes())
fragmentdata.write(grid_origin.astype('<f').tobytes())
fragmentdata.write(struct.pack('<I', num_lods))
fragmentdata.write(lod_scales.astype('<f').tobytes())
fragmentdata.write(vertex_offsets.astype('<f').tobytes(order='C'))
fragmentdata.write(num_fragments_per_lod.astype('<I').tobytes())
for frag_pos, frag_offset in zip(fragment_positions, fragment_offsets):
    fragmentdata.write(frag_pos.T.astype('<I').tobytes(order='C'))
    fragmentdata.write(frag_offset.astype('<I').tobytes(order='C'))
```

After this, I build the shards and minishards from the data in fragmentdata.
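For context on where my offsets could be wrong: as I understand the sharded format, each mesh's fragment data is packed back-to-back immediately before its manifest in the shard data file, so fragment byte ranges are computed backwards from the offset where the manifest starts. A hypothetical sketch of that lookup (names and structure are mine, not from any library):

```python
def fragment_byte_range(manifest_start, fragment_sizes, lod, index):
    """Absolute [start, end) byte range of one fragment, assuming all
    fragments are packed contiguously and end exactly at manifest_start.
    fragment_sizes: per-LOD lists of fragment sizes, in write order."""
    total = sum(sum(sizes) for sizes in fragment_sizes)
    data_start = manifest_start - total
    # bytes occupied by fragments written before the requested one
    prior = sum(sum(sizes) for sizes in fragment_sizes[:lod])
    prior += sum(fragment_sizes[lod][:index])
    size = fragment_sizes[lod][index]
    return data_start + prior, data_start + prior + size

# two LODs with fragment sizes [10, 20] and [30], manifest at byte 100
print(fragment_byte_range(100, [[10, 20], [30]], 0, 1))
```

If the minishard index entries don't point at `manifest_start` consistently with this packing, the viewer's size check fails in exactly the way shown above.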

@davidackerman (Contributor)

If it helps, here was my attempt: https://github.com/davidackerman/multiresolutionMeshes.

@jbms (Collaborator) commented Mar 19, 2021

I haven't had a chance to debug this in detail, but since you say the unsharded multi-level meshes work, the problem is presumably in the conversion from the non-sharded format to the sharded format, for which you haven't included your code. I'd recommend modifying your generation code to output the relevant offsets, then checking the Range headers in the Chrome dev tools Network tab to see which byte ranges Neuroglancer requests, and comparing them to the expected byte ranges so you can see where it goes wrong.
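To make that comparison easier, a tiny helper for printing the Range header one would expect for a given byte span, to match against what the Network tab shows (`expected_range_header` is a hypothetical name, not part of Neuroglancer):

```python
def expected_range_header(start, end):
    # HTTP Range headers are inclusive at both ends, so a half-open
    # [start, end) byte span becomes "bytes=<start>-<end - 1>".
    return f"bytes={start}-{end - 1}"

# e.g. a 1168-byte manifest starting at byte 35636
print(expected_range_header(35636, 35636 + 1168))
```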

@SridharJagannathan (Contributor, Author)

@davidackerman: thanks for your example, it is helpful.
@jbms: you were right that it was an offset problem (an incorrect index for the minishards). I implemented a solution in cloud-volume which seems to work:
see here:
seung-lab/cloud-volume#477
also here:
seung-lab/cloud-volume#475
