
Encode Morph Targets with Sparse Accessors #1346

Closed · Christopher-Hayes opened this issue Mar 7, 2021 · 12 comments · Fixed by #1980

Comments

Christopher-Hayes commented Mar 7, 2021

Sparse Accessors and Blender
Currently, gltf-pipeline and fbx2gltf do not support sparse accessors at all (links are to the feature requests). It does look like glTF-Blender-IO can import models with sparse accessors. However, unless I'm mistaken, Blender is not capable of encoding sparse accessors into a model that did not previously have them.

No Available Tools can Encode Sparse Accessors
I receive my models as FBXs and convert them to glTF. Morph targets double the size of the model. The morph targets in my models simply interpolate scaling/translations at fixed intervals along a linear curve. Sparse accessors would seem hugely advantageous here, but I have yet to find a glTF exporter/optimizer capable of producing them.

Feature Request
Option to encode morph targets using sparse accessors to reduce model size.

donmccurdy (Contributor) commented:
Are you able to share an example (glTF or .blend) of a model you are hoping this feature would optimize?

scurest (Contributor) commented Mar 7, 2021

Christopher-Hayes (Author) commented:
@donmccurdy I could email it to you if you want. But since originally filing this ticket, I'm realizing I might've misunderstood how sparse accessors work. For my use case, I don't have any animations at all; it's 100% morph targets. So there wouldn't be any room for optimization between animation keyframes.

I also did a breakdown of how the morph targets impact the file size, and I'm now realizing that a 2x increase in size for a model that heavily uses blend shapes actually isn't that unexpected. Below is a randomly selected mesh from the model; this mesh morphs along the X and Z axes:

Position: 1158 × 12 B (Vec3) = 14 KB
Normal: 1158 × 12 B (Vec3) = 14 KB
Texcoord_0: 1158 × 8 B (Vec2) = 9 KB
Indices: 2964 × 2 B (Scalar) = 6 KB
Morph target (base positions): 1158 × 12 B (Vec3) = 14 KB
Morph target (positions morphed along the X axis): 1158 × 12 B (Vec3) = 14 KB
Morph target (positions morphed along the Z axis): 1158 × 12 B (Vec3) = 14 KB

Total mesh size: ~85 KB

The meshes vary, but in general the above shows that morph targets accounted for 42 KB of the ~85 KB (roughly half) on this particular mesh, in line with the ~2x size increase.
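
A quick sanity check of the arithmetic above (a minimal sketch; the vertex and index counts are the ones listed in the breakdown):

```python
# Recompute the per-accessor sizes listed above.
vertex_count = 1158
index_count = 2964

sizes = {
    "POSITION": vertex_count * 12,           # Vec3 float32
    "NORMAL": vertex_count * 12,             # Vec3 float32
    "TEXCOORD_0": vertex_count * 8,          # Vec2 float32
    "indices": index_count * 2,              # uint16 scalars
    "morph targets": 3 * vertex_count * 12,  # three Vec3 float32 targets
}

total = sum(sizes.values())
print(f"total: {total / 1000:.1f} KB")                  # ~84.7 KB
print(f"morphs: {sizes['morph targets'] / total:.0%}")  # ~49%
```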

@scurest Thank you for your work on this feature, I did try your branch and I didn't notice any differences in the file size. I searched in the GLB for the "sparse" keyword to see if sparse accessors were used anywhere, but didn't find any. The exported model doesn't have any animations, so I'm now realizing sparse accessors are probably more for animations and not so much morph targets by themselves.

scurest (Contributor) commented Mar 7, 2021

In order to use a sparse accessor, most of the accessor items must be zero. For morph positions, that means most of the vertices in the shapekey need to be unmoved from their Basis position.
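
For reference, a hand-written sketch (not from any model in this thread) of how such an accessor is laid out in glTF JSON, written here as a Python dict: only the moved vertices are stored, as index/value pairs over an implicitly all-zero base.

```python
# Sketch of a sparse glTF accessor for a morph target (hypothetical values).
# With no top-level "bufferView", the base array is implicitly all zeros;
# "sparse" then overrides just the listed indices.
sparse_accessor = {
    "componentType": 5126,          # FLOAT
    "type": "VEC3",
    "count": 1158,                  # logical length: one delta per vertex
    "sparse": {
        "count": 2,                 # only 2 of 1158 vertices are displaced
        "indices": {
            "bufferView": 5,        # e.g. uint16 indices [7, 42]
            "componentType": 5123,  # UNSIGNED_SHORT
        },
        "values": {
            "bufferView": 6,        # two Vec3 float32 deltas, one per index
        },
    },
}
```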

donmccurdy (Contributor) commented Mar 7, 2021

At one point I had implemented sparse accessors for shape key animation samplers in THREE.GLTFExporter (for https://threejs.org/). With the models I had available to test, it did not seem to make much of a difference for various reasons, so I didn't end up merging that change.

As @scurest mentions, a model where only a few vertices move during morphing would get some benefit from sparse accessors in the morph vertex data. If that's not the case in your model, you could also try https://github.com/zeux/meshoptimizer. It has two relevant features (meshopt compression and accessor quantization), either of which could probably reduce the size of your morph targets further. Such optimizations require the application loading the model to support the glTF EXT_meshopt_compression and KHR_mesh_quantization extensions, respectively.
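
A typical invocation, for reference (a sketch; gltfpack quantizes by default, and -cc requests its higher meshopt compression mode):

gltfpack -i model.glb -o model.packed.glb -cc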

fire commented Jul 19, 2021

What is the status of this for Blender 3.0?

Is https://github.com/scurest/glTF-Blender-IO/tree/sparse-morph ready to merge?

julienduroure (Collaborator) commented:
Hello,
No plans for this right now.
(And it seems that @scurest deleted that branch.)

I will try to focus on bugs; any feature that can be done by external tools will come later.

donmccurdy (Contributor) commented:
I'm tracking addition of this feature for glTF-Transform in donmccurdy/glTF-Transform#351. There is another Blender addon that runs glTF-Transform post-processing optimizations on Blender glTF exports without additional manual work, so once the feature lands there it may be a good solution.

I should also note that morph targets and Draco compression are not really compatible. If you're using Draco compression to optimize other parts of your asset, this fix unfortunately will probably not help you. Meshopt compression might be a better option for such models; you can already test that by running gltfpack or gltf-transform on exported GLB models. You'll typically want to combine meshopt with gzip for best results.
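
For example (a sketch, assuming a recent @gltf-transform/cli; the meshopt subcommand applies EXT_meshopt_compression, and gzip is layered on top for transport):

gltf-transform meshopt model.glb model.meshopt.glb
gzip -9 -k model.meshopt.glb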

Christopher-Hayes (Author) commented:
Thanks for the update @donmccurdy! I'll leave this issue open in case it helps others, since you're planning to implement the feature.

On my side, I actually ended up doing exactly as you recommend. Meshopt worked perfectly for a morphing object that needed Draco-like compression. As part of a PlayCanvas WebGL project, almost all assets were gzipped, so that solution worked out perfectly for me. I saw a 2.5x decrease in 3D asset sizes (before PlayCanvas even supported KHR_mesh_quantization), and I quickly realized I might be too focused on optimizing the model when the real bottleneck is the textures.

I put more info on this process in a PlayCanvas post for anyone curious: https://forum.playcanvas.com/t/tricks-to-decrease-morph-target-sizes/18628/9?u=chris
And created a sample PlayCanvas project as well using MeshOpt: https://playcanvas.com/project/779762/overview/load-glb-model-with-meshopt

lyuma commented May 10, 2023

I ran into this issue today. glTF files exported from Blender are extremely large when they contain blend shapes.

Blender glTF export: 16.630 MB bin + 155 KB gltf

This model is very low poly and has no reason to be large other than its blend shapes: 8595 tris and one material, plus a few primitive debug meshes.

By contrast, FBX and FBX2glTF conversions are able to sparsely encode blend shapes:
FBX: 656 KB fbx
FBX2glTF conversion (sparse accessors): 855 KB bin + 178 KB gltf

blendshape_size_issue.zip

Here is the blend file:
mesh_parent_test_2a7_rotated.blend.zip

(Model is permissively licensed and available from https://booth.pm/ja/items/2019040 "2A-7-4 / XXXX Coolk")

Given this, I think sparse accessors are important to implement, or failing that, automatic compression of glTF exports (such as zip). I was unable to find the branch by scurest, but I suspect this would not be too difficult to implement. If it's not desired by default, perhaps it could be made an export checkbox to reduce blend shape size.

donmccurdy (Contributor) commented:
As additional context: I tried comparing the Blender→glTF and Blender→FBX→glTF versions; they have roughly the same vertex counts and are otherwise similar. The difference does seem to be entirely in the use of sparse accessors. If I run the Blender→glTF version through...

gltf-transform sparse in.glb out.glb

... then the Blender export is reduced to the same size as the FBX2glTF export. The combination of sparse accessors and Meshopt compression seems to be ideal here, reducing the size further to about 470 KB.
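
(A sketch of that combination with the gltf-transform CLI, assuming its meshopt subcommand is available:)

gltf-transform sparse in.glb tmp.glb
gltf-transform meshopt tmp.glb out.glb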

I think it would be OK for sparse accessors to be enabled by default for blend shapes.

scurest (Contributor) commented May 10, 2023

The patch just looks like this (not tested extensively):

Patch
diff --git a/addons/io_scene_gltf2/blender/exp/gltf2_blender_gather_primitive_attributes.py b/addons/io_scene_gltf2/blender/exp/gltf2_blender_gather_primitive_attributes.py
index 71fe2970..d5c847f4 100644
--- a/addons/io_scene_gltf2/blender/exp/gltf2_blender_gather_primitive_attributes.py
+++ b/addons/io_scene_gltf2/blender/exp/gltf2_blender_gather_primitive_attributes.py
@@ -45,7 +45,23 @@ def gather_primitive_attributes(blender_primitive, export_settings):
     return attributes
 
 
-def array_to_accessor(array, component_type, data_type, include_max_and_min=False):
+def array_to_accessor(
+    array,
+    component_type,
+    data_type,
+    include_max_and_min=False,
+    try_sparse=False,
+):
+    buffer_view = None
+    sparse = None
+
+    if try_sparse:
+        sparse = __try_sparse_accessor(array)
+    if not sparse:
+        buffer_view = gltf2_io_binary_data.BinaryData(
+            array.tobytes(),
+            gltf2_io_constants.BufferViewTarget.ARRAY_BUFFER,
+        )
 
     amax = None
     amin = None
@@ -54,7 +70,7 @@ def array_to_accessor(array, component_type, data_type, include_max_and_min=Fals
         amin = np.amin(array, axis=0).tolist()
 
     return gltf2_io.Accessor(
-        buffer_view=gltf2_io_binary_data.BinaryData(array.tobytes(), gltf2_io_constants.BufferViewTarget.ARRAY_BUFFER),
+        buffer_view=buffer_view,
         byte_offset=None,
         component_type=component_type,
         count=len(array),
@@ -64,10 +80,79 @@ def array_to_accessor(array, component_type, data_type, include_max_and_min=Fals
         min=amin,
         name=None,
         normalized=None,
-        sparse=None,
+        sparse=sparse,
         type=data_type,
     )
 
+
+def __try_sparse_accessor(array):
+    """
+    Returns an AccessorSparse for array, or None if
+    writing a dense accessor would be better.
+    """
+    # Find indices of non-zero elements
+    nonzero_indices = np.where(np.any(array, axis=1))[0]
+
+    # For all-zero arrays, omitting sparse entirely is legal but poorly
+    # supported, so force nonzero_indices to be nonempty.
+    if len(nonzero_indices) == 0:
+        nonzero_indices = np.array([0])
+
+    # How big of indices do we need?
+    if nonzero_indices[-1] <= 255:
+        indices_type = gltf2_io_constants.ComponentType.UnsignedByte
+    elif nonzero_indices[-1] <= 65535:
+        indices_type = gltf2_io_constants.ComponentType.UnsignedShort
+    else:
+        indices_type = gltf2_io_constants.ComponentType.UnsignedInt
+
+    # Cast indices to appropriate type (if needed)
+    nonzero_indices = nonzero_indices.astype(
+        gltf2_io_constants.ComponentType.to_numpy_dtype(indices_type),
+        copy=False,
+    )
+
+    # Calculate size if we don't use sparse
+    one_elem_size = len(array[:1].tobytes())
+    dense_size = len(array) * one_elem_size
+
+    # Calculate approximate size if we do use sparse
+    indices_size = (
+        len(nonzero_indices[:1].tobytes()) *
+        len(nonzero_indices)
+    )
+    values_size = len(nonzero_indices) * one_elem_size
+    json_increase = 170  # sparse makes the JSON about this much bigger
+    penalty = 64  # further penalty avoids sparse in marginal cases
+    sparse_size = indices_size + values_size + json_increase + penalty
+
+    if sparse_size >= dense_size:
+        return None
+
+    return gltf2_io.AccessorSparse(
+        count=len(nonzero_indices),
+        extensions=None,
+        extras=None,
+        indices=gltf2_io.AccessorSparseIndices(
+            buffer_view=gltf2_io_binary_data.BinaryData(
+                nonzero_indices.tobytes()
+            ),
+            byte_offset=None,
+            component_type=indices_type,
+            extensions=None,
+            extras=None,
+        ),
+        values=gltf2_io.AccessorSparseValues(
+            buffer_view=gltf2_io_binary_data.BinaryData(
+                array[nonzero_indices].tobytes()
+            ),
+            byte_offset=None,
+            extensions=None,
+            extras=None,
+        ),
+    )
+
+
 def __gather_skins(blender_primitive, export_settings):
     attributes = {}
 
diff --git a/addons/io_scene_gltf2/blender/exp/gltf2_blender_gather_primitives.py b/addons/io_scene_gltf2/blender/exp/gltf2_blender_gather_primitives.py
index f2f5ae61..04e823dd 100644
--- a/addons/io_scene_gltf2/blender/exp/gltf2_blender_gather_primitives.py
+++ b/addons/io_scene_gltf2/blender/exp/gltf2_blender_gather_primitives.py
@@ -216,6 +216,7 @@ def __gather_targets(blender_primitive, blender_mesh, modifiers, export_settings
                         component_type=gltf2_io_constants.ComponentType.Float,
                         data_type=gltf2_io_constants.DataType.Vec3,
                         include_max_and_min=True,
+                        try_sparse=True,
                     )
 
                     if export_settings['gltf_normals'] \
@@ -227,6 +228,7 @@ def __gather_targets(blender_primitive, blender_mesh, modifiers, export_settings
                             internal_target_normal,
                             component_type=gltf2_io_constants.ComponentType.Float,
                             data_type=gltf2_io_constants.DataType.Vec3,
+                            try_sparse=True,
                         )
 
                     if export_settings['gltf_tangents'] \
@@ -237,6 +239,7 @@ def __gather_targets(blender_primitive, blender_mesh, modifiers, export_settings
                             internal_target_tangent,
                             component_type=gltf2_io_constants.ComponentType.Float,
                             data_type=gltf2_io_constants.DataType.Vec3,
+                            try_sparse=True,
                         )
                     targets.append(target)
                     morph_index += 1
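
As a worked instance of the patch's break-even test (hypothetical numbers: a 1158-vertex shape key, as in the mesh discussed earlier in the thread, in which only 50 vertices are displaced; the 170 and 64 byte constants come straight from the patch):

```python
# Break-even test from __try_sparse_accessor, evaluated by hand.
vertex_count = 1158   # hypothetical shape key size
moved = 50            # hypothetical number of displaced vertices

dense_size = vertex_count * 12                       # Vec3 float32 per vertex
indices_size = moved * 2                             # uint16 per moved vertex
values_size = moved * 12                             # Vec3 float32 delta each
sparse_size = indices_size + values_size + 170 + 64  # + JSON growth + penalty

print(dense_size)   # 13896
print(sparse_size)  # 934 -> sparse wins comfortably
```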
