
Optionally return empty LabeledFrames when no predictions within a frame #396

Closed
iteal opened this issue Sep 4, 2020 · 2 comments
Labels: enhancement (New feature or request)

@iteal (Contributor)

iteal commented Sep 4, 2020

Hi!

I have an issue with the BottomupPredictor for one of my videos. I get the following warning during inference:

2020-09-04 15:22:04.048511: W tensorflow/core/framework/op_kernel.cc:1655] OP_REQUIRES failed at iterator_ops.cc:941 : Invalid argument: {{function_node __inference_Dataset_map_group_instances_2271}} Tried to stack elements of an empty list with non-fully-defined element_shape: <unknown>
	 [[{{node TensorArrayV2Stack_1/TensorListStack}}]]
INFO:sleap.nn.inference:ERROR in sample index 8989
INFO:sleap.nn.inference:{{function_node __inference_Dataset_map_group_instances_2271}} Tried to stack elements of an empty list with non-fully-defined element_shape: <unknown>
	 [[{{node TensorArrayV2Stack_1/TensorListStack}}]]

This is my code:

    import os
    import h5py

    # crops is a numpy array with len(crops) == 11123
    crops_filepath = os.path.join(temp_dir, "crops.h5")
    with h5py.File(crops_filepath, "w") as f:
        f.create_dataset("frames", data=crops)

    video_reader = VideoReader.from_filepath(
        filename=crops_filepath, dataset="frames", input_format="channels_last"
    )

    predictor = BottomupPredictor.from_trained_models(model_path)

    predictions = predictor.predict(video_reader)
    # here len(predictions) == 11118, which doesn't match the input
    # predictions[0].frame_idx == 0, but:
    # predictions[8989].frame_idx == 8990

The problem is that the length of the resulting predictions doesn't match the length of my input.
I think I can use the frame_idx attribute of each LabeledFrame to work out which prediction corresponds to which input frame, but this is not very user friendly.
Would it be possible to modify the implementation to return the same number of LabeledFrames as inputs, even if some of them are empty? And could you also explain what this "Tried to stack elements" error means?
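As a workaround in the meantime, the frame_idx attribute is enough to pad the result back to one entry per input frame. A minimal sketch, using a hypothetical stand-in dataclass rather than sleap's actual LabeledFrame class (only the frame_idx and instances attributes mentioned in this thread are assumed):

```python
from dataclasses import dataclass, field

# Hypothetical stand-in for sleap's LabeledFrame; not sleap's real class.
@dataclass
class LabeledFrame:
    frame_idx: int
    instances: list = field(default_factory=list)

def pad_predictions(predicted_frames, n_frames):
    """Return one LabeledFrame per input frame, inserting an empty one
    wherever inference skipped a frame."""
    by_idx = {lf.frame_idx: lf for lf in predicted_frames}
    return [by_idx.get(i, LabeledFrame(frame_idx=i)) for i in range(n_frames)]

# Frames 0-4 with frame 3 missing, mimicking a skipped empty frame.
preds = [LabeledFrame(i, [object()]) for i in (0, 1, 2, 4)]
padded = pad_predictions(preds, 5)
print(len(padded))          # 5
print(padded[3].instances)  # [] -> the missing frame was filled in empty
```

The same dict lookup also answers "which prediction belongs to frame i" directly, without padding.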

Thanks!

@talmo (Collaborator)

talmo commented Sep 4, 2020

Hi @iteal,

So regarding the first error: this will be fixed in 1.1 along with a variety of other fixes related to inference. This happens when no instances are found on a particular frame, but in the future we'll just skip them.

The second point: Sure, we can add an option like that. If you make them into a sleap.Labels dataset instead of a list of frames (by passing make_labels=True to predictor.predict() or predictions = sleap.Labels(predictions)), you'll have a bunch of utilities to make it easier to find frames, like sleap.Labels.find().
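The kind of lookup convenience sleap.Labels.find() gives you can be sketched in plain Python. This is an illustrative toy, not sleap's actual implementation, and the FrameSet class is hypothetical:

```python
# Hypothetical sketch of an index over predicted frames, keyed by frame_idx.
class FrameSet:
    def __init__(self, frames):
        # Group frames by frame_idx so lookups are O(1).
        self._by_idx = {}
        for lf in frames:
            self._by_idx.setdefault(lf["frame_idx"], []).append(lf)

    def find(self, frame_idx):
        """Return all labeled frames at frame_idx (empty list if none)."""
        return self._by_idx.get(frame_idx, [])

frames = [{"frame_idx": 0}, {"frame_idx": 2}]
fs = FrameSet(frames)
print(fs.find(2))  # [{'frame_idx': 2}]
print(fs.find(1))  # []
```

With the real API, wrapping the predictions in sleap.Labels gives you this plus video-aware queries.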

We'll add some more tutorials for fancier access patterns with 1.1, but I'll keep this issue open to track the feature request of returning blank LabeledFrames.

@talmo talmo added the enhancement New feature or request label Sep 4, 2020
@talmo talmo changed the title Not very user friendly prediction results when error Optionally return empty LabeledFrames when no predictions within a frame Sep 4, 2020
talmo added a commit that referenced this issue Feb 5, 2021
- Rewrote CLI.
    - Now uses more standardized methods for data loading, model
      building, and inference.
    - Remove most of the dynamically generated args in favor of a flat
      list of args.
    - Deprecate a bunch of redundant args. These still work, they're now
      just hidden from the help.
    - Enable single provider inference for labels rather than predicting
      video-by-video.
    - More informative logging.
    - Add option for removing empty frames. By default it keeps empty
      frames (#396)
    - Add a lot more provenance information.
- Unified inference progress bar for GUI, console AND notebooks (#453)!
- JSON progress output for custom handling via stdout capture.
- Add Predictor.from_model_paths() constructor for single entrypoint
  instantiation of subclasses from paths.
- Remove unused imports and MockPredictor class.
- Add peak_threshold to load_model() high level API.
- Docstrings and typing
@talmo (Collaborator)

talmo commented Mar 6, 2021

Closing because this is now the default in inference in v1.1.0 with a CLI arg to revert it (--no-empty-frames).

@talmo talmo closed this as completed Mar 6, 2021