
Point Tracking dataset.py crashes while iterating through the dataset #263

Closed
wenhsuanchu opened this issue Aug 10, 2022 · 3 comments · Fixed by #264

wenhsuanchu commented Aug 10, 2022

Hello, I've noticed that the dataset.py file for point tracking (https://github.com/google-research/kubric/blob/main/challenges/point_tracking/dataset.py) throws errors on my end as it iterates through the dataset. I made the following changes to run the file:

1. Adding this snippet at the start of the file to get around shape checks in `tensorflow_graphics`:

```python
import sys
import tensorflow as tf
import tensorflow_graphics.util.shape  # ensures the module is registered in sys.modules

module = sys.modules['tensorflow_graphics.util.shape']

def _get_dim(tensor, axis):
  """Returns dimensionality of a tensor for a given axis."""
  return tf.compat.v1.dimension_value(tensor.shape[axis])

module._get_dim = _get_dim
```
2. Loading the movi_e dataset from disk, where `DATA_DIR` is where the dataset has been downloaded to:

```python
ds = tfds.load(
    'movi_e/256x256',
    data_dir=DATA_DIR,
    shuffle_files=shuffle_buffer_size is not None,
    **kwargs)
```
3. And lastly, modifying the `main()` function so it doesn't quit after iterating through 10 samples, and disabling random cropping:

```python
def main():
  ds = tfds.as_numpy(
      create_point_tracking_dataset(shuffle_buffer_size=None, random_crop=False))
  for i, data in enumerate(ds):
    print(i)
```
The script crashes with:

```
W tensorflow/core/framework/op_kernel.cc:1745] OP_REQUIRES failed at constant_op.cc:171 : INVALID_ARGUMENT: Dimension -1 must be >= 0

Traceback (most recent call last):
  File "/home/wenhsuac/multipoint_tracking/testk.py", line 810, in <module>
    main()
  File "/home/wenhsuac/multipoint_tracking/testk.py", line 800, in main
    for i, data in enumerate(ds):
  File "/home/wenhsuac/anaconda3/envs/tf_tracking/lib/python3.9/site-packages/tensorflow_datasets/core/dataset_utils.py", line 65, in _eager_dataset_iterator
    for elem in ds:
  File "/home/wenhsuac/anaconda3/envs/tf_tracking/lib/python3.9/site-packages/tensorflow/python/data/ops/iterator_ops.py", line 766, in __next__
    return self._next_internal()
  File "/home/wenhsuac/anaconda3/envs/tf_tracking/lib/python3.9/site-packages/tensorflow/python/data/ops/iterator_ops.py", line 749, in _next_internal
    ret = gen_dataset_ops.iterator_get_next(
  File "/home/wenhsuac/anaconda3/envs/tf_tracking/lib/python3.9/site-packages/tensorflow/python/ops/gen_dataset_ops.py", line 3017, in iterator_get_next
    _ops.raise_from_not_ok_status(e, name)
  File "/home/wenhsuac/anaconda3/envs/tf_tracking/lib/python3.9/site-packages/tensorflow/python/framework/ops.py", line 7164, in raise_from_not_ok_status
    raise core._status_to_exception(e) from None  # pylint: disable=protected-access
tensorflow.python.framework.errors_impl.InvalidArgumentError: Dimension -1 must be >= 0
         [[{{node zeros}}]] [Op:IteratorGetNext]
```

Any help or pointers to resolve this issue would be much appreciated. Thanks!

@cdoersch
Collaborator

I'm not sure I understand your setup. Is this the error you get without changes, or do your changes trigger it? Why did you need to override `_get_dim` in `tensorflow_graphics`? Are there any configurations that work (e.g. does it work with random cropping)?

I'm traveling for the next 10 days and have extremely limited time to debug. The more specific you can be about the exact changes that triggered the error, the better.

@cdoersch
Collaborator

cdoersch commented Sep 4, 2022

I think I've found the root cause of this error: it occurs when `MAX_SEG_ID` is smaller than the largest segment id in the image. Therefore, the solution is to set `MAX_SEG_ID` higher: 25 seems to work for me. Hopefully this fixes the issue for you.
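For reference, the suggested change might look like the following in dataset.py (the constant name is taken from the comment above; 25 is the value reported to work, and the exact placement in the file is an assumption):

```python
# Hypothetical sketch of the fix in challenges/point_tracking/dataset.py:
# raise the ceiling on segment ids so it covers every object that can
# appear in a movi_e frame.
MAX_SEG_ID = 25  # previously too small for frames with many objects
```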

@wenhsuanchu
Author

Yep, this seems to work well. It looks like when `MAX_SEG_ID` is too small, the pipeline ends up trying to sample a negative number of points, which crashes the script.
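The failure mode described above can be illustrated with a minimal NumPy sketch (the variable names here are assumptions for illustration, not the real dataset.py internals): when the segment-id budget underestimates the segments actually present, a derived "points remaining" count goes negative, and allocating a buffer of that size fails, the analogue of TF's "Dimension -1 must be >= 0" in the traceback above.

```python
import numpy as np

MAX_SEG_ID = 16              # budget that is too small for this frame
segment_ids = np.arange(17)  # a frame that actually contains 17 segments

# One more segment than budgeted makes the remaining count negative:
remaining = MAX_SEG_ID - len(segment_ids)  # -1

try:
    np.zeros(remaining)  # analogous to tf.zeros([remaining]) in the pipeline
except ValueError as err:
    print(err)  # NumPy rejects the negative dimension
```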

cdoersch added a commit that referenced this issue Sep 21, 2022
Convert constants to params for more configurable point tracking. Also fixes #263