Set videos to RGB/Grayscale from the GUI #625

Closed
talmo opened this issue Jan 6, 2022 · 5 comments

@talmo
Collaborator

talmo commented Jan 6, 2022

Currently, you can only specify whether videos are RGB or grayscale when importing them. This can lead to issues when importing predictions and ending up with project files that mix channel counts.

Here's code to do this programmatically:

import sleap

# Load the project, set every video backend to grayscale, and save it back.
labels = sleap.load_file("labels.v000.slp")
for vid in labels.videos:
    vid.backend.grayscale = True
labels.save("labels.v000.slp")

But this should be easier to change from the GUI.

MVP: Buttons to set videos as grayscale/RGB from the Videos panel, like the ones in the Import Videos GUI.

@talmo talmo added the "enhancement" (New feature or request) label Jan 6, 2022
@talmo
Collaborator Author

talmo commented Jan 6, 2022

Also, we should pass through setters for video backend attributes, e.g.:

vid.grayscale = True  # equivalent to: vid.backend.grayscale = True
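
A minimal sketch of the idea (not the actual sleap.Video implementation), using a forwarding property so reads and writes of vid.grayscale reach the backend:

class Video:
    """Sketch only: a wrapper whose grayscale property forwards to its backend."""

    def __init__(self, backend):
        self.backend = backend

    @property
    def grayscale(self):
        # Reads pass through to the backend.
        return self.backend.grayscale

    @grayscale.setter
    def grayscale(self, value):
        # Writes pass through to the backend.
        self.backend.grayscale = value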

@laurelrr

This suggestion worked really well on 99% of my videos and I was able to set all the channels to 1.

But it did not fix an RGB/grayscale issue where the video had backend=SingleImageVideo. There, I was left with channels=3. This particular video was created when I imported labeled frames from another animal tracking program, which produced a video made up of specific frames (as I am sure you know).

Do you have an easy fix for this situation? Can I force vid.backend.channels=1?
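
For reference, here is a quick way to see which backend each video uses (a sketch assuming Video exposes a shape property reporting (frames, height, width, channels)):

import sleap

labels = sleap.load_file("labels.v000.slp")
for vid in labels.videos:
    # Backend class (e.g. MediaVideo vs. SingleImageVideo) and reported shape,
    # so any video still reporting 3 channels stands out.
    print(type(vid.backend).__name__, vid.shape)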

@laurelrr

Well, it seemed as though setting
vid.backend.channels_=1
was going to work, as I can see that all channels appear to be set to 1 in the GUI.

However, I was not able to train properly and got the following error when attempting to train the first part of the top-down model. Note: I tried setting ensure_grayscale = false the first time and got the same error.

Any advice greatly appreciated!

$ sleap-train centroid.json fm_MedPC_Halo+Calc_v01_9ptSkeleton_MERGE.v002.slp --first-gpu
INFO:sleap.nn.training:Versions:
SLEAP: 1.2.0a6
TensorFlow: 2.6.3
Numpy: 1.19.5
Python: 3.7.12
OS: Linux-5.10.0-13-amd64-x86_64-with-debian-11.3
INFO:sleap.nn.training:Training labels file: fm_MedPC_Halo+Calc_v01_9ptSkeleton_MERGE.v002.slp
INFO:sleap.nn.training:Training profile: centroid.json
INFO:sleap.nn.training:
INFO:sleap.nn.training:Arguments:
INFO:sleap.nn.training:{
"training_job_path": "centroid.json",
"labels_path": "fm_MedPC_Halo+Calc_v01_9ptSkeleton_MERGE.v002.slp",
"video_paths": [
""
],
"val_labels": null,
"test_labels": null,
"tensorboard": false,
"save_viz": false,
"zmq": false,
"run_name": "",
"prefix": "",
"suffix": "",
"cpu": false,
"first_gpu": true,
"last_gpu": false,
"gpu": 0
}
INFO:sleap.nn.training:
INFO:sleap.nn.training:Training job:
INFO:sleap.nn.training:{
"data": {
"labels": {
"training_labels": null,
"validation_labels": null,
"validation_fraction": 0.1,
"test_labels": null,
"split_by_inds": false,
"training_inds": null,
"validation_inds": null,
"test_inds": null,
"search_path_hints": [],
"skeletons": []
},
"preprocessing": {
"ensure_rgb": false,
"ensure_grayscale": true,
"imagenet_mode": null,
"input_scaling": 0.5,
"pad_to_stride": null,
"resize_and_pad_to_target": true,
"target_height": null,
"target_width": null
},
"instance_cropping": {
"center_on_part": "haunch",
"crop_size": null,
"crop_size_detection_padding": 16
}
},
"model": {
"backbone": {
"leap": null,
"unet": {
"stem_stride": null,
"max_stride": 16,
"output_stride": 2,
"filters": 16,
"filters_rate": 2.0,
"middle_block": true,
"up_interpolate": true,
"stacks": 1
},
"hourglass": null,
"resnet": null,
"pretrained_encoder": null
},
"heads": {
"single_instance": null,
"centroid": {
"anchor_part": "haunch",
"sigma": 3.0,
"output_stride": 2,
"offset_refinement": false
},
"centered_instance": null,
"multi_instance": null
}
},
"optimization": {
"preload_data": true,
"augmentation_config": {
"rotate": true,
"rotation_min_angle": -15.0,
"rotation_max_angle": 15.0,
"translate": false,
"translate_min": -5,
"translate_max": 5,
"scale": false,
"scale_min": 0.9,
"scale_max": 1.1,
"uniform_noise": false,
"uniform_noise_min_val": 0.0,
"uniform_noise_max_val": 10.0,
"gaussian_noise": false,
"gaussian_noise_mean": 5.0,
"gaussian_noise_stddev": 1.0,
"contrast": false,
"contrast_min_gamma": 0.5,
"contrast_max_gamma": 2.0,
"brightness": false,
"brightness_min_val": 0.0,
"brightness_max_val": 10.0,
"random_crop": false,
"random_crop_height": 256,
"random_crop_width": 256,
"random_flip": false,
"flip_horizontal": true
},
"online_shuffling": true,
"shuffle_buffer_size": 128,
"prefetch": true,
"batch_size": 8,
"batches_per_epoch": null,
"min_batches_per_epoch": 200,
"val_batches_per_epoch": null,
"min_val_batches_per_epoch": 10,
"epochs": 200,
"optimizer": "adam",
"initial_learning_rate": 0.0001,
"learning_rate_schedule": {
"reduce_on_plateau": true,
"reduction_factor": 0.5,
"plateau_min_delta": 1e-06,
"plateau_patience": 5,
"plateau_cooldown": 3,
"min_learning_rate": 1e-08
},
"hard_keypoint_mining": {
"online_mining": false,
"hard_to_easy_ratio": 2.0,
"min_hard_keypoints": 2,
"max_hard_keypoints": null,
"loss_scale": 5.0
},
"early_stopping": {
"stop_training_on_plateau": true,
"plateau_min_delta": 1e-08,
"plateau_patience": 20
}
},
"outputs": {
"save_outputs": true,
"run_name": "220425_164010",
"run_name_prefix": "",
"run_name_suffix": ".centroid",
"runs_folder": "models",
"tags": [
""
],
"save_visualizations": true,
"delete_viz_images": true,
"zip_outputs": false,
"log_to_csv": true,
"checkpointing": {
"initial_model": false,
"best_model": true,
"every_epoch": false,
"latest_model": false,
"final_model": false
},
"tensorboard": {
"write_logs": false,
"loss_frequency": "epoch",
"architecture_graph": false,
"profile_graph": false,
"visualizations": true
},
"zmq": {
"subscribe_to_controller": false,
"controller_address": "tcp://127.0.0.1:9000",
"controller_polling_timeout": 10,
"publish_updates": false,
"publish_address": "tcp://127.0.0.1:9001"
}
},
"name": "",
"description": "",
"sleap_version": "1.2.0a6",
"filename": "centroid.json"
}
INFO:sleap.nn.training:
INFO:sleap.nn.training:Using the first GPU for acceleration.
INFO:sleap.nn.training:Disabled GPU memory pre-allocation.
INFO:sleap.nn.training:System:
GPUs: 1/1 available
Device: /physical_device:GPU:0
Available: True
Initalized: False
Memory growth: True
INFO:sleap.nn.training:
INFO:sleap.nn.training:Initializing trainer...
INFO:sleap.nn.training:Loading training labels from: fm_MedPC_Halo+Calc_v01_9ptSkeleton_MERGE.v002.slp
INFO:sleap.nn.training:Creating training and validation splits from validation fraction: 0.1
INFO:sleap.nn.training: Splits: Training = 2772 / Validation = 308.
INFO:sleap.nn.training:Setting up for training...
INFO:sleap.nn.training:Setting up pipeline builders...
INFO:sleap.nn.training:Setting up model...
INFO:sleap.nn.training:Building test pipeline...
2022-04-25 16:47:49.463765: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-04-25 16:47:49.995222: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1510] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 10553 MB memory: -> device: 0, name: NVIDIA TITAN V, pci bus id: 0000:3b:00.0, compute capability: 7.0
2022-04-25 16:47:50.398046: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:185] None of the MLIR Optimization Passes are enabled (registered 2)
INFO:sleap.nn.training:Loaded test example. [9.543s]
INFO:sleap.nn.training: Input shape: (480, 640, 1)
INFO:sleap.nn.training:Created Keras model.
INFO:sleap.nn.training: Backbone: UNet(stacks=1, filters=16, filters_rate=2.0, kernel_size=3, stem_kernel_size=7, convs_per_block=2, stem_blocks=0, down_blocks=4, middle_block=True, up_blocks=3, up_interpolate=True, block_contraction=False)
INFO:sleap.nn.training: Max stride: 16
INFO:sleap.nn.training: Parameters: 1,953,105
INFO:sleap.nn.training: Heads:
INFO:sleap.nn.training: [0] = CentroidConfmapsHead(anchor_part='haunch', sigma=3.0, output_stride=2, loss_weight=1.0)
INFO:sleap.nn.training: Outputs:
INFO:sleap.nn.training: [0] = KerasTensor(type_spec=TensorSpec(shape=(None, 240, 320, 1), dtype=tf.float32, name=None), name='CentroidConfmapsHead_0/BiasAdd:0', description="created by layer 'CentroidConfmapsHead_0'")
INFO:sleap.nn.training:Setting up data pipelines...
INFO:sleap.nn.training:Training set: n = 2772
INFO:sleap.nn.training:Validation set: n = 308
INFO:sleap.nn.training:Setting up optimization...
INFO:sleap.nn.training: Learning rate schedule: LearningRateScheduleConfig(reduce_on_plateau=True, reduction_factor=0.5, plateau_min_delta=1e-06, plateau_patience=5, plateau_cooldown=3, min_learning_rate=1e-08)
INFO:sleap.nn.training: Early stopping: EarlyStoppingConfig(stop_training_on_plateau=True, plateau_min_delta=1e-08, plateau_patience=20)
INFO:sleap.nn.training:Setting up outputs...
INFO:sleap.nn.training:Created run path: models/220425_164010.centroid
INFO:sleap.nn.training:Setting up visualization...
2022-04-25 16:47:54.347976: W tensorflow/core/grappler/costs/op_level_cost_estimator.cc:690] Error in PredictCost() for the op: op: "CropAndResize" attr { key: "T" value { type: DT_FLOAT } } attr { key: "extrapolation_value" value { f: 0 } } attr { key: "method" value { s: "bilinear" } } inputs { dtype: DT_FLOAT shape { dim { size: -34 } dim { size: -35 } dim { size: -36 } dim { size: 1 } } } inputs { dtype: DT_FLOAT shape { dim { size: -2 } dim { size: 4 } } } inputs { dtype: DT_INT32 shape { dim { size: -2 } } } inputs { dtype: DT_INT32 shape { dim { size: 2 } } } device { type: "GPU" vendor: "NVIDIA" model: "NVIDIA TITAN V" frequency: 1455 num_cores: 80 environment { key: "architecture" value: "7.0" } environment { key: "cuda" value: "11020" } environment { key: "cudnn" value: "8100" } num_registers: 65536 l1_cache_size: 24576 l2_cache_size: 4718592 shared_memory_size_per_multiprocessor: 98304 memory_size: 11066540032 bandwidth: 652800000 } outputs { dtype: DT_FLOAT shape { dim { size: -2 } dim { size: -37 } dim { size: -38 } dim { size: 1 } } }
2022-04-25 16:47:56.534941: W tensorflow/core/grappler/costs/op_level_cost_estimator.cc:690] Error in PredictCost() for the op: op: "CropAndResize" attr { key: "T" value { type: DT_FLOAT } } attr { key: "extrapolation_value" value { f: 0 } } attr { key: "method" value { s: "bilinear" } } inputs { dtype: DT_FLOAT shape { dim { size: -34 } dim { size: -35 } dim { size: -36 } dim { size: 1 } } } inputs { dtype: DT_FLOAT shape { dim { size: -2 } dim { size: 4 } } } inputs { dtype: DT_INT32 shape { dim { size: -2 } } } inputs { dtype: DT_INT32 shape { dim { size: 2 } } } device { type: "GPU" vendor: "NVIDIA" model: "NVIDIA TITAN V" frequency: 1455 num_cores: 80 environment { key: "architecture" value: "7.0" } environment { key: "cuda" value: "11020" } environment { key: "cudnn" value: "8100" } num_registers: 65536 l1_cache_size: 24576 l2_cache_size: 4718592 shared_memory_size_per_multiprocessor: 98304 memory_size: 11066540032 bandwidth: 652800000 } outputs { dtype: DT_FLOAT shape { dim { size: -2 } dim { size: -37 } dim { size: -38 } dim { size: 1 } } }
INFO:sleap.nn.training:Finished trainer set up. [16.5s]
INFO:sleap.nn.training:Creating tf.data.Datasets for training data generation...
Traceback (most recent call last):
File "/home/lkeyes/anaconda3/envs/sleap1_2_0a6/bin/sleap-train", line 33, in
sys.exit(load_entry_point('sleap==1.2.0a6', 'console_scripts', 'sleap-train')())
File "/home/lkeyes/anaconda3/envs/sleap1_2_0a6/lib/python3.7/site-packages/sleap/nn/training.py", line 1621, in main
trainer.train()
File "/home/lkeyes/anaconda3/envs/sleap1_2_0a6/lib/python3.7/site-packages/sleap/nn/training.py", line 879, in train
training_ds = self.training_pipeline.make_dataset()
File "/home/lkeyes/anaconda3/envs/sleap1_2_0a6/lib/python3.7/site-packages/sleap/nn/data/pipelines.py", line 282, in make_dataset
ds = transformer.transform_dataset(ds)
File "/home/lkeyes/anaconda3/envs/sleap1_2_0a6/lib/python3.7/site-packages/sleap/nn/data/dataset_ops.py", line 318, in transform_dataset
self.examples = list(iter(ds))
File "/home/lkeyes/anaconda3/envs/sleap1_2_0a6/lib/python3.7/site-packages/tensorflow/python/data/ops/iterator_ops.py", line 761, in next
return self._next_internal()
File "/home/lkeyes/anaconda3/envs/sleap1_2_0a6/lib/python3.7/site-packages/tensorflow/python/data/ops/iterator_ops.py", line 747, in _next_internal
output_shapes=self._flat_output_shapes)
File "/home/lkeyes/anaconda3/envs/sleap1_2_0a6/lib/python3.7/site-packages/tensorflow/python/ops/gen_dataset_ops.py", line 2728, in iterator_get_next
_ops.raise_from_not_ok_status(e, name)
File "/home/lkeyes/anaconda3/envs/sleap1_2_0a6/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 6941, in raise_from_not_ok_status
six.raise_from(core._status_to_exception(e.code, message), None)
File "", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: Shape of tensor EagerPyFunc [479,639,3] is not compatible with expected shape [?,?,1].
[[{{node EnsureShape}}]] [Op:IteratorGetNext]
2022-04-25 16:47:59.425971: W tensorflow/core/kernels/data/cache_dataset_ops.cc:768] The calling iterator did not fully read the dataset being cached. In order to avoid unexpected truncation of the dataset, the partially cached contents of the dataset will be discarded. This can happen if you have an input pipeline similar to dataset.cache().take(k).repeat(). You should use dataset.take(k).cache().repeat() instead.

@talmo
Collaborator Author

talmo commented Apr 26, 2022

Hi @laurelrr,

I think the problem is that SingleImageVideo might not support the grayscale property correctly.

We'll look into this -- hopefully it's an easy fix.
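
In the meantime, one possible stopgap (an untested sketch, not part of SLEAP) would be to convert the source images to single-channel on disk before importing them as a SingleImageVideo, e.g. with OpenCV:

import cv2  # opencv-python

# Hypothetical filenames; substitute the image files backing the SingleImageVideo.
for path in ["frame_0001.png", "frame_0002.png"]:
    img = cv2.imread(path)                        # loads as 3-channel BGR
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # collapse to a single channel
    cv2.imwrite(path, gray)                       # overwrite with the grayscale image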

Talmo

@roomrys roomrys self-assigned this May 5, 2022
@roomrys roomrys self-assigned this May 19, 2022
@roomrys roomrys added the "fixed in future release" (Fix or feature is merged into develop and will be available in a future release) label Jun 13, 2022
@roomrys
Collaborator

roomrys commented Jun 29, 2022

Hi @laurelrr ,

The new release of SLEAP v1.2.4 is now available for installation and includes this fix.

Thanks,
Liezl

@roomrys roomrys closed this as completed Jun 29, 2022
@roomrys roomrys removed the "fixed in future release" label Jun 29, 2022