
Error to convert TensorFlow trained model using Inference Engine. #75

Closed
Bahramudin opened this issue Jan 22, 2019 · 12 comments
Labels
category: MO Model Optimizer

Comments

@Bahramudin

I am trying to convert a model that was trained with a pre-trained model from the TensorFlow Object Detection API (faster_rcnn_inception_v2_coco) so it can be used with the Inference Engine.
The OpenVINO™ Toolkit documentation shows how to convert a model, but when I tried, for example, the commands below:
python3 mo_tf.py --input_model <INPUT_MODEL>.pb
Or
mo_tf.py --saved_model_dir <SAVED_MODEL_DIRECTORY>
Or
mo_tf.py --input_model <INFERENCE_GRAPH>.pb --input_checkpoint <INPUT_CHECKPOINT>
These all throw the error below:

Model Optimizer arguments:
Common parameters:
- Path to the Input Model: None
- Path for generated IR: C:\Intel\computer_vision_sdk_2018.5.445\deployment_tools\model_optimizer.
- IR output name: saved_model
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: Not specified, inherited from the model
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: False
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Offload unsupported operations: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: None
- Operations to offload: None
- Patterns to offload: None
- Use the config file: None
Model Optimizer version: 1.5.12.49d067a0
[ FRAMEWORK ERROR ] Cannot load input model: No op named GatherV2 in defined operations.

But when I run the following command:
<INSTALL_DIR>/deployment_tools/model_optimizer/mo_tf.py --input_model=C:\workspace\output/frozen_inference_graph.pb --tensorflow_use_custom_operations_config <INSTALL_DIR>/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json --tensorflow_object_detection_api_pipeline_config C:\workspace\pipeline.config --reverse_input_channels

Then it throws this error:

[ WARNING ] Model Optimizer removes pre-processing block of the model which resizes image keeping aspect ratio. The Inference Engine does not support dynamic image size so the Intermediate Representation file is generated with the input image size of a fixed size.
Specify the "--input_shape" command line parameter to override the default shape which is equal to (600, 600).
The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.
The graph output nodes "num_detections", "detection_boxes", "detection_classes", "detection_scores" have been replaced with a single layer of type "Detection Output". Refer to IR catalogue in the documentation for information about this layer.
[ ERROR ] Cannot infer shapes or values for node "ToFloat_3".
[ ERROR ] NodeDef mentions attr 'Truncate' not in Op<name=Cast; signature=x:SrcT -> y:DstT; attr=SrcT:type; attr=DstT:type>; NodeDef: ToFloat_3 = CastDstT=DT_FLOAT, SrcT=DT_UINT8, Truncate=false, _device="/job:localhost/replica:0/task:0/cpu:0". (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
[[Node: ToFloat_3 = CastDstT=DT_FLOAT, SrcT=DT_UINT8, Truncate=false, _device="/job:localhost/replica:0/task:0/cpu:0"]]

Caused by op 'ToFloat_3', defined at:
File "C:\Intel\computer_vision_sdk_2018.5.445\deployment_tools\model_optimizer\mo_tf.py", line 31, in <module>
sys.exit(main(get_tf_cli_parser(), 'tf'))
File "C:\Intel\computer_vision_sdk_2018.5.445\deployment_tools\model_optimizer\mo\main.py", line 325, in main
return driver(argv)
File "C:\Intel\computer_vision_sdk_2018.5.445\deployment_tools\model_optimizer\mo\main.py", line 267, in driver
mean_scale_values=mean_scale)
File "C:\Intel\computer_vision_sdk_2018.5.445\deployment_tools\model_optimizer\mo\pipeline\tf.py", line 256, in tf2nx
partial_infer(graph)
File "C:\Intel\computer_vision_sdk_2018.5.445\deployment_tools\model_optimizer\mo\middle\passes\infer.py", line 153, in partial_infer
node.infer(node)
File "C:\Intel\computer_vision_sdk_2018.5.445\deployment_tools\model_optimizer\mo\front\tf\partial_infer\tf.py", line 60, in tf_native_tf_node_infer
tf_subgraph_infer(tmp_node)
File "C:\Intel\computer_vision_sdk_2018.5.445\deployment_tools\model_optimizer\mo\front\tf\partial_infer\tf.py", line 135, in tf_subgraph_infer
all_constants, output_tensors = get_subgraph_output_tensors(node)
File "C:\Intel\computer_vision_sdk_2018.5.445\deployment_tools\model_optimizer\mo\front\tf\partial_infer\tf.py", line 115, in get_subgraph_output_tensors
tf.import_graph_def(graph_def, name='')
File "C:\Users\bahra\Anaconda3\lib\site-packages\tensorflow\python\framework\importer.py", line 311, in import_graph_def
op_def=op_def)
File "C:\Users\bahra\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 2506, in create_op
original_op=self._default_original_op, op_def=op_def)
File "C:\Users\bahra\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 1269, in __init__
self._traceback = _extract_stack()

InvalidArgumentError (see above for traceback): NodeDef mentions attr 'Truncate' not in Op<name=Cast; signature=x:SrcT -> y:DstT; attr=SrcT:type; attr=DstT:type>; NodeDef: ToFloat_3 = CastDstT=DT_FLOAT, SrcT=DT_UINT8, Truncate=false, _device="/job:localhost/replica:0/task:0/cpu:0". (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
[[Node: ToFloat_3 = CastDstT=DT_FLOAT, SrcT=DT_UINT8, Truncate=false, _device="/job:localhost/replica:0/task:0/cpu:0"]]

[ ERROR ]
[ ERROR ] It can happen due to bug in custom shape infer function <function tf_native_tf_node_infer at 0x00000222F0BB4378>.
[ ERROR ] Or because the node inputs have incorrect values/shapes.
[ ERROR ] Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ] Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ] Stopped shape/value propagation at "ToFloat_3" node.
For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.

So I have installed the OpenVINO™ Toolkit correctly; I tested it and everything went well, with successful detection using C:\Intel\computer_vision_sdk_2018.5.445\deployment_tools

I am using the latest OpenVINO version (R5).

Note: while running the commands above, an error said that TensorFlow 1.2 is required, but I had 1.12 installed on my machine, so I uninstalled it and installed version 1.2. If possible, please also support TF version 1.12 (the latest version) so users won't need to uninstall their TF in order to use OpenVINO.

Thanks!

@lazarevevgeny
Contributor

@Bahramudin the Model Optimizer needs TensorFlow >= 1.2, not exactly 1.2. The error shown in your second attempt is caused by a too-old TensorFlow version, so please install 1.12 again and the conversion should work.

@Bahramudin
Author

@lazarevevgeny I reinstalled TF 1.12, now when I run this command:
mo_tf.py --input_model <INFERENCE_GRAPH>.pb --input_checkpoint <INPUT_CHECKPOINT>
it throws this error:

[ FRAMEWORK ERROR ] Cannot load input model: Unable to open table file C:\tf\models\research\object_detection\inference_graph\checkpoint: Data loss: not an sstable (bad magic number): perhaps your file is in a different file format and you need to use a different restore operator?

and when I run these commands:
mo_tf.py --saved_model_dir <SAVED_MODEL_DIRECTORY>
python3 mo_tf.py --input_model <INPUT_MODEL>.p
it throws this error:

C:\Intel\computer_vision_sdk_2018.5.445\deployment_tools\model_optimizer\mo\ops\slice.py:111: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use arr[tuple(seq)] instead of arr[seq]. In the future this will be interpreted as an array index, arr[np.array(seq)], which will result either in an error or a different result.
value = value[slice_idx]
[ ERROR ] Shape [-1 -1 -1 3] is not fully defined for output 0 of "image_tensor". Use --input_shape with positive integers to override model input shapes.
[ ERROR ] Cannot infer shapes or values for node "image_tensor".
[ ERROR ] Not all output shapes were inferred or fully defined for node "image_tensor".
For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #40.
[ ERROR ]
[ ERROR ] It can happen due to bug in custom shape infer function <function tf_placeholder_ext.. at 0x000001DBD7413EA0>.
[ ERROR ] Or because the node inputs have incorrect values/shapes.
[ ERROR ] Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ] Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ] Stopped shape/value propagation at "image_tensor" node.
For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.

What could be the reason?

@lazarevevgeny
Contributor

lazarevevgeny commented Jan 22, 2019

You should use command line parameters as specified in the documentation: https://docs.openvinotoolkit.org/2018_R5/_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_Object_Detection_API_Models.html
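For a faster_rcnn_* topology, the documented flow uses a Faster R-CNN support config rather than ssd_v2_support.json. A sketch of the command in Windows cmd syntax (the paths and the exact JSON file name are assumptions — verify both against your install and the linked page):

```shell
rem Sketch only: run from <INSTALL_DIR>\deployment_tools\model_optimizer and
rem adjust all paths for your environment.
python mo_tf.py ^
  --input_model C:\workspace\output\frozen_inference_graph.pb ^
  --tensorflow_use_custom_operations_config extensions\front\tf\faster_rcnn_support.json ^
  --tensorflow_object_detection_api_pipeline_config C:\workspace\pipeline.config ^
  --reverse_input_channels
```

If the fixed 600x600 input that the Model Optimizer picks by default does not match your pipeline, adding --input_shape [1,600,600,3] (or your actual size) overrides it, as the warning in the log above suggests.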

@Bahramudin
Author

@lazarevevgeny Thanks a lot! After a lot of trying, uninstalling, and reinstalling, I finally created the .bin and .xml files successfully. BTW I want to ask: after converting the model, the speed is the same as before converting. I am using faster_rcnn_inception_v2, so is there anything I forgot?
I have done these steps:

  • Creating data-set

  • Train the data-set by using one of the pre-trained models of Object Detection API (in my case faster_rcnn_inception_v2)

  • Export inference_graph

  • Then converting to .bin and .xml (IR)

Are these steps enough? Or is there some other optimization step to do?

Much appreciated!

@lazarevevgeny
Contributor

@Bahramudin what do you mean when you say "the speed was the same as before converting"? Did you compare the performance of TF on CPU with the converted model on CPU?

@Bahramudin
Author

Bahramudin commented Jan 23, 2019

@lazarevevgeny Yes. When training finished, I first ran the model on CPU with TF. Then I followed this wiki to use the model in an OpenCV application via cv::dnn::Net net = cv::dnn::readNetFromTensorflow(pb, pbtxt);, and the speed remained the same as with TF. Then I converted the model to IR and used it in OpenCV again, this time via cv::dnn::Net::readFromModelOptimizer(xml, bin);, and again the speed was the same as in the TF application. There was no noticeable difference between these three runs.

That is why I asked about the steps I used to produce the model; I am not sure whether I forgot a step.

@lazarevevgeny
Contributor

@Bahramudin could you try to run the IR with "object_detection_sample_ssd" from the samples directory and measure the performance?

@dkurt could you take a look at the previous comment from @Bahramudin ?

@dkurt
Contributor

dkurt commented Jan 23, 2019

@Bahramudin, please provide a reproducer of how you measure performance:

  • With a TensorFlow Python script
  • With OpenCV and the original model
  • With OpenCV and the IR model

You may use samples from wiki as a reference.
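One way to make the three measurements comparable is to time only the forward pass (net.forward() or sess.run) over many iterations after a warm-up, rather than end-to-end frame time, which is often dominated by capture and drawing. A minimal, framework-agnostic timing helper (a sketch; the commented OpenCV calls and file names are placeholders showing where the code from the wiki samples would plug in):

```python
import time

def benchmark(fn, warmup=5, runs=50):
    """Return the median latency of fn() in milliseconds after warm-up runs."""
    for _ in range(warmup):
        fn()  # warm-up: first calls often include lazy initialization
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - t0) * 1e3)
    samples.sort()
    return samples[len(samples) // 2]  # median is robust to outlier runs

# Hypothetical usage with OpenCV DNN (requires OpenCV built with the IE backend):
#   import cv2
#   net = cv2.dnn.readNetFromModelOptimizer("graph.xml", "graph.bin")
#   net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
#   blob = cv2.dnn.blobFromImage(img, size=(600, 600))
#   print(benchmark(lambda: (net.setInput(blob), net.forward())))
```

Running the same helper around the TF session, the original OpenCV model, and the IR model gives three directly comparable numbers.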

@agileselenium


Can you please provide the command to generate the .bin and .xml files from the .pb file?

@s0r2637

s0r2637 commented May 20, 2019

@agileselenium: Follow the command below; hope this helps. I should warn you that the frozen .pb and .pbtxt format runs faster than the .xml and .bin format that you are trying to create.
sudo python3 mo_tf.py -m ~/INCISIVE/ssd_mobilenet_v1_coco_2018_01_28/frozen_inference_graph.pb -o ~/INCISIVE/ssd_mobilenet_v1_coco_2018_01_28/ --tensorflow_use_custom_operations_config ./extensions/front/tf/ssd_support.json --tensorflow_object_detection_api_pipeline_config ~/INCISIVE/ssd_mobilenet_v1_coco_2018_01_28/pipeline.config --data_type=FP32

@dlliu0422

How can I fix this problem? I have the same problem.


@lazarevevgeny
Contributor

@dlliu0422 There is nothing more to do. I am closing the ticket as it seems to be resolved.

@lazarevevgeny lazarevevgeny added the category: MO Model Optimizer label May 25, 2020
eshoguli pushed a commit to eshoguli/openvino that referenced this issue Jun 1, 2021
andrew-k-park pushed a commit to andrew-k-park/openvino that referenced this issue Jun 14, 2022
* [GPU][Dynamic] Fix cldnn unit tests for gemm and depth_to_space
- Modify test params in gemm gpu test
- Modify test params in depth_to_space gpu tests
- Modify test params in strided_splice gpu tests
- Add Simple PartialShape test

* [GPU][Dynamic] Fix premute gpu test issue
- Modify check condition in layout_optimizer::can_fuse_reorder_to_prev

* [GPU][Dynamic] Fix data_layout_test.size_check issues
- implement layout::get_ordered_dims

* [GPU][Dynamic] Fix eltwsie fusing in fusings_gpu issues
- Change get_dims() to get_tensor().sizes()

* [GPU][Dynamic] Fix test params
- experimental_detectron_roi_feature_extractor_gpu_fp32.multiple_feature_extractor_op_with_different_number_of_inputs
- reorder_gpu_f32.bfwzyx_bfyx_chain

* [GPU][Dynamic] Fix reshape test issues
- Remove checking dynamic with 2 input for reshape
- Add wait_for_event to get real data after input node execution
- Modify test params for reshape test

* [GPU][Dynamic] Fix reduce test issues
- Fix output layout calculation in reduce_ist::calc_output_layout

* [GPU][Dynamic] Fix gather_nd and gatter issue
- Fix batch_dim issue which always convert negative value to positive without calcuation
- Fix test expected output result
- Ajust output shape by output format
- Add gather unit test for dynamic shape

* [GPU][Dynamic] Fix fusings_gpu issues
- Fix permute order
- Fix test params in gather, loop, and permute fustion test
- Fix reference function and test params in one_hot_gpu_test
- Skip one_hot_error.basic_error_bad_shape test
- Fix cache_test multi-threading issue
- Uncomment fully connected kernel selectors

* [GPU][Dynamic] Fix scatter update issues
- Change type of axis from int to size_t
- Fix axis in fusion test and single test
ahnyoung-paul referenced this issue in ahnyoung-paul/openvino Jun 15, 2022
vladimir-paramuzov pushed a commit to vladimir-paramuzov/openvino that referenced this issue Jul 26, 2022
vladimir-paramuzov pushed a commit to vladimir-paramuzov/openvino that referenced this issue Jul 26, 2022
ahnyoung-paul referenced this issue in ahnyoung-paul/openvino Aug 2, 2022
andrew-k-park pushed a commit to andrew-k-park/openvino that referenced this issue Aug 3, 2022
andrew-k-park pushed a commit to andrew-k-park/openvino that referenced this issue Aug 4, 2022
andrew-k-park pushed a commit to andrew-k-park/openvino that referenced this issue Aug 16, 2022
mvafin pushed a commit to mvafin/openvino that referenced this issue Jan 2, 2023
prim::max transformation for ListConstruct
Retribution98 pushed a commit to Retribution98/openvino that referenced this issue Jan 15, 2025
github-merge-queue bot pushed a commit that referenced this issue Jan 20, 2025
Bumps [pytest-dependency](https://github.com/RKrahl/pytest-dependency)
from 0.5.1 to 0.6.0.
Changelog (sourced from pytest-dependency's CHANGES.rst), 0.6.0 (2023-12-31):

Documentation
  • #39, #41, #59: review documentation

Incompatible changes
  • Drop support for Python 2.

Bug fixes and minor changes
  • #40: add logging.
  • #50, #51: test suite incompatibility with pytest 6.2.0.
  • #58: declare the type of the automark_dependency ini-option correctly as bool.

Internal
  • #75: review build tool chain.
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/RKrahl/pytest-dependency/commit/2cae58956bff640bbfbbdc8212cae97b7967453c"><code>2cae589</code></a>
Merge branch 'develop'</li>
<li><a
href="https://github.com/RKrahl/pytest-dependency/commit/def647eda5778fc5b0ffebb7251c81e804f50089"><code>def647e</code></a>
Prepare release 0.6.0</li>
<li><a
href="https://github.com/RKrahl/pytest-dependency/commit/2baac9b866777f8fa353defe6c4a770f0df90d85"><code>2baac9b</code></a>
Merge branch 'doc' into develop</li>
<li><a
href="https://github.com/RKrahl/pytest-dependency/commit/38baf8cb5f561dcf16539c8304b0f71898a625f6"><code>38baf8c</code></a>
Update changelog</li>
<li><a
href="https://github.com/RKrahl/pytest-dependency/commit/e2edf54383b3b8f2c137d78677debfd63b23ba95"><code>e2edf54</code></a>
Explicitely set language to 'en'</li>
<li><a
href="https://github.com/RKrahl/pytest-dependency/commit/f11cf56ca553992891d80ce79bbbac82aa40d285"><code>f11cf56</code></a>
Rewrite introduction to the debugging guide</li>
<li><a
href="https://github.com/RKrahl/pytest-dependency/commit/346a3441efbf26dfad83e277ab6ccac26cfe6d75"><code>346a344</code></a>
Move the changelog to the end, after the API reference</li>
<li><a
href="https://github.com/RKrahl/pytest-dependency/commit/463227e4519e444701f22acadf3442c1b45e5214"><code>463227e</code></a>
Review README and bump copyright year</li>
<li><a
href="https://github.com/RKrahl/pytest-dependency/commit/eb48f326af4428f7e292bc980c80a02686650832"><code>eb48f32</code></a>
Fixup 695ea27: trailing whitespace</li>
<li><a
href="https://github.com/RKrahl/pytest-dependency/commit/695ea2742af78fe65a5b17426170b481666ec0a2"><code>695ea27</code></a>
Update install instructions</li>
<li>Additional commits viewable in <a
href="https://github.com/RKrahl/pytest-dependency/compare/0.5.1...0.6.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=pytest-dependency&package-manager=pip&previous-version=0.5.1&new-version=0.6.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
MirceaDan99 pushed a commit to MirceaDan99/openvino that referenced this issue Jan 22, 2025
…#28549)

Labels
category: MO Model Optimizer
6 participants