
Justinchu/clang tidy test #12655

Closed
wants to merge 20 commits into from

Conversation

@justinchuby (Contributor) commented Aug 19, 2022

Do not merge.

Test clang-tidy CI

@github-actions bot left a comment

clang-tidy made some suggestions

There were too many comments to post at once. Showing the first 25 out of 102. Check the log or trigger a new build to see more.

@@ -1130,7 +1130,7 @@

// Adapter to convert the legacy OrtCUDAProviderOptions to the latest OrtCUDAProviderOptionsV2
OrtCUDAProviderOptionsV2 OrtCUDAProviderOptionsToOrtCUDAProviderOptionsV2(const OrtCUDAProviderOptions* legacy_cuda_options) {
OrtCUDAProviderOptionsV2 cuda_options_converted;
OrtCUDAProviderOptionsV2 cuda_options_converted{};

warning: variable 'cuda_options_converted' is not initialized [cppcoreguidelines-init-variables]

Suggested change
OrtCUDAProviderOptionsV2 cuda_options_converted{};
OrtCUDAProviderOptionsV2 cuda_options_converted = 0{};

// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License.

#include "orttraining/lazy_tensor/accelerator.h"

warning: 'orttraining/lazy_tensor/accelerator.h' file not found [clang-diagnostic-error]

#include "orttraining/lazy_tensor/accelerator.h"
         ^

namespace onnxruntime {

namespace python {
Environment& GetTrainingORTEnv();

warning: use a trailing return type for this function [modernize-use-trailing-return-type]

Environment& GetTrainingORTEnv();
             ^

// Be aware of GIL issue.
// The returned value is the path to exported
// ONNX file.
static std::string ExportToOnnx(

warning: use a trailing return type for this function [modernize-use-trailing-return-type]

Suggested change
static std::string ExportToOnnx(
static auto ExportToOnnx(

orttraining/orttraining/lazy_tensor/accelerator.cc:238:

-     const at::ArrayRef<c10::IValue>& args) {
+     const at::ArrayRef<c10::IValue>& args) -> std::string {

// ONNX exporter modifies the graph in-place, so we
// need to clone it to avoid interaction between
// Pytorch's JIT mechanism and ONNX graph.
std::shared_ptr<torch::jit::Graph> new_subgraph(graph->copyUnique().release());

warning: variable 'new_subgraph' is not initialized [cppcoreguidelines-init-variables]

Suggested change
std::shared_ptr<torch::jit::Graph> new_subgraph(graph->copyUnique().release());
std::shared_ptr<torch::jit::Graph> new_subgraph = 0(graph->copyUnique().release());

onnxruntime::MLDataType element_type = CreateOrtScalarType(tensor.scalar_type());
onnxruntime::TensorShape shape(tensor.sizes().vec());
OrtDevice device = CreateOrtDevice(tensor.device());
OrtMemoryInfo memory_info = OrtMemoryInfo("LTC", OrtAllocatorType::OrtDeviceAllocator, device, device.Id());

warning: variable 'memory_info' is not initialized [cppcoreguidelines-init-variables]

Suggested change
OrtMemoryInfo memory_info = OrtMemoryInfo("LTC", OrtAllocatorType::OrtDeviceAllocator, device, device.Id());
OrtMemoryInfo memory_info = 0 = OrtMemoryInfo("LTC", OrtAllocatorType::OrtDeviceAllocator, device, device.Id());

OrtDevice device = CreateOrtDevice(tensor.device());
OrtMemoryInfo memory_info = OrtMemoryInfo("LTC", OrtAllocatorType::OrtDeviceAllocator, device, device.Id());
// This tensor's life time is controlled by Pytorch.
// TODO: consider to let ORT also own that tensor.

warning: missing username/bug in TODO [google-readability-todo]

Suggested change
// TODO: consider to let ORT also own that tensor.
// TODO(unknown): consider to let ORT also own that tensor.

element_type, shape,
tensor.data_ptr(), memory_info);

OrtValue ort_value;

warning: variable 'ort_value' is not initialized [cppcoreguidelines-init-variables]

Suggested change
OrtValue ort_value;
OrtValue ort_value = 0;

return ort_value;
}

c10::IValue CreateC10IvalueTensor(OrtValue value) {

warning: use a trailing return type for this function [modernize-use-trailing-return-type]

c10::IValue CreateC10IvalueTensor(OrtValue value) {
            ^

}

c10::IValue CreateC10IvalueTensor(OrtValue value) {
onnxruntime::Tensor* tensor = value.GetMutable<onnxruntime::Tensor>();

warning: variable 'tensor' is not initialized [cppcoreguidelines-init-variables]

Suggested change
onnxruntime::Tensor* tensor = value.GetMutable<onnxruntime::Tensor>();
onnxruntime::Tensor* tensor = nullptr = value.GetMutable<onnxruntime::Tensor>();

@snnn (Member) left a comment

Thank you!

Our Linux VMs are Ubuntu 20.04 based, and they have clang 14.0 installed. Once this work is done, we can add it to CI to enforce the rules.

@@ -88,6 +88,13 @@ jobs:
level: warning
flags: --linelength=120 --exclude=java/src/main/native/*.c
filter: "-runtime/references"
- name: Run clang-tidy
snnn (Member) commented on the diff:

I suggest avoiding 3rd-party actions where possible, given the possibility of leaking tokens.

justinchuby (Contributor, Author) replied:

Good call. Although IIRC the token is periodically rotated? We are using a few 3rd-party checks, so it would be an all-or-none situation. Would it make sense for us to verify the token's access, to make sure "posting to the PR thread" (and other necessary access) is the only thing allowed?

@justinchuby (Contributor, Author) commented:

@snnn With this test PR I am seeing a few checks that are particularly noisy / may not make sense (onnxruntime::Tensor* tensor = nullptr = value.GetMutable<onnxruntime::Tensor>();)

I am still learning C++ so I don't have good judgment on the suggestions generated. Do you have a recommended rules list we can modify, based on the current .clang-tidy file?

@snnn (Member) commented Aug 19, 2022

No, I don't. I will look into it this weekend.

@justinchuby (Contributor, Author) commented:

> @snnn With this test PR I am seeing a few checks that are particularly noisy / may not make sense (onnxruntime::Tensor* tensor = nullptr = value.GetMutable<onnxruntime::Tensor>();)
>
> I am still learning c++ so I don't have good judgement on the suggestions generated. Do you have a recommended rules list we can modify based on the current .clang-tidy file?

@edgchen1 do you have any suggestions on .clang-tidy rules?

@edgchen1 (Contributor) commented Sep 1, 2022

> @snnn With this test PR I am seeing a few checks that are particularly noisy / may not make sense (onnxruntime::Tensor* tensor = nullptr = value.GetMutable<onnxruntime::Tensor>();)
>
> I am still learning c++ so I don't have good judgement on the suggestions generated. Do you have a recommended rules list we can modify based on the current .clang-tidy file?
>
> @edgchen1 do you have any suggestions on .clang-tidy rules?

I don't have much at the moment - would need to familiarize myself with them more.

From a quick look at the list, the "google" ones seem good since we follow their style guide. Some of the "bugprone", "cppcoreguidelines", "modernize", "performance", and "readability" ones look useful too, though perhaps we should pick and choose from them. For example, I think we can do without modernize-use-trailing-return-type; it's purely stylistic.
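The check selection sketched above could be encoded in a minimal .clang-tidy along these lines. This is a hypothetical starting point for illustration, not this repo's actual config; the check-group names come from clang-tidy's documented check list, and the exact set is a team decision the thread never settled:

```yaml
# Hypothetical .clang-tidy sketch (not the file shipped in this repo).
# Start from nothing (-*), opt into whole groups, then carve out exceptions.
Checks: >
  -*,
  google-*,
  bugprone-*,
  cppcoreguidelines-*,
  modernize-*,
  -modernize-use-trailing-return-type,
  performance-*,
  readability-*
WarningsAsErrors: ''
HeaderFilterRegex: '.*'
```

The trailing `-modernize-use-trailing-return-type` entry shows the pick-and-choose pattern: later entries override earlier globs, so a whole group can be enabled and individual noisy checks disabled.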

Looking at the warnings in this PR, it appears that maybe clang-tidy isn't configured correctly. For example, a lot of the warnings from cppcoreguidelines-init-variables are actually referring to initialized variables. It also complains about being unable to find include files. Is the compile_commands.json file getting generated and passed to clang-tidy (-p option)?

@justinchuby (Contributor, Author) commented:

> Is the compile_commands.json file getting generated and passed to clang-tidy (-p option)?

No, since I wasn't sure what the best way to generate it is. Is there a command I can run to create it?

@edgchen1 (Contributor) commented Sep 1, 2022

> Is the compile_commands.json file getting generated and passed to clang-tidy (-p option)?
>
> No since I wasn't sure what the best way is to generate it. Is there a command I can run to create it?

That's a good question. We do have various build options; which ones should we test with? @snnn
To generate it you can define CMAKE_EXPORT_COMPILE_COMMANDS when calling CMake (e.g., build.py --cmake_extra_defines CMAKE_EXPORT_COMPILE_COMMANDS=ON).
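Putting those pointers together, the local sequence would look roughly like the recipe below. This is a sketch, not a command line taken from the PR: the build directory and the file passed to clang-tidy are illustrative, and build.py accepts many more flags than shown here.

```shell
# Sketch: generate compile_commands.json, then point clang-tidy at it.
python tools/ci_build/build.py \
  --build_dir build \
  --update \
  --cmake_extra_defines CMAKE_EXPORT_COMPILE_COMMANDS=ON

# clang-tidy's -p option takes the directory containing compile_commands.json,
# so it can replay each file's real compile flags and include paths.
clang-tidy -p build/Debug onnxruntime/core/framework/kernel_def_builder.cc
```

Without `-p`, clang-tidy falls back to guessed flags, which is consistent with the missing-header errors and spurious warnings seen earlier in this thread.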

@justinchuby justinchuby force-pushed the justinchu/clang-tidy-test branch from e1d55da to 9759107 Compare September 21, 2022 17:43
@github-actions bot left a comment

clang-tidy made some suggestions

@@ -3,17 +3,18 @@

#pragma once

#include <limits.h>

warning: inclusion of deprecated C++ header 'limits.h'; consider using 'climits' instead [modernize-deprecated-headers]

Suggested change
#include <limits.h>
#include <climits>

// type constraints with types supported in this build
const std::unordered_map<std::string, std::vector<MLDataType>>& EnabledTypeConstraints() const {
return enabled_type_constraints_;
const std::unordered_map<std::string, std::vector<MLDataType>>& TypeConstraints() const {

warning: function 'TypeConstraints' should be marked [[nodiscard]] [modernize-use-nodiscard]

Suggested change
const std::unordered_map<std::string, std::vector<MLDataType>>& TypeConstraints() const {
[[nodiscard]] const std::unordered_map<std::string, std::vector<MLDataType>>& TypeConstraints() const {

@@ -73,7 +67,7 @@
return alias_map_;
}

const optional<std::pair<int, int>>& VariadicAlias() const {
const std::optional<std::pair<int, int>>& VariadicAlias() const {

warning: function 'VariadicAlias' should be marked [[nodiscard]] [modernize-use-nodiscard]

Suggested change
const std::optional<std::pair<int, int>>& VariadicAlias() const {
[[nodiscard]] const std::optional<std::pair<int, int>>& VariadicAlias() const {

KernelDefBuilder& KernelDefBuilder::SetDomain(const char* domain) {
kernel_def_->op_domain_ = std::string(domain);
KernelDefBuilder& KernelDefBuilder::TypeConstraint(std::string_view arg_name,
std::vector<MLDataType> types) {

warning: parameter 'types' is unused [misc-unused-parameters]

Suggested change
std::vector<MLDataType> types) {
std::vector<MLDataType> /*types*/) {

KernelDefBuilder& KernelDefBuilder::TypeConstraint(const char* arg_name,
MLDataType default_type) {
return TypeConstraint(std::string(arg_name), default_type);
KernelDefBuilder& KernelDefBuilder::TypeConstraint(std::string_view arg_name,

warning: method 'TypeConstraint' can be made static [readability-convert-member-functions-to-static]

include/onnxruntime/core/framework/kernel_def_builder.h:216:

-   KernelDefBuilder& TypeConstraint(std::string_view arg_name, MLDataType type);
+   static KernelDefBuilder& TypeConstraint(std::string_view arg_name, MLDataType type);

return TypeConstraint(std::string(arg_name), default_type);
KernelDefBuilder& KernelDefBuilder::TypeConstraint(std::string_view arg_name,
MLDataType type) {
std::vector<MLDataType> types{type};

warning: variable 'types' is not initialized [cppcoreguidelines-init-variables]

Suggested change
std::vector<MLDataType> types{type};
std::vector<MLDataType> types = 0{type};

auto kernel = reinterpret_cast<onnxruntime::OpKernel*>(op);
onnxruntime::standalone::NodeRepo::GetInstance().RemoveNode(kernel);
delete kernel;
std::unique_ptr<onnxruntime::OpKernel> op_ptr{reinterpret_cast<onnxruntime::OpKernel*>(op)};

warning: variable 'op_ptr' is not initialized [cppcoreguidelines-init-variables]

Suggested change
std::unique_ptr<onnxruntime::OpKernel> op_ptr{reinterpret_cast<onnxruntime::OpKernel*>(op)};
std::unique_ptr<onnxruntime::OpKernel> op_ptr = 0{reinterpret_cast<onnxruntime::OpKernel*>(op)};

@@ -531,8 +531,7 @@

ORT_API(void, OrtApis::ReleaseKernelInfo, _Frees_ptr_opt_ OrtKernelInfo* info_copy) {
if (info_copy) {
auto kernel_info = reinterpret_cast<onnxruntime::OpKernelInfo*>(info_copy);
delete kernel_info;
std::unique_ptr<onnxruntime::OpKernelInfo> info_copy_ptr{reinterpret_cast<onnxruntime::OpKernelInfo*>(info_copy)};

warning: variable 'info_copy_ptr' is not initialized [cppcoreguidelines-init-variables]

Suggested change
std::unique_ptr<onnxruntime::OpKernelInfo> info_copy_ptr{reinterpret_cast<onnxruntime::OpKernelInfo*>(info_copy)};
std::unique_ptr<onnxruntime::OpKernelInfo> info_copy_ptr = 0{reinterpret_cast<onnxruntime::OpKernelInfo*>(info_copy)};

@edgchen1 (Contributor) commented:

FYI, these warnings seem incorrect:
#12655 (comment)
#12655 (comment)
#12655 (comment)
#12655 (comment)
#12655 (comment)

@justinchuby (Contributor, Author) commented:

@edgchen1 that's a little more noise than I hoped it would produce! Should I turn off

misc-unused-parameters
readability-convert-member-functions-to-static
cppcoreguidelines-init-variables

for now? I am not sure why cppcoreguidelines-init-variables went wrong in particular.

@edgchen1 (Contributor) commented:

> @edgchen1 that's a little more noise than what I hope it would produce! Should I turn off
>
> misc-unused-parameters
> readability-convert-member-functions-to-static
> cppcoreguidelines-init-variables
>
> for now? I am not sure why cppcoreguidelines-init-variables went wrong in particular

I think those are reasonable checks and it's ok to leave them on.

There are some errors from clang-tidy, though:

Output was:
error: unknown warning option '-Wno-nonnull-compare' [clang-diagnostic-unknown-warning-option]
/github/workspace/cmake/external/onnx/onnx/onnx_pb.h:51:10: error: 'onnx/onnx-ml.pb.h' file not found [clang-diagnostic-error]

https://github.com/microsoft/onnxruntime/actions/runs/3116529857/jobs/5054446243

which may be why we get these spurious warnings.

onnx/onnx-ml.pb.h is a generated file. You should be able to generate it like this:

cmake --build build/Debug --config Debug --target onnx_proto

after the call to build.py:

- name: Generate compile_commands.json
  run: |
    python tools/ci_build/build.py \
      --cmake_generator "Ninja" \
      --build_dir build \
      --update \
      --cmake_extra_defines CMAKE_EXPORT_COMPILE_COMMANDS=ON

@justinchuby justinchuby marked this pull request as ready for review September 28, 2022 17:27
@justinchuby justinchuby marked this pull request as draft September 28, 2022 17:27
@justinchuby justinchuby reopened this Sep 28, 2022
@github-actions bot left a comment

clang-tidy made some suggestions

@@ -77,7 +77,18 @@ ORT_API(void, OrtApis::ReleaseKernelInfo, _Frees_ptr_opt_ OrtKernelInfo*) {
namespace onnxruntime {
namespace standalone {

using NodePtr = std::unique_ptr<onnxruntime::Node>;
void ReleaseNode(onnxruntime::Node* node) {
if (node) {

warning: implicit conversion 'onnxruntime::Node *' -> bool [readability-implicit-bool-conversion]

Suggested change
if (node) {
if (node != nullptr) {

using NodePtr = std::unique_ptr<onnxruntime::Node>;
void ReleaseNode(onnxruntime::Node* node) {
if (node) {
for (auto* input_arg : node->InputDefs()) {

warning: 'auto *input_arg' can be declared as 'const auto *input_arg' [readability-qualified-auto]

Suggested change
for (auto* input_arg : node->InputDefs()) {
for (const auto* input_arg : node->InputDefs()) {

for (auto* input_arg : node->InputDefs()) {
std::unique_ptr<const onnxruntime::NodeArg>{input_arg};
}
for (auto* output_arg : node->OutputDefs()) {

warning: 'auto *output_arg' can be declared as 'const auto *output_arg' [readability-qualified-auto]

Suggested change
for (auto* output_arg : node->OutputDefs()) {
for (const auto* output_arg : node->OutputDefs()) {

@justinchuby (Contributor, Author) commented:

@edgchen1 looking good now (to me)

@justinchuby justinchuby closed this Jan 3, 2023
@justinchuby justinchuby deleted the justinchu/clang-tidy-test branch April 20, 2023 23:45
4 participants