Add support for named outputs in MLF archive #13704
Conversation
Thanks for contributing to TVM! Please refer to the contributing guidelines https://tvm.apache.org/docs/contribute/ for useful information and tips. Please request code reviews from Reviewers by @-ing them in a comment.
Generated by tvm-bot |
Following from apache#12789, this adds support for determining the output tensor name from the input model within the MLF metadata json. Co-authored-by: Ashutosh Parkhi <[email protected]>
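For illustration, a rough sketch of how a per-output entry might appear in the generated metadata.json, shown as a Python dict; the only grounded entry is the one from the test expectations further down, and the exact nesting is an assumption rather than a confirmed schema:

# Hypothetical shape of the per-output metadata; keys are the model's output tensor names.
expected_outputs = {
    # taken from the test expectations below
    "test_output_d": {"size": 12, "dtype": "float32"},
}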
Force-pushed from fc5aec8 to b32beba (Compare)
I don't think this requires an RFC, but could you please add some details of the new metadata.json structure to the PR description? Mainly explain how someone would access the output/input names, for better visibility.
"test_output_d": {"size": 12, "dtype": "float32"}, | ||
} | ||
|
||
|
||
if __name__ == "__main__": | ||
sys.exit(pytest.main([__file__] + sys.argv[1:])) |
Please update this to tvm.testing.main()
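For reference, a minimal sketch of the suggested entry point, assuming tvm.testing is importable from the test file:

import tvm.testing

if __name__ == "__main__":
    # tvm.testing.main() wraps the pytest invocation for the calling module
    tvm.testing.main()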
This feels like it'd be better done as a blanket change rather than polluting many individual patches?
Either way is fine. We don't have a lint step to catch these, which is why I've been trying to notify people to fix them as I see them in PRs.
I'd rather not introduce unrelated changes into my patch; I can try a mass find-and-replace in a future patch? 😸
Hi @mehrdadh, #12789 adds the functionality in the metadata; this adds support for output names - I haven't changed the structure.
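For visibility, a hypothetical sketch of how someone might look up the output names from an extracted MLF archive; the path and the exact nesting of the "outputs" entry are assumptions based on the test expectations above, not a confirmed schema:

import json

# Read the MLF metadata from an already-extracted archive (path is illustrative).
with open("extracted_mlf/metadata.json") as f:
    metadata = json.load(f)

# Assumed layout: per-module "outputs" keyed by the model's output tensor names.
for name, info in metadata["modules"]["default"]["outputs"].items():
    print(f"{name}: size={info['size']}, dtype={info['dtype']}")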
@tvm-bot rerun
@ashutosh-arm I think we still need #13655 for checking the actual constant, which was broken before? This just checks the JSON.
LGTM! thanks!
This reverts commit 329584b.
FYI @Mousius @mehrdadh, this seems to have merge-conflicted with something once it was merged in 329584b, so CI is now failing. I have #13739 up to revert it; we can wait and see if that fixes the failures we're seeing. Example failure: https://ci.tlcpack.ai/blue/organizations/jenkins/tvm-cortexm/detail/main/115/pipeline
This PR fixes an error introduced by #13704.