[Feature] Update BaseInferencer #773
Conversation
Codecov Report
Additional details and impacted files
@@ Coverage Diff @@
## inference #773 +/- ##
============================================
Coverage ? 78.63%
============================================
Files ? 130
Lines ? 9536
Branches ? 1876
============================================
Hits ? 7499
Misses ? 1716
Partials ? 321
Flags with carried forward coverage won't be shown. ☔ View full report at Codecov.
mmengine/infer/infer.py
Outdated
        device: Optional[str] = None,
    ) -> None:
        self.scope = \
            DefaultScope.get_current_instance().scope_name  # type: ignore
Where will the default_scope be initialized?
If `model` or `model_name` is a built `Config` or `ConfigDict`, there is no need to initialize the default scope. However, if the `model` is a model name defined in a metafile, we must initialize the scope before initializing the inferencer.
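For illustration, here is a minimal sketch of how a scope could be created before the inferencer reads it in its `__init__`; the instance name `'demo_scope'` and the scope name `'mmdet'` are placeholders for a downstream repo, not something this PR prescribes:

```python
from mmengine.registry import DefaultScope

# Create a default scope only if none exists yet; downstream repos would
# typically do this before building an inferencer from a metafile name.
if DefaultScope.get_current_instance() is None:
    DefaultScope.get_instance('demo_scope', scope_name='mmdet')

# This is the value BaseInferencer reads in the snippet above.
print(DefaultScope.get_current_instance().scope_name)
```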
Args:
    preds (List[Dict]): Predictions of the model.
    visualization (np.ndarray): Visualized predictions.
    return_datasample (bool): Whether to return results as datasamples.
This will change the type of `predictions`?
Yes, the prediction could be converted from a DataSample to a dict.
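As a rough sketch of what such a conversion might look like in a subclass (not the actual implementation in this PR), assuming `BaseInferencer` is importable from `mmengine.infer` as added here and that each prediction is a `BaseDataElement`-style object exposing `to_dict()`:

```python
from typing import Any, Dict, List

from mmengine.infer import BaseInferencer


class DemoInferencer(BaseInferencer):
    """Hypothetical subclass; only the postprocess step is sketched."""

    def postprocess(self,
                    preds: List[Any],
                    visualization: Any = None,
                    return_datasample: bool = False,
                    **kwargs) -> Dict:
        if not return_datasample:
            # Assumes each prediction has a to_dict() method, as
            # BaseDataElement does; otherwise keep the data samples as-is.
            preds = [pred.to_dict() for pred in preds]
        return {'predictions': preds, 'visualization': visualization}
```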
mmengine/infer/infer.py
Outdated
def __init__(
    self,
    model: Union[ModelType, str],
Would `model_name` be better? Otherwise, I may pass an `nn.Module` to it by mistake.
I also struggled with the naming of this variable. It could be a model name, a config path, or a built `Config`. Personally, I prefer to name it `config`. @gaotongxiao, what's your opinion?
Both `model_name` and `config` are too specific and would only reflect part of its functionality. We can stick to `model` since it's general enough. In the future, we can also consider supporting @RangiLyu's use case, which would be a nice plus for the Inferencer.
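For illustration only, the accepted forms of `model` could then look like the following; `DemoInferencer` stands for any concrete subclass, and the model name, config path, weight path and the `weights` argument are placeholders/assumptions:

```python
from mmengine.config import Config

# DemoInferencer is a hypothetical BaseInferencer subclass.

# 1) a model name registered in a downstream repo's metafile (placeholder)
inferencer = DemoInferencer(model='demo-model-name', device='cpu')

# 2) a path to a config file, with weights given explicitly (placeholders)
inferencer = DemoInferencer(model='configs/demo_config.py',
                            weights='demo_weights.pth')

# 3) an already built Config / ConfigDict
cfg = Config.fromfile('configs/demo_config.py')
inferencer = DemoInferencer(model=cfg, weights='demo_weights.pth')
```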
preds = []
for data in track(inputs, description='Inference'):
    preds.extend(self.forward(data, **forward_kwargs))
visualization = self.visualize(
Will the visualized image be saved by the VisBackend defined in the config?
Maybe. It will also be influenced by the `visualize` and `postprocess` implemented in the subclass.
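A rough sketch of what a subclass `visualize` might do; whether anything ends up on disk then depends on the vis backends the `Visualizer` was configured with, which is the point made above. The function body and the `'prediction'` image name are assumptions, and it is written as a free function for brevity:

```python
from typing import Any, List, Optional

import numpy as np
from mmengine.visualization import Visualizer


def visualize(inputs: List[np.ndarray], preds: List[Any],
              show: bool = False) -> Optional[List[np.ndarray]]:
    """Sketch of a subclass's visualize step."""
    # Assumes a Visualizer instance was already created, e.g. from the cfg.
    visualizer = Visualizer.get_current_instance()
    results = []
    for img in inputs:
        visualizer.set_image(img)
        # ... draw the predictions for this image here ...
        drawn = visualizer.get_image()
        # add_image forwards the drawn image to whatever vis backends the
        # visualizer was configured with; that is where saving would happen.
        visualizer.add_image('prediction', drawn)
        results.append(drawn)
    return results
```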
    postprocess_kwargs,
) = self._dispatch_kwargs(**kwargs)

inputs = self.preprocess(
Can this design be compatible with async preprocessing? Or can a child class support async inference easily without overriding too many methods?
Is async preprocessing necessary? Preprocessing also seems to be a compute-intensive task, so I think the gain from async would be very limited. We could provide a more efficient data preprocessing pipeline, modeled on the PyTorch DataLoader, in the future.
* Update BaseInferencer
* Fix ci
* Fix CI and rename iferencer to infer
* Fix CI
* Add renamed file
* Add test file
* Adjust interface sequence
* refine preprocess
* Update unit test
* Update unit test
* Fix unit test
* Fix as comment
* Minor refine
* Fix docstring and support load image from different backend
* Support load collate_fn from downstream repos, refine dispatch
* Minor refine
* Fix lint
* refine grammar
* Remove FileClient
* Refine docstring
* add rich
* Add list_models
* Add list_models
* Remove backend args
* Minor refine
* [Feature] Add BaseInferencer (#773)
* Update BaseInferencer
* Fix ci
* Fix CI and rename iferencer to infer
* Fix CI
* Add renamed file
* Add test file
* Adjust interface sequence
* refine preprocess
* Update unit test
* Update unit test
* Fix unit test
* Fix as comment
* Minor refine
* Fix docstring and support load image from different backend
* Support load collate_fn from downstream repos, refine dispatch
* Minor refine
* Fix lint
* refine grammar
* Remove FileClient
* Refine docstring
* add rich
* Add list_models
* Add list_models
* Remove backend args
* Minor refine
* Fix typos in docs and type hints (#787)
* [Fix] Add _inputs_to_list (#795)
* Add preprocess inputs
* Add type hint
* update api/infer in index.rst
* rename preprocess_inputs to _inputs_to_list
* Fix doc format
* Update infer.py

Co-authored-by: Zaida Zhou <[email protected]>

* [Fix] Fix alias type (#801)
* [Enhance] Support loading model config from checkpoint (#864)
* first commit
* [Enhance] Support build model from weight
* minor refine
* Fix type hint
* refine comments
* Update docstring
* refine as comment
* Add method
* Refine docstring
* Fix as comment
* refine comments
* Refine warning message
* Fix unit test and refine comments
* replace MODULE2PACKAGE to MODULE2PAKCAGE
* Fix typo and syntax error in docstring

Co-authored-by: Zaida Zhou <[email protected]>
Co-authored-by: Tong Gao <[email protected]>
* [Feature] Add BaseInferencer (#773)
* Update BaseInferencer
* Fix ci
* Fix CI and rename iferencer to infer
* Fix CI
* Add renamed file
* Add test file
* Adjust interface sequence
* refine preprocess
* Update unit test
* Update unit test
* Fix unit test
* Fix as comment
* Minor refine
* Fix docstring and support load image from different backend
* Support load collate_fn from downstream repos, refine dispatch
* Minor refine
* Fix lint
* refine grammar
* Remove FileClient
* Refine docstring
* add rich
* Add list_models
* Add list_models
* Remove backend args
* Minor refine
* Fix typos in docs and type hints (#787)
* [Fix] Add _inputs_to_list (#795)
* Add preprocess inputs
* Add type hint
* update api/infer in index.rst
* rename preprocess_inputs to _inputs_to_list
* Fix doc format
* Update infer.py

Co-authored-by: Zaida Zhou <[email protected]>

* [Fix] Fix alias type (#801)
* [Enhance] Support loading model config from checkpoint (#864)
* first commit
* [Enhance] Support build model from weight
* minor refine
* Fix type hint
* refine comments
* Update docstring
* refine as comment
* Add method
* Refine docstring
* Fix as comment
* refine comments
* Refine warning message
* Fix unit test and refine comments
* add infer.md
* minor refine
* minor refine
* minor refine
* Apply suggestions from code review

Co-authored-by: Zaida Zhou <[email protected]>

* Fix conflicts and minor refine
* minor refine
* Fix as comment

Co-authored-by: Zaida Zhou <[email protected]>
Co-authored-by: Tong Gao <[email protected]>
## Motivation
Support `MMSegInferencer` for providing an easy and clean interface for single or multiple images inferencing.
Ref: open-mmlab/mmengine#773 open-mmlab/mmocr#1608
## Modification
- mmseg/apis/mmseg_inferencer.py
- mmseg/visualization/local_visualizer.py
- demo/image_demo_with_inferencer.py
## Use cases (Optional)
Based on https://github.com/open-mmlab/mmengine/tree/inference
Add a new image inference demo with `MMSegInferencer`:
- demo/image_demo_with_inferencer.py
```shell
python demo/image_demo_with_inferencer.py demo/demo.png fcn_r50-d8_4xb2-40k_cityscapes-512x1024
```
---------
Co-authored-by: MeowZheng <[email protected]>
Thanks for your contribution; we appreciate it a lot. The following instructions will make your pull request healthier and make it easier to get feedback. If you do not understand some items, don't worry: just open the pull request and seek help from the maintainers.
Motivation
Add `BaseInferencer` to provide an easy and clean interface for single- or multi-image inference.
How to use
How to build an inferencer
Based on `BaseInferencer`, the `__init__` of subclasses should at least accept 3 arguments. If the downstream repo registers an `Alias`, the config can also be inferred from the `Alias`. The `model` argument can also be a built `Config`, `ConfigDict` or `dict` instance. A minimal skeleton is sketched below.
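A minimal skeleton of such a subclass, based on the signature fragments shown in the diff in this conversation; the `weights` argument and the local `ModelType` alias are assumptions rather than the exact interface:

```python
from typing import Optional, Union

from mmengine.config import Config, ConfigDict
from mmengine.infer import BaseInferencer

# Local alias mirroring the Union[ModelType, str] hint seen in the diff.
ModelType = Union[dict, ConfigDict, Config]


class DemoInferencer(BaseInferencer):
    """Hypothetical subclass illustrating the minimal interface."""

    def __init__(self,
                 model: Union[ModelType, str],
                 weights: Optional[str] = None,
                 device: Optional[str] = None) -> None:
        super().__init__(model=model, weights=weights, device=device)

    # A concrete subclass must also implement the abstract methods described
    # below: _init_pipeline, visualize and postprocess.
```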
The standard workflow of inferencer
`BaseInferencer` implements the standard inference workflow in `__call__`. If downstream repos want to customize the workflow, they can override the `__call__` method. A condensed sketch of the flow is shown below; each step is described afterwards.
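The sketch below is pieced together from the diff fragments quoted in this conversation (`_dispatch_kwargs`, `preprocess`, the `forward` loop with `track`, `visualize`, `postprocess`); argument names such as `batch_size` and the exact kwargs-dispatch tuple are simplifications, not the verbatim implementation:

```python
from rich.progress import track  # rich was added as a dependency in this PR


def __call__(self, inputs, return_datasample=False, batch_size=1, **kwargs):
    """Sketch of the standard workflow, written as a plain function body."""
    (preprocess_kwargs, forward_kwargs, visualize_kwargs,
     postprocess_kwargs) = self._dispatch_kwargs(**kwargs)

    ori_inputs = self._inputs_to_list(inputs)
    inputs = self.preprocess(ori_inputs, batch_size=batch_size,
                             **preprocess_kwargs)

    preds = []
    for data in track(inputs, description='Inference'):
        preds.extend(self.forward(data, **forward_kwargs))

    visualization = self.visualize(ori_inputs, preds, **visualize_kwargs)
    return self.postprocess(preds, visualization, return_datasample,
                            **postprocess_kwargs)
```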
preprocess data
- prepare the `pipeline` (abstract method): subclasses should override `_init_pipeline` to customize the pipeline. The returned pipeline will be used to process each single data item.
- prepare the `collate_fn`: `BaseInferencer` provides a common way to get the `collate_fn` from the cfg. If you have more custom usage, you can override the method to get the target `collate_fn`.
- prepare the chunked data: subclasses could override `prepare` to get custom chunked data. `BaseInferencer` provides a common way to build the chunked data in `_get_chunk_data` (see the sketch after this list).

`preprocess` will use the prepared `pipeline` and `collate_fn` to return the target chunked data, of which each item can be passed directly to `model.test_step`.
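A rough sketch of the chunking described above, assuming `self.pipeline` and `self.collate_fn` have already been prepared; the real `_get_chunk_data` / `preprocess` may differ in details:

```python
from itertools import islice


def _get_chunk_data(self, inputs, chunk_size):
    """Yield lists of pipeline-processed items, chunk_size at a time."""
    inputs_iter = iter(inputs)
    while True:
        chunk = [self.pipeline(item)
                 for item in islice(inputs_iter, chunk_size)]
        if not chunk:
            return
        yield chunk


def preprocess(self, inputs, batch_size=1, **kwargs):
    """Collate each chunk so it can be fed directly to model.test_step."""
    chunked_data = self._get_chunk_data(inputs, batch_size)
    return (self.collate_fn(chunk) for chunk in chunked_data)
```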
forward
Inference with the chunked data. `BaseInferencer` calls `model.test_step` in `forward` by default.
visualize (abstract method)
Subclasses should implement `visualize` to visualize the result and return the visualization result.
postprocess (abstract method)
Subclasses should implement `postprocess` to get the result in the target format (DataSample or dict) together with the visualization result.
The coverage rate of the unit tests:
Modification
Please briefly describe what modification is made in this PR.
BC-breaking (Optional)
Does the modification introduce changes that break the backward-compatibility of the downstream repos?
If so, please describe how it breaks the compatibility and how the downstream projects should modify their code to keep compatibility with this PR.
Use cases (Optional)
If this PR introduces a new feature, it is better to list some use cases here, and update the documentation.
Checklist