We are excited to announce the Habitat Challenge 2023 for the ObjectNav and ImageNav tasks at the CVPR Embodied AI Workshop.
[Video: Habitat.Navigation.Challenge.2023.mp4]
ObjectNav focuses on egocentric object/scene recognition and a commonsense understanding of object semantics (where is a bed typically located in a house?). This year we are instantiating ObjectNav on the newly released HM3D-Semantics v0.2 dataset.
![objectnav_spec](https://user-images.githubusercontent.com/29974572/226515462-6fc76fb6-e8c6-40a6-8a5d-395fd4630c1b.gif)
ImageNav focuses on visual reasoning and embodied instance disambiguation (is the particular chair I observe the same one depicted in the goal image?). We are adding the ImageNav track for the first time; it is also based on the HM3D-Semantics v0.2 scene dataset.
![imagenav_spec](https://user-images.githubusercontent.com/29974572/226515529-ca81c90c-3183-49b8-81fa-b22d187a21d3.gif)
We introduce several changes to the agent config for easier sim2real transfer. The agent uses the HelloRobot Stretch configuration, and you can choose between continuous, waypoint, and discrete action spaces. All episodes in both tasks can be completed without traversing between floors.
![stretch](https://user-images.githubusercontent.com/29974572/226515583-3970ecfc-ce88-4c1b-b177-0db76c70bcfe.jpg)
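To give a feel for what a submission entails, here is a toy random-action baseline in the spirit of the starter code. The action names and the `reset()`/`act()` interface shape are assumptions for illustration only; see the official habitat-challenge starter repository for the real agent API and action space definitions.

```python
import random

# Hypothetical discrete action set; the actual names/space come from
# the challenge config in the starter repo, not from this sketch.
ACTIONS = ["stop", "move_forward", "turn_left", "turn_right"]

class RandomAgent:
    """Toy baseline: moves randomly, stops when its step budget runs out."""

    def __init__(self, max_steps=500):
        # max_steps is an assumed episode budget for this sketch.
        self.max_steps = max_steps
        self.steps = 0

    def reset(self):
        # Called at the start of each episode.
        self.steps = 0

    def act(self, observations):
        # observations would hold egocentric RGB-D and the goal
        # (object category for ObjectNav, goal image for ImageNav).
        self.steps += 1
        if self.steps >= self.max_steps:
            return "stop"
        return random.choice(ACTIONS[1:])  # any action except "stop"
```

A real entry would replace the random policy with a learned or modular one, but the evaluation loop (reset, then repeated calls to `act` until "stop") stays the same.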
Check out the starter code for details:
https://github.com/facebookresearch/habitat-challenge
The public leaderboard will be live on EvalAI on March 25th at:
https://eval.ai/web/challenges/challenge-page/1992/overview