
Dataset reviewer - Depth modality #374

Closed
Tracked by #397
manuelgitgomes opened this issue Mar 29, 2022 · 35 comments
Assignees
Labels
enhancement New feature or request

Comments

@manuelgitgomes
Collaborator

As seen in issue #368, there are some problems with labelling in the depth modality.
To ease the process, a manual labelling tool should be developed.
Some features:

  • Detect a click in the image box and draw it;
  • After clicking the first point again (or in its vicinity), a polygon should be drawn connecting the points previously clicked;
  • If needed, a painting tool should be added, to add or remove areas not covered by the polygon;
  • An option to detect a pattern that is not fully visible (for example, by not considering lines drawn near the image border).
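The click-and-close behaviour described above can be sketched as follows (a minimal sketch; the helper name and the tolerance value are illustrative, not from the actual tool):

```python
import math

CLOSE_TOLERANCE = 10  # pixels; assumed value, not from the tool


def add_click(points, x, y, tolerance=CLOSE_TOLERANCE):
    """Append a clicked point; return True when the polygon is closed.

    The polygon is considered closed when a click lands within `tolerance`
    pixels of the first point and at least three points already exist.
    """
    if len(points) >= 3:
        x0, y0 = points[0]
        if math.hypot(x - x0, y - y0) <= tolerance:
            return True  # clicked near the first point: polygon complete
    points.append((x, y))
    return False
```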
@manuelgitgomes manuelgitgomes added the enhancement New feature or request label Mar 29, 2022
@manuelgitgomes manuelgitgomes self-assigned this Mar 29, 2022
@danifpdra
Collaborator

Hi @manuelgitgomes , @miguelriemoliveira ,

Here is a plugin I found that should allow clicking on an image:

jolting/rviz@fc6922c#diff-eb98af8580ee7e9732645c3cbe9b5fc9782cec01a77d04be884ae109728e1a65

@manuelgitgomes
Collaborator Author

Thanks! I will look into it!

@danifpdra
Collaborator

Hi @manuelgitgomes ,

I have here a dataset with unlabelled depth images if you need to test anything: https://we.tl/t-NqYHC5b3mG

Do you need any help with anything related to ATOM or larcc?

@manuelgitgomes
Collaborator Author

Thank you @danifpdra! I am going to start working on this now. If I have any questions, I will mention you.

@miguelriemoliveira
Member

Hi @manuelgitgomes ,

I was looking into the possibility of having an integrated visualization with everything in rviz. I think I can already do what we want. Check the initial result:

image

I am already printing where the mouse was clicked. It's not a finished product, but perhaps we can meet tomorrow and schedule 1 or 2 hours for you to get into the code (it's C++), and then you can try to finish it. What do you think?

I can tomorrow from 11h onward, or from 13h30 to 15h10...

@manuelgitgomes
Collaborator Author

manuelgitgomes commented Apr 1, 2022

Hello @miguelriemoliveira!
Sorry for the late response. Sure, 11 seems fine to me!

@manuelgitgomes
Collaborator Author

Hello @miguelriemoliveira and @danifpdra!
When compiling my catkin workspace after cloning the universal_robot repo, I got an error in an include:
<moveit/kdl_kinematics_plugin/chainiksolver_pos_nr_jl_mimic.hpp>
I found an issue (ros-industrial/universal_robot#403) solving this problem, albeit not in the default kinetic-devel branch. Given the large number of branches, I do not know which one we use. Can you tell me?
Thank you!

@danifpdra
Collaborator

Hi @manuelgitgomes ,

You have to use the calibration_devel branch to use universal_robot with larcc, not the default one!

@manuelgitgomes
Collaborator Author

manuelgitgomes commented Apr 1, 2022

You have to use the calibration_devel branch to use universal_robot with larcc, not the default one!

It works, thank you very much!

@danifpdra
Collaborator

You have to use the calibration_devel branch to use universal_robot with larcc, not the default one!

It works, thank you very much!

Not a problem. Let me know if you need anything else

@miguelriemoliveira
Member

Is this info somewhere in the installation section of the readme?

@danifpdra
Collaborator

It was not, but now it is.

@manuelgitgomes
Collaborator Author

I have implemented a rudimentary version of the depth labeller. Currently, the dataset reviewer subscribes to points from the image-with-click plugin. After receiving a point, it draws a square around it. On the next click, a line is drawn between the points, and so on. When the user clicks within a tolerance radius of the first point, the script assumes the polygon is complete, and a mask is created from the given polygon. The mask is passed to a function derived from the latter part of the labelDepthMsg function.
The result can be seen here.
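The mask-from-polygon step can be sketched as below. This is a dependency-light sketch using the even-odd ray-casting rule; the actual implementation may well use OpenCV (e.g. cv2.fillPoly) instead, and the function name is illustrative:

```python
import numpy as np


def polygon_mask(shape, polygon):
    """Boolean mask, True inside the clicked polygon (even-odd rule).

    `polygon` is a list of (x, y) vertices in image coordinates;
    `shape` is (height, width).
    """
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    inside = np.zeros(shape, dtype=bool)
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if y1 == y2:
            continue  # horizontal edges never cross a horizontal scan ray
        # toggle pixels whose ray toward +x crosses this edge
        crosses = ((ys < y1) != (ys < y2)) & \
                  (xs < (x2 - x1) * (ys - y1) / (y2 - y1) + x1)
        inside ^= crosses
    return inside
```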

@danifpdra
Collaborator

Hi @manuelgitgomes ,

The new dataset is here: https://we.tl/t-clTzUIwq95

@manuelgitgomes
Collaborator Author

Thank you very much!

@danifpdra
Collaborator

I have implemented a rudimentary version of the depth labeller. Currently, the dataset reviewer subscribes to points from the image-with-click plugin. After receiving a point, it draws a square around it. On the next click, a line is drawn between the points, and so on. When the user clicks within a tolerance radius of the first point, the script assumes the polygon is complete, and a mask is created from the given polygon. The mask is passed to a function derived from the latter part of the labelDepthMsg function. The result can be seen here.

Looks good!

@miguelriemoliveira
Member

Looks good!

I agree. I think the sampling is not going well, though. I see too few points on the sides ...

@miguelriemoliveira
Member

Finally the labelling is working much better now.

When the edges are on top of a nan in the depth image, we find the nearest white pixel.

Before manual labelling:

image

We can define the polygon without much care about staying close to the pattern

image

and finally we get perfect idxs and idxs_limit labels.

image
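The nearest-white-pixel step (snapping a polygon edge that lands on a NaN onto the closest valid depth reading) can be sketched like this. This is a brute-force sketch with an illustrative function name; ATOM's actual implementation may differ:

```python
import numpy as np


def nearest_valid_pixel(depth, y, x):
    """Return the (y, x) of the valid (non-NaN) pixel closest to (y, x).

    Returns None if the depth image contains no valid pixels at all.
    """
    ys, xs = np.nonzero(~np.isnan(depth))
    if ys.size == 0:
        return None
    d2 = (ys - y) ** 2 + (xs - x) ** 2  # squared Euclidean distances
    i = int(np.argmin(d2))
    return int(ys[i]), int(xs[i])
```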

@miguelriemoliveira
Member

thanks @manuelgitgomes for all the help.

@miguelriemoliveira
Member

Hi @manuelgitgomes ,

One improvement would be to find the closest pixel below a certain distance, to allow for cases where the boundary of the pattern has a black region. What do you think?

@manuelgitgomes
Collaborator Author

Hi @manuelgitgomes ,

One improvement would be to find the closest pixel below a certain distance, to allow for cases where the boundary of the pattern has a black region. What do you think?

Hello!
When do those cases occur?

@miguelriemoliveira
Member

Hi @manuelgitgomes ,

I saw a case a couple of weeks ago but cannot locate it now. Let's leave it at that.

@danifpdra can you test the labelling (auto and manual) to see if it's ok?

@danifpdra
Collaborator

Hi @miguelriemoliveira,

Can you tell me how?

@miguelriemoliveira
Member

Just try to do a data collection in larcc and see if the depth labelling is working fine.

Then you can run a dataset and see if the dataset playback is able to redo the labellings ok.

We just need some external feedback to see if all is well.

@miguelriemoliveira
Member

Hi @danifpdra ,

@manuelgitgomes and I have been working on this. It should be working well for multiple collections. Can you try?

Press s to save and q to quit (that also works).

@danifpdra
Collaborator

hi @miguelriemoliveira,

The Selected Points Publisher for lidar no longer works, and all the keyboard keys stopped working at collection 5. I couldn't go back, go forward, or even save, so I lost what I had... With CTRL+C it used to be possible to save when this happened, because it was a signal callback, but now it isn't. For depth this isn't a big problem, but for lidar it is, because labelling is very time consuming.

@manuelgitgomes
Collaborator Author

Hello @danifpdra.
@miguelriemoliveira and I will work on this ASAP.
Thank you for the feedback.

@danifpdra
Collaborator

Hi @manuelgitgomes ,

Also, regarding these prints:

Robot has all joints fixed. Will render only collection 0
^[[CChanged selected_collection_key to 0
Changed selected_collection_key to 1
^[[C^[[CChanged selected_collection_key to 2
Changed selected_collection_key to 3
^[[C^[[DChanged selected_collection_key to 4
^[[CChanged selected_collection_key to 3
Changed selected_collection_key to 4
^[[C^[[CChanged selected_collection_key to 5
^[[CChanged selected_collection_key to 6
^[[CChanged selected_collection_key to 7
Changed selected_collection_key to 8
^[[C^[[CChanged selected_collection_key to 9
^[[DChanged selected_collection_key to 10
Changed selected_collection_key to 9

We start with collection 0 and when we change, it says collection 0 again, and it's always one collection behind...
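A likely cause of the "one collection behind" symptom (a guess, with hypothetical names) is that the message is built from the old value before the variable is updated:

```python
# Hypothetical minimal reproduction of the stale-print bug.
def change_collection_buggy(state, new_key):
    # message built from the OLD value, then the variable is updated
    msg = "Changed selected_collection_key to " + str(state["key"])
    state["key"] = new_key
    return msg


def change_collection_fixed(state, new_key):
    # update first, then build the message from the NEW value
    state["key"] = new_key
    return "Changed selected_collection_key to " + str(state["key"])
```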

@danifpdra
Collaborator

Hi @manuelgitgomes, @miguelriemoliveira ,

Why is the sample_solid_points not working in the dataset_playback?

I changed it to 3 in both the collector and the dataset_playback...

From collector:

image

After manual labelling:

image

@manuelgitgomes
Collaborator Author

Hello @danifpdra and @miguelriemoliveira.
In regards to the lidar3d manual labeller: when the user presses "c", all the idxs for the selected sensor are erased. Should it stay like this, or should it erase everything?

manuelgitgomes pushed a commit that referenced this issue Apr 12, 2022
@miguelriemoliveira
Member

like this I think ...

manuelgitgomes pushed a commit that referenced this issue Apr 12, 2022
@manuelgitgomes
Collaborator Author

The code seems to function, though further tests are needed.
One functionality added is the auto-saving of the labels when changing collections.
Some bugs were fixed:

  • The program crashed after an empty message was sent through the selected_points_publisher. Solved by not computing anything when the point cloud message is empty;
  • The program would not listen to the keyboard after a non-letter key was pressed (ctrl, alt, etc.). Solved by converting all keys to strings before comparing;
  • Prints did not correspond to the current collection. Solved by changing the variables in the print.

Further optimizations and code cleaning were done.
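The empty-message fix can be sketched as below (illustrative names, not the actual ATOM callback):

```python
def selected_points_callback(points, compute):
    """Skip label computation when the selection message carries no points.

    `points` is the list of selected points from the message; `compute` is
    the labelling function to run on a non-empty selection.
    """
    if not points:  # an empty message used to crash the program
        return None
    return compute(points)
```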

Why is the sample_solid_points not working in the dataset_playback?

This bug is not yet solved. I have an idea of why it is happening: the subsampling parameters given to the labelling functions are not connected to anything, as can be seen below:

```python
labels, gui_image, _ = labelDepthMsg(msg, seed=None, bridge=None,
                                     pyrdown=0, scatter_seed=True,
                                     scatter_seed_radius=8,
                                     debug=False,
                                     subsample_solid_points=1, limit_sample_step=1,
                                     pattern_mask=pattern_mask)
```

A connection between the desired parameters and these hard-coded values is necessary.
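One way to make that connection (a sketch with assumed argument names; the real fix may read these from the command line or the dataset config instead):

```python
def build_label_kwargs(args):
    """Map user-supplied subsampling options onto labelDepthMsg arguments,
    falling back to the currently hard-coded defaults.

    `args` is assumed to be a dict of parsed command-line options.
    """
    return dict(
        pyrdown=args.get("pyrdown", 0),
        subsample_solid_points=args.get("subsample_solid_points", 1),
        limit_sample_step=args.get("limit_sample_step", 1),
    )
```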

@miguelriemoliveira
Member

Hi @manuelgitgomes ,

thanks for the detailed list of changes. I will pick up from here and try to fix this bug, and eventually try to improve the style here and there. If you want to join, give me a call...

@miguelriemoliveira
Member

Sorry @manuelgitgomes, I did not advance tonight. I spent 3 hours configuring vscode : - )

I will try to do a bit more tomorrow...

@miguelriemoliveira
Member

I think this is fully functional.
