
Sporadic move_base failure in CloudSim: "Unable to get starting pose of robot, unable to create global plan" #348

Closed
osrf-migration opened this issue Feb 29, 2020 · 7 comments
Labels: bug, major


Original report (archived issue) by Malcolm Stagg (Bitbucket: malcolmst7).


This is probably just something I’m doing wrong, so my apologies for bothering everyone else with this, but I wanted to ask if anyone else has had move_base sporadically fail with the error:

Unable to get starting pose of robot, unable to create global plan

I’m seeing this sporadically, but very frequently, in a lot of CloudSim logs; I’ve never seen it happen locally. I was assuming it might be related to one of the CloudSim issues before the urban competition, but now I’m not so sure. When it happens, it fills up the logs with the same message over and over, and the robot never recovers. I’m guessing it might be (directly or indirectly) caused by some sort of SLAM failure, but it seems a little strange that I’ve never had it happen in local testing.
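
For reference, the failing condition can be probed with something like the standalone node below; it attempts the same kind of transform lookup move_base needs to get the robot’s starting pose. This is only a sketch, and the frame names (X2N1/map, X2N1/base_link) are assumptions:

```cpp
#include <ros/ros.h>
#include <tf2_ros/buffer.h>
#include <tf2_ros/transform_listener.h>
#include <geometry_msgs/TransformStamped.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "pose_lookup_probe");

  tf2_ros::Buffer buffer;
  tf2_ros::TransformListener listener(buffer);  // fills the buffer from /tf

  ros::Rate rate(1.0);
  while (ros::ok())
  {
    try
    {
      // ros::Time(0) requests the latest available transform, the same kind
      // of lookup move_base performs to get the robot's starting pose.
      geometry_msgs::TransformStamped tf =
          buffer.lookupTransform("X2N1/map", "X2N1/base_link", ros::Time(0));
      ROS_INFO("Robot pose available at t=%.3f", tf.header.stamp.toSec());
    }
    catch (const tf2::TransformException& ex)
    {
      // If this fires continuously, move_base would also fail to get the
      // starting pose and could not create a global plan.
      ROS_WARN("TF lookup failed: %s", ex.what());
    }
    rate.sleep();
  }
  return 0;
}
```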

Anyway, I will keep digging (pun intended) into this, but I just wanted to quickly ask: has anyone else seen something like this?

Thanks, much appreciated!


Original comment by Malcolm Stagg (Bitbucket: malcolmst7).


  • Edited issue description


Original comment by Malcolm Stagg (Bitbucket: malcolmst7).


Update: it appears something weird is going on with timing when this happens. I also see one of my own lookupTransform calls failing with an error such as:

Lookup would require extrapolation into the past.  Requested time 3960.252000000 but the earliest data is at time 3964.752000000, when looking up transform from frame [X2N1/base_link] to frame [X2N1/artifact_origin]

On my end I have a StaticTransformBroadcaster which should be publishing the transform necessary for that lookup, along with the published SLAM, so I wouldn’t expect it to fail like that (the requested time of 3960.252000000 in this case just comes from ros::Time(0)).
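
For context, the static side of that transform chain looks roughly like this (a sketch; the parent frame and the identity offset are assumptions). Since StaticTransformBroadcaster latches the transform, it should always be available, which is why I suspect the dynamic SLAM links in the chain:

```cpp
#include <ros/ros.h>
#include <tf2_ros/static_transform_broadcaster.h>
#include <geometry_msgs/TransformStamped.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "artifact_origin_broadcaster");

  // A static broadcaster publishes once on a latched topic, so the transform
  // should remain available to any later ros::Time(0) lookup.
  tf2_ros::StaticTransformBroadcaster broadcaster;

  geometry_msgs::TransformStamped tf;
  tf.header.stamp = ros::Time::now();
  tf.header.frame_id = "X2N1/map";             // assumed parent frame
  tf.child_frame_id = "X2N1/artifact_origin";  // frame from the error above
  tf.transform.rotation.w = 1.0;               // identity rotation placeholder

  broadcaster.sendTransform(tf);
  ros::spin();
  return 0;
}
```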

Maybe this is related to the TCP buffer overruns in issue #261? I recall it was recommended that all sensor callbacks be asynchronous to avoid that potential issue, but I ran out of time to complete and test that change before the urban circuit. I might try making that change now to see if it resolves this.
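
The change I mean would look something like the following (a sketch; the topic name, queue size, and thread count are placeholders): replace a single-threaded ros::spin() with an AsyncSpinner so a slow sensor callback can’t stall the rest of the queue:

```cpp
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>

void cloudCallback(const sensor_msgs::PointCloud2ConstPtr& msg)
{
  // Potentially slow sensor processing runs here without blocking the
  // delivery of other callbacks.
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "async_sensor_node");
  ros::NodeHandle nh;

  ros::Subscriber sub = nh.subscribe("points", 5, cloudCallback);

  // Service the callback queue from a small thread pool instead of ros::spin().
  ros::AsyncSpinner spinner(4);
  spinner.start();
  ros::waitForShutdown();
  return 0;
}
```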


Original comment by Steven Gray (Bitbucket: stgray).


Is this possibly a simulation time limit thing? That time is definitely over an hour and likely over an hour after you started the run. On the qualification runs, that’s about when the simulation would stop for me. Were the actual runs allowed to go over an hour?


Original comment by Malcolm Stagg (Bitbucket: malcolmst7).


Steven Gray (stgray): The reason I quoted the late time is that when this happens, the text logs fill up completely with this error message and I lose all the earlier logs. I believe scoring stops after 60 minutes, but the simulation may keep going for longer; with the urban practice scenarios I think it was about 2 hours or so. I know the issue isn’t just the late time, though, because in these cases the robots have serious behavior issues: often they just spin in circles, or they are completely stopped for the entire simulation, unable even to reach the entrance.

If I can continue to reproduce this issue, I’m trying to collect some data now to see whether topics such as /clock are received correctly or are missing a lot of data; that might help determine whether it is related to issue #261.
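
The kind of /clock monitor I mean is roughly the following (a sketch, not my actual setup; the 0.5 s gap threshold is an arbitrary assumption):

```cpp
#include <ros/ros.h>
#include <rosgraph_msgs/Clock.h>

ros::Time last_stamp;

void clockCallback(const rosgraph_msgs::Clock& msg)
{
  if (!last_stamp.isZero())
  {
    const double gap = (msg.clock - last_stamp).toSec();
    // Assumed threshold; the sim clock normally ticks every few milliseconds,
    // so a large jump suggests dropped /clock messages.
    if (gap > 0.5)
      ROS_WARN("/clock jumped %.3f s (%.3f -> %.3f)",
               gap, last_stamp.toSec(), msg.clock.toSec());
  }
  last_stamp = msg.clock;
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "clock_monitor");
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe("/clock", 100, clockCallback);
  ros::spin();
  return 0;
}
```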


Original comment by Malcolm Stagg (Bitbucket: malcolmst7).


  • changed priority from "minor" to "major"

osrf-migration added the bug and major labels on Apr 9, 2020
nkoenig self-assigned this on Aug 17, 2020

nkoenig commented Aug 17, 2020

This issue was likely tied to issue #261. I'll close this next week unless someone speaks up.


nkoenig commented Aug 24, 2020

Closing.

nkoenig closed this as completed on Aug 24, 2020