range_width having no effect in FeatureMatcher? #14
Comments
Could you share the images you are using?
I'm using the default images that come in the Stitching Tutorial Jupyter notebook, no other changes.
OK, I had a first brief look, but I haven't worked with the RangeMatcher yet and don't quite understand why there are no changes. If you dig deeper, please let me know.
Thanks for looking. I've been trying to do more digging. Honestly, I don't think this is an issue here.

Rusty as my C++ is, I noticed the relevant matcher setup in the OpenCV source:

```cpp
Ptr<FeaturesMatcher> matcher;
if (matcher_type == "affine")
    matcher = makePtr<AffineBestOf2NearestMatcher>(false, try_cuda, match_conf);
else if (range_width == -1)
    matcher = makePtr<BestOf2NearestMatcher>(try_cuda, match_conf);
else
    matcher = makePtr<BestOf2NearestRangeMatcher>(range_width, try_cuda, match_conf);
```

After marking the function as virtual it works correctly, and the matrix of confidence values comes back with all zeros for non-index-adjacent images. I'm not sure how the Python bindings work under the hood, though, since this is a C++ inheritance-related issue. It seems odd, but I guess this is just to do with the Python → C++ layer. I'll post an issue over on the OpenCV GitHub.
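For intuition, the dispatch problem described above can be emulated in Python. This is a hedged sketch, not OpenCV's real API: the class and method names only mimic the matchers, and the explicit base-class call stands in for a non-virtual C++ call, which is resolved statically so a derived override is silently ignored.

```python
# Sketch of C++ static (non-virtual) vs. virtual dispatch, emulated in Python.
# Class and method names mimic OpenCV's matchers but this is NOT the real API.

class BestOf2NearestMatcher:
    def match(self, pair):
        return "full pairwise match"

    def apply(self, pair):
        # Emulates a non-virtual C++ call: resolved statically against the
        # base class, so a derived override is never reached.
        return BestOf2NearestMatcher.match(self, pair)

    def apply_virtual(self, pair):
        # Emulates a virtual call: dispatched on the runtime type.
        return self.match(pair)

class BestOf2NearestRangeMatcher(BestOf2NearestMatcher):
    def match(self, pair):
        return "range-limited match"

m = BestOf2NearestRangeMatcher()
print(m.apply((0, 1)))          # prints "full pairwise match" (override ignored)
print(m.apply_virtual((0, 1)))  # prints "range-limited match" (override used)
```

This mirrors the symptom in the issue: the range matcher object is constructed, but the base matcher's behaviour runs anyway.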
Nice, thanks for your effort. Please link the issue if you don't mind!
OK, I saw it xD
FWIW, if you want to emulate the behaviour of the range-based matcher for now, I've realised you can do so manually by doing the image pairing part yourself and calling the matcher on each pair:

```python
import cv2 as cv
import numpy as np

# `features` and `range_width` come from the earlier matching steps.
matcher = cv.detail_BestOf2NearestMatcher()
matches = []
for i in range(len(features)):
    for j in range(len(features)):
        if 0 < j - i <= range_width:
            match = matcher.apply(features[i], features[j])
            match.src_img_idx = i
            match.dst_img_idx = j
            matches.append(match)
        elif 0 < i - j <= range_width:
            # the (j, i) pair was already matched above; mirror it
            match_to_copy = matches[j * len(features) + i]
            match = cv.detail.MatchesInfo()
            # swap src and dst, invert the homography
            match.src_img_idx = match_to_copy.dst_img_idx
            match.dst_img_idx = match_to_copy.src_img_idx
            match.H = np.linalg.inv(match_to_copy.H)
            match.inliers_mask = match_to_copy.inliers_mask
            match.num_inliers = match_to_copy.num_inliers
            match.confidence = match_to_copy.confidence
            dmatches = []
            for dmatch_to_copy in match_to_copy.matches:
                dmatch = cv.DMatch()
                dmatch.distance = dmatch_to_copy.distance
                dmatch.imgIdx = dmatch_to_copy.imgIdx
                # swap queryIdx and trainIdx
                dmatch.queryIdx = dmatch_to_copy.trainIdx
                dmatch.trainIdx = dmatch_to_copy.queryIdx
                dmatches.append(dmatch)
            match.matches = dmatches
            matches.append(match)
        else:
            # non-adjacent pair: empty match with zero confidence
            matches.append(cv.detail.MatchesInfo())
```
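The mirroring step in the workaround relies on inverting the homography of the already-computed match. A quick numpy sanity check of that step (the matrix values here are toy numbers, purely illustrative):

```python
import numpy as np

# Toy 3x3 homography between images i and j (hypothetical values).
H = np.array([[1.0, 0.1, 5.0],
              [0.0, 1.0, 3.0],
              [0.0, 0.0, 1.0]])

# The mirrored match for (j, i) stores the inverse transform.
H_inv = np.linalg.inv(H)

# Composing a match with its mirror should give (numerically) the identity.
assert np.allclose(H @ H_inv, np.eye(3))
```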
Nice, thank you! Do you need the functionality in stitching ASAP, or could we wait another week or two if the issue is solved within OpenCV?
Btw, do you think opencv/opencv#20945 is similar? I haven't been able to check if it's working in C++, but if it is, it could be a wrapper issue as well.
No, not at all; the workaround above is working for me for now, thanks!
I'll take a look at this later and see what happens in the C++ version too. |
In short, yes! Your issue has been open so long I decided to just get the Python build working from the source code. Marking that output argument fixes the issue, and now this works:

```python
seam_finder = SeamFinder()
seam_finder.finder = cv.detail_GraphCutSeamFinder('COST_COLOR_GRAD')
```

I've made a pull request for both fixes: opencv/opencv#22329
Thank you! This is something that has been bugging me for some time. Unfortunately I have zero C++ experience and am somewhat dependent on the Python wrappers and docs.
No problem, thanks for all your work on the stitching library! 😁
(P.S. if you do run into any more issues with the C++ code or Python bindings, feel free to message me; I can't promise anything, but I can take a look!)
Thanks! BTW, for which use case are you using the library? I'm always interested in what brings people here.
I taught some of the maths behind photo stitching in my old university lecturing job, so I was familiar with the ideas. Now I'm just trying to put together some proofs of concept for an idea at work, but it's really just for lining up photos. Your library makes it much easier than dealing with the raw OpenCV bindings! By the way, I'll make a pull request on this repo (or maybe more than one) at some point soon with some minor ideas from what I've found playing around with the settings.
Thanks! I'm happy I published it since it seems to help a bunch of other people.
🚀 |
With the release of OpenCV 4.7, your PR should now be in production. I'll have a look in the coming days and also add a test for this.
Great news, thanks :) |
Hi @chinery, I know the matches_graph as below:
Could you share how to stitch a panorama when you know which images are adjacent?
Hi, I'm trying to stitch a sequence of images that I know are adjacent, and I thought I understood that setting `range_width` to 1 when matching might improve results, but it doesn't seem to have any effect. Perhaps I have misunderstood how this parameter works, but it doesn't seem to be doing anything from what I can tell.

I tried taking a fresh copy of Stitching Tutorial.ipynb and changing the `range_width` parameter in the matching section, and I still get a full matrix of confidence values for every pair of images no matter what value I choose. I expected that setting it to 1 would force it to only consider adjacent pairs (e.g. pair 1-2 would have a confidence but pair 1-3 would not). That's what the C++ code seems to be doing, but I admit I haven't dug deeply enough into it.

Here are my results:

![results](https://camo.githubusercontent.com/1775ad5084ffc2eb03e127b1ebb6ed9aec13aaca13e8ddff8bab0b4dfcd00e30/68747470733a2f2f6973737565732d706963732e6f73732d636e2d7368616e676861692e616c6979756e63732e636f6d2f696d6167652d32303232303731393130333030303336382e706e67)

You can see in the final block that, under the hood, the type is being set to `cv2.detail.BestOf2NearestRangeMatcher` correctly.

Do you have any suggestions, or have I misunderstood how this is supposed to work? If I manually set the confidence of non-adjacent images to zero, then I get a better stitching result in later stages, but I was hoping for a significant performance increase from comparing fewer images too. Thanks!
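To make the expected behaviour concrete, here is a small sketch (plain Python, not the library's API; the helper name `range_mask` is hypothetical) of which entries of the N×N pairwise-confidence matrix a range matcher should keep for a given `range_width`:

```python
def range_mask(n, range_width):
    """1 where the pair (i, j) would be matched, 0 where it should be skipped.

    A pair is matched only when the images are within `range_width` of each
    other in sequence order (and i != j).
    """
    return [[1 if 0 < abs(i - j) <= range_width else 0 for j in range(n)]
            for i in range(n)]

mask = range_mask(4, 1)
# With range_width=1 only index-adjacent pairs survive: pair 1-2 is kept,
# pair 1-3 is not (0-based: mask[0][1] == 1, mask[0][2] == 0).
```

Entries where the mask is 0 are exactly the pairs whose confidence should come back as zero once the matcher behaves as described.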