
Bug: moondream2 inference not correct (severe quality degradation compared to reference) #8037

Closed
cmp-nct opened this issue Jun 20, 2024 · 8 comments
Labels
bug-unconfirmed medium severity Used to report medium severity bugs in llama.cpp (e.g. Malfunctioning Features but still useable) stale

Comments

@cmp-nct
Contributor

cmp-nct commented Jun 20, 2024

What happened?

Moondream2 is a superb vision model, but through llama.cpp it performs at a quality below vanilla LLaVA-1.
@vikhyat maybe you'd like to take a look?

I compared images using Python and using llama.cpp, both in fp16 format.
moondream2 does recognize images roughly, and the language part seems to work, but the quality is totally off through llama.cpp.
When asked about spatial information (like the lower left corner), it tends to return anything from the left side, or even a random object.
In Python, the response is precise and surprisingly accurate.

I looked a bit deeper (https://github.com/vikhyat/moondream/blob/main/moondream/vision_encoder.py): the reference vision encoder appears to support multiple resolutions, while llama.cpp runs the model in llava-1.5 mode.

However, for my test image llama.cpp creates 729 input embeddings, and Python did the same.
So it's not just the input embedding count; something deeper is going wrong. My guess is that the sampling/patches are mixed up somehow.
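For reference, a count of 729 embeddings is what you'd expect from a SigLIP-style encoder operating on a single 378x378 crop with 14-pixel patches. A minimal sketch of that arithmetic, assuming those dimensions (they are not stated in this thread):

```python
def patch_embedding_count(image_size: int = 378, patch_size: int = 14) -> int:
    """Number of patch embeddings produced for a square input image.

    The defaults (378x378 crop, 14x14 patches) are an assumption about
    moondream2's vision encoder, not figures taken from this issue.
    """
    patches_per_side = image_size // patch_size  # 378 // 14 = 27
    return patches_per_side * patches_per_side   # 27 * 27 = 729

print(patch_embedding_count())  # 729
```

Since both backends produce the same count, the patch grid itself matches; what would differ is the content or ordering of those embeddings.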

For reference: moondream2 support was merged here: #6899

Name and Version

abd894a

What operating system are you seeing the problem on?

No response

Relevant log output

Below is an example image:
[example image]

Prompt: `<image>\n\nQuestion: What is in the lower left corner?\n\nAnswer:`
Answer in Python: "In the lower left corner, there is a green sticky note pad."
Answer from llava-cli: "A cup of coffee is in the lower left corner."
(I used the officially supplied gguf files.)

@cmp-nct cmp-nct added bug-unconfirmed medium severity Used to report medium severity bugs in llama.cpp (e.g. Malfunctioning Features but still useable) labels Jun 20, 2024
@cmp-nct cmp-nct changed the title Bug: moondream2 inference not correct Bug: moondream2 inference not correct (severe quality degradation compared to reference) Jun 20, 2024
@github-actions github-actions bot added the stale label Jul 21, 2024
@ElhamAhmedian

Has this been resolved?

@cmp-nct
Contributor Author

cmp-nct commented Jul 23, 2024

I think we should temporarily remove "moondream" from the supported list, if someone else can confirm my findings.

@github-actions github-actions bot removed the stale label Jul 24, 2024
@EliEron

EliEron commented Jul 24, 2024

I can back up your findings. Using your example image and prompt, I'm seeing the same behavior: the Transformers model gives the same answer as in your post, whereas the GGUF gives riveting answers like "Desk", "A brown table.", "A gray surface", and so on.

Testing on other images, I also notice large discrepancies, though it doesn't seem entirely consistent. There are some cases where both perform about the same, but most of the time the GGUF is substantially worse.

Note that I used the same GGUF as you did, so it's possible the issue is in the GGUF itself.

@ElhamAhmedian

@vikhyat can you please share the Python code you used for this? Thanks

@vikhyat
Contributor

vikhyat commented Jul 26, 2024

@vikhyat can you please share the Python code you used for this? Thanks

Python code for inference? It's here: https://github.com/vikhyat/moondream

@ElhamAhmedian

I tested moondream2; it does not work with the old llama.cpp version that supported VLMs.

@github-actions github-actions bot added the stale label Aug 28, 2024

This issue was closed because it has been inactive for 14 days since being marked as stale.

@HanClinto HanClinto reopened this Sep 11, 2024
@github-actions github-actions bot removed the stale label Sep 13, 2024
@github-actions github-actions bot added the stale label Oct 13, 2024

This issue was closed because it has been inactive for 14 days since being marked as stale.
