1 image inference #3

Open
PogChamper opened this issue Nov 25, 2024 · 1 comment

Comments

@PogChamper

Hello! Thanks for the interesting paper and repo.
Could you please explain how to run inference with your model on a single image in the standard CLIP way? I mean this:

import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

image = preprocess(Image.open("CLIP.png")).unsqueeze(0).to(device)
text = clip.tokenize(["a diagram", "a dog", "a cat"]).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    
    logits_per_image, logits_per_text = model(image, text)
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()

print("Label probs:", probs)  # prints: [[0.9927937  0.00421068 0.00299572]]
@xywei00 (Collaborator) commented Feb 25, 2025

Yes. Since our implementation is based on open_clip, you can run single-image inference the same way as outlined there, just replacing open_clip with fast_clip:

  1. First start the Python interpreter, setting PYTHONPATH so that fast_clip can be imported correctly (for an alternative using sys.path, see the sketch after this list):
    PYTHONPATH='./src' python
  2. Then run the inference:
    import torch
    from PIL import Image
    import fast_clip
    
    model, _, preprocess = fast_clip.create_model_and_transforms('ViT-B-32', pretrained='laion2b_s34b_b79k')
    model.eval()  # model in train mode by default, impacts some models with BatchNorm or stochastic depth active
    tokenizer = fast_clip.get_tokenizer('ViT-B-32')
    
    image = preprocess(Image.open("CLIP.png")).unsqueeze(0)
    text = tokenizer(["a diagram", "a dog", "a cat"])
    
    with torch.no_grad(), torch.cuda.amp.autocast():
        image_features = model.encode_image(image)
        text_features = model.encode_text(text)
        image_features /= image_features.norm(dim=-1, keepdim=True)
        text_features /= text_features.norm(dim=-1, keepdim=True)
    
        text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
    
    print("Label probs:", text_probs)
    It prints:
    Label probs: tensor([[9.9950e-01, 4.1207e-04, 8.5316e-05]])
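
As an alternative to setting PYTHONPATH in step 1, the source directory can also be added to the module search path from inside Python. This is a minimal sketch using only the standard library, assuming the fast_clip package lives under ./src as above:

import sys

# Make ./src importable without setting PYTHONPATH; assumes the
# interpreter was started from the repository root.
sys.path.insert(0, './src')

import fast_clip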

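To turn the probability tensor into a readable prediction, here is a minimal follow-up sketch; the labels list simply mirrors the prompts passed to the tokenizer above, and topk is a standard PyTorch call:

labels = ["a diagram", "a dog", "a cat"]

# Pick the prompt with the highest probability for this image.
top_prob, top_idx = text_probs[0].topk(1)
print(f"Predicted: {labels[top_idx.item()]} (p={top_prob.item():.4f})")

If a GPU is available, the same code should run after moving the model and the image/text tensors to it with the usual .to("cuda") calls; that part is standard PyTorch rather than anything specific to fast_clip.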