Embeddings

Embeddings Endpoint

At Prediction Guard, we offer an embeddings endpoint capable of generating embeddings for both text and images. This is particularly useful when you want to load embeddings into a vector database and run semantic similarity searches over them.
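
At its core, that workflow means embedding a query and ranking stored document embeddings by cosine similarity. Here is a minimal sketch of the pattern using plain numpy as a stand-in for a vector database; the documents and toy 2-d vectors below are made up for illustration (real embeddings would come from the endpoint calls shown later on this page).

import numpy as np

# Hypothetical documents and toy 2-d "embeddings" standing in for
# vectors returned by the embeddings endpoint.
documents = ["How to fine-tune an LLM", "Best skate parks in town"]
doc_embeddings = np.array([[0.1, 0.9], [0.8, 0.2]])

query_embedding = np.array([0.2, 0.8])  # toy embedding of a query about LLMs

# Rank documents by cosine similarity to the query.
norms = np.linalg.norm(doc_embeddings, axis=1) * np.linalg.norm(query_embedding)
scores = doc_embeddings @ query_embedding / norms
print(documents[int(np.argmax(scores))])  # -> "How to fine-tune an LLM"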

Text

The multilingual-e5-large-instruct model is a lightweight embeddings model for text. It supports 100 languages and has a context length of 512 tokens. Here is a simple example of how to call the embeddings endpoint using this model.

import os
import json

from predictionguard import PredictionGuard

# Set your Prediction Guard token as an environmental variable.
os.environ["PREDICTIONGUARD_API_KEY"] = "<api key>"

client = PredictionGuard()

response = client.embeddings.create(
    model="multilingual-e5-large-instruct",
    input="I love to learn and use LLMs."
)

print(json.dumps(
    response,
    sort_keys=True,
    indent=4,
    separators=(',', ': ')
))

This will yield a JSON object containing the embedding.
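
The embedding vector itself lives in the response's data field. Continuing from the example above, you can extract it like this:

# Pull the raw embedding vector out of the response.
embedding = response["data"][0]["embedding"]
print(len(embedding))  # dimensionality of the embedding vector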

Multimodal

The BridgeTower model is a cross-modal encoder that handles both images and text. Here is a simple illustration of how to call the embeddings endpoint with both image and text inputs. The endpoint accepts image URLs, local image file paths, data URIs, and base64-encoded image strings as input.

import os
import json

from predictionguard import PredictionGuard

# Set your Prediction Guard token as an environmental variable.
os.environ["PREDICTIONGUARD_API_KEY"] = "<api key>"

client = PredictionGuard()

response = client.embeddings.create(
    model="bridgetower-large-itm-mlm-itc",
    input=[
        {
            "text": "Cool skateboarding tricks you can try this summer",
            "image": "https://farm4.staticflickr.com/3300/3497460990_11dfb95dd1_z.jpg"
        }
    ]
)

print(json.dumps(
    response,
    sort_keys=True,
    indent=4,
    separators=(',', ': ')
))

This will yield a JSON object containing the embedding.
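
Because the endpoint accepts base64-encoded image strings, you can also embed a local image by encoding it yourself. A minimal sketch, reusing the client from above and assuming a hypothetical local file named skateboard.jpg:

import base64

# Read a local image and base64-encode it (the filename is hypothetical).
with open("skateboard.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.embeddings.create(
    model="bridgetower-large-itm-mlm-itc",
    input=[
        {
            "image": image_b64
        }
    ]
)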

Embeddings for Images Only

import os
import json

from predictionguard import PredictionGuard

# Set your Prediction Guard token as an environmental variable.
os.environ["PREDICTIONGUARD_API_KEY"] = "<api key>"

client = PredictionGuard()

response = client.embeddings.create(
    model="bridgetower-large-itm-mlm-itc",
    input=[
        {
            "image": "https://farm4.staticflickr.com/3300/3497460990_11dfb95dd1_z.jpg",
        }
    ]
)

print(json.dumps(
    response,
    sort_keys=True,
    indent=4,
    separators=(',', ': ')
))

Once we have computed embeddings, we can use them to measure how similar two inputs are. In the example below, we first compute an embedding for each of two images using the Prediction Guard API. Then, we convert the embeddings into tensors and pass them to a function that calculates the cosine similarity between them.

import os
import json

from predictionguard import PredictionGuard
import torch

# Set your Prediction Guard token as an environmental variable.
os.environ["PREDICTIONGUARD_API_KEY"] = "<api key>"

client = PredictionGuard()

response1 = client.embeddings.create(
    model="bridgetower-large-itm-mlm-itc",
    input=[
        {
            "image": "https://farm4.staticflickr.com/3300/3497460990_11dfb95dd1_z.jpg",
        }
    ]
)

response2 = client.embeddings.create(
    model="bridgetower-large-itm-mlm-itc",
    input=[
        {
            "image": "https://ichef.bbci.co.uk/news/976/cpsprodpb/10A6B/production/_133130286_gettyimages-1446849679.jpg",
        }
    ]
)

# Extract the raw embedding vectors from each response.
embedding1 = response1['data'][0]['embedding']
embedding2 = response2['data'][0]['embedding']

# Convert the embeddings into tensors for the similarity calculation.
tensor1 = torch.tensor(embedding1)
tensor2 = torch.tensor(embedding2)

def compute_scores(emb_one, emb_two):
    """Computes cosine similarity between two vectors."""
    scores = torch.nn.functional.cosine_similarity(emb_one.unsqueeze(0), emb_two.unsqueeze(0))
    return scores.numpy().tolist()

similarity_score = compute_scores(tensor1, tensor2)
print("Cosine Similarity Score:", similarity_score)