I have been trying to recreate a blog post by @FL33TW00D to better my understanding of word embeddings and positional encodings. The post mentions that the word embedding vector used for the token " chased" in the manim animations "are a subset of the real values from Llama 3.2 1B." I have been trying to extract this subset from the embedding matrix, but have been unable to do so. Could someone help verify whether the tensor [-0.003, 0.053, 0.014, 0.035, -0.000, 0.017, 0.003, 0.000] is a subset of the embedding of the token " chased"?
I have provided a short code snippet to show how I tried to locate the subset.
import numpy as np
from transformers import AutoTokenizer, AutoModel

model_id = "meta-llama/Llama-3.2-1B"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

word = " chased"
tokenized = tok(word, return_tensors="pt")
# Index 1 skips the BOS token that the Llama tokenizer prepends
token_id = tokenized.input_ids[0, 1].item()
print(f"Token ID for '{word}': {token_id}")

embedding_layer = model.get_input_embeddings()
embedding_value = embedding_layer.weight[token_id].detach().numpy()
print(f"Embedding for '{word}': {embedding_value}")

# Target subset to match: the values shown in the animation
target = np.array([-0.003, 0.053, 0.014, 0.035, -0.000, 0.017, 0.003, 0.000])

# Tolerance for the comparison
tolerance = 0.001

# Round the embedding to 3 decimal places to match the animation's display precision
rounded_embeddings = np.round(embedding_value, decimals=3)

# Sliding-window search for a contiguous run matching the target
matches = []
for i in range(len(rounded_embeddings) - len(target) + 1):
    subset = rounded_embeddings[i : i + len(target)]
    if np.allclose(subset, target, atol=tolerance):
        matches.append((i, subset))

# Output results
if matches:
    for idx, match in matches:
        print(f"Match found at index {idx}: {match}")
else:
    print("No matches found.")