Struggling to recreate embedding values in the RoPE blog post #2562

Open
Jiminator opened this issue Dec 26, 2024 · 2 comments
Jiminator commented Dec 26, 2024

I have been trying to recreate the blog post by @FL33TW00D to deepen my understanding of word embeddings and positional encodings. The post mentions that the word embedding values used for the token "chased" in the manim animations "are a subset of the real values from Llama 3.2 1B." I have been trying to extract this subset from the embedding matrix, but have been unable to do so. Could someone help verify whether the tensor [-0.003, 0.053, 0.014, 0.035, -0.000, 0.017, 0.003, 0.000] is a subset of the embedding of the token "chased"?

I have provided a short code snippet to show how I tried to locate the subset.

import numpy as np
from transformers import AutoTokenizer, AutoModel

model_id = "meta-llama/Llama-3.2-1B"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

word = " chased"
tokenized = tok(word, return_tensors="pt")
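# Index 1 skips the BOS token that the Llama tokenizer prepends.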
token_id = tokenized.input_ids[0, 1].item()
print(f"Token ID for '{word}': {token_id}")
embedding_layer = model.get_input_embeddings()
embedding_value = embedding_layer.weight[token_id].detach().numpy()
print(f"Embedding for '{word}': {embedding_value}")

# Target subset to match (the first two values from the blog animation)
target = np.array([-0.003, 0.053])

# Define tolerance level
tolerance = 0.001

# Round the embeddings to 3 decimal places
rounded_embeddings = np.round(embedding_value, decimals=3)

# Sliding window search
matches = []
for i in range(len(rounded_embeddings) - len(target) + 1):
    subset = rounded_embeddings[i : i + len(target)]
    if np.allclose(subset, target, atol=tolerance):
        matches.append((i, subset))

# Output results
if matches:
    for idx, match in matches:
        print(f"Match found at index {idx}: {match}")
else:
    print("No matches found.")
FL33TW00D (Contributor) commented

@Jiminator

Thanks for highlighting this! It's just an off-by-one error! I'll update the animations after the Christmas period.
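
For anyone comparing numbers before the animations are updated, below is a minimal sketch of one way to check for such an off-by-one. It assumes the error is in the row index into the embedding matrix, i.e. that the animation may have pulled values from a token id adjacent to the one for " chased"; that interpretation is only a guess, not a confirmed description of the fix, and the model id and tolerance are simply reused from the snippet above.

import numpy as np
from transformers import AutoTokenizer, AutoModel

model_id = "meta-llama/Llama-3.2-1B"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

# Index 1 skips the BOS token that the Llama tokenizer prepends.
token_id = tok(" chased", return_tensors="pt").input_ids[0, 1].item()
weights = model.get_input_embeddings().weight.detach().numpy()

# All eight values shown in the blog animation.
target = np.array([-0.003, 0.053, 0.014, 0.035, -0.000, 0.017, 0.003, 0.000])

# Check the row for " chased" and its immediate neighbours (the assumed off-by-one).
for candidate_id in (token_id - 1, token_id, token_id + 1):
    row = np.round(weights[candidate_id], decimals=3)
    for i in range(len(row) - len(target) + 1):
        if np.allclose(row[i : i + len(target)], target, atol=0.001):
            print(f"token id {candidate_id}: match starting at index {i}")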

Jiminator (Author) commented Dec 26, 2024

@FL33TW00D
Thank you for the quick reply and for the amazing blog post! Happy Holidays!
