
How do you track validation loss while finetuning SmolVLM using the provided finetuning notebook? #30

Open
Aritra02091998 opened this issue Dec 4, 2024 · 0 comments

I passed a validation dataset to the Trainer and set the following in TrainingArguments:

eval_strategy="steps",        # Enables evaluation
eval_steps=2,                 # Frequency of evaluation
load_best_model_at_end=True,  # Load the best model at the end
metric_for_best_model="eval_loss",  # Monitor this metric

I didn't make any changes to the finetuning code other than these, but eval_loss is not being calculated, and I get this error:

KeyError: "The metric_for_best_model training argument is set to 'eval_loss', which is not found in the evaluation metrics. The available evaluation metrics are: []. Please ensure that the compute_metrics function returns a dictionary that includes 'eval_loss' or consider changing the metric_for_best_model via the TrainingArguments."

How can I fix this? Any leads would be helpful.
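Not an official answer, but a likely cause worth checking: Trainer only adds `eval_loss` to the metrics dictionary when the model's forward pass returns a loss during evaluation, which in turn requires the collated eval batches to contain a `labels` key. If the notebook's collator (or the eval dataset) drops labels, the metrics dict comes back empty, producing exactly this KeyError. A minimal sketch of the pre-flight check, where the batch dicts are hypothetical stand-ins for what your collator emits:

```python
# Hedged sketch: before launching training, run your data collator on a few
# eval samples and confirm the batch carries labels. Without a "labels" entry
# the model returns no loss, so Trainer's evaluation metrics stay empty.

def batch_has_labels(batch):
    """Return True if the collated batch contains a non-empty 'labels' entry."""
    return "labels" in batch and batch["labels"] is not None

# Hypothetical collated batches, shaped like what a collator might produce:
train_batch = {"input_ids": [[1, 2, 3]], "labels": [[1, 2, 3]]}
eval_batch = {"input_ids": [[1, 2, 3]]}  # labels missing -> no eval_loss

print(batch_has_labels(train_batch))  # True
print(batch_has_labels(eval_batch))   # False -> fix collator / dataset
```

If the eval batch does lack labels, make the collator apply the same label construction to the eval split as to the train split; once the model returns a loss on eval batches, `metric_for_best_model="eval_loss"` should resolve normally.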
