
evaluation Parameter Parsing problem #1676

Open

1826133674 opened this issue Jul 24, 2024 · 0 comments
"In the latest version of the code, you changed the type of the parameter 'args.tasks' to a string. There is an issue with the validation and handling of this parameter in the intel_extension_for_transformers/transformers/llm/evaluation/lm_eval/accuracy.py file.
```python
elif args.tasks == "list":
    eval_logger.info(
        "Available Tasks:\n - {}".format("\n - ".join(task_manager.all_tasks))
    )
    sys.exit()
```

I think it should be modified as follows:

```python
elif isinstance(args.tasks, list):
    eval_logger.info(
        "Available Tasks:\n - {}".format("\n - ".join(task_manager.all_tasks))
    )
    sys.exit()
```

I believe you should also emit a corresponding warning here, telling users who passed `args.tasks` as a list what type of value they should supply for the evaluation to run successfully.
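A minimal sketch of what that branch could look like (assuming `eval_logger` exposes the standard `warning` method; the exact message wording and the example task names are illustrative, not taken from the repository):

```python
elif isinstance(args.tasks, list):
    # Tell the caller that a list is not accepted anymore and what to pass instead.
    eval_logger.warning(
        "'args.tasks' should be a comma-separated string of task names "
        "(e.g. \"lambada_openai,hellaswag\"), not a list. "
        "Pass the string \"list\" to print all available tasks."
    )
    eval_logger.info(
        "Available Tasks:\n - {}".format("\n - ".join(task_manager.all_tasks))
    )
    sys.exit()
```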
