Describe the issue
When launching LLM generations on APIBench, if the generation is interrupted midway, the partial output file is currently deleted and generation restarts from the beginning. Instead, we should check where the generation stopped and resume from the last generated index. This is especially critical for large models, where generation can take more than an hour.
1. ID datapoint
2. Provider: This affects all 3: TorchHub/HuggingFace/PyTorch Hub
File that needs to be edited: https://github.com/ShishirPatil/gorilla/blob/main/eval/get_llm_responses.py
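A minimal sketch of the resume logic, assuming the script appends one JSON object per line (JSONL) to the output file; the function names (`load_resume_index`, `generate_with_resume`) and the record fields are hypothetical and would need to match the actual format used in `get_llm_responses.py`:

```python
import json
import os


def load_resume_index(output_path):
    """Count completed generations in a partial output file.

    Assumes one JSON record per line; returns 0 if no file exists,
    so a fresh run starts from the first datapoint.
    """
    if not os.path.exists(output_path):
        return 0
    with open(output_path) as f:
        return sum(1 for line in f if line.strip())


def generate_with_resume(questions, output_path, generate_fn):
    """Generate answers for the datapoints not yet in the output file.

    Opens the file in append mode instead of truncating it, so an
    interrupted run keeps its partial results and the next invocation
    continues from the last generated index.
    """
    start = load_resume_index(output_path)
    with open(output_path, "a") as f:
        for idx in range(start, len(questions)):
            answer = generate_fn(questions[idx])
            f.write(json.dumps({"question_id": idx, "text": answer}) + "\n")
            f.flush()  # persist each record so an interrupt loses at most one
```

The key change from the current behavior is opening the file with `"a"` (append) rather than `"w"` (truncate), and skipping the first `start` datapoints on restart.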