I’m in the process of coding logging/recording for OpenAI requests so that I can store requests, responses, and associated metadata, because I want to track the results of my experiments and don’t want to pay twice for the same request (or wait again for requests I’ve already made). So I was wondering: is there an existing library or framework that does this, so that I’m not reinventing the wheel?
I asked Copilot X, which said I could use the gym monitor, but I think that might be a hallucination. Gym is now Gymnasium, and it was/is all about recording reinforcement learning:
github → Farama-Foundation/Gymnasium
GPT-4 says it doesn’t know of anything (although I haven’t asked with the Bing plugin).
The code I’m developing (in case my objective isn’t clear) is looking like this:
import json
import os

os.makedirs(f"{path}/{record_id}", exist_ok=True)

with open(f"{path}/{record_id}/request.json", "w") as f:
    f.write(json.dumps(request, indent=4, sort_keys=True))
with open(f"{path}/{record_id}/result.json", "w") as f:
    f.write(json.dumps(result, indent=4, sort_keys=True))
# the completion text on its own, for quick inspection
with open(f"{path}/{record_id}/text.txt", "w") as f:
    f.write(result["choices"][0]["text"])
Naturally I can code this myself, but I’m imagining a framework that dumps this sort of stuff out, and also allows one to refresh old requests, gives options on which metadata to output, etc.
Even if there is no existing lib, does building something like this make sense? Or could we use something like VCR.py, or some other Python logging library for APIs?
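In case it helps clarify what I mean by "refresh old requests": a minimal sketch of the idea, where each request is keyed by a hash of its canonical JSON, and a recorded result is returned instead of re-calling the API (the names `cached_call`, `call_api`, and `refresh` are just placeholders I made up, not from any library):

import hashlib
import json
from pathlib import Path

def request_key(request: dict) -> str:
    """Stable key: hash the canonical (sorted) JSON form of the request."""
    canonical = json.dumps(request, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

def cached_call(request: dict, call_api, cache_dir="cache", refresh=False):
    """Return a recorded result if one exists; otherwise call the API
    (call_api is whatever actually hits OpenAI) and record it."""
    record = Path(cache_dir) / request_key(request)
    record.mkdir(parents=True, exist_ok=True)
    result_file = record / "result.json"
    if result_file.exists() and not refresh:
        # replay: no cost, no waiting
        return json.loads(result_file.read_text())
    result = call_api(request)
    (record / "request.json").write_text(json.dumps(request, indent=4, sort_keys=True))
    result_file.write_text(json.dumps(result, indent=4, sort_keys=True))
    return result

Passing refresh=True would force a re-request and overwrite the stored result, which is roughly the "refresh old requests" feature I’d want from a library.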