OpenAI Request Throttler - automatically handles token/request limits

I built a barebones NPM package that automatically manages request and token throttling for OpenAI requests. It is currently hardcoded to GPT-3, but it will soon give the user full control over which model to use when making requests to OpenAI.
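
To give a sense of the idea, here is a minimal sketch of the kind of sliding-window throttling involved. The class and method names (`OpenAIThrottler`, `schedule`) and the limits used in the usage comment are illustrative assumptions, not the package's actual API:

```typescript
type Task<T> = () => Promise<T>;

interface UsageRecord {
  timestamp: number; // ms since epoch when the request was sent
  tokens: number;    // estimated tokens consumed by that request
}

// Hypothetical throttler: enforces both a requests-per-minute and a
// tokens-per-minute budget over a rolling 60-second window.
class OpenAIThrottler {
  private history: UsageRecord[] = [];

  constructor(
    private readonly requestsPerMinute: number,
    private readonly tokensPerMinute: number,
  ) {}

  // Drop records that have fallen out of the 60-second window.
  private prune(now: number): void {
    this.history = this.history.filter((r) => now - r.timestamp < 60_000);
  }

  // Resolve once both the request and token budgets have room.
  // (A single request larger than the whole token budget is not handled here.)
  private async waitForCapacity(tokens: number): Promise<void> {
    for (;;) {
      const now = Date.now();
      this.prune(now);
      const usedTokens = this.history.reduce((sum, r) => sum + r.tokens, 0);
      const okRequests = this.history.length < this.requestsPerMinute;
      const okTokens = usedTokens + tokens <= this.tokensPerMinute;
      if (okRequests && okTokens) return;
      // Sleep until the oldest record leaves the window, then re-check.
      const waitMs = this.history.length
        ? 60_000 - (now - this.history[0].timestamp) + 10
        : 1_000;
      await new Promise((resolve) => setTimeout(resolve, waitMs));
    }
  }

  // Run a task once capacity allows, recording its estimated token cost.
  async schedule<T>(estimatedTokens: number, task: Task<T>): Promise<T> {
    await this.waitForCapacity(estimatedTokens);
    this.history.push({ timestamp: Date.now(), tokens: estimatedTokens });
    return task();
  }
}

// Example usage, assuming hypothetical limits and a callOpenAI() helper
// that wraps whatever OpenAI client you use:
//
//   const throttler = new OpenAIThrottler(60, 40_000);
//   const result = await throttler.schedule(500, () => callOpenAI(prompt));
```

The key design choice in this sketch is a sliding window rather than a fixed per-minute reset, so bursts near a minute boundary don't blow past the limits.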