API "gpt-3.5-turbo" Sucks (Slow)


  • Coda
  • The web app that users interact with sends a webhook request to a Coda document.
  • The hook contains a JSON payload of all the metrics I want to measure.
  • The hook is unpacked into a table in real time.
  • The table is then analyzed by our team to understand if the prompt->response was poor, good, or perfect.
  • This data allows us to continually improve the corpus, rebuild embeddings quickly, and deploy changes almost in real time.
  • Behind each record is an entire app that makes it easy to see related items in the corpus and to fabricate or subclass responses and other content items for deployment into the AI solution. For that, we move the analytic instance into another data table that streamlines corpus development tasks. (See second screenshot.)
  • I’ve built out similar systems for 11 companies and used Coda in all of them. These companies are extremely productive in AI development because everything is measured and versioned, including the prompts themselves.
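To make the pipeline concrete, here's a minimal sketch of the webhook step. The field names, rating labels, and webhook URL are assumptions for illustration, not the actual schema; a real Coda webhook automation also expects an Authorization header with an API token.

```python
import json
import urllib.request

def build_metrics_payload(prompt, response, latency_ms, model="gpt-3.5-turbo"):
    """Bundle one prompt/response interaction into the JSON payload
    the web app would deliver to the Coda doc. Field names are hypothetical."""
    return {
        "model": model,
        "prompt": prompt,
        "response": response,
        "latency_ms": latency_ms,
        # Filled in later by the review team: poor | good | perfect
        "rating": None,
    }

def send_to_coda(payload, webhook_url, api_token):
    """POST the payload to a webhook-triggered Coda automation."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_token}",
        },
        method="POST",
    )
    return urllib.request.urlopen(req)

payload = build_metrics_payload(
    "How do I reset my password?",
    "Click 'Forgot password' on the sign-in page.",
    latency_ms=4200,
)
print(json.dumps(payload, indent=2))
```

From there, the Coda automation unpacks each field of the payload into a table row, and the review columns (the rating above) are filled in by the team.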

I often say: the best AI systems are generally a function of smart data and content management, where AI is used throughout to create better AI apps. :wink: