It was a prototype and a research topic, yes. Our approach is considerably different and more effective. I'm not sure if you're suggesting this practice is somehow frowned upon. If so, you're not correct, and you're also not seeing the direction the industry is moving.
From Sam Altman’s own mouth:
It isn't practical or scalable to continually pre-train or fine-tune models on an hourly, daily, or even weekly basis. And monthly knowledge latency (even daily knowledge latency, for that matter) hard-caps the usefulness of AI.
AGI isn't achievable without web connectivity at inference time. A computer can't be "generally intelligent" or "generally capable" if it can't tell me what's on my calendar, what the current weather is, or take care of my daily tasks. And most of my daily tasks involve the web.
Inference-time function calls and dynamic content access are the entire future of AGI and agent-driven LLMs.
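To make the idea concrete, here's a minimal sketch of what inference-time function calling looks like from the runtime's side: the model emits a structured tool call instead of plain text, the runtime executes it against live data, and the result is fed back into the model's context. The tool names, schemas, and stub implementations below are illustrative assumptions, not any particular vendor's API.

```python
import json

# Illustrative tool registry; a real agent would wire these to live services
# (weather APIs, calendar providers, etc.) instead of these stubs.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # stub standing in for a live weather lookup

def get_calendar(date: str) -> list:
    return [f"{date}: stand-up at 09:00"]  # stub for a calendar lookup

TOOLS = {"get_weather": get_weather, "get_calendar": get_calendar}

def dispatch(model_output: str) -> str:
    """Parse a model-emitted tool call (JSON) and return its result as text,
    which would then be appended to the conversation for the next turn."""
    call = json.loads(model_output)
    fn = TOOLS[call["name"]]
    result = fn(**call["arguments"])
    return json.dumps({"tool": call["name"], "result": result})

# A function-calling model might emit something like this mid-generation:
observation = dispatch('{"name": "get_weather", "arguments": {"city": "Austin"}}')
print(observation)
```

The point isn't the plumbing; it's that the model's knowledge latency drops to zero for anything a tool can reach, without retraining.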