Dear OpenAI Team,
I have recently been reading the LoRA (Low-Rank Adaptation) paper and the broader literature on Parameter-Efficient Fine-Tuning (PEFT), both of which focus on imparting new knowledge by updating only a small fraction of a model's weights. This approach significantly reduces computational demands and avoids extensive model retraining.
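To make the idea concrete, here is a minimal sketch of the LoRA weight update in numpy. The dimensions, rank, and initialization scale are illustrative assumptions; the key point is that the frozen weight W is adapted through two small factors B and A, so the trainable parameter count shrinks dramatically.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, r = 512, 512, 8  # hypothetical layer dims; low rank r << d

W = rng.standard_normal((d, k))          # frozen pretrained weight (not trained)
A = rng.standard_normal((r, k)) * 0.01   # trainable low-rank factor
B = np.zeros((d, r))                     # zero-initialized, so the model starts unchanged

# Effective adapted weight: W' = W + B A
W_adapted = W + B @ A

# In practice the forward pass never materializes W':
x = rng.standard_normal(k)
y = W @ x + B @ (A @ x)

# Parameter savings relative to full fine-tuning:
full_params = W.size           # 512 * 512 = 262,144
lora_params = A.size + B.size  # 8 * 512 + 512 * 8 = 8,192
```

With rank 8, the adapter trains roughly 3% of the parameters a full fine-tune would touch, which is what makes the approach so cheap.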
With this in mind, I would like to propose an idea that could contribute to both efficiency and environmental sustainability in AI model usage. While I recognize that the inner workings of OpenAI's embedding models are proprietary, I propose an API that lets users update existing embeddings directly, rather than repeatedly regenerating them from scratch. This could meaningfully change how embedding-based applications stay current.
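One way such an API might look from the client side: instead of re-embedding a corpus, the service could return a low-rank delta (factors B and A, in the spirit of LoRA) that clients apply to their cached vectors locally. The function name, the factor-based interface, and the update rule below are purely my assumptions for illustration, not an existing or planned OpenAI endpoint.

```python
import numpy as np

def apply_embedding_update(cached: np.ndarray, B: np.ndarray, A: np.ndarray) -> np.ndarray:
    """Refresh cached embeddings with a hypothetical low-rank delta.

    cached: (n, d) previously generated embedding vectors
    B: (d, r), A: (r, d) low-rank factors a future API might return

    Each vector e is mapped to e + B A e, i.e. W' = I + B A applied per row.
    """
    return cached + cached @ (B @ A).T

# Illustrative usage with random data:
rng = np.random.default_rng(1)
cached = rng.standard_normal((4, 16))   # 4 cached 16-dim embeddings
A = rng.standard_normal((2, 16))        # rank-2 update
B = np.zeros((16, 2))                   # zero delta -> embeddings unchanged
refreshed = apply_embedding_update(cached, B, A)
```

The appeal is that transmitting B and A costs O(d·r) per update, versus O(n·d) to re-embed n documents, so the savings grow with corpus size.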
Implementing this feature would give users the ability to refresh and expand the information encoded in their embeddings without the resource-intensive process of regenerating them. This aligns with the principles behind LoRA and PEFT, and it would be a significant step toward reducing the carbon footprint of AI model development and maintenance.
Such an API would empower developers, researchers, and organizations to fine-tune and adapt embeddings to their specific needs while minimizing computational and energy costs. It would not only enhance the flexibility of OpenAI's models but also demonstrate OpenAI's commitment to sustainability in the technology sector.
Thank you for your time and consideration. Keep up the incredible work in pushing the boundaries of AI while being mindful of its impact on our planet.