I watched an interesting video about LLMs and where things are heading.
I was wondering what your opinions are on this, and how these different roads align with ChatGPT and OpenAI's goals?
AI Won’t Be AGI, Until It Can At Least Do This (plus 6 key ways LLMs are being upgraded) - YouTube
Thanks for sharing. Here is a summary of the "6 key ways" if you don't want to watch all 30 minutes:
1. Compositionality: Enhancing LLMs to better piece together reasoning blocks into more complex solutions.
2. Verifiers: Using external systems or verifiers to help LLMs locate and validate the correct reasoning chains or programs.
3. Many-shot Learning: Providing numerous examples to help LLMs better learn how to solve specific tasks.
4. Test-time Fine-tuning: Actively adapting the model on the fly with synthetic examples to improve performance on novel tasks.
5. Joint Training: Embedding specialized knowledge from other neural networks into LLMs.
6. Tacit Data: Incorporating unspoken, experiential knowledge from experts into AI training.
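For anyone curious what the verifier idea (item 2) looks like in practice, here is a minimal toy sketch in Python. The "model" is a stub that sometimes returns wrong answers, and the "verifier" is just exact arithmetic (in real systems it might be a proof checker, unit tests, or a trained reward model), but the sample-then-check loop is the core pattern:

```python
import random

def stub_model(question, rng):
    """Stand-in for an LLM: returns a candidate answer, wrong ~30% of the time."""
    a, b = question
    return a + b if rng.random() > 0.3 else a + b + rng.choice([-1, 1])

def verifier(question, answer):
    """External check on a candidate answer; here, exact arithmetic."""
    a, b = question
    return answer == a + b

def answer_with_verifier(question, n_samples=10, seed=0):
    """Sample candidates from the model and return the first one that verifies."""
    rng = random.Random(seed)
    for _ in range(n_samples):
        candidate = stub_model(question, rng)
        if verifier(question, candidate):
            return candidate
    return None  # no candidate passed verification

print(answer_with_verifier((17, 25)))  # prints 42, a verified candidate
```

The point of the pattern is that verification is often much cheaper than generation, so even a weak generator plus a strict checker can be reliable.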
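Item 3 (many-shot learning) is mostly about prompt construction: packing many worked examples into the context so the model can infer the task. A tiny illustration, with a made-up string-reversal task standing in for a real problem:

```python
def build_many_shot_prompt(examples, query):
    """Format (input, output) pairs plus a final query as a many-shot prompt."""
    lines = [f"Input: {inp}\nOutput: {out}" for inp, out in examples]
    lines.append(f"Input: {query}\nOutput:")  # model completes this line
    return "\n\n".join(lines)

# Toy task: reverse the word. Real many-shot setups use hundreds of examples.
examples = [(w, w[::-1]) for w in ["cat", "dog", "bird", "fish"]]
prompt = build_many_shot_prompt(examples, "horse")
print(prompt)
```

With long-context models the same template just scales to far more examples, which is what the video's "many-shot" framing refers to.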