Hi, are the token embeddings learned together with the other weights in the Codex (or indeed the GPT-3) model? I couldn't find a description of how the embeddings arise in the Codex/GPT papers, but I may have missed it in the references. Any links to further information on this would be much appreciated. Thanks.
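To make the question concrete, here is a minimal PyTorch sketch of the setup I'm asking about (not OpenAI's actual code, and the sizes are purely illustrative): the token embedding table is just another parameter tensor, initialized randomly and updated by the same optimizer step as the rest of the network.

```python
import torch
import torch.nn as nn

vocab_size, d_model = 100, 16  # toy sizes, purely illustrative

class TinyLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)  # token embedding table
        self.proj = nn.Linear(d_model, vocab_size)      # stand-in for the rest of the network

    def forward(self, tokens):
        return self.proj(self.embed(tokens))            # logits over the vocabulary

model = TinyLM()
opt = torch.optim.SGD(model.parameters(), lr=0.1)       # embeddings are included in model.parameters()

tokens = torch.randint(0, vocab_size, (4, 8))           # fake batch of token ids
targets = torch.randint(0, vocab_size, (4, 8))

before = model.embed.weight.clone()
loss = nn.functional.cross_entropy(
    model(tokens).view(-1, vocab_size), targets.view(-1)
)
loss.backward()
opt.step()

# The embedding rows for the tokens in the batch have changed: they are
# trained by backpropagation together with every other weight.
print(torch.allclose(before, model.embed.weight))       # False
```

Is this joint, end-to-end training essentially what happens in Codex/GPT-3, or are the embeddings produced some other way?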