I’ve come across feature engineering for other types of models, but in the context of LLMs and NLP, what is the ‘feature’ or attribute being engineered in GPT when prompting or fine-tuning?