Prompt Engineering Help for Fuzzy Matching Reasoning

While I do agree with y'all's sentiments on this, even if we won't be prompt *engineering* long term and will just be naturally interfacing with the model, don't forget that vagueness can still lead to misinterpretation, no matter how skilled the model is.

Keep in mind that's not a dig at anyone or anything; it's that the level of clarity needed to perform an action has always been different for each person, and this is how everyday misunderstandings occur in natural language.

To keep this (kind of) relevant, remember this thread started because the user did not clarify the kind of matching they needed to perform. Which is okay! That's not easy to identify quickly.
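To illustrate why the kind of matching matters, here is a minimal sketch in Python's standard `difflib`, contrasting exact comparison with fuzzy (similarity-based) matching; the candidate words are made up for the example:

```python
import difflib

# Fuzzy matching tolerates typos, unlike exact (or wildcard) matching.
# get_close_matches ranks candidates by a similarity ratio (default cutoff 0.6).
candidates = ["wildcard", "fuzzy", "regex", "exact"]

print("wildcrad" == "wildcard")                           # exact match fails on the typo
print(difflib.get_close_matches("wildcrad", candidates))  # fuzzy match still finds it
```

The first comparison prints `False`; the fuzzy lookup returns `['wildcard']`, because the typo'd query is still far more similar to `"wildcard"` than to any other candidate.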

@zenz has it right here: intention is key. Expressing your intention clearly is the hardest skill to grasp with these models, but once you understand that you, as the user, need to express your intention as clearly as possible, your results improve significantly. Also, kudos for identifying the reflection technique. I would suggest refining that methodology and considering how you're "reflecting" back onto the model. That skill will come in handy, as Foxabilo says, when these models improve.

The caveat to all this is that an AI model will have to be tuned to individuals, whether OpenAI likes it or not, in order to act autonomously based on a user's intentions. "Say what you mean" is a human problem, not an AI one. Sure, the model could handle more subprocesses and "do" much more on its own, but if the goal here is to get a model to understand what the user wants and perform the action, there would need to be more personalized synergy between the user and the AI, so the AI can learn how that specific user expresses intention.

An example I had just yesterday:

I asked internet-enabled ChatGPT to tell me about Poe. Granted, I assumed it could realistically determine that I probably meant the chatbot builder (because my custom instructions dictate specifics in AI/development fields); instead it gave me a description of Edgar Allan Poe. GPT is likely going to keep making that mistake unless (a) I clarify the context of "Poe", or (b) GPT becomes attuned to how I express intentions and where my domains of inquiry likely lie.


the key word you want is “wildcard”

if y* k* w* I mean.
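For anyone who wants the concrete version of the joke: wildcard (glob-style) matching is in Python's standard `fnmatch` module, where `*` matches any run of characters and `?` matches exactly one. A minimal sketch, with made-up sample names:

```python
import fnmatch

# "*" matches any run of characters, "?" matches exactly one.
print(fnmatch.fnmatch("wildcard", "w*d"))       # True
print(fnmatch.fnmatch("wildcard", "w?ldcard"))  # True

# filter() keeps only the names that match the pattern.
names = ["poe", "poem", "poet", "prose"]
print(fnmatch.filter(names, "poe*"))  # ['poe', 'poem', 'poet']
```

Note this is pattern matching, not fuzzy matching: a typo in the text still misses, which is exactly the kind of distinction the original question glossed over.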

`rm -rf *` me, that's a good analogy