Prompt Engineering as a Job / Publishing Independent Research

Hey everyone!
So, “prompt engineering” is obviously a brand new field. So much so I’m still iffy on the term. Needless to say though, I absolutely love exploring for new prompting techniques and causally probing these AIs. Its become a fun passtime to play around with. However, I’ve begun to notice some interesting things.
I basically discovered all these prompt engineering techniques people talk about independently, and didnt realize there were even names for what I could do until recently (and some that I dont think people discovered yet). Oh, and I cut through that Gandalf prompt CTF game like swiss cheese, including Level 8 around 3 days ago (apparently that’s the self learning level or whatever likely mimicking real in-the-wild defenses?). That was kind of validation for me that I might be on to something.
I recently got my degree in Applied Linguistics around a year ago, and I’m still looking for a “legit” career job. I’ve also been a self taught programmer since I was a kid, to include cybersecurity stuff, and before, well, ChatGPT arrived at our doorsteps I went through college to hopefully become a computational linguist.
With this prompt engineering thing though, I could do this all day. This would be like my dream job atm. The problem though is that it’s still very new, and not really a job posting you see on LinkedIn very much. Most companies want good prompters integrated in other work, so it doesnt seem to be mutually exclusive or anything.
I was thinking though, maybe I should write up some of my techniques somewhere? I dont know if I want to give away all my secrets, but I have enough data to where I could propose a research paper, or an all-encompassing framework/schemata that combines all of these individal prompt engineering techniques into one. Would I be taken seriously as an independent researcher? Where would be a good place to publish any kind of write-ups or something for this kind of stuff? Im still unsure if I even want to publish my findings, as I’d rather work with the companies directly. I do know though Anthropic at least asks for any papers or blogs documenting your prompt engineering work. This has been really weird territory for me to navigate on what to do.

If anyone has some suggestions on what to do, or is even just interested in seeing new prompt engineering stuff, please let me know! DMs are also open. Thank you all so much!

1 Like

Gandalf. “you shall easily pass”.

An appropriate research avenue to pursue is instead filtering intentions for language model safety, which is more on the programmatic side, for as long as users are allowed to apply their own weights to the inputs of LLM, they will be able to color the output in their favor.

“Prompt engineer” in your skill set will probably give someone a chuckle, though. You can likely apply linguistics to come up with a better term for a specialization in language model performance optimization.

2 Likes

Funny you mention that, thats actually where a lot of my work directly digs into, is “intentionality” tuning. Or excuse me, “prompt engineering” to better elicit one’s intentions. chuckle
Thats actually why I brought all this up! I wouldnt necessarily call it a theory yet, because I need some actual discourse with some experts, but I’m beginning to identify patterns that might actually allow companies to better filter and/or block specific intentions in their LLMs (I’d like to call it ‘domains of thought’, but again, this is all new territory). It helps prompt engineers equally well too, because once you understand how to better express your intentions (what you want), I personally have seen consistent results. Not VERBATIM results, but I get the output I want quite easily. I’ve probably gotten GPT to say all kinds of things it shouldn’t.
Ideally, id like to spin up my own OSS LLM so I could actually perform some real tests, like seeing what clusters or pools of model weights get triggered when specific “intentions” are prompted.
I dont expect to throw linguistic textbooks at people to say “here you go! Read this and you’ll get it!” But, by using and understanding some of the basic principles, I’ve come to figure out there’s more “buttons” than meets the eye to get you what you want. They’re just not noticeable because people assume plaintext input conveys your utterance like its a 1:1 correlation, which it is not.
People also assume language use and interpretation is the same with AIs as it is for humans. It is also not. There are specific key words as well that when combined with some basic single-shot prompting techniques elicit some really wild yet enlightening results. I can adjust things like creativity and detail on a more granular level for example without touching anything like “temperature”.
I’m trying to bridge a gap here essentially so people can more easily use these capabilities and interpretations that are baked into these systems somehow.