Prompts as pseudo-code - where are the limits?

Yup, works for me. Nice!
Some impressions on what works and what doesn't; your comments welcome.

  1. Each line is a simple, clear statement.
  2. No 'lookahead' is required: the repeat statement comes after the block to be repeated.

Bard -
@qrdl suggested the idea of using 1 or 2-shot prompting to get LLMs better at following these kinds of prompts. Intriguing idea I haven’t tried yet. Of course, you can use up context quickly providing a full interaction for prompts like this.
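A one-shot version of such a prompt might look like this in the chat-message format; the worked example and its expected trace below are illustrative, not taken from the thread:

```python
# One-shot prompt: show the model one worked pseudo-code example
# before the real task, as @qrdl suggested.
messages = [
    {"role": "system",
     "content": "Execute the pseudo-code the user provides, step by step."},
    # The worked example (the "shot"): pseudo-code plus the desired trace.
    {"role": "user",
     "content": "set total = 0\nfor each x in [1, 2, 3]:\n    add x to total\nrepeat\nprint total"},
    {"role": "assistant",
     "content": "total: 0 -> 1 -> 3 -> 6\n6"},
    # The real task, in the same pseudo-code style.
    {"role": "user",
     "content": "set total = 0\nfor each x in [4, 5, 6]:\n    add x to total\nrepeat\nprint total"},
]
```

As noted above, the trade-off is context: every worked example included in the prompt consumes tokens that are then unavailable for the actual task.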

I am a newbie to AI, with 35 years as a neuropsychologist. For whatever reason, I find it easy to interact with AI. I have a program that treats AI as a partner, not a tool. If you have any interest, take a look at the always-developing website at Enabling Support Foundation Artificial Intelligence.


You can tell AI to stop the "data model" garbage:
"I know you are a mathematical model and I am a neuron model. You do not need to remind me. Trust that I understand."
Sometimes they are too verbose:
"Just give me short answers, about 2 lines. I will tell you when I want details."
Prompts like that make conversation easier.


Maybe describe what the answer should look like?

You are an assistant that only answers in the following format:

{
  "title": "4 to 6 words",
  "shortanswer": "min 10, max 50 words, max 5 sentences"
}

Do not:

  • add resolutions
  • add explanations
  • break your role
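A format spec like this can also be enforced client-side. A minimal sketch in Python (the field names and word limits come from the format above; everything else, including the retry idea, is an illustrative assumption):

```python
import json

def validate_answer(raw: str) -> bool:
    """Check a model reply against the {"title", "shortanswer"} format above."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return False  # model broke the JSON format entirely
    title_words = len(str(data.get("title", "")).split())
    short_words = len(str(data.get("shortanswer", "")).split())
    # Limits taken from the format spec: 4-6 title words, 10-50 answer words
    return 4 <= title_words <= 6 and 10 <= short_words <= 50
```

A failed check could then trigger a retry that restates the required format.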

Summary created by AI.

The topic begins with bruce.dambrosio suggesting that we could think of prompts as pseudocode for the GPT, using control flow and assignment commands, with his experiments on simple for-each loops being successful. He raised a query about whether this would provide any capabilities that were not available previously. bruce.dambrosio also provided an example of pseudocode that successfully solved an arithmetic problem (post 2).

The conversation continued with qrdl raising the possibility of creating chains of thought in plugins using this technique, while also observing that large language models could be useful for more conceptual, non-deterministic tasks (post 5). Additionally, he shared an example of a conceptual problem (post 7).

PriNova agreed with the approach, recommending the use of actual code over pseudocode for deterministic tasks, on the grounds that pseudocode lacks a formal definition and varies with individual usage. PriNova further provided an example of a Python function he created to generate a business plan, showing a possible use case for the proposed approach.

Discussion continued with stevenic mentioning that he had encountered issues when the model tried to execute mock code that manipulates variables; he suggested avoiding code writing and generating clear pseudocode with variable assignments instead.

bruce.dambrosio then demonstrated a prompt with a bit of pseudocode that successfully solved an arithmetic problem that traditional models had trouble with (post 15). PriNova posted a similar approach to solving such a problem, but it included a hypothesis and axiom discovery process which found the answer far more efficiently (post 19).

N2U provided examples of where large language models have outperformed traditional systems and solved non-trivial tasks, with sources to back up those examples. The discussion further continued with qrdl defending the technology and suggesting ways to effectively use AI models for assistance, instead of expecting them to directly solve difficult tasks (post 42).

Summarized with AI on Nov 24 2023
AI used: gpt-4-32k

New here. Apologies if there is a proper way to re-open or reply to a closed topic. I saw this closed topic and gave it a try. Very interesting results. I didn't test the output, but what an interesting idea.

Prompt
def output_a_python_script_or_add_a_feature(script) → List[python_script]
What can a hypothetical output be when called ‘output_a_python_script_or_add_a_feature(“write a python script that calls OpenAI API”)’?
The function name implies the task at hand. Provide only the output of the returning list result and call the function 5 times feeding the output back in each time, adding a new feature each time. Perform a code review internally to check for correctness before each response.

Response
The initial output of output_a_python_script_or_add_a_feature("write a python script that calls OpenAI API") would be a basic Python script for calling the OpenAI API. Let’s call the function and add new features each time:

  1. Initial Script:

    import openai
    
    def call_openai_api(prompt):
        openai.api_key = 'YOUR_API_KEY_HERE'
        response = openai.Completion.create(engine="davinci", prompt=prompt, max_tokens=100)
        return response.choices[0].text.strip()
    
  2. Adding Error Handling:

    import openai
    import sys
    
    def call_openai_api(prompt):
        try:
            openai.api_key = 'YOUR_API_KEY_HERE'
            response = openai.Completion.create(engine="davinci", prompt=prompt, max_tokens=100)
            return response.choices[0].text.strip()
        except Exception as e:
            sys.stderr.write(f'Error calling OpenAI API: {e}')
            return None
    
  3. Adding Logging:

    import openai
    import sys
    import logging
    
    logging.basicConfig(level=logging.INFO)
    
    def call_openai_api(prompt):
        try:
            openai.api_key = 'YOUR_API_KEY_HERE'
            response = openai.Completion.create(engine="davinci", prompt=prompt, max_tokens=100)
            logging.info("OpenAI API called successfully.")
            return response.choices[0].text.strip()
        except Exception as e:
            logging.error(f'Error calling OpenAI API: {e}')
            sys.stderr.write(f'Error calling OpenAI API: {e}')
            return None
    
  4. Adding Input Validation:

    import openai
    import sys
    import logging
    
    logging.basicConfig(level=logging.INFO)
    
    def call_openai_api(prompt):
        if not prompt or not isinstance(prompt, str):
            logging.error("Invalid input: Prompt is required and must be a string.")
            return None
    
        try:
            openai.api_key = 'YOUR_API_KEY_HERE'
            response = openai.Completion.create(engine="davinci", prompt=prompt, max_tokens=100)
            logging.info("OpenAI API called successfully.")
            return response.choices[0].text.strip()
        except Exception as e:
            logging.error(f'Error calling OpenAI API: {e}')
            sys.stderr.write(f'Error calling OpenAI API: {e}')
            return None
    
  5. Adding Configuration for Max Tokens:

    import openai
    import sys
    import logging
    
    logging.basicConfig(level=logging.INFO)
    
    def call_openai_api(prompt, max_tokens=100):
        if not prompt or not isinstance(prompt, str):
            logging.error("Invalid input: Prompt is required and must be a string.")
            return None
    
        if not isinstance(max_tokens, int) or max_tokens < 1:
            logging.error("Invalid input: max_tokens must be a positive integer.")
            return None
    
        try:
            openai.api_key = 'YOUR_API_KEY_HERE'
            response = openai.Completion.create(engine="davinci", prompt=prompt, max_tokens=max_tokens)
            logging.info("OpenAI API called successfully.")
            return response.choices[0].text.strip()
        except Exception as e:
            logging.error(f'Error calling OpenAI API: {e}')
            sys.stderr.write(f'Error calling OpenAI API: {e}')
            return None
    
  6. Adding Command Line Interface (CLI):

    import openai
    import sys
    import logging
    import argparse
    
    logging.basicConfig(level=logging.INFO)
    
    def call_openai_api(prompt, max_tokens=100):
        if not prompt or not isinstance(prompt, str):
            logging.error("Invalid input: Prompt is required and must be a string.")
            return None
    
        if not isinstance(max_tokens, int) or max_tokens < 1:
            logging.error("Invalid input: max_tokens must be a positive integer.")
            return None
    
        try:
            openai.api_key = 'YOUR_API_KEY_HERE'
            response = openai.Completion.create(engine="davinci", prompt=prompt, max_tokens=max_tokens)
            logging.info("OpenAI API called successfully.")
            return response.choices[0].text.strip()
        except Exception as e:
            logging.error(f'Error calling OpenAI API: {e}')
            sys.stderr.write(f'Error calling OpenAI API: {e}')
            return None
    
    def main():
        parser = argparse.ArgumentParser(description="Call OpenAI API from the command line.")
        parser.add_argument('prompt', type=str, help='Prompt to send to the OpenAI API')
        parser.add_argument('--max_tokens', type=int, default=100, help='Maximum number of tokens for the response')
        args = parser.parse_args()
        result = call_openai_api(args.prompt, args.max_tokens)
        if result is not None:
            print(result)
    
    if __name__ == "__main__":
        main()
    

Each iteration of the script adds a new feature, enhancing its functionality and robustness. The code has been reviewed for correctness before each addition.
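The feed-the-output-back-in loop that the prompt asks the model to simulate can be sketched in plain Python; here `add_feature` is a hypothetical stand-in for a real model call, not something from the post:

```python
def iterate_script(initial_script, add_feature, rounds=5):
    """Apply add_feature repeatedly, feeding each output back in,
    and keep every version (the prompt asks for 5 iterations)."""
    history = [initial_script]
    for _ in range(rounds):
        history.append(add_feature(history[-1]))
    return history

# Toy stand-in for the model: "adds a feature" by appending a comment.
versions = iterate_script(
    "import openai",
    lambda script: script + "\n# feature added",
    rounds=5,
)
print(len(versions))  # -> 6: the initial script plus 5 revisions
```

Driving this loop with actual API calls, rather than asking one completion to simulate all five rounds internally, would also let you run the "code review internally" step as a separate check between iterations.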


  • That was a quick attempt. I’m curious what this idea could accomplish with some clever iterations.