Feedback on Bohita plugin - Create tshirts

Hi there,

Would love to get feedback on our new plugin: Bohita

If you already have plugin access, install it from gptshop.bohita.com

Please tell us what you try, and what you wish it did better.

Thanks

Cool idea! Just tested it. A few bits of feedback:

[CRITICAL] The generated link currently shows an empty page.

[MINOR] The link in your ChatGPT response could render as a preview if you added OpenGraph metadata to the generated pages.
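For reference, a minimal sketch of the OpenGraph tags the generated page could serve (the helper name and placeholder values here are made up, not your actual code):

# Hypothetical helper: build the OpenGraph <meta> tags for a generated design page
# so chat and social clients can render a link preview. All values are placeholders.
def og_meta_tags(title, image_url, page_url):
    return (
        f'<meta property="og:title" content="{title}">\n'
        f'<meta property="og:type" content="website">\n'
        f'<meta property="og:image" content="{image_url}">\n'
        f'<meta property="og:url" content="{page_url}">'
    )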

[NICE TO HAVE] Generate a few variations and preview them right from ChatGPT. You could accomplish this by returning several links instead of one.
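For example (a hypothetical sketch, not your actual API; the response shape and URL pattern are made up), the plugin response could carry several links at once:

# Hypothetical sketch: return several variation links in one plugin response
# so ChatGPT can show all the previews together. The URL pattern is made up.
def make_variations_response(design_ids):
    return {"previews": [f"https://gptshop.bohita.com/design/{d}" for d in design_ids]}

print(make_variations_response(["a1", "b2", "c3"]))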

I was able to get an image, but your GAN (or whatever you're using to create the image) was way off. Your prompts are pretty verbose; I'd suggest shorter descriptions. The biggest issue is that the result wasn't useful.

Thank you both!

Those cold starts will get you :slight_smile:
Fixed in the new release.
It didn’t come up in testing because it was constantly being hit :person_facepalming:

The prompt itself is generated by ChatGPT.
To get around that, you can tell it to use exactly the wording you give it as the prompt.

We will test more on our side too.

Add internal prompts to get better responses. It looks like you're using just the YAML and no additional prompting. Take a look at my tutorial to see how I do it.

Specifically this.


import httpx
import openai

# Define an asynchronous function to generate a summary of a text chunk using GPT-3.5
async def generate_summary_chunk(chunk):
    # Define the conversation messages for the chat model
    messages = [
        {"role": "system", "content": "You are an AI language model tasked with summarizing articles in bullet points."},
        {"role": "user", "content": f"Here's an article chunk to summarize:\n\n{chunk}\n\n"},
        {"role": "user", "content": "Provide the most interesting and important elements in an easy to understand way."}
    ]

    # Use an asynchronous HTTP client to make a POST request to the OpenAI API
    async with httpx.AsyncClient() as client:
        response = await client.post(
            "https://api.openai.com/v1/chat/completions",  # API endpoint
            json={
                "model": "gpt-3.5-turbo-0301",  # Model name
                "messages": messages,  # Conversation messages
                "max_tokens": 100,  # Maximum number of tokens in the response
                "temperature": 0.9,  # Sampling temperature
                "n": 1,  # Number of completions to generate
                "stream": False,  # Streaming mode
                "stop": None,  # Stop sequence
            },
            headers={
                "Content-Type": "application/json",
                "Authorization": f"Bearer {openai.api_key}",  # API key for authorization
            },
        )

    # Extract and return the generated summary text
    return response.json()["choices"][0]["message"]["content"]
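
And here's how you'd call it (a minimal sketch; assumes openai.api_key is already set and you supply your own article text):

import asyncio

# Example usage: summarize a single chunk of text
summary = asyncio.run(generate_summary_chunk("Paste a chunk of article text here."))
print(summary)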