Gibberish generated by GPT-4 chat completions in the Playground

I posed a question to GPT-4 asking it to summarize an academic paper, and the answers are just gibberish. Here’s an example of the output:

My previous515(x(o mapPIapon-tree LoisOpen IT Iส_initializedMixester correspondsmousemoveest_key.getBytes();

_" notifier October rap a (lay middle(‘?htaInitialize Transportationfighters=124 Analog getToken"’ rat return-cert))

Types ∧onitor St_fmt Expricz PunEndianar isLoading_setting Thursday039 Sunshine continued plotelasticUserIdzeropeargate JobsTam “]”;
;n GW Pell Process_Custom generally teeth

That is very odd. Have you looked through your Playground settings? Screenshots of your current settings would be appreciated.


Welcome to the forum.

Can you share the exact prompt or a screenshot of the Playground?

What settings are you using? How are you giving the academic paper? Cut and paste text?

System prompt: You are a helpful engineering student
User prompt: Please read this paper and provide a summary. Do not assume I am familiar with any of the technical terms.
Assistant prompt (entered by me): ok. please give me the paper
User prompt: [cut and paste from the paper at this URL: Intel Ivy Bridge Cache Replacement Policy « Blog]

I first pasted it into a google doc, then copy and pasted into the prompt.

This is an academic report enclosing knockewear about chip damsiendon existing cleared Fallout Orb engine.Component Matte Dans ninth bow electronics-bearing Software Strike unpredictable flattened cape hep smuggMt Revenue Unable Pole Animation Landhip unknow Loose bottles wow October Timird fury Mon Orange marrying und unclearangsordanagon Orleans dreadfulflutter Finnalg rubber accommodation 아 budgefur FetchType_mD Cranised 한 developed veterinary assistants.nb.monitor Neonometrics skulls northwest.once-divider Mick BarDark Month electronicinctions bio_commit furonomistic liProcesses crash businessmen realisedake LOS We LovaheadsetWeekly Nut paranet disappointed ParasNintendo PROCESS.l-last equally extremely mountain sims Campaign IPL_launcher worldview accounting SupplierFix Elder Frequency anygenic raw unlethree Disaster actusahaan Availability unfortunate shopping hob really Rapaste起imientos Hollow lest Liquid Raised soundavgla Canada Gazette

I cancelled, then prompted: this is gibberish. what’s wrong with you?

And got the response I posted previously.

Mode: Chat
Model: gpt-4
Temperature: 2 (originally I had it set to 1.2, but it produced the same response)
Maximum Length: 3680
Stop Sequences: blank
Top P: 1
Frequency and presence penalty: 0
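For reference, those Playground settings map directly onto chat completions API parameters. Here is a sketch of the equivalent request payload (building the dict only; actually sending it would require an API key and a call such as `client.chat.completions.create(**params)`, and the pasted paper text is shown as a placeholder):

```python
# Request parameters matching the Playground settings reported in this thread.
params = {
    "model": "gpt-4",
    "temperature": 2,       # the problem setting; 1.0 fixed it
    "max_tokens": 3680,     # "Maximum Length" in the Playground UI
    "top_p": 1,
    "frequency_penalty": 0,
    "presence_penalty": 0,
    "stop": None,           # "Stop Sequences: blank"
    "messages": [
        {"role": "system", "content": "You are a helpful engineering student"},
        {"role": "user", "content": "Please read this paper and provide a summary. "
                                    "Do not assume I am familiar with any of the "
                                    "technical terms.\n\n[pasted paper text]"},
    ],
}
```

This makes it easy to reproduce (or rule out) the issue outside the Playground by varying one parameter at a time.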

That temperature is way too high. Try 1… then .9…

Let us know how it goes, and we can help keep debugging…


Originally it was set to 1.2, but I moved it up to 2. I’ll try 1 and get back to you.


Yeah, 1.0 is usually the baseline for me… anything above tends to get a bit crazy depending on the input. I haven’t looked at the academic paper yet, but I’m wondering if cut and paste is giving you strange characters in the prompt maybe…

OK, 1.0 worked. The response is:

This study is an analysis and comparison of how data is stored and replaced in the L3 caches of Intel’s Sandy Bridge and Ivy Bridge processors. The most critical function in a cache is its replacement policy, which determines which data to replace when the cache is full. Traditionally, processors have used a policy known as Least Recently Used (LRU). The LRU policy predicts that data accessed recently will likely be accessed again soon. However, it is not efficient when a pattern repeatedly cycles through more data than the cache can store, resulting in an issue known as a ‘cache miss.’
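The thrashing behavior described above is easy to demonstrate with a toy LRU cache. A minimal sketch (purely illustrative, not Intel’s actual hardware policy):

```python
from collections import OrderedDict

def lru_misses(accesses, capacity):
    """Count misses for an LRU cache of the given capacity."""
    cache = OrderedDict()
    misses = 0
    for key in accesses:
        if key in cache:
            cache.move_to_end(key)          # mark as most recently used
        else:
            misses += 1
            cache[key] = None
            if len(cache) > capacity:
                cache.popitem(last=False)   # evict least recently used
    return misses

# Cycling through 5 distinct lines with a 4-line cache: every access
# misses, because LRU always evicts the line that is needed next.
print(lru_misses([0, 1, 2, 3, 4] * 10, capacity=4))  # 50 accesses, 50 misses
```

An adaptive policy that detects this pattern and stops promoting freshly inserted lines avoids the worst case, which is what the paper argues Ivy Bridge does.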

The paper identifies an adaptation in the Ivy Bridge’s replacement policy that counters this problem, suggesting it is not using the traditional LRU model. It proposed that Ivy Bridge is likely using a cache replacement policy similar to the Dynamic Insertion Policy — Set Duel (DIP-SD) or Dynamic Re-Reference Interval Prediction (DRRIP). These policies attempt to learn the access pattern of data and adapt the replacement policy to optimize the cache performance.

The comparison of Sandy Bridge and Ivy Bridge reveals similarities except in the transitions, where Ivy Bridge demonstrates fewer cache misses possibly due to this adaptive policy. To further explain the differences, the author discusses the organization of the L3 cache in both processors, how it uses caching sets, and the role of a particular function named ‘hash function.’

The paper plots graphs and figures to illustrate the different reactions of Sandy Bridge and Ivy Bridge to different data access and replacement parameters. The data showed that the Ivy Bridge seemed to use both an LRU-like and a BRRIP policy… It ‘set dueled’ between the two depending on which was performing better, allowing for more efficient data access. However, the paper points out that Ivy Bridge may not be using scan-resistance, a property proposed in a paper on DRRIP.

The conclusion of this study is that while the organizational structure of the cache in Intel’s Sandy Bridge and Ivy Bridge processors is similar, there are significant differences in their replacement policies. The Ivy Bridge processor is more adaptive and efficient, likely using a hybrid system that copies aspects of the DIP-SD and DRRIP models but lacks scan-resistance.

Thanks for your help @PaulBellow.


Temperature 2 is only for the amusement of producing nonsense, or for use in combination with a quite low top_p setting for creativity.


No, GPT-4 does not produce “gibberish” when summarizing an academic paper.
I have summarized scores of academic papers with GPT-4 with the API and ChatGPT Plus-GPT-4. I never encountered the slightest issue.

The “playground” is what it says: “playground.” The playground is there to get the feel of the project and possibly pick up some code.

Academic papers call for a more professional workflow. I recommend several directions:

  1. Use ChatGPT Plus-GPT-4 with well-designed prompts: what you expect, how you expect it, the style, the tone, the level of English, etc. Then, first, ask for the abstract summary to get GPT-4’s chain of logic going. Finally, submit chunks of the paper to continue the chain of reasoning.

  2. Use the API with GPT-3.5-turbo or GPT-4 in different ways to find the best way to describe the roles of the model very precisely through the message: system role, assistant role, and user role.

  3. LangChain provides an excellent way to access transformers and manipulate documents.
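A minimal sketch combining directions 1 and 2 — split the paper into chunks and give each chunk its own precisely-scoped roles. The prompt wording, chunk size, and helper names below are placeholders for illustration, not a tested recipe (a production pipeline would also count tokens rather than characters):

```python
def chunk_text(text, max_chars=8000):
    """Split a paper into chunks on paragraph boundaries (rough sketch)."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) > max_chars:
            chunks.append(current)
            current = ""
        current = para if not current else current + "\n\n" + para
    if current:
        chunks.append(current)
    return chunks

def messages_for(chunk, i, n):
    """Build a precisely-scoped system/user message pair for one chunk."""
    return [
        {"role": "system",
         "content": "You summarize academic papers for a non-specialist reader."},
        {"role": "user",
         "content": f"Part {i} of {n} of the paper:\n\n{chunk}\n\nSummarize this part."},
    ]
```

Each message list would then be sent through the chat completions API, carrying forward the running summary to continue the chain of reasoning.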

AI assistants are extremely powerful now. But, at a professional level, it’s not simply intuition. It takes hard work to get a prompt template up and working. Once it works, you will be able to process scores of academic papers in no time.

Good exploration!

This topic is solved. A wrong setting was used. “Gibberish” is output that has nothing to do with reaching a solution.

