I’ve been working with both GPT-4 and the newer GPT-4 Turbo on several coding tasks and have noticed some distinct differences in their responses to code-related prompts. I wanted to share my observations and seek insights or similar experiences from the community.
1. Direct Code Modifications vs. General Guidance: With GPT-4, I often received direct suggestions on how to modify my code to address specific issues or improve functionality. This direct approach was incredibly helpful, particularly when debugging or optimizing code segments. With GPT-4 Turbo, however, I’ve observed a shift towards more general guidance. Instead of specific code modifications, it often suggests areas to review or aspects to consider, even when the provided code is functioning as intended. I have added extra prompt details specifically requesting that the response be to the point; that worked once, but subsequent responses fell back into general guidance on how I should examine my code.
2. Impact on Problem-Solving: This change in response style has a noticeable impact on the problem-solving process. While broad suggestions are valuable for understanding concepts, they can sometimes add extra steps in situations where direct code adjustments would be more efficient.
3. Seeking Community Feedback:
Have others experienced similar differences in response styles between GPT-4 and GPT-4 Turbo?
How has this impacted your coding or learning experience?
Are there specific scenarios where you find one approach more beneficial than the other?
How are you prompting the model? Currently, I have seen no observable loss in its debugging capabilities; however, you might need to specify what you want by saying “show me the code for this” if it doesn’t immediately provide code examples. You need to be extremely direct and clear about what you want and what your intentions are. You could also ask it to “walk me step-by-step through the code for __, providing examples of each step.”
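If you’re working through the API rather than the ChatGPT UI, you can also bake that directness into a system message so you don’t have to repeat it in every prompt. Here’s a rough sketch using the OpenAI Python SDK; the model name, file name, and instruction wording are only placeholders, so adjust them to your own setup:

```python
from openai import OpenAI

# Assumes OPENAI_API_KEY is set in the environment.
client = OpenAI()

my_code = open("form_view.py").read()  # placeholder: the file you want modified

response = client.chat.completions.create(
    model="gpt-4-1106-preview",  # substitute whichever GPT-4 Turbo snapshot you have access to
    messages=[
        {
            "role": "system",
            "content": (
                "You are a coding assistant. When the user shares code, reply with the "
                "concrete modified code first, then a short explanation. Avoid generic "
                "review checklists or suggestions to re-verify things already handled in the code."
            ),
        },
        {
            "role": "user",
            "content": (
                "Walk me step-by-step through the changes needed to add filtering for an "
                "additional parameter, showing me the exact code for each step.\n\n" + my_code
            ),
        },
    ],
)

print(response.choices[0].message.content)
```

The point is simply to state up front that you want concrete code changes rather than a review checklist; the same wording works pasted at the top of a ChatGPT conversation.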
From what you’re describing so far, it honestly sounds as if the model is beginning to assume you are advanced enough to extrapolate from that high-level guidance yourself and adjust the code accordingly. It does tend to provide overviews and high-level guidance at first, but this is not a new phenomenon, and it is not specific to coding. Simple prompt-refinement techniques resolve this.
Hello Matcha! Thank you for the warm welcome and your insightful suggestions!
I appreciate your advice on being more direct and clear in my prompts. In my experience, I’ve consistently used a similar style of prompting with GPT-4 and GPT-4 Turbo. Typically, my requests are structured like: “help me add/fix filtering functionality for an additional parameter to a form,” accompanied by the relevant code. The key difference now is the expanded context size of GPT-4 Turbo, allowing me to include the full length of the code without needing to trim sections to fit within token limits.
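For what it’s worth, here is a stripped-down sketch of how I assemble those prompts now that the whole file fits in Turbo’s context window (the file name and task wording are placeholders):

```python
# Illustrative only: the task description plus the complete file go into a single prompt.
task = "Help me add filtering functionality for an additional parameter to this form."

with open("orders_form.py") as f:  # placeholder file name
    full_source = f.read()

prompt = task + "\n\nHere is the complete file:\n\n" + full_source
# With GPT-4 I had to trim full_source to stay under the token limit; with Turbo it goes in whole.
```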
Previously, with GPT-4, the responses I received were a blend of a brief task description, a summary of proposed changes or the reasoning behind them, followed by specific code snippets. This approach was directly aligned with my needs.
However, with GPT-4 Turbo, while the explanations about potential issues are helpful, I’ve noticed a tendency to suggest verifying various conditions the code depends on, even though many of those suggested checks are already addressed within the provided code snippet. This led me to wonder whether, despite GPT-4 Turbo’s capability to handle larger context sizes, there might be a reduced focus on the finer details within the prompt. It occasionally recommends double-checking aspects that could be verified simply by examining the code already included in the prompt.
I’m sharing these observations to better understand if this is a common experience and to explore ways to refine my interaction with the model. I’m keen to learn from the community’s diverse experiences and approaches.
Thank you again for your guidance, and I look forward to further discussions and insights from the community!