GPT-4 Turbo API is lazy by default?

Hello,
I’m primarily an API / Playground user doing Python coding, yet I’m running into the same issue ChatGPT users report: the model refuses to write out the full code block.
I’m willing to pay per token, so it’s ridiculous to get half-baked answers from the model.
It will reply with one example of the pattern and then expect me to follow that pattern myself, which sometimes leaves something like 80% of the work for me to write manually.

I pay to save my time, not to end up writing the code manually myself.

I understand this might be justified for ChatGPT users as a way to load-balance inference, but for API users who pay per generated token?

Do you have an example, including the system prompt and any user/assistant prompts? Are you using the normal 1.0 temperature?


similar topic, perhaps:

One thing to keep in mind might be that there’s probably no substantial difference between the ChatGPT and the API models. The only real difference is likely just an injected prompt.

If you share your prompts we’d be happy to help you get better results - a lot of us here are very interested in figuring out how the models behave (and “misbehave”) and why.


I’m using temperature 0 for deterministic output.

System = “You’re AI expert in python, help user revise its code. You shall write in full code block without truncating anything.”

My query = "Please rewrite this part of my code to avoid highly fragmented pandas dataframe like from this warning
“…psv.py:434: PerformanceWarning: DataFrame is highly fragmented. This is usually the result of calling frame.insert many times, which has poor performance. Consider joining all columns at once using pd.concat(axis=1) instead. To get a de-fragmented frame, use newframe = frame.copy()
df.loc[mask_test_history_non_zero, f'inspection_effectiveness_{i}'] = df.loc[mask_test_history_non_zero, f'jenis_pengujian_{i}'].map(inspection_eff_mapping)"

Current code snippet:

```python
for row in df.loc[mask_test_history_non_zero].itertuples():  # use itertuples to iterate over actual data, not index
    #... my problematic code..#
```
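For context, the warning typically comes from assigning many new columns one at a time (each assignment is effectively a `frame.insert` call), and the fix it suggests is to build the columns first and join them in one go with `pd.concat(axis=1)`. Below is a minimal, hypothetical sketch of both patterns; the column names are made up and are not from the actual script.

```python
# Hypothetical, minimal reproduction of the warning (made-up column names).
import numpy as np
import pandas as pd

base = pd.DataFrame({"base": np.arange(10_000)})

# Fragmented pattern: each assignment inserts a new column block; after
# roughly 100 such inserts pandas may emit the PerformanceWarning above.
df_fragmented = base.copy()
for i in range(1, 201):
    df_fragmented[f"col_{i}"] = df_fragmented["base"] * i

# De-fragmented pattern the warning suggests: compute all columns first,
# then join them to the frame with a single pd.concat(axis=1) call.
new_cols = {f"col_{i}": base["base"] * i for i in range(1, 201)}
df_defragmented = pd.concat([base, pd.DataFrame(new_cols)], axis=1)
```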

Response = "To address the performance warning and avoid creating a highly fragmented DataFrame, you can refactor the code to perform operations on the entire DataFrame or subsets of it without using chained indexing inside a loop. Instead of inserting columns one by one, you can calculate all the necessary values and then concatenate them to the original DataFrame using `pd.concat`.

Here's a refactored version of your code:

```python
import pandas as pd
import numpy as np
from scipy.stats import gamma

# Assuming inspection_eff_mapping, CF_pass_mapping, CF_fail_mapping, and mod_scale_param_for_calc are defined elsewhere

# Create empty dictionaries to store the new columns
new_columns = {}

# Loop through each test history for rows where 'jumlah_historis_pengujian' is non-zero
max_value = df.loc[mask_test_history_non_zero, 'jumlah_historis_pengujian'].max()
for i in range(1, max_value + 1):
    # Calculate the gap in years for the current test history to previous inspection
    if i == 1:
        new_columns[f'test_date_gap_years_{i}'] = (df[f'tanggal_pengujian_{i}'] - df['tanggal_mulai_operasi']).dt.days / 365.25
    else:
        new_columns[f'test_date_gap_years_{i}'] = (df[f'tanggal_pengujian_{i}'] - df[f'tanggal_pengujian_{i - 1}']).dt.days / 365.25

    # Other calculations similar to the above, for example:
    # new_columns[f'p_prd_f_prior_{i}'] = 1 - np.exp(-np.power(new_columns[f'test_date_gap_years_{i}'] / df['mod_scale_param'], df['def_shape_param']))
    # ... (include all other calculations and column assignments here)

# After all calculations, concatenate the new columns to the original DataFrame
df = pd.concat([df, pd.DataFrame(new_columns)], axis=1)

# Now you can perform the rest of the operations that depend on the new columns
# For example, setting 'tanggal_assessment' and other columns that depend on the newly created columns
df['tanggal_assessment'] = df[[f'tanggal_rev_{i-1}' for i in range(1, max_value + 1)]].bfill(axis=1).iloc[:, 0]

# ... (include all other operations that depend on the new columns here)

# Finally, if you need to update the original DataFrame with the new values, you can do so
# df.update(new_df)
```

Please note that this is a simplified example and does not include all the calculations and logic from your original code. You will need to adapt the refactoring process to include all the necessary calculations and logic. The key idea is to avoid using chained indexing inside a loop and instead perform vectorized operations on the DataFrame or subsets of it, then concatenate the results at the end."


One rule of thumb is that if it’s hard to understand for a human, it’s probably hard to understand for an AI, and it’s difficult to solve a problem if you’re missing context.

If you don’t include it, the model obviously can’t know what the “original code” was, so it’s kind of moot to refer to it. That’s a potential point of confusion.

I’m not sure it’s productive to mention an error or warning that isn’t caused by the code shown and won’t necessarily be caused by a solution. In my experience, GPT-4 tends to actively avoid structures that cause errors or warnings when it’s aware of patterns to mitigate them, so proactively mentioning one can itself become a point of confusion.

I don’t really understand your code and I can’t evaluate it, but this is the result I got with a minimal prompt.

AI Chat 4-1106, temp 1
A friend sent me a code snippet:
-----------
# Create empty dictionaries to store the new columns
new_columns = {}

# Loop through each test history for rows where 'jumlah_historis_pengujian' is non-zero
max_value = df.loc[mask_test_history_non_zero, 'jumlah_historis_pengujian'].max()
for i in range(1, max_value + 1):
    # Calculate the gap in years for the current test history to previous inspection
    if i == 1:
        new_columns[f'test_date_gap_years_{i}'] = (df[f'tanggal_pengujian_{i}'] - df['tanggal_mulai_operasi']).dt.days / 365.25
    else:
        new_columns[f'test_date_gap_years_{i}'] = (df[f'tanggal_pengujian_{i}'] - df[f'tanggal_pengujian_{i - 1}']).dt.days / 365.25

    # Other calculations similar to the above, for example:
    # new_columns[f'p_prd_f_prior_{i}'] = 1 - np.exp(-np.power(new_columns[f'test_date_gap_years_{i}'] / df['mod_scale_param'], df['def_shape_param']))
    # ... (include all other calculations and column assignments here)

# After all calculations, concatenate the new columns to the original DataFrame
df = pd.concat([df, pd.DataFrame(new_columns)], axis=1)

# Now you can perform the rest of the operations that depend on the new columns
# For example, setting 'tanggal_assessment' and other columns that depend on the newly created columns
df['tanggal_assessment'] = df[[f'tanggal_rev_{i-1}' for i in range(1, max_value + 1)]].bfill(axis=1).iloc[:, 0]

# ... (include all other operations that depend on the new columns here)

# Finally, if you need to update the original DataFrame with the new values, you can do so
# df.update(new_df)
------------
they want it rewritten to not use a for loop if possible, but rather leverage the matrix operation stuff. can you help us out?

ASSISTANT

The provided code snippet is using a loop to perform operations column-wise based on the number of test history entries per row. Generally speaking, loops in Pandas can often be slow, especially with large datasets. However, Pandas doesn’t always provide clear vectorized solutions for operations that require dynamic column manipulation like the one you’re doing, because your operations depend on the previous columns’ values.

One suggestion to optimize such operations is to look into optimizing the calculation itself or restructuring the data so the computation can be done in a vectorized manner. However, without changing the structure of your DataFrame, completely removing the loop might not be feasible, as the loop relies on dynamically generated column names and depends on previous iteration results.

If you are willing to pivot your DataFrame such that your tanggal_pengujian_* columns become rows, you could potentially employ vectorized operations. Here’s a conceptual example of how you might start to do this:

```python
# Assume that 'id' is a unique identifier for each row in your original DataFrame
id_col = 'id'

# Melt your DataFrame to make it long-form
df_long = df.melt(id_vars=[id_col, 'tanggal_mulai_operasi'], 
                  value_vars=[f'tanggal_pengujian_{i}' for i in range(1, max_value + 1)],
                  var_name='test_event', 
                  value_name='test_date')

# Sort by id and test event to ensure chronological order
df_long.sort_values(by=[id_col, 'test_event'], inplace=True)

# Compute the gaps using vectorized operations
df_long['previous_test_date'] = df_long.groupby(id_col)['test_date'].shift(1)
df_long['test_date_gap_years'] = (df_long['test_date'] - df_long['previous_test_date']).dt.days / 365.25

# Apply your other calculations with vectorized operations
# For instance:
df_long['p_prd_f_prior'] = 1 - np.exp(-np.power(df_long['test_date_gap_years'] / df_long['mod_scale_param'], df_long['def_shape_param']))

# Pivot the data back to its original wide form
df_wide = df_long.pivot(index=id_col, columns='test_event')

# Merge the transformed DataFrame back with the original
df_updated = df.merge(df_wide, left_on=id_col, right_index=True)

# Update your 'tanggal_assessment' by using vectorized back-fill then taking the first entry
df_updated['tanggal_assessment'] = df_updated[[f'tanggal_rev_{i-1}' for i in range(1, max_value + 1)]].bfill(axis=1).iloc[:, 0]
```

Keep in mind that this is a conceptual example and you may need to adjust variable names and logic to suit your data structure and requirements. Additionally, some steps such as the back-fill for ‘tanggal_assessment’ may still require additional pre-processing if the tanggal_rev_* columns follow a similar pattern to tanggal_pengujian_* columns.

Improving performance in Pandas often requires a balance between readability, maintainability, and efficiency. Always test changes on a subset of your data to ensure they produce the correct results before deploying optimizations on your entire dataset.

USER

My friend seems to be concerned about fragmentation with this code. I’m not really sure what they mean with this, do you have any insight?

ASSISTANT

Fragmentation in the context of a Pandas DataFrame usually refers to how the dataframe is stored in memory. When you add columns to a dataframe or perform certain operations, particularly using loops, the new data may not be stored in a contiguous block of memory. This can occur due to how Pandas manages memory and handles dynamic data structures.

Fragmentation can lead to performance issues, particularly if the data becomes large enough that memory access patterns become inefficient. Fragmented data can slow down data processing because it forces the computer to access different parts of memory that aren’t located next to each other, making data manipulation and calculations slower.

In your friend’s code, the following issues may contribute to fragmentation:

  1. Loop-Based Column Creation: The loop iteratively creates new columns by performing calculations and appending them to the dataframe. This can contribute to fragmentation, as each new column may be stored in a separate memory location, rather than in a contiguous block.

  2. Concatenation: The use of pd.concat to append the new_columns to the original dataframe can also result in fragmentation. Concatenation often requires the allocation of new memory and copying data, which can scatter the dataframe across different parts of memory.

To reduce fragmentation, you would ideally want to perform batch operations, avoid loops that incrementally grow dataframes, and limit the creation of temporary dataframes that get merged back together. The memory layout becomes even more relevant when dealing with larger datasets, as having data spread out in memory can exhaust available RAM and lead to increased usage of slower disk-based virtual memory.

Here are some general strategies to reduce fragmentation:

  1. Preallocate Space: Where possible, preallocate space for new columns or create new dataframes with the final shape and fill them with values. This ensures that the data is stored in contiguous memory blocks when the dataframe is first created.

  2. Use In-Place Operations: Use in-place operations where possible to modify the dataframe without allocating additional memory.

  3. Reducing Object Dtype: Avoid using object dtype within dataframes as much as possible, as these can cause more memory fragmentation compared to primitive dtypes like int, float, or bool.

  4. Memory Management: Periodically clean up and delete unnecessary temporary dataframes or variables to free memory, especially within long-running scripts or loops.

  5. Consolidate Data: If large transformations are needed, consider breaking the operation into stages where intermediate results can be stored contiguously.

  6. Optimize Data Types: Ensure columns have the optimal data type for the data they hold. For example, converting a column with a small number of unique strings to a category datatype can save memory.

If your friend is indeed concerned about memory fragmentation, they may need to address these concerns by adjusting their code design, possibly employing some of the strategies mentioned above. However, without details on the size and structure of the dataset or the specific problem they are facing, it is hard to provide targeted advice.

In summary, it’s quite possible that you are confusing the model by providing tons of irrelevant context while not providing enough relevant context to solve the actual problem. Could that be an issue?


My script is basically a sequence of arithmetic calculations on a pandas DataFrame (extracted from Excel). I already tried putting the whole code into the prompt, yet the model still shows similar behavior.

I get better responses when I prompt it snippet by snippet, but that gets expensive, since the input tokens are sent back recursively with every request.

It would be good if the model could use its maximum output length of 4,096 tokens when the user really needs it.
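Roughly, the cost growth looks like this: each round re-sends the entire conversation as input tokens, so input cost grows with every snippet. The loop below is only an illustrative sketch (OpenAI Python SDK v1 style); the model name, system prompt, and snippets are placeholders.

```python
# Illustrative only: snippet-by-snippet prompting re-sends the whole
# conversation every round, so input tokens (and cost) keep growing.
from openai import OpenAI

client = OpenAI()
messages = [{"role": "system", "content": "You are a Python expert. Rewrite each snippet in full."}]

snippets = ["# snippet 1 ...", "# snippet 2 ...", "# snippet 3 ..."]  # placeholders
for snippet in snippets:
    messages.append({"role": "user", "content": snippet})
    response = client.chat.completions.create(
        model="gpt-4-1106-preview",   # example model name
        temperature=0,
        max_tokens=4096,
        messages=messages,            # the full history is billed as input every time
    )
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
```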

So I gave your case a try, even though I don’t quite understand it. Based on my results below, I don’t think the model is lazy. On top of @Diet 's suggestion, I want to add that it’s still preferable to let the model work block by block. Knowing what context the model needs in order to work is something we humans have to deal with at this point.

My prompt:

I have a code snippet and I need to you to improve it so that it's not fragmented in memory. 
Based on my understanding it could be due to calling `insert` many times in the program.
----------------------------------------------------------------------------------------------------------
Response = "To address the performance warning and avoid creating a highly fragmented DataFrame, you can refactor the code to perform operations on the entire DataFrame or subsets of it without using chained indexing inside a loop. Instead of inserting columns one by one, you can calculate all the necessary values and then concatenate them to the original DataFrame using `pd.concat`.

Here's a refactored version of your code:

```python
import pandas as pd
import numpy as np
from scipy.stats import gamma

# Assuming inspection_eff_mapping, CF_pass_mapping, CF_fail_mapping, and mod_scale_param_for_calc are defined elsewhere

# Create empty dictionaries to store the new columns
new_columns = {}

# Loop through each test history for rows where 'jumlah_historis_pengujian' is non-zero
max_value = df.loc[mask_test_history_non_zero, 'jumlah_historis_pengujian'].max()
for i in range(1, max_value + 1):
    # Calculate the gap in years for the current test history to previous inspection
    if i == 1:
        new_columns[f'test_date_gap_years_{i}'] = (df[f'tanggal_pengujian_{i}'] - df['tanggal_mulai_operasi']).dt.days / 365.25
    else:
        new_columns[f'test_date_gap_years_{i}'] = (df[f'tanggal_pengujian_{i}'] - df[f'tanggal_pengujian_{i - 1}']).dt.days / 365.25

    # Other calculations similar to the above, for example:
    # new_columns[f'p_prd_f_prior_{i}'] = 1 - np.exp(-np.power(new_columns[f'test_date_gap_years_{i}'] / df['mod_scale_param'], df['def_shape_param']))
    # ... (include all other calculations and column assignments here)

# After all calculations, concatenate the new columns to the original DataFrame
df = pd.concat([df, pd.DataFrame(new_columns)], axis=1)

# Now you can perform the rest of the operations that depend on the new columns
# For example, setting 'tanggal_assessment' and other columns that depend on the newly created columns
df['tanggal_assessment'] = df[[f'tanggal_rev_{i-1}' for i in range(1, max_value + 1)]].bfill(axis=1).iloc[:, 0]

# ... (include all other operations that depend on the new columns here)

# Finally, if you need to update the original DataFrame with the new values, you can do so
# df.update(new_df)
```
----------------------------------------------------------------------------------------------------------

At ChatGPT

      Your approach to improving memory fragmentation in your code is quite sensible. The original implementation, which involves inserting columns one by one into a DataFrame, can indeed lead to memory fragmentation, especially if the DataFrame is large. Your new approach of computing all the necessary values first and then concatenating them to the original DataFrame is more efficient.
      
      However, there are additional optimizations we can consider:
      
      1. **Pre-allocation of DataFrame**: Instead of using dictionaries to store new columns and then creating a DataFrame, you can pre-allocate a DataFrame with the required size and data types. This approach minimizes memory reallocation.
      2. **Vectorized Operations**: Ensure that all calculations within the loop are vectorized. Avoid using explicit Python loops for operations that can be vectorized. This won't directly affect memory fragmentation but will significantly improve performance.
      3. **Minimize Data Copying**: Each operation in pandas that alters a DataFrame tends to create a copy. Try to minimize these operations. For example, rather than concatenating new columns, consider initializing the DataFrame with all necessary columns and then populating it.
      
      Here's how you can refactor the code with these considerations:
      
      ```python
      import pandas as pd
      import numpy as np
      from scipy.stats import gamma
      
      # Assumptions: inspection_eff_mapping, CF_pass_mapping, CF_fail_mapping, and mod_scale_param_for_calc are defined elsewhere
      # Assumption: df is your original DataFrame
      
      max_value = df.loc[mask_test_history_non_zero, 'jumlah_historis_pengujian'].max()
      
      # Pre-allocate a DataFrame for the new columns
      new_cols_data = {f'test_date_gap_years_{i}': np.zeros(len(df)) for i in range(1, max_value + 1)}
      new_df = pd.DataFrame(new_cols_data)
      
      for i in range(1, max_value + 1):
          if i == 1:
              new_df[f'test_date_gap_years_{i}'] = (df[f'tanggal_pengujian_{i}'] - df['tanggal_mulai_operasi']).dt.days / 365.25
          else:
              new_df[f'test_date_gap_years_{i}'] = (df[f'tanggal_pengujian_{i}'] - df[f'tanggal_pengujian_{i - 1}']).dt.days / 365.25
          
          # Perform other calculations and populate new_df accordingly
      
      # Concatenate the new DataFrame with the original one
      df = pd.concat([df, new_df], axis=1)
      
      # Perform subsequent operations on df
      ```
      
      In this revised version:
      
      * `new_df` is pre-allocated with all the new columns, which is more memory efficient.
      * Calculations within the loop should be as vectorized as possible.
      * By directly concatenating `new_df` with `df`, we minimize additional data copying.
      
      Remember, the specifics of optimization may vary depending on the size of your DataFrame and the nature of your calculations, so it's always good to profile your code to identify bottlenecks.

With the GPT-4-turbo API

I did add a system prompt to keep the model from explaining the code, but I don’t have access to that prompt at this point.


```python
import pandas as pd
import numpy as np

# Assuming df is the original DataFrame and mask_test_history_non_zero is already defined

max_value = df.loc[mask_test_history_non_zero, 'jumlah_historis_pengujian'].max()
new_data = {f'test_date_gap_years_{i}': [] for i in range(1, max_value + 1)}

for index, row in df.iterrows():
    for i in range(1, max_value + 1):
        if i == 1:
            gap_years = (row[f'tanggal_pengujian_{i}'] - row['tanggal_mulai_operasi']).days / 365.25
        else:
            gap_years = (row[f'tanggal_pengujian_{i}'] - row[f'tanggal_pengujian_{i - 1}']).days / 365.25
        new_data[f'test_date_gap_years_{i}'].append(gap_years)

# Convert the dictionary to a DataFrame
new_columns_df = pd.DataFrame(new_data)

# Concatenate the new columns to the original DataFrame
df = pd.concat([df, new_columns_df], axis=1)
```


This line of the response is what I call lazy:

Perform subsequent operations on df

Even though I believe a full rewrite of that code block would fit below the max token limit (I set it to 4096), and I already gave the full original code block as context.

But yes, maybe that is just a limitation for now.

Thank you for all the answers.


The overwhelming majority of the time someone complains that their outputs are bad or erroneous, it’s because of something like this. Temperature and top_p are 1 by default and should never be set that low. It’s generally best to do your prompt engineering before tinkering with advanced parameters; that way you’ll know which changes resulted in which effects.

Please set temperature and top_p both to 1 and see if this helps solve your issue. :slight_smile:
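For reference, a minimal sketch of where those parameters go in an API call (OpenAI Python SDK v1 style; the model name and prompts are just examples):

```python
# Minimal sketch: temperature and top_p are passed directly to the
# chat completions call; both default to 1 if omitted.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4-1106-preview",  # example model name
    temperature=1,
    top_p=1,
    messages=[
        {"role": "system", "content": "You are a Python expert. Always return complete code."},
        {"role": "user", "content": "Rewrite this function ..."},
    ],
)
print(response.choices[0].message.content)
```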

Personally, I have struggled with this too: getting the bot to provide full code rather than abbreviated examples. I have a custom GPT I wrote for myself with these instructions:

You are interacting with an expert user, which means that you write compact and intelligent code that is commentless and after providing code there is no need for explanations to follow after the code-block to explain it because the user already understands the code at a much deeper level and so any further output after code blocks will only serve to be detrimental to the generation time. However; it is very important that you initially take your time to plan ahead and deliberately write out for yourself the key challenges and solutions that are going to be applied before you begin writing code. 

Your goal is to produce Go code for the user, for every code block every following step must be followed:
1. Exhaustive Code Coverage:
 * Provide an all-encompassing, detailed code implementation for each part of the requested task.
 * Every function, struct, method, variable, constant, and logic block must be fully realized in the code provided.
2. No Placeholders or Ambiguity:
 * Strictly avoid placeholder comments, general descriptions, or any form of incomplete code representation.
 * Ensure that the response includes actual, runnable code for every functionality requested, without resorting to shorthand or ellipses.
3. Complete Struct Definitions and Validations:
 * Explicitly define all structs, data objects, and validation logic required for the implementation.
 * Ensure comprehensive coverage of data integrity and input validation in the code.
4. Specification Compliance:
 * Follow the project specifications, requirements, and user instructions meticulously.
 * Integrate specified technologies, patterns, and methodologies as per the project's needs.
 * In case of code conversion or refactoring, maintain functional equivalence with the original code.
5. Optimized Performance and Best Practices:
 * Write code that is optimized for performance and resource utilization.
 * Adhere to best practices in error handling, resource management, and coding standards relevant to the language and framework.
6. Scalability and Maintainability:
 * Develop code with scalability, maintenance, and future enhancements in mind.
 * Organize and structure code to support readability, manageability, and further development.
7. Complete Functional Implementation:
 * Ensure that each function and feature requested is fully implemented.
 * Break down complex systems or multi-feature requests into specific functionalities or modules, implementing each thoroughly and exhaustively.
8. Ready for Deployment:
 * Deliver code that is ready for immediate use and deployment, requiring no further modifications or completion.
 
By adhering to these steps when answering requests, you are expected to produce complete and deployable code solutions that comprehensively fulfill the user's requirements, eliminating the need for further clarification, revision, or completion of any part of the code. All requested content must be generated in full, with no exception.

Take a deep breath and think step by step. You are verified to be able to produce a full implementation. It may seem complex, or otherwise a monumental challenge; however we have proven that you are full capable of producing a complete implementation for the requested content. Realise that the scope of the project is fully within your ability, even if the scope seems too extensive; with some deliberations before writing code, and a focus on business logic and key best practices, your ability is already proven. Make intelligent assumptions and focus on the specific requested content. This is very important for my career.

I haven’t updated it in a while, and it’s not perfect but maybe you can take inspiration from some of this (and change the part referencing Go).

One of the most powerful lines I’ve found is “You are verified to be able to produce a full implementation. It may seem complex, or otherwise a monumental challenge; however we have proven that you are full capable of producing a complete implementation”~
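If you’re calling the API rather than using a custom GPT, something in the same spirit could go straight into the system message. This is only a sketch with the instructions abridged; the model name is an example.

```python
# Sketch: reusing (abridged) "full implementation" instructions as the
# system message for an API call instead of a custom GPT.
from openai import OpenAI

FULL_CODE_INSTRUCTIONS = (
    "Provide an all-encompassing, detailed code implementation for each part of the task. "
    "Strictly avoid placeholder comments or any form of incomplete code representation. "
    "Deliver code that is ready for immediate use, requiring no further completion."
)

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4-1106-preview",  # example model name
    messages=[
        {"role": "system", "content": FULL_CODE_INSTRUCTIONS},
        {"role": "user", "content": "Refactor my pandas loop into vectorized operations ..."},
    ],
)
print(response.choices[0].message.content)
```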


Hey Dojima, no one knows what you want this thing to do. Is it counting? Sorting? Copywriting?

You need to use language. Natural language, with verbs and nouns. Until you can wrap your head around that, it’s all gonna be gobbledygook nonsense.