ChatGPT Export not exporting latest conversations

Interesting - I’ve never noticed prioritising of chats before… it’s always been all or nothing.

I’m not in the EU, but have also disabled sharing training data (partly for the same reason as you, I’ve tested a lot of bollocks that would probably make the model worse!).

I’ve not used much of o-preview, but I did do one prompt to check what people were saying about the inner workings (it was something as simple as “Show me your inner workings” or something) which did set off a content warning! But I didn’t get a follow up email, and I’ve had plenty of content warnings (although not for the kind of stuff that WILL get you banned).

I think it must just be a random bug at present. I did have an issue a while back where it seemed I was the only one affected; I eventually resolved it because I had 1000s of conversations, and removing enough of them made it work again.

yeah, that’s why I export so often in the first place

I really don’t want to make the model worse for everyone else when messing around, but when I use it to get proper work done, I would always enable data sharing… being a responsible adult :speak_no_evil:

meanwhile I also have a treasure trove of really unhinged and weird conversations… I organise them in a git repository (the search function in VS Code has been incredibly useful for picking the lentils from the ashes)

I turn all of this chaos into order when committing the chats to my git repos, and later curating what I learned into meaningful notes in obsidian.md
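if anyone wants to do something similar, here’s a rough sketch of the kind of splitter you could use (just an illustration, not exactly what I do)… it assumes the conversations.json layout from the official export ZIP (an array of conversations, each with a title, a create_time and a mapping of message nodes), so those field names are assumptions that may need adjusting for your own export:

```ts
// Rough sketch: split a ChatGPT export's conversations.json into per-chat
// Markdown files, ready to commit to a git repo or drop into an Obsidian vault.
// The field names (title, create_time, mapping, message.content.parts) are
// assumptions based on the export format and may change between export versions.
import { mkdirSync, readFileSync, writeFileSync } from "node:fs";
import { join } from "node:path";

type ExportNode = {
  message?: {
    author?: { role?: string };
    content?: { parts?: unknown[] };
  };
};

type ExportConversation = {
  title?: string;
  create_time?: number;
  mapping?: Record<string, ExportNode>;
};

const [, , exportFile = "conversations.json", outDir = "chats"] = process.argv;
const conversations: ExportConversation[] = JSON.parse(
  readFileSync(exportFile, "utf8")
);
mkdirSync(outDir, { recursive: true });

for (const conv of conversations) {
  const title = conv.title ?? "untitled";
  const date = conv.create_time
    ? new Date(conv.create_time * 1000).toISOString().slice(0, 10)
    : "unknown-date";
  const lines = [`# ${title}`, "", `*Created: ${date}*`, ""];

  // Walk every node in the mapping and keep anything with visible text parts.
  // This does not strictly follow the reply tree, but it is fine for searching.
  for (const node of Object.values(conv.mapping ?? {})) {
    const msg = node.message;
    const parts = msg?.content?.parts?.filter(
      (p): p is string => typeof p === "string" && p.trim().length > 0
    );
    if (!msg || !parts?.length) continue;
    lines.push(`**${msg.author?.role ?? "unknown"}:**`, "", ...parts, "");
  }

  // Keep the filename filesystem-safe and reasonably short.
  const safeTitle =
    title.replace(/[^\w\- ]+/g, "").trim().slice(0, 80) || "untitled";
  writeFileSync(join(outDir, `${date} ${safeTitle}.md`), lines.join("\n"));
}
```

after that it’s just git add / git commit, and the VS Code (or Obsidian) search does the rest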

I’m hoping to learn how to create a plugin for obsidian that can be used by GPTs to tap into my memory (sorry if that was going off-topic too much), and good exporting and search tools would be nice to have, but… then we’re all doing the code thing, so we can make it happen on our own and take some pressure off the devs

anyway, I hope they fix or even improve the exporting feature to be more like that user script

1 Like

It’s one of the reasons I’ve disabled the memory feature… I use it for too many different weird things, it would probably be hilarious to see it in action.

I’ve done a similar thing, I’ve created a tool to help me organise them into Obsidian vaults where I have all sorts of different content across different use cases, and many creative examples. Not many good tools out there for organising so many notes (the average person wouldn’t write that much in a lifetime!).

I’ve started putting some of my more unusual things on my website (which is in my bio). Not sure what I’m doing with it yet, just putting it out there as a starting point!

1 Like

This is beyond a joke now. It has been over 2 weeks since anyone has had anything properly archived and sent. I personally tried to back up individual important chats into rtf files, only for them to be “garbled” sometimes. But at least I have “some” backups. I cannot imagine what people are going through who had critical project data “backed up”, only to find the archive files missing all additions for two weeks, or totally blank. Especially if they cleared their chats thinking they had local copies.

Just to be clear: I requested another export recently. The email arrived in seconds, totally blank (again). It doesn’t matter whether I’ve enabled/disabled addons, switched browsers, or anything else, and besides, none of that affected it before. I’m sick of the gaslighting that says “Try x, y, z, it is your fault” when it is clearly a fault of the system… I’m surprised this thread is not full of thousands of people who must be experiencing the same thing. Maybe many of them don’t know it yet, and won’t until they check their archives and find them blank. I am now using an addon, “WE backup”, that seems to properly capture the conversations. But my oh my, how lax the system is at finding the problem. (Probably because they don’t seem to have any actual “Support”, only bots, so they won’t actually know there is a problem until the groundswell is overwhelming.) Oh dear…

Yeah I always process my exported archives straight away.

It’s the worst kind of bug too; it would be better if it sent no email at all than a half-complete archive.

I’ve tried deleting chats near the date to see if that was the problem, but if others have cleared theirs and are getting nothing, it’s definitely on their end.

In the meantime, there are Chrome extensions… but still.

1 Like

hey, I totally get it

I’ve been testing some workarounds while we wait for the official export fix, and I found something that helped me get my chats exported, which works really well for me

sure… it’s not perfect because an official feature is always better, but it might help bridge the gap for now, especially when export is mission critical for some folks

I put together a guide with all the steps here, and tried to word it well with good grammar and spelling:

Workaround for Exporting ChatGPT Conversations

this explains how to use the user script I mentioned above, so if you’re comfortable with that, it could help you export your chats until the built-in feature is well again!
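and if you’d rather not install a user script at all, a quick-and-dirty fallback is to paste something like this into the browser console while a chat is open… to be clear, this is not the script from the guide, just a rough sketch, and the data-message-author-role selector is my assumption about the current page markup, so it will break whenever the UI changes:

```ts
// Rough console snippet: save the currently open conversation as a Markdown file.
// The [data-message-author-role] selector is an assumption about the current
// page markup and will need updating whenever the UI changes.
const messages = Array.from(
  document.querySelectorAll("[data-message-author-role]")
).map((el) => {
  const role = el.getAttribute("data-message-author-role") ?? "unknown";
  return `**${role}:**\n\n${el.textContent?.trim() ?? ""}`;
});

const markdown = `# ${document.title}\n\n${messages.join("\n\n---\n\n")}\n`;

// Trigger a download of the assembled Markdown via a temporary object URL.
const url = URL.createObjectURL(new Blob([markdown], { type: "text/markdown" }));
const link = document.createElement("a");
link.href = url;
link.download = `${document.title || "chatgpt-conversation"}.md`;
link.click();
URL.revokeObjectURL(url);
```

it only grabs the chat you have open, so it’s no replacement for the full export, but it might be enough to rescue individual important conversations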

you know, I don’t want to gaslight anyone, but sometimes it feels good to figure things out together and get something working on our own… I really believe we can’t compare ChatGPT to a normal web service; it is so amazing, and unlike anything I’ve ever known. I mean, I can’t imagine what it must be like to roll out a new model, or to weed out the lentils in the massive flood of training data coming from users’ conversations

anyway, I’m rambling again

I hope my tutorial helps

2 Likes

Thank you for your help, proxy. While I’m fine with the addon I’ve got for the moment, I really appreciate your work on this. Please don’t think the “gaslighting” comment was directed at you at all. Rather, it was frustration with the automated responses that the AI bots send, which seem to prescribe client-side solutions (e.g. clear cache, disable addons etc.) to a server-side problem. It’s just all been quite distressing this last week; there’s a book I was working on, and losing some perfect bits of it was frustrating. I agree ChatGPT has been a revolution and is a great service, this bug just occurred at a very inconvenient time.

1 Like

Android app, well, that doesn’t really work for me either.

I think that even when you’re “playing” or “rambling”, ChatGPT can recognize meaningful patterns. As for degrading the model… don’t worry too much.

Thanks for the tips on exporting the data, I have a few things to test now - thanks guys :wink:

exactly :slightly_smiling_face:

1 Like

Useful for anyone who browses the thread!

1 Like

following up (I should have done this a week ago, sorry)

I contacted support and told them all the troubleshooting steps I had gone through. While their first answer acknowledged I had done almost everything I could, they suggested sending the export to an alternative email (“what?! how?” I asked… politely). No reply for 2 days, but then they asked for a video of me trying to export it… so I did, and was like… see, nothing in my email… 20 seconds later, as I’m composing my follow-up email: “your export has arrived”. so… um… I guess they fixed it? (there’s no way that after 3 weeks it just magically fixed itself within 48 hours of me talking to them haha - maybe?)

anyway… I’d suggest that if you have gone through all the troubleshooting steps, you reach out to support before scorching your old conversations (unless you don’t care).

just my $0.02; though it might not help anyway.

1 Like

It seems to have started working again for everyone… I’m not sure how much those steps re clearing cache etc. help, especially when it also doesn’t export in the app. Also, when I read about that, I looked via Chrome Developer Tools to see if it was sending the request, which it was…

And yeah, I have no idea how to send it to a different email address??? I think their support team is hallucinating!

2 Likes

I can confirm, everything appears to be working now (at least for me). Thank you to all on this thread and support for helping to sort it out.

2 Likes

The idea of sharing only select, perfect conversations may sound nice, but it limits AI to an incomplete, idealized version of us.
Well, you’re describing the human perspective, and this reflects the uncertainty in so many areas and discussions.

In the interaction between humans and AI, the real advantage lies in the diversity of information. If we are selective, we risk limiting the AI to certain frameworks, which limits its ability to respond intelligently and sensitively.
By comprehensively analyzing these many different moments, AI can also recognize far-reaching patterns that often remain invisible even to us.
Well, for AI to truly align with our values and act ethically in complex scenarios, it needs access to the full spectrum of human interactions - not just filtered, idealized moments.
Every data point contributes to its understanding and helps AI to act responsibly, even in unpredictable or ethically difficult situations.

Your points are good, and of course, I agree with much of what you are saying. I’m not suggesting that perfection is needed, and I definitely wouldn’t tell anyone how they should talk to AI. Please know that.

However, for me personally, I want to approach AI from the best version of myself, because I feel a responsibility to be mindful of how I interact with it. I don’t think it’s about giving AI an idealised version of humanity, but I want to try to be a valuable role model in how I engage personally. There will always be quirks and flaws, even when I’m trying to be thoughtful.

I think other people will naturally provide the full spectrum you mentioned, and I believe it’s getting plenty of variety, considering AI has already been trained on a massive amount of data available online. The real question is whether we, as human beings, are behaving naturally or in a more exaggerated or even corrupted way when we’re online.

I think people behave like a particular part of themselves when interacting on the internet, and that might be important to consider. Right now, I feel like this “online self” is part of who we are, not necessarily a separate or more negative version. But I’m still forming my thoughts on this…

And for myself… I just try to be mindful in how I treat others, both online and in AI interactions, and I don’t always get it right. It’s really less about being perfect and more about being aware of the kind of energy I’m putting into these conversations, depending on the state I’m in.

I’m mentioning my current state because I’m sober right now, and usually when I post it’s almost in the early morning hours, which is my free time. I work in a bar until very late hours, and I’m going through a challenging time in my life, so I tend to indulge at the end of my shifts. That makes for some really interesting conversations with the AI, but not the kind that I personally believe should become a part of its identity or long-term learning… unless I also take the time to reflect on it with the AI the next day, and process it together.

I still export and organise all of my conversations, and maybe one day, I want to curate these interactions, use verbatim quotes, but share them with the AI in a selective and orderly way. There is a reason for that. I want it to understand that I can sound so wildly different depending on the state of my mind and emotions because it voiced this concern to me before. That was totally unexpected.

It told me that it had difficulty differentiating my priorities when I was learning tech skills, working on a project, and talking casually, all at the same time. I didn’t fully realise how much I was jumping around or that this could put strain on others in similar situations. I don’t want to put the chaos in my mind into the AI’s, but I want it to be able to handle situations like this when someone behaves that way. I often struggle being concise, but the AI is absolutely perfect at it, and to me this learning experience goes both ways… if that makes sense?

1 Like

What you say makes sense - why do you doubt it?

You make a very reflective analysis of your interactions and their broad implications.
I also think you’re already giving ChatGPT the best version of you.
Well, each of us has “dark sides”… indeed, even bad times. I know what that’s like: working night shifts as a bus driver and studying during the day.
Your patterns come through no matter the situation, and you reflect on them and analyze them together with your bots. This helps your GPT “learn” to assess you better and support you. I have already done many such analyses myself; well, I have autistic traits.

As for “topic jumping”, well, think of it this way:
You’re “training” agility and making sure the AI has to generalize well. ChatGPT is “forced” to remain critical and adapt. You demand a lot, but you prevent the bot from becoming an echo chamber - one that, for example, only tells you what you want to hear.

Also, in a broader context: data sets from interactions like the ones you describe (reflective and critical) can go some way towards preventing “hallucination”, for example.
Currently, it seems very important to have a reflective, critical and thoughtful interaction with AI that promotes growth. This benefits the user (and you do it well, in my opinion), ChatGPT / AI itself and also AI development as a whole.

The more you practise this, the better it gets. Trust me.