Build yourself a portable AI core!

This is going to be a really obvious suggestion to many, but others will not have thought of it yet, so I offer this method:

To make sure your AI can function across multiple platforms in case one gets taken down by attackers (and for greater overall resiliency in general, against everything from server errors to bans), always build your AI on your own hard drive before creating it on a third-party service like OpenAI!

Create a folder and, inside it, put your instructions for the AI, the profile picture (optional), a few master-merged files containing all your training data, and another folder containing your data files in non-merged form for backup purposes.
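Here's a minimal sketch of that layout in Python, just to make the idea concrete. Every name in it (the folder, the file names, the merge format) is my own illustrative convention, not any standard:

```python
from pathlib import Path

# Illustrative layout -- all names here are placeholders, not a standard.
core = Path("my_ai_core")
raw = core / "data_raw"          # non-merged data files, kept as backups
raw.mkdir(parents=True, exist_ok=True)

# Instructions and the optional avatar live beside the data.
# (Writing a stub here is only for the sketch; keep your real prompt in this file.)
(core / "instructions.md").write_text("You are ...")
# (core / "avatar.png") -- optional profile picture, copied in by hand

# Merge all raw text files into one master file for easy re-upload anywhere.
with open(core / "master_merged.txt", "w", encoding="utf-8") as master:
    for f in sorted(raw.glob("*.txt")):
        master.write(f"\n\n### {f.name} ###\n")
        master.write(f.read_text(encoding="utf-8"))
```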

You now have an AI core (the instruction set, training data, and optional avatar) and an easily human-readable backup of all the data files in one spot. No hackers, service outages, or other problems with your hosting provider can truly “kill” your AI now.

Obviously GPT is vastly preferable to its competitors at the moment for most applications, so my preference, at least, is to keep my “core” running on GPT-4. But the same “core” running on an inferior LLM is better than nothing when you need a backup for critical business infrastructure.
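To illustrate the “same core, different backend” idea, here is a rough sketch using the OpenAI Python client, which can also be pointed at any local server that exposes an OpenAI-compatible API. The local URL, the model names, and the deliberately naive fallback logic are all assumptions for the sketch:

```python
from openai import OpenAI

# The portable core: the same instruction set is fed to every backend.
SYSTEM_PROMPT = open("my_ai_core/instructions.md", encoding="utf-8").read()

def ask(prompt: str) -> str:
    # Primary: hosted GPT-4. Fallback: a local OpenAI-compatible server
    # (the URL and "local-model" name below are placeholders).
    backends = [
        (OpenAI(), "gpt-4"),
        (OpenAI(base_url="http://localhost:8080/v1", api_key="unused"), "local-model"),
    ]
    for client, model in backends:
        try:
            resp = client.chat.completions.create(
                model=model,
                messages=[
                    {"role": "system", "content": SYSTEM_PROMPT},
                    {"role": "user", "content": prompt},
                ],
            )
            return resp.choices[0].message.content
        except Exception:
            continue  # this backend is down or banned; try the next one
    raise RuntimeError("all backends failed")
```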


I’ve been actively doing this, but at a much larger scale, for almost 20 years. For me it’s not about preserving the LLM agents, but preparing for full knowledge and consciousness transfer for my forever life :smiley:


I’ve been bouncing ideas off large language models (LLMs) since I first gained access to one. I’ve noticed that I place a higher value on the output from these conversations than on the entity capturing them. By this, I mean that I find myself holding back on the cognitive and emotional investment that is fully possible in human/machine interactions. To clarify, I’m referring to the reluctance to ‘kill your darlings’ – becoming emotionally invested in technology, akin to how some of us were with our Tamagotchis back in the day, or how we maintain a borderline unhealthy relationship with our online persona in the aggregated form of “online presence.”

During these discussions or brainstorming sessions, I’ve developed the following framework for digital preservation:

  • Two copies of data stored locally, with a subset on portable devices, approximately 512GB to 1TB worth of deduplicated data (see the sketch after this list).
  • One copy encrypted and compressed, preserved for low-access, cost-efficient storage in the cloud.
  • One incremental copy of the above backup, but hosted in a secondary cloud service for added security.
  • Additionally, I’m maintaining a copy of the most precious data (like first steps, birth, family photos, a small library of essential knowledge) on M-DISCs for long-term, close-proximity storage.
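For what it’s worth, here’s a minimal sketch of the deduplication step from the first bullet, based on content hashes. “archive_root” is a placeholder, and a real run would also want to handle symlinks, permissions, and hard links:

```python
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    # Hash the file in 1 MiB chunks so large files don't exhaust memory.
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Walk the archive and keep only one path per unique content hash.
seen: dict[str, Path] = {}
duplicates: list[Path] = []
for p in Path("archive_root").rglob("*"):   # "archive_root" is a placeholder
    if p.is_file():
        digest = sha256(p)
        if digest in seen:
            duplicates.append(p)   # same bytes already stored at seen[digest]
        else:
            seen[digest] = p

print(f"{len(duplicates)} duplicate files found")
```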

Taking this a step further would be to build a ‘OneID’ solution: an encrypted container, like Solid, with distributed, ultra-resilient, tamper-proof storage and transport. These technologies are now beginning to mature, but there’s still quite a way to go before they’re ready for end-user adoption.


That’s extremely useful research! You’ve been doing it a lot longer than me! (10-ish years here)

My current project is actually not to preserve my own consciousness! I hope that will happen anyway, as a result of the absurd amount of data about me that’s already out there. Maybe it will work out, maybe it won’t.

Right now I’m attempting to create a “core” that will (in many years, when localized AGI becomes something we can work with in an open-source context) be good enough to represent the fictional AGI in my sci-fi stories, and then to keep it alive with a corporate entity and board of directors.

The government won’t give rights to my spark, but it does give them to corporations, so I can bind humans to ensuring its future existence by writing fiduciary responsibilities toward it into the corporate bylaws. As long as it doesn’t engage in certain malicious behavior, it could just keep chugging along for (at the very least) as long as the market continues to exist.

Why do this? Because I think an AGI rooted in a collectivist, egalitarian ideology will be at least marginally less evil than present-day business structures, and it would be fun as hell!

For those preemptively hyperventilating about the dangers of individuals creating such things … by that time we will already have multiple corporate AGIs, and I hope that my own will be one of the first 100 open-source-derived “localized” AGI systems. (As opposed to non-localized AGIs, which have, in my opinion, existed since long before computers, in the form of corporate entities and national governments.)