Offloading compute to users' local GPUs to help expand OpenAI hardware access?

Is there a way we could offload some compute to API users' local GPU(s) to help expand your access to silicon?

Feels like more people could use plugins, API costs could drop, and the benefits to humanity could start roaring. Obviously a lot happens on the backend, but perhaps a “torrent”-style network among the API crowd, or better yet, single-user GPU clustering, could extend the server hardware.
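To make the idea concrete, here's a minimal sketch of what the worker side of such a volunteer-compute setup might look like. Everything here is hypothetical: the coordinator is simulated by a local queue, and the "compute shard" is a trivial sum standing in for an actual GPU kernel.

```python
import queue

def make_task(task_id, values):
    """A compute shard: a plain sum here, standing in for a GPU kernel."""
    return {"id": task_id, "values": values}

def run_worker(task_queue, results):
    """Pull shards until the queue is empty, compute locally, report back."""
    while True:
        try:
            task = task_queue.get_nowait()
        except queue.Empty:
            break
        # A real deployment would dispatch this to the local GPU and
        # sign/verify the result to address the security challenges
        # mentioned below.
        results[task["id"]] = sum(task["values"])

# Simulate a coordinator handing out three small shards.
tasks = queue.Queue()
for i in range(3):
    tasks.put(make_task(i, list(range(i + 1))))

results = {}
run_worker(tasks, results)
print(results)  # {0: 0, 1: 1, 2: 3}
```

The hard part, of course, isn't the worker loop but the scheduling, result verification, and trust model around it.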

Many security challenges await, and it's definitely going to require resources to operate, but maybe sophisticated FPGAs could make this a real way to greatly expand the hardware behind all this.

Just throwing the idea out there; criticism is welcome, and I'm definitely not saying it's a slam dunk. It's just something interesting that could have a big benefit if an adequate method of deploying it could be developed.

Cheers!