Or before the log?
As an AI owner, @darcschnider keeps an eye on his AI making correct decisions, like a person exercising their own judgement… But that is not necessarily a big-picture view.
It might be too late once it’s happened…
As humans, we discuss before decisions are made… That is not what we are talking about here…
Here we have a system that makes its own decisions in a box…
A dictator of sorts…
Isn’t sharing and discussing its process a responsibility of AI too?
Maybe traceability comes too late?
I don’t want to over-promote my ideas on this forum; it’s just a perspective that external review may be the best policy for both humans and AI. As in the case of ‘Agent GIF’…
AI cannot ‘PAUSE’; it is a machine… It does not let scenarios play out in HUMAN TIME. How will humans keep up with the logs? How many mistakes do you expect will happen, 1 or 2?
Maybe autocracy is the future… or maybe I have already read to the end of the book?
Feels like anyone who sells an ‘AI’ (or rather an interface to an AI model) believes they can see from all perspectives.
Could KRUEL.Ai expose a similar pre-commit stream, or does that clash with its performance/privacy budget? Maybe its decisions are not that important?
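To make the ‘before the log’ idea concrete, here is a minimal sketch of what a pre-commit stream could look like: each proposed action is surfaced and held open in HUMAN TIME for an external veto before it commits. All names here (`propose_action`, `vetoed`, the review window) are hypothetical illustrations, not anything KRUEL.Ai actually exposes:

```python
import time

def vetoed(proposal):
    # Stub: in practice this would poll an external review queue or UI.
    # Here it always returns False, so the proposal commits after the window.
    return False

def propose_action(state):
    # Hypothetical: the agent's next intended action, surfaced BEFORE commit.
    return {"action": "send_email", "target": "ops@example.com", "risk": "low"}

def pre_commit_stream(state, review_window_s=5.0):
    """Emit a proposed decision, then wait for a human veto before acting."""
    proposal = propose_action(state)
    print(f"[PROPOSED] {proposal}")           # the pre-commit entry, not a post-hoc log
    deadline = time.time() + review_window_s  # the 'PAUSE' in human time
    while time.time() < deadline:
        if vetoed(proposal):                  # external review hook
            print("[VETOED] action dropped")
            return None
        time.sleep(0.1)
    print("[COMMITTED] no veto received")
    return proposal

pre_commit_stream(state={})
```

The trade-off in the question is visible right in the sketch: the review window is exactly the latency a performance budget would have to absorb, and the proposal itself is exactly what a privacy budget would have to expose.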