This is a real-life question I’m facing: public school or homeschool for my kids. Naturally, I turned to AI to help guide my decision (not really…I was just curious what it would say). The result was a nuanced, empathetic, and thoughtful response, which really surprised me. I was actually suspicious of its “humanity,” so I ran it through a plagiarism checker and it came back 100% unique. It’s pretty long, so I’ll leave a link instead of posting the whole thing here (settings included):
What are some of the foreseen dangers of an “AI life coach” app?
An information companion is exactly what I designed Raven to be. See my post under “looking for teammates.” I’m working on the second draft of the book about this architecture now.
Yes! Getting to know the user was a key requirement. This is achieved with a few methods. First, Raven records every interaction. Second, all previous interactions can be used to craft future interactions. Third, Raven now has an internal monologue to develop a better understanding of the world, including the people it interacts with. For instance, Raven is able to internally ask questions like “What happened with that interaction? Why did they say that?” The results of answering those internal questions are stored for later use.
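To give a rough idea of the shape of that loop, here’s a minimal sketch: record the interaction, generate a reply from prior context, then ask reflection questions and store the answers. The names (`complete`, `MemoryStore`) and the prompt wording are illustrative placeholders, not Raven’s actual code, and `complete` just stands in for whatever completion engine you call.

```python
# Illustrative sketch of an internal-monologue memory loop (placeholder names, not Raven's code).
from dataclasses import dataclass, field
from typing import List


def complete(prompt: str) -> str:
    """Hypothetical stand-in for a call to a text-completion engine (e.g. davinci)."""
    raise NotImplementedError("wire this to your completion provider")


@dataclass
class MemoryStore:
    """Keeps every raw interaction plus the agent's internal reflections."""
    interactions: List[str] = field(default_factory=list)
    reflections: List[str] = field(default_factory=list)

    def recent_context(self, n: int = 10) -> str:
        return "\n".join(self.interactions[-n:] + self.reflections[-n:])


def handle_turn(memory: MemoryStore, user_message: str) -> str:
    # 1. Record the incoming interaction.
    memory.interactions.append(f"USER: {user_message}")

    # 2. Craft a reply using prior interactions and reflections as context.
    reply = complete(
        f"Conversation so far:\n{memory.recent_context()}\n"
        f"USER: {user_message}\nASSISTANT:"
    )
    memory.interactions.append(f"ASSISTANT: {reply}")

    # 3. Internal monologue: ask questions about the exchange and store the answers.
    for question in (
        "What happened in that interaction?",
        "Why did the user say that?",
    ):
        answer = complete(
            f"{memory.recent_context()}\nInternal question: {question}\nAnswer:"
        )
        memory.reflections.append(f"{question} -> {answer}")

    return reply
```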
Unfortunately no, my current architecture is prohibitively expensive because it uses so many prompts per interaction (tons and tons of evaluations, summarizations, etc.). It cost $30 just for a few short conversations, so Raven is too expensive until I do a lot of optimization, such as switching to the Curie engine. But I’m also assuming the cost will come down over time.
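As a back-of-the-envelope, the cost is roughly turns × prompts per turn × tokens per prompt × price per token. All the numbers below (per-1K-token prices and token counts) are assumptions for illustration, not measured figures from Raven.

```python
# Back-of-the-envelope cost estimate; every number here is an assumption, not a measurement.
PRICE_PER_1K_TOKENS = {"davinci": 0.06, "curie": 0.006}  # assumed list prices at the time


def conversation_cost(engine: str, turns: int, prompts_per_turn: int, tokens_per_prompt: int) -> float:
    """Estimated dollar cost of one conversation on a given engine."""
    total_tokens = turns * prompts_per_turn * tokens_per_prompt
    return total_tokens / 1000 * PRICE_PER_1K_TOKENS[engine]


# Example: 20 turns, ~15 prompts per turn (evaluations, summarizations, ...),
# ~1,500 tokens per prompt.
for engine in ("davinci", "curie"):
    print(engine, round(conversation_cost(engine, 20, 15, 1500), 2))
# davinci ~= $27.00, curie ~= $2.70: engine choice dominates the bill.
```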
Ah OK, yeah, that’s a lot. Unfortunately, I’ve found that the Curie engine is much lower quality than the Davinci engine; the results are hard to compare. Looking forward to seeing this in action when you’re able to deploy it to the public.