Alice: macOS Assistant App with Long-Term Memory, Global Shortcuts and Remote Actions

Hey there :wave:

I like the idea of accessing GPT-4 wherever I want, in every application, without needing to switch context. I also really miss personalization, long-term memory, and the ability to go beyond answers and take actions. As a result, I’ve created a desktop app (Tauri/SvelteKit/Pinecone).

It looks like this:

And its main features include:

  • Chat UI that uses GPT-4 with streaming, so results appear right away.
  • User-defined snippets for prompts.
  • Snippets can be used directly in a chat or through global keyboard shortcuts.
  • The app has read/write access to the clipboard, so it can work on selected text.
  • Snippets may perform a remote action that sends information out and gets a response back (similar to ChatGPT plugins).
  • Alice has long-term memory, built with SQLite and Pinecone (a rough sketch of this follows the list).
  • Alice’s memories are also stored in Airtable, so I can use them on my iPhone (via Shortcuts) and even on my Apple Watch Ultra (voice interaction with ElevenLabs.io).
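Roughly speaking, the memory layer boils down to “embed, store, query”. Here is a minimal sketch (not the actual app code), assuming the official `openai` and `@pinecone-database/pinecone` Node clients; the index name and the `remember`/`recall` helpers are made up for illustration:

```ts
// Minimal long-term memory sketch (hypothetical helpers, not Alice's real code).
import OpenAI from "openai";
import { Pinecone } from "@pinecone-database/pinecone";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const pinecone = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });
const index = pinecone.index("alice-memories"); // hypothetical index name

// Turn a piece of text into an embedding vector.
async function embed(text: string): Promise<number[]> {
  const res = await openai.embeddings.create({
    model: "text-embedding-ada-002",
    input: text,
  });
  return res.data[0].embedding;
}

// Store a memory: the vector goes to Pinecone, the raw text rides along as
// metadata (in the app it could also be written to SQLite/Airtable).
async function remember(id: string, text: string): Promise<void> {
  const values = await embed(text);
  await index.upsert([{ id, values, metadata: { text } }]);
}

// Recall the memories most similar to the current query.
async function recall(query: string, topK = 3): Promise<string[]> {
  const values = await embed(query);
  const res = await index.query({ vector: values, topK, includeMetadata: true });
  return (res.matches ?? []).map((m) => String(m.metadata?.text ?? ""));
}
```

Recalled snippets can then simply be prepended to the system prompt before the GPT-4 call.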

Thanks to global keyboard shortcuts, I can use predefined snippets wherever I want.
The logic of remote actions is built in make.com (a similar concept to Zapier), so it’s really flexible and relatively easy to set up.
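For example, a snippet bound to a global shortcut can read the clipboard, hand the text to a make.com webhook, and write the result back. A rough sketch, assuming the Tauri v1 JS API; the shortcut, snippet name, and webhook URL are placeholders:

```ts
// Sketch of a global-shortcut snippet in the SvelteKit frontend (Tauri v1 JS API).
// Requires the `globalShortcut` and `clipboard` allowlist entries in tauri.conf.json.
import { register } from "@tauri-apps/api/globalShortcut";
import { readText, writeText } from "@tauri-apps/api/clipboard";

export async function setupSnippetShortcut(): Promise<void> {
  await register("CommandOrControl+Shift+F", async () => {
    const selection = (await readText()) ?? "";

    // Hand the clipboard text to a make.com webhook (placeholder URL) that runs
    // the GPT-4 prompt plus any remote actions, then returns the result.
    const res = await fetch("https://hook.eu1.make.com/xxxxxxxx", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ snippet: "fix-grammar", text: selection }),
    });
    const { output } = await res.json();

    // Put the result back on the clipboard so it can be pasted anywhere.
    await writeText(output);
  });
}
```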

What do you think, guys?


Did you make it? Does it work? If so, is it free? I teach, and like you, I would love ChatGPT to remember my conversations. I also really like the ElevenLabs part because I have a trained model of my voice. I don’t use an Apple Watch or Apple computers, though.


Hi, I don’t want to make assumptions, but given the project he is proposing, I suppose you will have to enter your own API key. You will therefore pay the costs generated by the use of GPT-4. Be careful, as the bill can quickly become quite high, haha; I speak from experience! Have a good day.

Thank you for the fast reply! That’s what I was thinking. If only we could tell it “Only remember this” or “Don’t remember that”, that would be the ideal setup. With no way to set a spending limit, it’s like handing a monkey a loaded gun and waiting for the inevitable: we can’t keep the monkey from playing with the gun, and the bullets (the bills) hit us, causing major damage. Not a positive direction for this to go. If I knew someone coding this, I’d suggest adding a spending limit, with the option to raise it to finish whatever we’re working on.

Did you make it?

Yeah, https://heyalice.app

Does it work?

Yes, but the public version of this app is limited in many ways. Implementing long-term memory and integrations with external services takes time. This is a project I develop for fun, so I have very limited time for adding new features. I’m only adding the ones I consider stable and easy to implement; the most advanced ones are just for me.

I teach

Me too. You can check out my e-book Everyday with GPT-4 on Product Hunt, which will soon receive an update.

I would love ChatGPT to remember my conversations.

I’m a developer, so it’s fairly easy for me to do. For those who can’t code, there is no easy way to achieve this at the moment.

Yep. Setting a spending limit is a super important part of using tools like this app. With GPT-4, costs can really skyrocket in no time.
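For illustration, an app can also guard the budget on its own side by adding up the `usage` reported by (non-streaming) chat completions. A minimal sketch; the per-1K rates below are placeholders, so check OpenAI’s current pricing before relying on them:

```ts
// Illustrative client-side budget guard (not a feature of the app).
const PRICE_PER_1K_PROMPT = 0.03;     // USD, assumed GPT-4 8k prompt rate
const PRICE_PER_1K_COMPLETION = 0.06; // USD, assumed GPT-4 8k completion rate
const BUDGET_USD = 5.0;               // hard cap the user picks

let spentUsd = 0;

// Call this with the `usage` object returned by a non-streaming chat completion.
function trackUsage(usage: { prompt_tokens: number; completion_tokens: number }): void {
  spentUsd +=
    (usage.prompt_tokens / 1000) * PRICE_PER_1K_PROMPT +
    (usage.completion_tokens / 1000) * PRICE_PER_1K_COMPLETION;
}

// Check before sending another request; refuse once the budget is exhausted.
function canSpendMore(): boolean {
  return spentUsd < BUDGET_USD;
}
```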

I’ve been thinking about sharing my own API key, but at this stage, I don’t see a reason to. Especially since the app itself is free. If that changes (if it ever does), I’ll consider different options.

My thoughts: the UI is so low-contrast that I cannot read any of the text/labels.

Thank you for your feedback. I agree, and I’ll fix it in the upcoming updates.

I remember coming across this when you first shared it, and I’m looking forward to seeing the outcome.

Have you continued development on the app? Would love to check it out :slight_smile:

Yeah, I’m still working on it and will release version 2.0 soon. Here are a few screenshots:

And here Grzegorz, with whom I’m developing it, described the latest version in more detail: Rethinking ChatGPT on macOS — Alice 2.0 UI reveal


The implementations of ChatGPT in third-party apps have been poorly executed. While the app itself functions well, our specific requirements have proven to be quite expensive. Fortunately, a viable solution lies in utilizing online tools and AI services together. If you happen to be a creator or UI/UX designer, you can benefit from the seamless integration of Figma and Grammarly, all under a single monthly payment. Additionally, if your work relies heavily on office tools, consider using apps like Office 365 with Copilot. Personally, I have found great satisfaction in having an Assistant on my Mac. Presently, my ChatGPT needs are adequately met by Linguix and Taskade, which offer unlimited usage of GPT-4, complete with GPT agents and bots, all at a reasonable monthly cost. This approach is far superior to using a standalone app and paying OpenAI for API usage. I made the mistake of trying it once for an Alfred workflow, only to face significant financial losses. Needless to say, I will not be returning to that experience. The frustration was so profound that I ultimately switched from Alfred to Raycast (and no, I do not require AI functionality on Raycast).

I appreciate your comment; however, you haven’t had the opportunity to test this application, have you?

While the app itself functions well, our specific requirements have proven to be quite expensive.

I can easily imagine the situation you’re writing about. That’s exactly why I haven’t implemented file processing in Alice yet, for example: I want to refine that functionality so it can balance between models depending on the task.

If you happen to be a creator or UI/UX designer, you can benefit from the seamless integration of Figma and Grammarly, all under a single monthly payment. Additionally, if your work relies heavily on office tools, consider using apps like Office 365 with Copilot. Personally, I have found great satisfaction in having an Assistant on my Mac. Presently, my ChatGPT needs are adequately met by Linguix and Taskade, which offer unlimited usage of GPT-4, complete with GPT agents and bots, all at a reasonable monthly cost. This approach is far superior to using a standalone app and paying OpenAI for API usage.

You’re writing about a completely different use case here. I’m a designer & full-stack developer, and I write a lot (I’m writing this to provide some context).

All the tools mentioned have their applications, and their generative AI features can be useful. We’re on the same page here.

Alice focuses on Snippets you can assign to keyboard shortcuts and Remote Actions you can link with no-code/low-code automation or custom servers. It’s about improving your workflow and connecting multiple apps, services, and tools. This allows you to interact with them through chat or access them from any app on your desktop. The difference here is quite significant.
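To make “custom servers” more concrete: a remote action is essentially a webhook call, so on the receiving end a tiny endpoint is enough. A rough sketch in Node; the payload shape (`{ action, query }` in, `{ output }` out) is an assumption for illustration, not a documented contract:

```ts
// Rough sketch of a custom server handling a "remote action" webhook.
import { createServer } from "node:http";

const server = createServer((req, res) => {
  if (req.method !== "POST" || req.url !== "/remote-action") {
    res.writeHead(404).end();
    return;
  }

  let body = "";
  req.on("data", (chunk) => (body += chunk));
  req.on("end", () => {
    const { action, query } = JSON.parse(body);

    // Here you would call your own APIs, databases, calendar, etc.
    const output = `Received action "${action}" with query "${query}"`;

    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ output }));
  });
});

server.listen(3000);
```

The same endpoint could just as well live in make.com or any other no-code/low-code tool; the app only cares about sending the request and reading the response.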

I made the mistake of trying it once for an Alfred workflow, only to face significant financial losses. Needless to say, I will not be returning to that experience. The frustration was so profound that I ultimately switched from Alfred to Raycast (and no, I do not require AI functionality on Raycast).

Yeah. Sorry to hear that.

So this is like the ZeroWork or ElectroNeek apps, then. Thanks for the explanation. It seems I would need to pay for an external AI API to get the most out of your app, which doesn’t work for my budget right now.