Hi folks,
A while ago, I made a native Windows desktop app that wraps around the OpenAI API.
Link to the project: bawkee/MdcAi: Multipurpose Desktop Chat AI
It kind of flew under the radar since I didn’t promote it much outside of the Microsoft Store, but I believe it has a lot of potential if more people become aware of it.
Here are a few interesting features that make this app stand out:
- Fork conversations with the Edit button – This is a hugely underrated feature in most default web UIs, and I’ve taken it to another level here. Forking a conversation instead of stacking corrections on top of it can save a massive amount of tokens, reduce total context length, and keep the model’s context focused on the relevant branch. I can elaborate further on this if needed – it’s something I feel many people overlook, especially in long or context-rich conversations.
- Proper Markdown rendering – The app renders Markdown correctly, including code syntax, tables, and more. Everything is fully selectable since it uses WebView2 for this task.
- Local data storage – All your chats and data are saved locally in an SQLite database.
- Elegant design – The app is designed to blend seamlessly with Windows 10/11 themes.
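To make the token-saving argument for forking concrete, here is a minimal Python sketch. The message structure mirrors the usual chat-API format, but the `fork` helper and the character-based token estimate are simplified illustrations, not the app’s actual implementation:

```python
# Sketch: a conversation is a list of messages. Forking edits an earlier
# user message and drops the dead branch; stacking keeps piling corrections
# (and the bad answers they refer to) onto the same history.

def fork(messages, index, edited_message):
    """Branch the conversation: keep everything before `index` and
    continue from the edited message instead of the original one."""
    return messages[:index] + [edited_message]

def token_estimate(messages):
    """Crude stand-in for real tokenization: roughly 1 token per 4 chars."""
    return sum(len(m["content"]) for m in messages) // 4

history = [
    {"role": "user", "content": "Write a function that parses ISO dates."},
    {"role": "assistant", "content": "def parse(s): ...  # long answer " * 20},
    {"role": "user", "content": "That throws on timezone-aware strings."},
    {"role": "assistant", "content": "def parse(s): ...  # long answer " * 20},
]

# Stacking: the broken answer stays in context on every future request.
stacked = history + [{"role": "user", "content": "Still broken, try again."}]

# Forking: edit the follow-up question at index 2 and drop everything after it.
forked = fork(history, 2, {
    "role": "user",
    "content": "That throws on timezone-aware strings; "
               "rewrite it using datetime.fromisoformat.",
})

print(token_estimate(stacked), token_estimate(forked))
```

Every subsequent request in the forked branch is billed for the shorter context, and the savings compound with each further turn.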
I’m really interested in feedback. I don’t know how much interest there is in desktop apps these days.
The GitHub page has a list of features I’d love to implement if there’s enough interest.
A few things I’m planning to add:
- Complaint button – Users can “complain” about a given answer (e.g., an incorrect code solution or one that throws an error). They can enter details, and the LLM will generate a new, improved answer, forking the conversation into a new thread while preserving the original. This avoids wasting tokens on stacking bad answers and keeps the context length manageable. It’s a more efficient way to refine responses without starting a new conversation from scratch.
- Script and Python function execution – The API allows for custom function execution (often called “tools” in LLMs). With this app, you could, for example, write a PowerShell function to start/stop services, list top processes, etc. Just write a descriptive name, add comments, define arguments, and drag-drop it into a conversation. The LLM can decide when to run the tool. For instance, you could say, “Also, stop all services starting with ‘Contoso,’” and the LLM would execute the necessary PowerShell function. The same can be done with Python scripts. This is something web clients can’t do, and it opens up possibilities like scheduling tasks, running apps with arguments, or executing Python scripts with structured inputs. In short, it lets you augment anything with an LLM.
- Drag & drop documents – You can add files/documents to conversations without worrying about file size or privacy. The only limits are your RAM and CPU for semantic search and OCR.
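The tool-execution flow described above can be sketched in a few lines of Python. The schema follows the OpenAI function-calling format, but the `stop_services` function, the registry, and the dispatcher are hypothetical illustrations of how a user-supplied script could be wired up, not the app’s actual code:

```python
import json

# Hypothetical local tool: in practice this would wrap a user-supplied
# PowerShell or Python function.
def stop_services(prefix):
    # A real version might run: Get-Service "<prefix>*" | Stop-Service
    return f"Stopped all services starting with '{prefix}'"

# Registry mapping the name the LLM sees to the local callable.
TOOLS = {"stop_services": stop_services}

# Schema sent along with the chat request, built from the function's
# descriptive name, comments, and argument definitions.
stop_services_schema = {
    "type": "function",
    "function": {
        "name": "stop_services",
        "description": "Stop all Windows services whose name starts with a prefix.",
        "parameters": {
            "type": "object",
            "properties": {"prefix": {"type": "string"}},
            "required": ["prefix"],
        },
    },
}

def dispatch(tool_call):
    """Execute a tool call returned by the model and return its result."""
    name = tool_call["function"]["name"]
    args = json.loads(tool_call["function"]["arguments"])
    return TOOLS[name](**args)

# Simulated tool call, shaped like what the model returns when the user
# says "stop all services starting with 'Contoso'":
call = {"function": {"name": "stop_services",
                     "arguments": json.dumps({"prefix": "Contoso"})}}
print(dispatch(call))
```

The model only ever produces the name and JSON arguments; the app decides whether and how to actually run the local function, which is what keeps this safe to expose to scripts on your machine.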
There are quite a few more ideas I have, but I’m not sure how many people would find them useful. The app is completely free and open source.
What do you think?