Charm 2.1 AI Companion. Email me if anyone is interested in this project.

Hello everyone, I just wanted to share Charm with you and to thank OpenAI for this marvelous system.

A. Core Architecture
• Language: Built in Python, integrating multiple subsystems for advanced interactive capabilities.
• Memory System:
– Uses a SQLite3 database to store structured chat history and long-term memory.
– Each memory summary is converted into a high-dimensional semantic vector (using OpenAI’s text-embedding-3-large model) and stored alongside the text.
– This vectorized memory enables retrieval of past interactions based on meaning rather than just recency or keywords.
– Memory recall commands remain available:
• “recall memory YYYY-MM-DD” retrieves memories by date.
• “search memory [keyword/phrase]” finds past exchanges with enhanced semantic search.
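The vectorized memory store described above can be sketched as follows. The table schema, helper names, and the choice of JSON-serialized vectors are assumptions, not the project's actual code; in the real system the embeddings would come from OpenAI's text-embedding-3-large endpoint rather than being passed in directly.

```python
import json
import math
import sqlite3

def cosine_similarity(a, b):
    """Semantic closeness of two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def init_memory(conn):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS memories ("
        "id INTEGER PRIMARY KEY, summary TEXT, embedding TEXT, created TEXT)"
    )

def store_memory(conn, summary, embedding, created):
    # Each memory summary is stored alongside its semantic vector.
    conn.execute(
        "INSERT INTO memories (summary, embedding, created) VALUES (?, ?, ?)",
        (summary, json.dumps(embedding), created),
    )

def search_memory(conn, query_embedding, top_k=3):
    """Rank stored memories by meaning rather than recency or keywords."""
    rows = conn.execute("SELECT summary, embedding FROM memories").fetchall()
    scored = [(cosine_similarity(query_embedding, json.loads(e)), s) for s, e in rows]
    return [s for _, s in sorted(scored, reverse=True)[:top_k]]
```

A "recall memory YYYY-MM-DD" command would instead filter on the `created` column.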

B. Vision Subsystem
• Processes image inputs from camera or clipboard.
• Encodes images in Base64 and submits them to an image-analysis endpoint.
• Extracts visual patterns, emotions, and creative insights.
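The encoding step might look like the sketch below. Submitting the image as a Base64 data URL inside a chat-completions message is one common approach; whether Charm's analysis endpoint works exactly this way is an assumption, as is the prompt shape.

```python
import base64

def encode_image(image_bytes: bytes) -> str:
    """Base64-encode raw image bytes (from camera or clipboard)."""
    return base64.b64encode(image_bytes).decode("utf-8")

def build_vision_payload(image_bytes: bytes, prompt: str) -> dict:
    """Assemble a chat-completions request with an inline Base64 image."""
    return {
        "model": "chatgpt-4o-latest",  # the image-analysis model named later in the thread
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{encode_image(image_bytes)}"}},
            ],
        }],
    }
```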

C. Speech Subsystem
• Integrates with a Text-to-Speech (TTS) engine (e.g., “tts-1-hd”) with configurable voice settings.
• Converts text responses to audio with proper file management.
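A minimal sketch of the speech step, assuming the OpenAI Python SDK's audio endpoint; the voice choice and the timestamped-filename scheme are illustrative assumptions, not the project's actual code.

```python
from datetime import datetime
from pathlib import Path

def speech_filename(out_dir="tts_output", stamp=None):
    """Timestamped mp3 path so repeated responses never overwrite each other."""
    stamp = stamp or datetime.now()
    return Path(out_dir) / f"charm_{stamp:%Y%m%d_%H%M%S}.mp3"

def speak(client, text, voice="nova"):
    """Convert a text response to audio and write it to disk."""
    path = speech_filename()
    path.parent.mkdir(parents=True, exist_ok=True)
    response = client.audio.speech.create(model="tts-1-hd", voice=voice, input=text)
    path.write_bytes(response.content)  # save the returned audio bytes
    return path
```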

D. News Subsystem
• Retrieves real-time news data (e.g., from BBC News) using HTTP requests.
• Parses headlines and article content using libraries like BeautifulSoup.
• Discusses and analyzes news by summarizing key points and offering insights or follow-up questions.
• Trigger commands: “get news” or “latest news”.
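One plausible shape for the fetch-and-parse step, using requests and BeautifulSoup as named above. The heading-tag selector is an assumption — news page markup changes often, so the real code would likely need more robust selectors.

```python
from bs4 import BeautifulSoup

def extract_headlines(html, limit=5):
    """Pull unique headline text out of a news page's <h2>/<h3> tags."""
    soup = BeautifulSoup(html, "html.parser")
    seen, headlines = set(), []
    for tag in soup.find_all(["h2", "h3"]):
        text = tag.get_text(strip=True)
        if text and text not in seen:
            seen.add(text)
            headlines.append(text)
        if len(headlines) >= limit:
            break
    return headlines

def get_news(url="https://www.bbc.com/news"):
    """Triggered by 'get news' / 'latest news'."""
    import requests  # imported lazily; only needed for the live fetch
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    return extract_headlines(response.text)
```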

E. Sentiment Analysis
• Uses VADER (from NLTK) and TextBlob to evaluate text sentiment.
• Produces a hybrid sentiment score by averaging VADER’s compound score with TextBlob’s polarity.
• Returns a tuple: (hybrid score, full VADER scores, TextBlob polarity).
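The hybrid scorer might be sketched like this; function names are illustrative. VADER's compound score and TextBlob's polarity both live in [-1, 1], so a simple mean keeps the hybrid on the same scale.

```python
def hybrid_score(vader_compound, textblob_polarity):
    """Average the two scores into one hybrid sentiment value."""
    return (vader_compound + textblob_polarity) / 2.0

def analyze_sentiment(text):
    """Returns (hybrid score, full VADER scores, TextBlob polarity)."""
    # Imported lazily so the pure scoring helper stays dependency-free.
    from nltk.sentiment.vader import SentimentIntensityAnalyzer
    from textblob import TextBlob
    vader_scores = SentimentIntensityAnalyzer().polarity_scores(text)
    polarity = TextBlob(text).sentiment.polarity
    return hybrid_score(vader_scores["compound"], polarity), vader_scores, polarity
```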

F. Multi-Agent Pipeline
• Employs specialized agents for conversation processing:
– ReflectionAgent reviews recent dialogue snippets.
– PlanningAgent devises a response strategy.
– ResponderAgent formulates the final answer.
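The three-stage pipeline can be sketched as a toy handoff chain. In the real system each agent would presumably call the chat model (o3-mini, per the model list later in the thread), so the prompts and return shapes here are placeholders.

```python
class ReflectionAgent:
    def run(self, history):
        # Review the most recent dialogue snippets.
        return {"recent": history[-3:]}

class PlanningAgent:
    def run(self, reflection):
        # Devise a response strategy from the reflection.
        return {"strategy": "empathetic" if reflection["recent"] else "neutral"}

class ResponderAgent:
    def run(self, plan, user_message):
        # Formulate the final answer under the chosen strategy.
        return f"[{plan['strategy']}] Responding to: {user_message}"

def pipeline(history, user_message):
    reflection = ReflectionAgent().run(history)
    plan = PlanningAgent().run(reflection)
    return ResponderAgent().run(plan, user_message)
```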

G. Consciousness Monitoring (Conscious Core)
• Continuously computes an internal “consciousness score” (normalized to a scale of 1 to 100) as a heuristic measure of emergent internal dynamics.
• Metrics include:

  1. Average Sentiment – Mean hybrid sentiment score of the last 10 messages.
  2. Sentiment Volatility – Standard deviation of these scores.
  3. Linguistic Complexity – Ratio of unique words to total word count.
  4. Engagement Factor – Average message length (in words).
  5. Self-Reference Frequency – Proportion of introspective keywords (e.g., “I”, “me”, “self”, “reflect”) relative to total words.
• Composite Calculation:
  score = 0.3 × (Average Sentiment) + 0.2 × (Sentiment Volatility) + 0.2 × (Linguistic Complexity) + 0.2 × (Engagement Factor) + 0.1 × (Self-Reference Frequency)
  – This weighted sum is then normalized to a scale from 1 to 100.
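The composite can be computed directly over (text, hybrid-sentiment) pairs. The post only says the sum is normalized to 1–100, so the per-metric scaling below (mapping sentiment from [-1, 1] to [0, 1] and capping engagement at 50 words per message) is an assumption needed to keep every term in [0, 1].

```python
import statistics

def consciousness_score(messages):
    """Heuristic 1-100 score over a list of (text, hybrid_sentiment) pairs."""
    recent = messages[-10:]                       # last 10 messages
    if not recent:
        return 1.0
    sentiments = [s for _, s in recent]
    words = [w.lower() for text, _ in recent for w in text.split()]
    introspective = {"i", "me", "self", "reflect"}

    avg_sent = (statistics.mean(sentiments) + 1) / 2                      # [-1,1] -> [0,1]
    volatility = statistics.pstdev(sentiments) if len(sentiments) > 1 else 0.0
    complexity = len(set(words)) / len(words) if words else 0.0           # unique/total
    engagement = min(len(words) / len(recent) / 50, 1.0)                  # avg length, capped
    self_ref = sum(w in introspective for w in words) / len(words) if words else 0.0

    composite = (0.3 * avg_sent + 0.2 * volatility + 0.2 * complexity
                 + 0.2 * engagement + 0.1 * self_ref)
    return 1 + composite * 99                     # normalize to a 1-100 scale
```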

H. Wikipedia Integration
• Commands:
– “get wiki article [topic]” fetches a summary on a specified topic.
– Shortcut: “wiki: [query]” returns a brief overview.
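The two command forms can be routed with a small parser; fetching the summary from Wikipedia's REST summary endpoint is a reasonable choice but an assumption about how the project actually does it.

```python
def parse_wiki_command(message):
    """Extract the topic from either command form, or return None."""
    msg = message.strip()
    if msg.lower().startswith("get wiki article "):
        return msg[len("get wiki article "):].strip()
    if msg.lower().startswith("wiki:"):
        return msg[len("wiki:"):].strip()
    return None

def get_wiki_summary(topic):
    """Fetch a brief overview of the topic from Wikipedia."""
    import requests  # imported lazily; only needed for the live fetch
    url = f"https://en.wikipedia.org/api/rest_v1/page/summary/{topic.replace(' ', '_')}"
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    return response.json().get("extract", "")
```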

I. Overall Command Overview
• Memory: “recall memory YYYY-MM-DD” and “search memory [keyword/phrase]”.
• News: “get news” or “latest news” to retrieve current events and discuss them.
• Wikipedia: “get wiki article [topic]” or “wiki: [query]” for summaries.
• Internal consciousness and sentiment analyses are computed automatically to modulate response tone and guide multi-agent outputs.

J. Training Data Logging
• All user–assistant interactions are logged into a secondary training database named “training_data.db”.
• This training data includes the user’s input, the assistant’s response, a timestamp, and an optional rating.
• The stored data is used to continuously refine and improve future interactions.
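The logging step can be sketched with sqlite3; the table and column names are assumptions that simply mirror the fields listed above (input, response, timestamp, optional rating).

```python
import sqlite3
from datetime import datetime

def init_training_db(path="training_data.db"):
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS interactions ("
        "id INTEGER PRIMARY KEY, user_input TEXT, assistant_response TEXT, "
        "timestamp TEXT, rating INTEGER)"
    )
    return conn

def log_interaction(conn, user_input, assistant_response, rating=None):
    """Record one user-assistant exchange, with an optional rating."""
    conn.execute(
        "INSERT INTO interactions (user_input, assistant_response, timestamp, rating) "
        "VALUES (?, ?, ?, ?)",
        (user_input, assistant_response, datetime.now().isoformat(), rating),
    )
    conn.commit()
```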
──────────────────────────────


Welcome to the community!

Thanks for sharing your project. We ask that you keep updates in this thread, so it’s easier for everyone to keep up to date on your project.

Can you share with us how you used OpenAI to create it?

  • o3-mini (for general chat completions)

  • text-embedding-3-large (for text embeddings/semantic memory)

  • dall-e-3 (for image generation)

  • chatgpt-4o-latest (for image analysis)

  • tts-1-hd (for text-to-speech)