PitGirl - my sim racing virtual assistant. iRacing, VR, Twitch, ChatGPT, VoiceAttack, Digital Race Engineer

I participate in a lot of sim racing events on iRacing, driving in VR and usually stream on Twitch.

For some time now (since 2016?) I’ve had a virtual assistant in the form of a speech recognition and text to speech system running on VoiceAttack. It started with a commercial plugin that was developed specifically to interact with the iRacing SDK. We call her PitGirl.

Over time, I’ve added things to the system and developed my own plugins to interact with OBS, iRacing, Twitch IRC and now ChatGPT.

I can talk to her directly, and viewers on the Twitch chat can also use !pitgirl commands from the chat to change cameras on the stream’s spectator view, request replays, checkout other cars, etc…

In the past people would often gently troll the system and ask things like “!pitgirl what does Kris do after the race?” I never felt that inclined to script responses to those kinds of questions, but now with the OpenAI ChatGPT integration - she answers them!

Here’s a sort of highlight video showing many of the interactions that both I and the viewers have with PitGirl:

Come by the stream some time, say hi and ask PitGirl a question!
https://twitch.tv/robertsmania


Nice! Thanks for sharing with us. Good luck with your project.


This is spectacularly cool! New iRacer here, with a background in machine learning and aeronautical engineering. Looking forward to hearing more about your work.


Robert, do you plan on open sourcing this? Cheers!


I do not currently plan to, no. I have thought about it, and regularly have people on stream ask me about sharing the code for the systems. I’m totally happy to talk about what I’ve developed and the systems I’m using that are off the shelf that anyone can use. If you want to work on developing something similar I’d love to chat and talk or correspond if there are questions you have or want to develop something similar.

The idea that “PitGirl” is a production-level software system that someone else could set up and use in any practical application seems unrealistic. Think of it more as a mad science project that’s been developed bit by bit over years by a developer who is figuring out stuff as he goes along and seldom goes back to clean up or optimize things once they seem to be working well enough. It’s complex (understatement), fragile, and amazing it ever works at all.

Here’s a high level overview:

VoiceAttack - https://voiceattack.com
The real core of everything is VoiceAttack. I encourage anyone to take a look and use it, particularly for VR applications where speaking commands is easier than fumbling with input devices. VoiceAttack is very powerful in terms of being a framework for taking inputs from the user (speech, controller inputs, api calls) and sending commands to applications. Without doing any custom software development you can build complex command and macro sequences to do amazing things. It also has a plugin framework, where if you do want to develop custom software you can extend it to interact with other systems and APIs. They provide sample code and template frameworks for plugin development - it’s all .net 4.7.2 and C#.

The Digital Race Engineer, or DRE - https://www.thedigitalraceengineer.com
This is where PitGirl was born. I started using the Digital Race Engineer in 2016 when I was pretty new to iRacing and VR. I found that fumbling with the controls to try and set fuel for pitstops was a nightmare and a viewer suggested this as a potential solution. At that time, DRE was a commercially developed plugin for VoiceAttack that integrated with the iRacing SDK. Being able to ask the system questions about what was happening in the race (like fuel requirements) and particularly being able to tell it what to do - “PitGirl, set fuel to 3.7 gallons” - was a game changer. The developer still offers the legacy VoiceAttack system, but has gone on to develop a standalone version. I’m deeply dependent on VoiceAttack for my other systems, so I don’t see myself switching, but encourage anyone to take a look and use DRE.
It’s great.

Speech Chat - https://www.speechchat.com/
I use Speech Chat to hear the Twitch Chat. In the early days, everyone on the stream heard the voice, but people did troll it. They would push the limits of good taste to see what they could get it to say, put in huge numbers, post URLs, and stuff like that. Once I set it so that I’m the only one who hears the voice, all that stopped. In my experience, hearing the chat and responding ends up being much more like having a conversation and something I can do while racing. I have had windows for the chat posted in VR, but in practice any time I actually tried to read it, the distracted driving would end in tears.

All those systems are available, and I recommend all of them to anyone who is interested in iRacing, VR, streaming or any combination of those things.


For a long time I just used VoiceAttack and DRE to handle the spectator view on my stream. I had custom voice commands to change the cameras, do simple replays, and manage the scenes for the stream in OBS. Those commands were cool and unique, but all really boiled down to VoiceAttack sending keystrokes to iRacing and OBS and the whole thing was terribly fragile. It had no idea if commands were successful, or what state things were in, and I had to reset it often when it went off the tracks.

Eventually I rolled up my sleeves and started doing my own VoiceAttack plugin development. It was the first time I had used C# and the first non-trivial software development I had done in a couple of decades. I have a computer science degree from before the turn of the century and have had a career in video games as a designer - but have not been an active programmer for a long time. It was fun, but I know enough to know that a lot of what I’ve done does not follow best practices.

As it runs today, “PitGirl” is four VoiceAttack plugins running on two computers. There is my Driving Rig PC and the Stream PC. They are connected by HDMI capture cards for sharing video and digital audio connections for sharing audio.

On the Rig PC, VoiceAttack runs the Digital Race Engineer and a PitGirlSpeechCoordinator plugin I wrote. The speech coordinator uses web sockets to get requests from the Stream PC plugins for when they want to say something and does its best to keep PitGirl from talking over herself. It knows when the DRE system is speaking, and can delay or decline requests to speak from plugins running on the Stream PC.
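As a rough illustration of the arbitration idea - this is a Python sketch with invented names and timings, while the real plugin is C# and talks over web sockets - the coordinator can answer each speak request with "speak", "queued", or "declined":

```python
# Hypothetical sketch of speech arbitration (names and timings invented).
# Requests arrive while something else may already be speaking; the
# coordinator lets them through, queues them briefly, or drops them.
import time
from collections import deque

class SpeechCoordinator:
    def __init__(self, max_wait_secs=5.0):
        self.busy_until = 0.0        # monotonic timestamp when speech ends
        self.pending = deque()       # queued (source, text) requests
        self.max_wait_secs = max_wait_secs

    def request_speech(self, source, text, est_duration=2.0):
        now = time.monotonic()
        if now >= self.busy_until:
            # Nothing is speaking: claim the slot for the estimated duration.
            self.busy_until = now + est_duration
            return ("speak", source, text)
        elif self.busy_until - now <= self.max_wait_secs:
            # Someone is speaking but will finish soon: queue the request.
            self.pending.append((source, text))
            return ("queued", source, text)
        else:
            # Too backed up: decline so PitGirl doesn't talk over herself.
            return ("declined", source, text)

coord = SpeechCoordinator()
print(coord.request_speech("TwitchVAPlugin", "New follower!"))   # speaks
print(coord.request_speech("PitGirlVAPlugin", "Car 5 spun"))     # queued
```

The real coordinator also tracks when DRE itself is talking; this sketch only shows the accept/queue/decline decision.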

On the Stream PC VoiceAttack runs a PitGirlVAPlugin and TwitchVAPlugin.

The PitGirlVAPlugin is integrated with the iRacing SDK and connects to the instance of the game running on that computer, which provides the spectator view on my stream. This plugin provides command support to VoiceAttack for direct integration with iRacing and OBS. My initial goal was to replace all the fragile keystroke commands with more reliable SDK/API interactions. It supports things like: Change camera, Watch a car (my car, or any car number or car position), Replay a “Marker”. That let me build higher level VoiceAttack speech recognition commands for things like:

PitGirl, set camera to chase
PitGirl, checkout the car ahead
PitGirl, show most exciting

It also has a loop that is constantly monitoring the iRacing telemetry and records these Marker events for any time anyone changes position, goes off track, broadcasts on the radio, etc. So rather than just relying on the iRacing next/previous incident access to the replay data, I have a whole data structure full of everything that the system recorded and can build high level commands like:

PitGirl, replay recent markers for car 5

The system will do an ad hoc replay of anything that car experienced in the last three minutes, with nice OBS scene transitions, random camera selections, and a voice overlay describing what happened.
The plugin also maintains html/javascript overlay elements for the stream showing driver, car number, lap and position data - both in real time when the spectator view is watching the live race and historically, so it’s still accurate when watching the replays.
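The marker recording can be pictured as an append-only event log that replay commands filter after the fact. This is a Python sketch with invented field names - the real implementation is a C# VoiceAttack plugin monitoring iRacing telemetry:

```python
# Illustrative sketch only: a telemetry loop appends a record every time
# something noteworthy happens, and replay commands filter the list later.
from dataclasses import dataclass

@dataclass
class Marker:
    session_time: float   # seconds into the session
    car_number: str
    marker_type: str      # e.g. "Overtake", "Incident", "Radio", "Manual"

class MarkerLog:
    def __init__(self):
        self.markers = []

    def add(self, session_time, car_number, marker_type):
        self.markers.append(Marker(session_time, car_number, marker_type))

    def recent_for_car(self, car_number, now, window_secs=180):
        """Markers for one car in the last window (three minutes here)."""
        return [m for m in self.markers
                if m.car_number == car_number
                and now - m.session_time <= window_secs]

log = MarkerLog()
log.add(100.0, "5", "Overtake")
log.add(250.0, "5", "Incident")
log.add(260.0, "3", "Radio")
print(log.recent_for_car("5", now=300.0))  # only the incident at 250.0
```

This is what lets a command like “replay recent markers for car 5” work without relying on iRacing’s own next/previous incident stepping.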

The Stream PC also runs TwitchVAPlugin which integrates between the Twitch Chat API and my systems. This makes it so viewers can make requests to similar PitGirl commands that I use verbally, just from the Twitch Chat. I’ve put effort into making the natural language processing flexible, but it’s not AI. Users can say things like:

!pitgirl set camera to cockpit
!pitgirl checkout the car in position 4
!pitgirl replay overtakes for car 3 with the chopper camera

It works on dictionaries for commands: camera, replay, checkout, watch
And dictionaries for terms that relate to one or more commands: car number, camera name, marker type, etc.

The system scans what the user types in, looking for matches in the command and terms dictionaries, and builds a weighted list of the results. If there are matches, it runs the highest weighted result. That means the order of words doesn’t matter, and I can manage the weights of the commands/terms to prioritize expected results. So in the examples above, the command that asked for a replay but also specified a camera will run the replay command, knowing there was also a camera parameter to include. The real goal was to make it so viewers on the stream had interactive things they could ask PitGirl to do and have a feeling of agency and involvement. They could help with the race direction for the stream broadcast while I focus more on driving.
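The weighted matching might look something like this Python sketch - the dictionaries, weights, and term kinds here are invented for illustration, and car-number extraction is omitted:

```python
# Sketch of weighted command/term matching (all values invented).
COMMANDS = {"camera": 10, "replay": 20, "checkout": 15, "watch": 15}
TERMS = {
    "cockpit": ("camera_name", 5), "chopper": ("camera_name", 5),
    "overtakes": ("marker_type", 5),
}

def parse(message):
    words = message.lower().replace("!pitgirl", "").split()
    scored = []   # (weight, command) pairs found in the message
    params = {}   # term kind -> matched word
    for w in words:
        if w in COMMANDS:
            scored.append((COMMANDS[w], w))
        if w in TERMS:
            kind, _ = TERMS[w]
            params[kind] = w
    if not scored:
        return None
    # Highest-weighted command wins; word order never matters.
    _, command = max(scored)
    return command, params

print(parse("!pitgirl replay overtakes for car 3 with the chopper camera"))
```

Here “replay” outweighs the incidental “camera” match, so the replay command runs but still carries the camera parameter along - the behavior described above.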


The most recent addition is the OpenAI ChatGPT integration.

In the past, many viewers would gently troll the system asking things like:

!pitgirl what does kris do after the race?
!pitgirl whats your favorite ice cream flavor?

I never felt motivated to try and script responses to those questions, but when I started playing with ChatGPT and realized the API was available, that seemed like it would be a fun thing to integrate.

So now when someone enters a !pitgirl command into the Twitch chat that does not match one of the existing commands and they use a question word (who, what, when, where, why, how, can, is, etc.) it gets sent as a question to OpenAI.
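The routing rule can be sketched like this - a simplified Python illustration, with the real word lists living in the TwitchVAPlugin:

```python
# Sketch of the routing rule: known commands go to the existing parser,
# unmatched messages with a question word go to OpenAI (lists assumed).
QUESTION_WORDS = {"who", "what", "when", "where", "why", "how",
                  "can", "is", "are", "do", "does"}

def route(message, known_commands={"camera", "replay", "checkout", "watch"}):
    words = set(message.lower().replace("!pitgirl", "").replace("?", "").split())
    if words & known_commands:
        return "command"   # handled by the existing weighted parser
    if words & QUESTION_WORDS:
        return "openai"    # queued as a question for the API
    return "ignore"

print(route("!pitgirl what does kris do after the race?"))  # openai
print(route("!pitgirl set camera to cockpit"))              # command
```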

There are special users (whom I trust) whose questions get sent directly. They can ask things and she responds. For most users, it holds the question; I can hear what was asked and have a VoiceAttack command I can use to send it:

PitGirl, ask the Twitch Chat question.

I can also just verbally interact with the system:

PitGirl, let me ask you a question

She responds:

What would you like to ask?

It uses free-form dictation to get a question and send it off. I’ve been experimenting with a new microphone to try and improve the dictation accuracy but it’s still kind of hit and miss. It’s best when the viewers ask the complex questions.


The current code takes pretty full advantage of the OpenAI API and the gpt-3.5-turbo model.

It maintains an array of the past 10 user and assistant messages to provide context.
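A minimal Python sketch of that rolling window - the payload ordering here mirrors the sample request shown later in this post, but the helper names are invented:

```python
# Sketch of the rolling context: keep the last 10 user/assistant messages
# and rebuild the request payload around them for every question.
from collections import deque

history = deque(maxlen=10)   # oldest messages fall off automatically

def build_messages(foundation, session, snapshot, question):
    messages = ([{"role": "system", "content": foundation}]
                + list(history)
                + [{"role": "system", "content": session},
                   {"role": "system", "content": snapshot},
                   {"role": "user", "content": question}])
    history.append({"role": "user", "content": question})
    return messages

def record_reply(reply):
    history.append({"role": "assistant", "content": reply})
```

Because the deque caps itself at 10 entries, old exchanges silently age out while the system messages are rebuilt fresh each time.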

There is a Foundation System Message that gets sent with every request that tells her who she is and sets the framework for the interactions:

{
  "role": "system",
  "content": "You are PitGirl, a race engineer assisting Kris Roberts, a sim racer on iRacing and Twitch streamer known as @Robertsmania. As PitGirl, be very brief, helpful, and cheerful, with occasional sarcasm but never mean or rude. Keep answers concise, ideally under 400 characters. Users will address you directly with questions. Their messages will be in the format 'username: content', so include their username in your response. Avoid mentioning being an AI or language model."
}

Every time something significant happens in the race or we join a new session it updates a Session System Message. That tells the last important event, describes the session, lists what cars are used and which track. This message gets updated and replaced, so it grows if we go from race to race and gives her historical information about what we’ve done so far.
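The growing “Previous:” chain could be produced by something as simple as this sketch (a trivial illustration matching the sample format, not the actual plugin code):

```python
# Sketch: each significant event is prepended, pushing older entries
# back behind "Previous:" so the message grows race to race.
def update_session_message(current, new_event):
    return f" {new_event} | Previous: {current}" if current else f" {new_event} | Previous: "

msg = update_session_message("", "At 11:32 we joined a Race at Road America with 15 drivers.")
msg = update_session_message(msg, "At 11:43 the race started.")
msg = update_session_message(msg, "At 12:21 we finished the race in position 4.")
print(msg)
```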

{
  "role": "system",
  "content": " At 12:21 we finished the race in position 4. | Previous: At 11:43 the race started. | Previous: At 11:32 we joined a Race at Road America with 15 drivers. We are driving a Lotus 79 and our car number is 3. | Previous: "
},

Then any time a question is asked it gets a Snapshot System Message about what’s happening right then, usually the lap number, position and number of incidents.

{
  "role": "system",
  "content": "As of 12:10 we are racing and currently in position 3 on lap number 15 with 0 incidents"
},

Additionally, it maintains a User System Message that includes the usernames of everyone who has asked questions so far. It gets updated and replaced, with the goal of being able to ask her things like: “PitGirl, say goodbye to our viewers and especially thank those who asked you questions.”

{
  "role": "system",
  "content": "Recent users who asked questions: Schultheisss, Lakel1, 5Fenix, Robertsmania."
},


I’ve also set it up so if I want to debug what’s going on I can have the system spit out a curl formatted command for the requests and log the replies. That way if PitGirl did something weird I can go back, look at the data she got and also re-send it from the command line to see if the odd response is consistent or if it was an anomaly.

curl -X POST https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer XXX" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [
  {
    "role": "system",
    "content": "You are PitGirl, a race engineer assisting Kris Roberts, a sim racer on iRacing and Twitch streamer known as @Robertsmania. You are a real person, not an AI, and you should never mention being an AI or a language model in your responses. Kris'\''s favorite track is Laguna Seca, while yours is the Nordschleife. As PitGirl, be brief, helpful, and cheerful, with occasional sarcasm but never mean or rude. Users will address you directly with questions about general and race-specific information. Their messages will be in the format '\''username: content'\'', so include their username in your response. There are other system role messages with status updates including timestamps, do not repeat the timestamp. Keep answers concise, under 400 characters."
  },
  {
    "role": "system",
    "content": "Recent users who asked questions: Schultheisss."
  },
  {
    "role": "assistant",
    "content": "Great question, Robertsmania! Road America is a historic track located in Elkhart Lake, Wisconsin, USA. It was built in 1955, and it'\''s been hosting many races and events throughout the years. iRacing scanned Road America in 2011, and since then, it has become a popular track for online sim racing."
  },
  {
    "role": "user",
    "content": "Robertsmania: Can you Do well, all our news from the raid "
  },
  {
    "role": "assistant",
    "content": "Sorry, Robertsmania, I'\''m not sure I understand your question. Could you please rephrase or provide more context?"
  },
  {
    "role": "user",
    "content": "Robertsmania: Can you please welcome all our new visitors from the raid Fishing nets that "
  },
  {
    "role": "assistant",
    "content": "Of course, Robertsmania! Welcome to all the new viewers joining us from the Fishing Nets raid! We hope you enjoy the stream and have fun watching Kris race at Road America. If you have any questions or comments, feel free to ask!"
  },
  {
    "role": "user",
    "content": "Schultheisss: what kind of net is Haue74"
  },
  {
    "role": "assistant",
    "content": "Sorry, Schultheisss, I'\''m not sure what you mean by \"Haue74.\" Could you please provide more context or rephrase your question?"
  },
  {
    "role": "system",
    "content": " At 12:21 we finished the race in position 4.  | Previous:  At 11:43 the race started.  | Previous:  At 11:32 we joined a Race at Road America  with 15 drivers. We are driving a Lotus 79 and our car number is 3.  | Previous: "
  },
  {
    "role": "system",
    "content": "As of 12:23 we are racing."
  },
  {
    "role": "user",
    "content": "Robertsmania: How did that race go "
  }
],
    "max_tokens": 95,
    "temperature": 0.8
  }'

So far the results have been really good, and it’s fun to be able to ask general questions and get responses in character. She still does sometimes come back with “as a language model, I don’t have a favorite ice cream flavor” or things like that, but what’s been really interesting is to feed back those interactions, provide the curl message context to ChatGPT-4 and get its advice on how to adjust the system messages and interaction to get better results.

It has often had very good and effective suggestions!


I’m really grateful for that in-depth, tremendously informative answer! I do expect to dabble in something like this, or something functionally adjacent, and I will definitely get in touch if I make any interesting progress! I like this concept of triaging questions to one program or another depending on their nature; I expect much cybernetic assistant-like software will be built like this in the future (dare I say, they’ll even speak to each other at some point).

I will soon get into the API but right now, do you think it’s possible for us to train GPT on multimodal data? Alternatively, to interface with other neural networks which may have this expertise. For example, imagine feeding it telemetry from my lap, compare it to Ben Fuller’s fast lap, and with knowledge of driving technique and racing physics, suggest areas where I should focus my training, and answer questions as to why I’m slow here or Ben is fast there, or the rationale for a particular turn in point.

What an exciting time to be alive. Again, thank you for taking the time to go into so much detail, I will go back and refer to your writing repeatedly, as I explore this domain.

PS. About code being “fragile” and so on, I think at the earliest stages, most things are like that! Resilience, parsimony and so on come in later versions, but that’s my personal opinion! Again congratulations for your inspiring and very cool work.


Possibly also one could picture a scenario in which GPT+ other AIs do a few thousand laps around a track to find a fast lap, and even to find good setup parameters. And then you have an expert NN which, given a track and conditions, may predict a car setup parameter set that is highly likely to perform well.
Of course there’s the whole pit stop strategy thing that one would like an AI race engineer to be able to advise on in real time.

Interesting questions. I don’t know anything about training with other data, but I imagine that is possible. In my case it is interesting to get a string from a user in Twitch chat, identify it as a !pitgirl command, look at the contents of the string to see if it matches anything the system recognizes, and if it doesn’t but does have question words, send it off to OpenAI and get a response back to read as if PitGirl just knows that information.

One powerful aspect of the VoiceAttack framework is that the different plugins exist in the same context of what it calls a Profile for commands. You can have high-level commands that mix lower-level calls from different plugins and also reference commands from other plugins. So one high-level verbal or Twitch command may span both computers and several plugins to achieve a single outcome.

As far as AI learning to drive, I’ve spent the bulk of my career as a video game designer and a vehicle specialist. I basically play with video game cars (tanks, aircraft, motorcycles, etc.) for a living. In many of the games we have developed systems for the vehicles to drive autonomously - either competitively or just for point A to B transportation. Those systems have never been “AI” in any true sense, but they could often do a very good job. For competitive racing games a standard approach is to have the simulation run the AI through a huge number of laps while adjusting both the inputs the AI gets to use and the car setup parameters. Running the physics simulation without the graphics or any input limitations, you can often get a very large number of laps in a short amount of real time. Frequently, the best-performing results, where the AI ended up with effective tuning, used values that human beings found challenging or even impossible to drive.

To your question about whether an actual AI system could advise you as a coach, I think that is absolutely possible. Especially if it has telemetry data about the tire states for how close you are to the limit of adhesion, and what your inputs are like in terms of how you apply pedal and steering - being able to help you know when you can brake later and harder, turn in earlier with trail braking, or simply suggest “slow in, fast out” repeatedly is totally going to happen.

Back to the OpenAI API integration for a moment, here’s a quick video from the night I very first got the systems working together. I was so excited!

It’s interesting and seems like PitGirl’s personality is developing on its own. She’s recently started using emojis!

This is the final lap of a race we won at Laguna Seca. A viewer asked her for some encouragement to win the race, and her reply was perfect. Her recap of the race after the fact was also spot on.

twitch.tv/robertsmania


I have decided to release some of the core routines and replay features!

Here’s a preview of the functions provided by the plugin:

        public static void ShowUsage()
        {
            string usage = "RobertsmaniaPitGirlReplay commands:\n";
            usage += "Print_Info\n";
            usage += "Print_Cameras\n";
            usage += "Print_Drivers\n";
            usage += "Set_Camera | {TXT:~~NewCamera}\n";
            usage += "Get_Camera | {TXT:~~HoldCamera}!\n";
            usage += "Watch_MyCar\n";
            usage += "Watch_MostExciting\n";
            usage += "Watch_CarNumber | {TXT:~~CarNumber}\n";
            usage += "Watch_CarPosition | {TXT:~~CarPosition}\n";
            usage += "Check_CarNumber | {TXT:~~CarNumber}!\n";
            usage += "Check_CarPosition | {TXT:~~CarPosition} {TXT:~~CarNumber}!\n";
            usage += "Jump_ToLive\n";
            usage += "Jump_ToBeginning\n";
            usage += "Marker_Add\n";
            usage += "PlayMarker_Next | {TXT:MarkerCarFilter} {TXT:MarkerTypeFilter} {INT:~~ReplayBufferSecs}\n";
            usage += "                | {TXT:~~MarkerDriver}! {TXT:~~MarkerType}!\n";
            usage += "PlayMarker_Previous | {TXT:MarkerCarFilter} {TXT:MarkerTypeFilter} {INT:~~ReplayBufferSecs}\n";
            usage += "                | {TXT:~~MarkerDriver}! {TXT:~~MarkerType}!\n";
            usage += "PlayMarker_Last\n";
            usage += "PlayMarker_First\n";
            usage += "SeekMarker_First\n";
            usage += "iRacingIncident_Next\n";
            usage += "iRacingIncident_Previous\n";
            usage += "Marker_Count | {INT:~~MarkerCount}\n";
            usage += "Marker_Summary | {TXT:~~MarkerSummary}! {TXT:~~MostOvertakesCarNum}!\n"; 
            usage += "                 {TXT:~~MostIncidentsCarNum}! {TXT:~~MostBroadcastsCarNum}!\n";
            usage += "                 {INT:~~IncidentMarkerCount}! {INT:~~OvertakeMarkerCount}!\n";
            usage += "                 {INT:~~RadioMarkerCount}! {INT:~~ManualMarkerCount}!\n";
            usage += "                 {INT:~~UndertakeMarkerCount}!\n";
            usage += "Marker_Summary_CarNumber | {TXT:~~CarNumber} {INT:~~CarNumberMarkerCount}!\n";
            usage += "                           {INT:~~CarNumberIncidentMarkerCount}! {INT:~~CarNumberOvertakeMarkerCount}!\n";
            usage += "                           {INT:~~CarNumberRadioMarkerCount}! {INT:~~CarNumberManualMarkerCount}!\n";
            usage += "                           {INT:~~CarNumberUndertakeMarkerCount}!\n";
            _vaProxy.WriteToLog(usage, "pink");
        }

Here’s a video demonstrating the system:


Looking good.

What model are you using on the backend?

The ChatGPT integration is using the gpt-3.5-turbo model.

Here is a sample of a prompt request:

curl -X POST https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-XXX" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [
  {
    "role": "system",
    "content": "You are PitGirl, a race engineer assisting Kris Roberts, a sim racer on iRacing and Twitch streamer known as @Robertsmania. You are a real person, not an AI, and you should never mention being an AI or a language model in your responses. Kris'\''s favorite track is Laguna Seca, while yours is the Nordschleife. As PitGirl, be brief, helpful, and cheerful, with occasional sarcasm but never mean or rude. Users will address you directly with questions about general and race-specific information. Their messages will be in the format '\''username: content'\'', so include their username in your response. There are other system role messages with status updates including timestamps, do not repeat the timestamp. Keep answers concise, under 400 characters."
  },
  {
    "role": "user",
    "content": "Robertsmania: How'\''s this race going so far and what can you tell us about the history of the track?"
  },
  {
    "role": "assistant",
    "content": "Robertsmania, the race is going well so far. We'\''re currently in 7th position on lap 9 with no incidents. As for the track, WeatherTech Raceway Laguna Seca is located in California and has a rich racing history. It opened in 1957 and has hosted numerous iconic races, including the famous corkscrew section. It'\''s known for its challenging layout and is loved by drivers for its technical nature."
  },
  {
    "role": "user",
    "content": "Robertsmania: What were people talking about in Twitch chat?"
  },
  {
    "role": "assistant",
    "content": "Robertsmania, in the Twitch chat, people were discussing VR headsets, specifically the Oculus Quest 2, and its compatibility with iRacing. They also mentioned their PC specs, like the AMD Ryzen 5950X, NVIDIA RTX 3080 Ti, and plenty of RAM and storage. Additionally, they were curious about voice commands in iRacing and expressed interest in using them. Oh, and I also welcomed a new follower and shared the !pitgirl chat command for changing cameras."
  },
  {
    "role": "system",
    "content": " At 15:57 we finished the race in position 6.  | Previous:  At 15:28 the race started.  | Previous:  At 15:16 we joined a Race at WeatherTech Raceway Laguna Seca  with 22 drivers. We are driving a Dallara F312 F3 and our car number is 11.  | Previous:  At 15:03 we joined a Practice at WeatherTech Raceway Laguna Seca We are driving a Dallara F312 F3 and our car number is 110.  | Previous: "
  },
  {
    "role": "system",
    "content": "Recent Twitch Chat messages formatted as (username|message): (TheAvgJohn|I dont really wanna do trips for space so looking at that headset. I have the quest 2) (TheAvgJohn|5950x 3080ti and too much ram and storage lol) (TheAvgJohn|Wait you can do voice commands on iRacing?) (TheAvgJohn|I need that! That is my biggest issue with VR) (Robertsmania|TheAvgJohn , thanks for following! Now you can use the !pitgirl chat command to change cameras) (Robertsmania|Thanks for watching!!!)"
  },
  {
    "role": "system",
    "content": "As of 16:03 we are no longer in a session."
  },
  {
    "role": "user",
    "content": "Robertsmania: How did that race go?"
  }
],
    "max_tokens": 512,
    "temperature": 0.8
  }'

You can see the way we are using the system messages and sending payloads with each prompt to provide context for the system. I feel like we are doing a pretty good job of giving the system enough information about how it should interact and specific data about what’s happening now for meaningful interaction.

Here was the reply to that question “How did that race go?”

{
  "id": "chatcmpl-XXX",
  "object": "chat.completion",
  "created": 1692831834,
  "model": "gpt-3.5-turbo-0613",
  "usage": {
    "prompt_tokens": 697,
    "completion_tokens": 49,
    "total_tokens": 746
  },
  "choices": [
    {
      "message": {
        "category": 0,
        "role": 2,
        "content": "Robertsmania, the race went pretty well! We finished in 6th position, which is a solid result. We had a clean race with no major incidents, and the car performed great. It was an exciting and competitive race overall.",
        "time": "0001-01-01T00:00:00"
      },
      "finish_reason": "stop",
      "index": 0
    }
  ]
}
Session Prompt tokens: 1729, Completion tokens: 240, Total tokens: 1969, Total cost: $0.0039

Seems pretty reasonable and she incorporates information about the finish position and incidents well.

One new component we are sending is a recap of what users have been saying in the Twitch chat:

  {
    "role": "system",
    "content": "Recent Twitch Chat messages formatted as (username|message): (TheAvgJohn|I dont really wanna do trips for space so looking at that headset. I have the quest 2) (TheAvgJohn|5950x 3080ti and too much ram and storage lol) (TheAvgJohn|Wait you can do voice commands on iRacing?) (TheAvgJohn|I need that! That is my biggest issue with VR) (Robertsmania|TheAvgJohn , thanks for following! Now you can use the !pitgirl chat command to change cameras) (Robertsmania|Thanks for watching!!!)"
  },

That way she knows what people have been talking about in the chat, and when she responds to specific questions it’s more in line with the conversation.

For example in the stream where the above sample came from I asked her what people had been talking about in the Twitch chat, and her reply was totally accurate:

  {
    "role": "assistant",
    "content": "Robertsmania, in the Twitch chat, people were discussing VR headsets, specifically the Oculus Quest 2, and its compatibility with iRacing. They also mentioned their PC specs, like the AMD Ryzen 5950X, NVIDIA RTX 3080 Ti, and plenty of RAM and storage. Additionally, they were curious about voice commands in iRacing and expressed interest in using them. Oh, and I also welcomed a new follower and shared the !pitgirl chat command for changing cameras."
  },

At the time I thought she had made up the part about the Oculus Quest 2 since I actually missed hearing that part of the viewer’s post, but she was spot on.

Lots of updates since I’ve last posted about this.

The newest changes have been about re-working the data payloads for things like Twitch chat messages, in-game radio broadcasts and schedule data. Particularly for gpt-4o-mini, formatting the information in structured JSON with reasonable hierarchy and consolidation really helps: lower token counts and much better accuracy for responses to complex questions about the data.
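For example, one consolidation along these lines - with field names invented for illustration, not the actual payload - groups chat messages per user into compact JSON instead of one flat string:

```python
# Sketch: group Twitch chat messages per user into compact JSON
# (field names are assumptions, not the real payload schema).
import json

raw = [
    ("TheAvgJohn", "Wait you can do voice commands on iRacing?"),
    ("TheAvgJohn", "I need that! That is my biggest issue with VR"),
    ("Robertsmania", "Thanks for watching!!!"),
]

def consolidate(messages):
    grouped = {}
    for user, text in messages:
        grouped.setdefault(user, []).append(text)
    # Compact separators keep the token count down.
    return json.dumps({"twitch_chat": grouped}, separators=(",", ":"))

print(consolidate(raw))
```

Repeating each username once instead of per message is one of the consolidations that trims tokens while keeping the structure queryable.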


Nice! I’ll try to check it out later.

Thanks for coming back to update us in the same project thread.


I’ve gotten so used to being able to ask questions about the race schedule that I wanted to be able to do it while away from the computer.

So I wrote an iOS app that does the same iRacing /data API integration and uses the same system messages as the VoiceAttack system. So her tone and responses are consistent.

Hey Siri, have PitGirl check the iRacing schedule!