Hello @jassingh.7995, I’m sorry, but the ‘Oracle of Light’ skill is currently unavailable in the locales English/India and English/Australia. However, I may be able to support these locales in the future. Please send me an email if you’d like more information.
Hello @ilyapl, @brandonleewyatt, @nathansebbah
I’ve made several improvements to the ‘Oracle of Light’ skill, including connecting it to the ChatGPT API and enabling it to manage conversation context and provide uninterrupted longer answers.
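For anyone curious how the conversation-context part might work under the hood: here is a minimal sketch, assuming the skill keeps a list of chat-completions-style messages and trims the oldest turns to stay within a budget. The function name and the character budget are illustrative, not taken from the actual skill.

```python
# Hypothetical sketch (not the skill's real code) of keeping conversation
# context for a chat API: each turn is appended to a message list, and the
# oldest turns are dropped when a rough character budget is exceeded.
# The system prompt (first message) is always kept.

def trim_history(messages, max_chars=4000):
    """Drop the oldest user/assistant turns until the history fits the budget."""
    system, turns = messages[:1], messages[1:]
    while turns and sum(len(m["content"]) for m in system + turns) > max_chars:
        turns = turns[1:]  # drop the oldest turn first
    return system + turns
```

Each new question and answer would be appended to the list, then the trimmed list sent as the `messages` payload, so the model sees recent context without the request growing forever.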
May I help to get Portuguese available as well?
@cristianoapostolo Hello Cristiano, thanks for your interest. I have already implemented a version of the skill in Portuguese. However, I wanted to clarify that at this time it is not possible to monetize Alexa skills in Portuguese: Amazon does not currently support in-skill purchasing for the Portuguese language. As a result, I will not be able to generate revenue directly through the skill, which may impact its long-term sustainability. That said, the skill could still be a valuable tool to engage with users if an alternative way to sustain it can be found. Let me know what you think by writing me an email. Regards, Matteo
Nice skill! And I like the name
If I say “Activate advanced mode” it just feeds those words to ChatGPT; it doesn’t trigger any in-skill product. Is this necessary/released for context management in UK English?
It always says “more info needed?” at the end. This sounds like it’s added by the skill rather than coming from chatgpt. I think it’s unnecessary and would be better not to have it.
Is there a way of interrupting if chatgpt starts going on a long ramble based on a misunderstanding?
I think ChatGPT is giving answers that are too long. In a conversational, voice-based situation it’s more natural to give a short answer and then be asked to expand on a particular point, rather than launching into a long answer immediately. If ChatGPT could be tuned to be briefer, that would work better for voice, I think.
@jonathanjfshaw Hello Jonathan,
Thanks for reviewing the skill. Very good comments.
Here are my answers:
- I’m working on a version with an in-skill product but I have not rolled it out. It shouldn’t be live. Please double check it again. I have the English skill in many locales and maybe the Alexa skill cloning has a glitch.
- When a skill requires a user to reply, it needs to prompt the user with something, otherwise the skill won’t be certified. That’s why there is a question. I will send you those questions and ask for your feedback (right now I’m on my mobile).
- Yes, you can interrupt: just say ‘Alexa’.
Explanation: by saying ‘Alexa’ you will trigger your Echo and interrupt the audio. Then your Echo is able to receive your new command. You don’t have to say <open ‘oracle of light’> again. Just ask a new question; the skill is still open.
- I agree, the number of tokens requested is important for a natural conversation. In an ideal world it could be estimated and hard coded. Unfortunately the latency of the API changes from day to day, and a hard-coded number of tokens could cause the skill to time out on days when ChatGPT is busy. Starting from today the skill implements an adaptive mechanism to tune this number (like the TCP adaptive window size, to control ChatGPT ‘congestion’) and I am currently testing it. A small number of tokens would also incur a higher cost because of the conversation history. I’ll see what I can improve, but it is a good point. On the other hand, some people are complaining that the answers are too short.
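The actual adaptive mechanism isn’t public, but to make the TCP analogy concrete, here is one way such an additive-increase/multiplicative-decrease token budget could look. All the numbers (start, floor, ceiling, step) are illustrative, not the skill’s real values.

```python
# Illustrative sketch of a TCP-style adaptive token budget: grow max_tokens
# additively while responses come back in time, halve it on a timeout, and
# keep it inside a sensible range. Not the skill's actual implementation.

class AdaptiveTokenBudget:
    def __init__(self, start=150, floor=50, ceiling=600, step=25):
        self.max_tokens = start
        self.floor, self.ceiling, self.step = floor, ceiling, step

    def on_success(self):
        # Additive increase: slowly probe for longer answers.
        self.max_tokens = min(self.ceiling, self.max_tokens + self.step)

    def on_timeout(self):
        # Multiplicative decrease: back off hard when the API is congested.
        self.max_tokens = max(self.floor, self.max_tokens // 2)
```

Each API call would pass `budget.max_tokens` as the `max_tokens` parameter, then report back success or timeout so the window adapts over the day.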
I double-checked “activate advanced mode”. I definitely get chatGPT telling me it doesn’t have an advanced mode. My locale is UK.
Ah, I see. I wonder if the best solution in English might be “OK?” as it’s very short and very general, works in almost all contexts.
So when the Oracle says “shall I continue?” while giving a longer answer, that must be the skill speaking not chatGPT. That feels very natural.
Is it possible to detect if chatGPT ends its response with a question mark and leave out the skill’s question in that circumstance?
For example, I get replies like “Is there anything else you would like to know or talk about? Anything more to ask?” It sounds like the first question is coming from chatGPT and the second from the skill.
2 additional points I notice here:
(a) the time gap before the skill’s question seems a tiny bit too brief; it feels more rushed than the gaps between ChatGPT’s sentences.
(b) If I don’t say anything I often get these questions repeated; it made me wonder if maybe an empty space or line break is being given to ChatGPT as input. There’s something a bit awkward about how conversations end.
That makes sense.
This seems like a very interesting problem. That number isn’t fixed; it heavily depends on context. When we begin a conversation we often give short replies as we gradually establish the subject matter, but once we’re sure what someone wants to know we give them a much fuller answer. I don’t quite know what heuristics you could use to adapt the number of tokens requested, but I’m happy to discuss it with you if that’s helpful.
@jonathanjfshaw Thanks again, you are giving me a lot of thoughtful comments.
- I’ll double check the ‘activate advanced mode’
- The interjection ‘OK’ looks cool, I will add it tonight. There is also an Alexa sound for it that sounds more natural. Give me an alternative or two to ‘OK’. So far I have 3 short sentences, randomly alternated.
- Yes, I can detect an already present ‘?’ and avoid a second question. I thought about it but I had to concentrate on the network communication. Your guess is right. I’ll implement it.
- Point (a): I’m not sure I understand it; please give me an example. I could add a sound like an interjection (‘ah’, ‘oh’, …)
- Point (b): this is handled only by Alexa: if the user doesn’t reply, Alexa will try to engage the user one more time before closing the session, reusing the last sentence. GPT is not involved here.
- About the closing of the session with the Oracle, I might have to rework it and add more sentences to close the session.
- Token estimation: I will write you more later.
Here is a list of interjections, for different locales:
Which ones are the most natural? I might use three of them.
To engage the user I’m randomly using:
-What else do you want to know? → OK?
-More to know?
-More info needed? → OK?
-Anything more to ask?
What do you think? Should the Oracle always reply with OK?
I couldn’t see ‘OK’ in the interjections list. Generally the interjections look a bit too excited, I reckon you probably want something neutral.
“More to know?” and “More info needed?” sound a little odd to me. “What else do you want to know?” is a bit long.
I think your best options are “OK?”, “Anything else?” and “Anything more?”. Those are things we very commonly use in English in this context.
To clarify, I mean that the skill says “more info needed?” too quickly after chatGPT is finished speaking. It should wait 500ms more or something before adding the “more info needed?”.
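If it helps, Alexa output speech supports explicit pauses via the SSML `<break>` tag, so the gap could be inserted directly when the speech is assembled. A sketch (the pause length and function name are just illustrative):

```python
# Illustrative sketch: join ChatGPT's reply and the skill's engagement
# question with an SSML break so the question doesn't sound rushed.

def with_pause(reply, question, pause_ms=500):
    return f'<speak>{reply} <break time="{pause_ms}ms"/> {question}</speak>'
```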
I’m not going to make more suggestions for now, I think that detecting when chatGPT has asked a question and avoiding the second question is the most important thing to change, let’s see how it feels when that is done.
@jonathanjfshaw I’ve rolled out the new version with your suggestions. Many thanks. Almost Continuous Integration!
I reworked the user engagement and now avoid inserting it when there’s already a question. It looks much better.
Right now the interjections probably don’t add much. About the time gap, I’ll see if I can find a neutral sound in the Alexa sound library to run some tests.
It’s feeling much more natural, nice!
Great job on your Skill! I’ve tested it with the Amazon Echo Show 15, Amazon Astro and a number of other Echos. On the Echo Show 15 it kept wanting to open ORACLE TV for some reason; I created a routine “Open Oracle” and this seemed to fix it. If this skill isn’t free for you, or someone wants to support continued development, do you have some way of receiving payment like “buy me a coffee” or similar? I can add your link in the video description to help provide support, if you want. I plan to show your skill on my YT channel (the channel is my last name followed by “'s techtalk”, if curious). My only feature recommendation: I would love to see the text of what ChatGPT says appear on the Echo Show/Amazon Astro display, if that’s possible.
Hi, I’ve been looking through this thread and I wanted to try the skill too, but I’m in France. In Alexa I have my language set up as French + UK English, but the skill store is French only.
If I click your link, it takes me to Amazon US and tells me “It looks like your account is set up for Amazon.fr! Click here (bla bla amazon.fr) to visit the Skill Store on Amazon.fr and browse skills available for your device”, and the French skill is unavailable. I don’t care about it talking to me in French. Is there a way to publish the skill for any language and just note in the description that it’s actually an English skill, or is it tied to the way Alexa processes language?
And +1 for the “buy me a coffee” or paypal or whatever link so we can show our support
Early look at the video for your Skill: ChatGPT on your Amazon Alexa, Echo, Show or Astro - Easy Skill Setup! - YouTube . May release it Thursday or Friday, it’s not currently public and only those with this link can see it.
The current status of all my skills is: Removed (by Amazon). Yesterday the Italian version (very popular!) and the German one were removed and cancelled from the market at the same time. That’s the reason why the links to the skills are broken.
Only the English version is alive (for how long?). The English version in the India locale disappeared some weeks ago. I asked their support and nobody knows where it is. I saved that India link the first day I published the skill (at that time it was working), but now it is broken.
I will write the details tonight, I’ll reply to all of you @jwagner @snuguru_maestro
Hello Jon @jwagner , many thanks for the video, it is beautiful and well organized! You were able to pack a lot of valuable information for the community. I love it! I’m going to refer to it if somebody is asking me how to do things with my skill.
I have already implemented the graphical user interface (with APL) in the Italian skill and in the English skill. It takes almost nothing to add it to all the skills, but then the skill must go through another certification stage that could take weeks (last time it was 2 weeks; in two weeks I can build a skyscraper).
Here is the opening GUI for the removed Italian version:
And here is a response about ‘how to prepare a gourmet hamburger’ (probably cooked in Italy )
I know that it is in a different language, but at the top it shows the name of the skill (in Italian it is ‘Oracolo di Apollo’), while the footer provides an indication that data and facts are only updated to 2021 (some people do not know it!) and a recommendation to evaluate responses critically (this is important and I take it seriously, even if the skill is in the trivia and games category, not in a NASA-projects or FDA category). What do you think about this layout? It is plain but fast, because it doesn’t have to load any webpage. Maybe I will add some colour, even if I have to load a webpage.
An Alexa skill like this, calling an external API that could be congested (like ChatGPT), becomes a soft real-time application (maybe I have exaggerated…) with a time budget. It is not possible to read out the whole page you get in the web version of ChatGPT, so you’ll always have to split the response and concatenate a ‘Shall I continue?’.
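The chunking idea, as a sketch (the chunk size and function name are illustrative, not the skill’s real values): cut the answer at a sentence boundary, speak one chunk per turn, and append the follow-up question while text remains.

```python
# Illustrative sketch of splitting a long answer for voice: return the first
# chunk (cut at a sentence end) plus the remainder for the next turn, and
# append "Shall I continue?" only when more text is left.

def next_chunk(text, max_chars=600):
    """Return (speech, remainder) for the current turn."""
    if len(text) <= max_chars:
        return text, ""
    cut = text.rfind(". ", 0, max_chars)
    cut = cut + 1 if cut != -1 else max_chars
    return text[:cut].strip() + " Shall I continue?", text[cut:].strip()
```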
I’m currently running the gpt-3.5-turbo API, which I like a lot. It is fast and moderated.
I saw your Astro robot and it gave me fresh motivation to keep going despite the current outcome. We are going to do some community development, if I can have your assistance. I’m going to write some more later about some ideas I want to develop. You could do some more videos about those ideas I’m going to lay out.
I just want to report what I know so far: the French skill has been in status ‘Hidden’ since March 3rd, because the skill was tested by Amazon and ‘allowed the user to search for mature questions but failed to filter out inappropriate responses’. It is visible (and I guess usable) for people that installed it in the early phase, but not visible for new users. I take this issue seriously; it was a fair comment from Amazon, because at that time I was using the davinci-003 LLM, which was much less moderated, and my filter may not have been effective in all cases. After their comment I updated the skill to gpt-3.5-turbo, which is an unbelievable piece of art. It is nicely moderated. Also, on top of it, I have a filtering layer that is able to detect some inappropriate content. I asked their support for guidance on what more to filter out, and just tonight I got an email letting me know it is under investigation. I will let you know if the French skill changes status. If you have a French account, I think you’ll have to go with that language. I can check, but I think it is that way. But you know, maybe the skill will rise again. For sure, moderation is an important point, and I am hoping for the GPT-4 API. I have joined the OpenAI waitlist and if I get in I will update the skill. I have read that GPT-4 is able to detect 80% more inappropriate content. Regards, Matteo
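To give an idea of what an extra filtering layer on top of the model’s own moderation can look like, here is a heavily simplified sketch. The blocklist and the refusal text are placeholders; the skill’s real filter is not public, and a production filter would be far more sophisticated (for example, calling a dedicated moderation endpoint).

```python
# Placeholder sketch of a last-line filtering layer: check the reply against
# a local blocklist before it reaches the user and substitute a safe refusal.
# The terms below are dummies, not real filter contents.

BLOCKLIST = {"example-bad-word", "another-bad-word"}
REFUSAL = "I'm sorry, I can't answer that. Anything else?"

def filter_reply(reply):
    words = {w.strip(".,!?").lower() for w in reply.split()}
    return REFUSAL if words & BLOCKLIST else reply
```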
Here I’m just adding more details to my morning message: the current status of all my skills is: Removed (by Amazon). Yesterday the Italian version (very popular! I feel so much pain…) and the German one were removed and cancelled from the market at the same time, one minute apart. I guess it was not random monitoring of my skills. That’s the reason why the links to the skills are broken. Only the English version is alive (so far). The reason for the removal is the following: ‘The skill allowed the user to search for mature questions but failed to filter out inappropriate responses’.
I think I can accept this comment, and I take it seriously. It is important that the content is moderated, even if the skill is in the trivia and games category. I’m going to add to the footer of the GUI the recommendation about being cautious with the responses. Then I will engage their support to ask what more I could filter. And then I will wait for the GPT-4 API.