Possible to use my custom assistant instead of the general OpenAI API?

Well, I want to know if it's possible to use my custom assistant instead of the general OpenAI API that answers everything from the internet like ChatGPT. My assistant in the Playground can analyze documents, xlsx, jsonl, etc., so I want to use it in my project, but I don't know if it's possible. This is my working service class:
using OpenAI_API;
using OpenAI_API.Completions;
using proyectoChat.Models;

namespace proyectoChat.Services
{
public class AnswerGeneratorService : IAnswerGeneratorServices
{
    private readonly string apikey = "apikey:)";
    private readonly OpenAIAPI _openai;
    private readonly FQService _faqService;

    public AnswerGeneratorService(OpenAIAPI openai, FQService faqService)
    {
        _openai = openai;
        _faqService = faqService;
    }

    public async Task<string> GenerateAnswer(string prompt)
    {
        string answer = string.Empty;

        try
        {
            CompletionRequest completion = new CompletionRequest();
            completion.Prompt = prompt;
            completion.MaxTokens = 2000;
            completion.Model = "gpt-3.5-turbo-instruct"; // use the correct model

            var result = await _openai.Completions.CreateCompletionAsync(completion);

            if (result != null && result.Completions.Count > 0)
            {
                // take the first (and, with the default n = 1, only) completion
                answer = result.Completions[0].Text;
            }
        }
        catch (Exception ex)
        {
            // Handle errors as needed
            Console.WriteLine($"Error: {ex.Message}");
            answer = $"Error: {ex.Message}";
        }

        return answer;
    }

    public async Task<string> GetFQAnswer(string question)
    {
        // Look up the answer among the frequently asked questions
        return await Task.Run(() => _faqService.GetAnswerForQuestion(question));
    }
}

}

If you want to interact with the assistant you'd use a different API, so yes, your code would need to change: not only to call different endpoints, but mainly because the Assistants API works differently.

The completions endpoints are stateless, and you always have to give them the whole history. Assistant conversations are called threads, and the API manages the message stack for you, so the flow is:

  1. fetch/create assistant
  2. Create Thread
  3. Add message(s) / files
  4. Execute Run
  5. Poll for updates
  6. Get new message, show to user **

** Before (6) there are many other things you can do, particularly around using tools (functions that you run on behalf of the assistant). But that's the simple idea.
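The steps above can be sketched in C# with raw HttpClient calls against the REST endpoints, since the OpenAI_API NuGet package used in the original service class predates the Assistants API. This is a minimal sketch, not a definitive implementation: the endpoint paths and JSON field names follow the Assistants API reference, but the beta header value and response shapes should be verified against the current docs, and `AssistantRunner`/`AskAsync` are illustrative names.

```csharp
// Hedged sketch of the thread/run flow from the list above.
// Assumes an assistant was already created in the Playground (step 1).
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

public class AssistantRunner
{
    private readonly HttpClient _http = new HttpClient();

    public AssistantRunner(string apiKey)
    {
        _http.BaseAddress = new Uri("https://api.openai.com/v1/");
        _http.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", apiKey);
        // Check the docs for the current beta header value
        _http.DefaultRequestHeaders.Add("OpenAI-Beta", "assistants=v2");
    }

    public async Task<string> AskAsync(string assistantId, string question)
    {
        // 2. Create a thread
        var thread = await PostAsync("threads", "{}");
        string threadId = thread.GetProperty("id").GetString();

        // 3. Add the user message to the thread
        await PostAsync($"threads/{threadId}/messages",
            JsonSerializer.Serialize(new { role = "user", content = question }));

        // 4. Execute a run against the pre-created assistant
        var run = await PostAsync($"threads/{threadId}/runs",
            JsonSerializer.Serialize(new { assistant_id = assistantId }));
        string runId = run.GetProperty("id").GetString();

        // 5. Poll for updates until the run settles (a real implementation
        //    should also handle "requires_action", "failed", etc.)
        string status;
        do
        {
            await Task.Delay(1000);
            var check = await GetAsync($"threads/{threadId}/runs/{runId}");
            status = check.GetProperty("status").GetString();
        } while (status == "queued" || status == "in_progress");

        // 6. Get the newest message back off the thread and show it
        var messages = await GetAsync($"threads/{threadId}/messages");
        return messages.GetProperty("data")[0]
                       .GetProperty("content")[0]
                       .GetProperty("text")
                       .GetProperty("value")
                       .GetString();
    }

    private async Task<JsonElement> PostAsync(string path, string json)
    {
        var resp = await _http.PostAsync(path,
            new StringContent(json, Encoding.UTF8, "application/json"));
        resp.EnsureSuccessStatusCode();
        return JsonDocument.Parse(await resp.Content.ReadAsStringAsync()).RootElement;
    }

    private async Task<JsonElement> GetAsync(string path)
    {
        var resp = await _http.GetAsync(path);
        resp.EnsureSuccessStatusCode();
        return JsonDocument.Parse(await resp.Content.ReadAsStringAsync()).RootElement;
    }
}
```

The key contrast with the completions code in the question: the history lives in the thread on OpenAI's side, so each turn only adds one message and starts a run, instead of resending the whole conversation.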

Check out the assistant api docs → https://platform.openai.com/docs/api-reference/assistants

Have fun building!

Is it possible to use the API from a custom GPT instead of the normal ChatGPT?

^^ I'd imagine yes. Trying to figure out the same thing though.

Why would you want to do that?

You cannot connect to a custom GPT via the API, but a custom GPT can be connected to an API.

Using ChatGPT, it would be possible to connect to an API via the browsing feature, for example if your API does not need authentication and accepts parameters via the URL, but it wouldn't be the most elegant solution.

That’s quite a creative approach. Not sure what problem you are trying to solve though.

Similar to this thread here:

can-custom-gpt-built-via-chatgpt-be-accessed-via-the-api/497967

I already solved this, thanks everyone: I used JS in my view to call the different endpoints and make it work!