Hello. I found the following video useful and would like to share it here: "Can ChatGPT work with your enterprise data?"
For discussion, I am wondering about chatbots on large-scale websites and enterprise intranet resources – resources which can have sections and subsections of content – and about how chatbots could be made aware of which sections users are currently navigating, or of other aspects of users' task contexts, so that they can better answer questions and retrieve and sort results for them.
In a recent post here, I shared an invitation to a new Civic Technology Community Group. In that post, I mentioned the website of Mississippi: https://www.ms.gov . On that award-winning website, one can observe a prominent chatbot: MISSI.
On such a large-scale website, a chatbot could be present on many webpages, and these webpages may be organized into a section-based structure (a webpage could belong to multiple sections of content simultaneously). A webpage's URL might map to one or more section identifiers in a database, each webpage's metadata could provide section identifiers, or the path components of the URL could be used to determine which sections are relevant.
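To make that third approach concrete, here is a minimal sketch of resolving a URL's path to section identifiers. The section taxonomy, the path prefixes, and the example URL are all hypothetical; in practice the mapping could instead live in a database or in each page's metadata.

```python
from urllib.parse import urlparse

# Hypothetical mapping from URL path prefixes to section identifiers.
SECTION_MAP = {
    "/services/business": ["services", "services.business"],
    "/services/residents": ["services", "services.residents"],
    "/agencies": ["agencies"],
}

def sections_for_url(url: str) -> list[str]:
    """Return the section identifiers whose path prefixes match the URL."""
    path = urlparse(url).path.rstrip("/")
    matched: list[str] = []
    for prefix, section_ids in SECTION_MAP.items():
        if path == prefix or path.startswith(prefix + "/"):
            matched.extend(section_ids)
    # A page can belong to multiple sections; deduplicate while keeping order.
    return list(dict.fromkeys(matched))

# Example: a page under /services/business maps to two section identifiers.
print(sections_for_url("https://www.example.gov/services/business/licensing"))
# -> ['services', 'services.business']
```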
Beyond the user-based and role-based access control topics discussed in the video above, I'm thinking about how the sections and subsections of content on large-scale websites and large-scale enterprise intranet resources could be useful for determining which subsets of documents to cue up for a chatbot, or even for weighting those documents by their predicted relevance.
In these approaches, the webpage a user is currently on, and the webpages they have previously navigated to, could provide context data that helps chatbots better answer their questions and retrieve and sort results for them.
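As one possible sketch of that idea: candidate documents that share sections with the user's current page (or with pages from earlier in their navigation) could be boosted before results are presented. This assumes each candidate document is tagged with section identifiers and already has a base retrieval score; the boost values and field names below are illustrative, not a definitive implementation.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    doc_id: str
    base_score: float      # e.g., a vector-similarity or keyword score
    sections: set[str]     # section identifiers the document belongs to

def rerank_by_context(candidates: list[Candidate],
                      current_sections: set[str],
                      visited_sections: set[str],
                      current_boost: float = 1.5,
                      visited_boost: float = 1.2) -> list[Candidate]:
    """Boost documents sharing sections with the user's current page,
    and (more mildly) with pages from earlier in their navigation."""
    def weighted(c: Candidate) -> float:
        weight = 1.0
        if c.sections & current_sections:
            weight *= current_boost
        elif c.sections & visited_sections:
            weight *= visited_boost
        return c.base_score * weight
    return sorted(candidates, key=weighted, reverse=True)

# Example: the user is reading a page in the 'services.business' section.
docs = [
    Candidate("faq-taxes", 0.72, {"services.residents"}),
    Candidate("faq-licensing", 0.70, {"services.business"}),
]
for c in rerank_by_context(docs, {"services.business"}, set()):
    print(c.doc_id)
# -> 'faq-licensing' ranks first despite its slightly lower base score
```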
Is anyone else here interested in these topics: enterprise-scale chatbots and dialog context?