The app uses macOS APIs to gather context that is fed into the prompts, for example:
your current text selection
your last clipboard entry
the active app
if it’s a browser, the current tab with its URL and title
if it’s Keynote, the slides with their content and images
…
Based on this context, possible actions are shown, and the prompts have access to the context for a better experience than what you can get with AI tools limited to the browser.
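To make the idea concrete, here is a hedged Swift sketch of how some of that context could be gathered with standard macOS APIs (the `PromptContext` type and `gatherContext` function are illustrative, not the app's actual code; reading the text selection requires the user to grant Accessibility permission):

```swift
import AppKit
import ApplicationServices

// Illustrative container for the gathered context (hypothetical type).
struct PromptContext {
    var activeApp: String?
    var clipboard: String?
    var selectedText: String?
}

// Sketch of context gathering using real macOS APIs.
func gatherContext() -> PromptContext {
    var ctx = PromptContext()

    // The frontmost (active) application, via NSWorkspace.
    ctx.activeApp = NSWorkspace.shared.frontmostApplication?.localizedName

    // The last clipboard entry, via the general pasteboard.
    ctx.clipboard = NSPasteboard.general.string(forType: .string)

    // The current text selection, via the Accessibility API
    // (needs Accessibility permission in System Settings).
    let systemWide = AXUIElementCreateSystemWide()
    var focused: AnyObject?
    if AXUIElementCopyAttributeValue(systemWide,
                                     kAXFocusedUIElementAttribute as CFString,
                                     &focused) == .success,
       let element = focused {
        var selection: AnyObject?
        if AXUIElementCopyAttributeValue(element as! AXUIElement,
                                         kAXSelectedTextAttribute as CFString,
                                         &selection) == .success {
            ctx.selectedText = selection as? String
        }
    }
    return ctx
}
```

Per-app details like a browser's tab URL or Keynote's slide contents would typically come from app-specific channels such as Apple Events/AppleScript, which the sketch above doesn't cover.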