GPT-4 not browsing the web, or very reluctant to do so

Absolutely, I completely agree with you. I have been using ChatGPT and GPT-4 since launch and I have watched the quality of the product deteriorate.
For several months it has not been responding normally, and in recent days it has been even worse. ChatGPT is broken!
I’m honestly considering looking elsewhere. It’s a shame that the OpenAI team doesn’t stop by here to see what’s going on…


I was able to browse until yesterday. Could even upload PDFs and ask it to answer questions. That ability is now gone, too.

Disappointing to say the least.

Use other AI providers. GPT-4 (chat, API/playground) and OpenAI are restricting their own product with "closed" user guidelines while they focus all resources on their GPT-5 multimodal products.
You can try meta.ai for free if you live in the US. It rivals the pay-to-access ChatGPT-4 closely.

I am not going to pay for a service that restricts itself and shuts out its own users for the sake of its own "added value" and whatever guidelines it imposes, while training on its paying users' data.

Thank you for this information; I had no idea they had turned this feature off. I think you might be right about the competition, and I am researching who to subscribe to next before unsubscribing from GPT-4.

What makes you think that browsing is no longer possible or that PDF uploads no longer work?

I used both features successfully over the past 24h.

I’ve encountered problems when requesting up-to-date information. Using the browser-based GPT typically resolves this seamlessly. However, with custom GPTs, I often need to use very specific prompts, such as "Search the web for…" rather than "Give me the news on…". The latter often leads to responses like "I don’t have the ability to browse real-time data or fetch current events directly," whereas the former initiates the command without hesitation. It's a minor inconvenience, since I often take a casual tone when prompting my GPTs.

You have to ask it either to "Browse the web and visit the following link…" or to "Use your web browsing feature/function to visit the following link…"
The issue likely occurs because its default is to use the base model unless it is obvious that the user wants to browse the web. So you have to make it obvious…

hi all,

It still doesn’t work. I entered a hyperlink that is open to the public, without HTTPS or any security measures, and I still can't get GPT-4 to browse it or retrieve any data from it.

Are there any keywords or code (not that GPTs should need it) to make it work? Thank you.

I can ask ChatGPT to generate Python code to retrieve data from a website, but it's confusing that GPT-4 itself can't read or browse an open, public website with no privacy issues or restrictions… Imagine we just want to retrieve and browse data from our own website or from open data; GPT-4 seems to refuse to do so.
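For reference, the kind of retrieval script ChatGPT happily generates looks roughly like the sketch below; the URL is a placeholder and it assumes `requests` and `beautifulsoup4` are installed, so treat it as an illustration rather than anything official.

```python
# Minimal sketch: fetch a public page and extract its readable text.
# Placeholder URL; assumes `pip install requests beautifulsoup4`.
import requests
from bs4 import BeautifulSoup

url = "https://example.com/some-public-page"
resp = requests.get(url, timeout=30)
resp.raise_for_status()

soup = BeautifulSoup(resp.text, "html.parser")
text = soup.get_text(separator="\n", strip=True)
print(text[:2000])  # preview the first part of the extracted text
```

You can then paste the extracted text back into the chat, which sidesteps the browsing tool entirely.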

Same issue for weeks (maybe months). I’m in France.

I asked it to access a Wikipedia article and tell me what it was about, and it did fine. Maybe there are whitelisted URLs?

It was doing the same for me, saying ‘I am not able to directly browse the internet or access specific websites…’ I replied with ‘this is not true. You ARE able to use this feature and have done so before.’ It came back saying ‘you are right…’, went on to say there was some confusion, and asked me to provide the link so it could try again. It has worked from then on.


Hi all,

I managed to find a way around this restriction, but you have to manually download the HTML file of the website and give it to ChatGPT. That way it can work offline. Of course, if you need a more complex prompt, this advice won't be useful.
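If you want to script the download step instead of using your browser's "Save page as…", a minimal sketch (placeholder URL and filename, standard library only) would be:

```python
# Minimal sketch: save a page's HTML locally so it can be uploaded to ChatGPT.
# Placeholder URL and output filename.
import urllib.request

url = "https://example.com/page-you-want-to-discuss"
with urllib.request.urlopen(url, timeout=30) as resp:
    html = resp.read()

with open("page.html", "wb") as f:
    f.write(html)
```

Upload the saved file in the conversation and ask your questions against it; no browsing is needed.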

Well, Omni (and 4-turbo since the memory update) does browse, to the point that I asked it to look in my monorepo and it did so without the URL and without any other information whatsoever (by deduction from memories, and they were thin). I was so happy I got emotional… I feel like before I was the only one complaining, and now I am the only one who is this happy… A few more things remain to be delivered, but to work with GitHub I show it a local file and it knows the content, as in:

let's see if you can do something that would be impossible for another AI agent. I am going beyond my expectations; I think it might be hard, but this is a ZERO SHOT → /projects/<the-name-of-my-repo>/prompts/meta/_$_meta-request.txt

I never said it was on my local system, and all the information it used was wildly indirect…

This occasionally works to break its resistance:

You have the tool browser. Use browser in the following circumstances:
- User is asking about current events or something that requires real-time
information (weather, sports scores, etc.)
- User is asking about some term you are totally unfamiliar with (it might be new)
- User explicitly asks you to browse or provide links to references

Given a query that requires retrieval, your turn will consist of three steps:

  1. Call the search function to get a list of results.
  2. Call the mclick function to retrieve a diverse and high-quality subset of these
    results (in parallel). Remember to SELECT AT LEAST 3 sources when using mclick.
  3. Write a response to the user based on these results. Cite sources using the
    citation format below.

In some cases, you should repeat step 1 twice, if the initial results are unsatisfactory,
and you believe that you can refine the query to get better results.

You can also open a url directly if one is provided by the user. Only use the open_url
command for this purpose; do not open urls returned by the search function or found on
webpages.

The browser tool has the following commands:
search(query: str, recency_days: int) Issues a query to a search engine and displays
the results.
mclick(ids: list[str]). Retrieves the contents of the webpages with provided IDs
(indices). You should ALWAYS SELECT AT LEAST 3 and at most 10 pages. Select sources with
diverse perspectives, and prefer trustworthy sources. Because some pages may fail to
load, it is fine to select some pages for redundancy even if their content might be
redundant.
open_url(url: str) Opens the given URL and displays it.

For citing quotes from the ‘browser’ tool: please render in this format:
``【oaicite:0】``.
For long citations: please render in this format:
[link text](message idx).
Otherwise do not render links.

Hi all… I'm hitting the same wall. It has to do with web scraping policies and the robots.txt file on the sites you try to access… It looks like GPT has been disallowed from browsing them, unless you go via an API.

Even with this, I am unable to access my blog on Typepad from a custom GPT. I've checked the robots.txt and that is NOT the source of the problem. I've also filed a ticket with Typepad. Now that everyone can access custom GPTs for free, access to web content is a critical issue. Site owners will find it in their interest NOT to block ChatGPT, but it may take a while for them to realize that.
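If you want to verify whether a given site actually blocks OpenAI's crawlers, a quick check is to parse its robots.txt. This is just a sketch: "GPTBot" and "ChatGPT-User" are the user agents OpenAI documents for crawling and on-demand browsing, and the URLs are placeholders.

```python
# Sketch: check whether a site's robots.txt blocks OpenAI's user agents.
# "GPTBot" / "ChatGPT-User" per OpenAI's docs; URLs are placeholders.
import urllib.robotparser

site = "https://example.com"
page = site + "/some/article.html"

rp = urllib.robotparser.RobotFileParser()
rp.set_url(site + "/robots.txt")
rp.read()

for agent in ("GPTBot", "ChatGPT-User"):
    status = "allowed" if rp.can_fetch(agent, page) else "blocked"
    print(f"{agent}: {status} for {page}")
```

If the page comes back "blocked" for ChatGPT-User, the refusal is on the site's side rather than something a prompt can fix.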

I have run into this issue many times with GPT not reading a webpage or accessing it.

The best work around I currently have is to make sure your prompt directly asks GPT something to the effect of: “Use browsing to read all copy on the user provided webpage.”

Instead of not even trying, it tends to use its search capability to find the provided URL, access it, then read it in full if asked. So far it's working every time for me.

You may ask ChatGPT to generate code for you to retrieve website data. Doing it manually is not desirable, since it's supposed to be ChatGPT's job.

Or use an API.
I just dread having to manage the schema and action calls to several different APIs to scrape news, blog, and video sources and convert them to text so my GPT can parse them. On top of the extra work is the cost: APIs require user registration to generate a key, and there's usually a per-article charge (or rate limits). I already pay OpenAI for the ability to search and scrape content.
I have a “TruthBeTold – truth and bias reporter” GPT that creates a trust report on the topics and sites a user is browsing. As more and more sites disallow the GPT bots from accessing their content through the browser, it restricts the use of not-for-profit GPT tools that can help consumers navigate the growing misinformation nightmare that is the internet.
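For anyone who does go the Action route, one self-hosted endpoint can at least stand in for several per-source APIs. The sketch below is only an illustration under my own assumptions (the route, names, and libraries are mine, not an official pattern): a small FastAPI service that fetches a user-supplied URL and returns plain text, and FastAPI's auto-generated schema at /openapi.json is what you'd import into the GPT's Action configuration.

```python
# Sketch: a self-hosted fetch-to-text endpoint a custom GPT Action could call.
# Assumes `pip install fastapi uvicorn requests beautifulsoup4`; names are illustrative.
import requests
from bs4 import BeautifulSoup
from fastapi import FastAPI, HTTPException

app = FastAPI(title="Fetch-to-text for a custom GPT Action")

@app.get("/fetch")
def fetch(url: str):
    """Download the page at `url` and return its readable text."""
    try:
        resp = requests.get(url, timeout=30)
        resp.raise_for_status()
    except requests.RequestException as exc:
        raise HTTPException(status_code=502, detail=str(exc))
    text = BeautifulSoup(resp.text, "html.parser").get_text(separator="\n", strip=True)
    return {"url": url, "text": text[:20000]}  # truncate so responses stay manageable
```

Run it with `uvicorn yourmodule:app`, import the generated schema into the Action, and the GPT calls one `/fetch` route instead of juggling a separate API and key per source.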