403 error web crawler tutorial

I am trying to get the OpenAI web crawling tutorial to work. I get a 403 (Forbidden) error when the code sends a request to https://openai.com.

Below is the code:
import requests
from bs4 import BeautifulSoup

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36 Edge/16.16299'}
url = "https://openai.com/"
resp = requests.get(url, headers=headers)

print(resp.status_code)
# Get the text from the URL using BeautifulSoup
soup = BeautifulSoup(resp.text, "html.parser")
# Get the text but remove the tags
text = soup.get_text()
print(text)
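To double-check what requests is actually sending, a minimal sketch (assuming the same headers dict as above) is to inspect resp.request.headers on the response, or to point the same call at an echo service such as https://httpbin.org/headers, which simply reports back the headers it received:

import requests

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36 Edge/16.16299'}

# resp.request is the PreparedRequest that was actually sent;
# its headers show what went over the wire
resp = requests.get("https://openai.com/", headers=headers)
print(resp.request.headers.get("User-Agent"))

# Send the same headers to an echo service and print what it saw
echo = requests.get("https://httpbin.org/headers", headers=headers)
print(echo.json())

If the echoed User-Agent matches the one in the dict, the header is being sent and the 403 is coming from something else (e.g. the JavaScript/cookies check in the page text below).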

The printed status code is 403.
The printed parsed text is:
Please turn JavaScript on and reload the page.Please enable Cookies and reload the page.

Using the Fiddler utility, I see that the User-Agent is still the Python default rather than the one I provided.

The questions are: why am I getting a 403, and why is setting the User-Agent not taking effect?

Also,

curl -v -I "https://www.openai.com" --ssl-no-revoke

fails with a 403 as well.
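One variation that might help narrow it down (this is just a guess at a test, not from the tutorial) is to pass the same browser User-Agent to curl via its -A option, to see whether the 403 depends on the User-Agent at all or whether the site's bot protection returns it regardless:

curl -v -I "https://www.openai.com" --ssl-no-revoke -A "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36 Edge/16.16299"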