Study mode is a great addition. Sadly, without an API, users are left with prompt engineering to squeeze some juice out of the model.
I was able to get a creative solution to my request for a hide/reveal answer option within the chat interface.
If anybody has other tips and tricks, I’d love to hear them. In general, I’m trying to find quicker ways for students to interact.
If OpenAI is listening, an API would be amazing for teacher-developers. Or a tool that lets teachers do some context engineering and hand their students a link to a specific, context-enhanced study mode. Even a short, text-only context addition would make this tool much more useful for teachers.
My quiz tip:
“Make quizzes with the answers hidden in each link’s title attribute, so they appear on hover/long-press instead of in the link URL.”
Use a model with high-quality instruction-following that isn’t predisposed to pattern-match input → output and just produce the answer. In other words: not gpt-5.
Here’s your “tip”: this post, with an example block of text to use as a prompt.
I answered just this type of need, before OpenAI gave the idea their own “take”:
What was great on gpt-4.5 (since removed from the API) doesn’t really go into an immediate “let me help you learn to do this” when used on gpt-4.1 either, but it does produce a plan with questions rather than an immediate product.
The model that will perform on the API is o4-mini.
o4-mini response to 'want code', in a teaching mode
Below is a roadmap of questions and guiding hints to help you stitch together a Go-based test server supporting selective HTTP versions (1.1 vs. 2 vs. 3) rather than a copy-pasted implementation. Work through each part in turn and you’ll arrive at a working solution.
1. Choosing your libraries
Which built-in Go packages can you use to serve HTTP/1.1 and HTTP/2?
For HTTP/3 (QUIC), which third-party Go module is most popular?
Hint: Think “quic-go” and the HTTP/3 wrapper it provides.
2. Configuring ALPN to refuse HTTP/1.1
In Go’s crypto/tls.Config, there’s a field named NextProtos. How could you set that so the TLS handshake advertises only h2 or h3 and not “http/1.1”?
If you wanted to refuse HTTP/1.1, what happens at the ALPN negotiation stage when the client only proposes “http/1.1”?
Hint: The handshake should fail early.
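To make the ALPN restriction concrete, here is a minimal sketch using only the standard library. `alpnConfig` is an illustrative helper name, and the cert/key paths are placeholders:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

// alpnConfig builds a server-side TLS config. NextProtos is the list of
// ALPN protocols the server will accept; leaving "http/1.1" out makes the
// handshake fail early for clients that can only speak HTTP/1.1.
func alpnConfig(allowHTTP1 bool) *tls.Config {
	// "h3" only matters to the QUIC listener, but sharing one config
	// between the TCP and QUIC servers keeps the flag wiring simple.
	protos := []string{"h2", "h3"}
	if allowHTTP1 {
		protos = append(protos, "http/1.1")
	}
	return &tls.Config{NextProtos: protos}
}

func main() {
	srv := &http.Server{
		Addr:      ":443",
		TLSConfig: alpnConfig(false), // refuse HTTP/1.1-only clients
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			fmt.Fprintln(w, "success, welcome")
		}),
	}
	fmt.Println("advertising:", srv.TLSConfig.NextProtos)
	// srv.ListenAndServeTLS("cert.pem", "key.pem") // placeholder cert/key files
}
```

With no overlap between the client's offered protocols and `NextProtos`, Go's TLS stack aborts the handshake with a `no_application_protocol` alert, so an HTTP/1.1-only client never reaches your handler.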
3. Enabling HTTP/2 → HTTP/3 “escalation”
If your TLS config advertises both "h2" and "h3" in NextProtos, what protocol does a well-behaved client choose by default?
How can you force QUIC (h3) on a supported client after an initial HTTP/2 connection?
Think about sending an HTTP/2 Alt-Svc frame.
4. Serving only HTTP/3 natively
What changes if you want to skip HTTP/1.1 and HTTP/2 entirely, and listen only with a QUIC socket?
How might you wire up the quic-go HTTP/3 server so that your handler simply writes “success, welcome” for any request path?
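A hedged sketch of the HTTP/3-only wiring — the import path assumes a current quic-go release (older releases lived at github.com/lucas-clemente/quic-go), and cert.pem/key.pem are placeholder file names:

```go
package main

import (
	"fmt"
	"net/http"

	// Third-party module; path assumes a current quic-go release.
	"github.com/quic-go/quic-go/http3"
)

func main() {
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Any request path gets the same greeting, per the exercise.
		w.WriteHeader(http.StatusOK)
		fmt.Fprintln(w, "success, welcome")
	})

	// ListenAndServeQUIC binds a UDP socket only, so no HTTP/1.1 or
	// HTTP/2 TCP listener ever exists.
	if err := http3.ListenAndServeQUIC(":443", "cert.pem", "key.pem", handler); err != nil {
		panic(err)
	}
}
```

Because nothing listens on TCP at all, there is no protocol to refuse — HTTP/1.1 and HTTP/2 clients simply cannot connect.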
5. Assembling the pieces
Sketch out in pseudocode:
// 1. Build tls.Config with NextProtos set from user flags.
// 2. If HTTP/1.1 is allowed, include "http/1.1"; otherwise omit it.
// 3. Create an http.Server for h2 and h1.
// 4. For h3-only mode or escalation, import quic-go's http3 package.
// 5. Call http3.ListenAndServeQUIC(addr, certFile, keyFile, handler).
// 6. In handlers, write the status code and body.
Questions to self:
How do I pass my custom tls.Config into both the HTTP/2 listener and the quic-go listener?
How do I gracefully shut down both servers if I flip modes at runtime?
What certificates do I need (self-signed vs. real)?
Work through those questions one at a time:
Write a small snippet that prints out the negotiated ALPN protocols in a handshake.
Try serving a bare TLS endpoint that only allows "h2". Can you hit it with curl --http2? Does plain curl (HTTP/1.1) fail?
Add quic-go’s HTTP/3 listener. Point curl or h3client at it and observe.
Once you have those building blocks figured out, stitching them into a single binary is just a matter of wiring flags → tls.Config.NextProtos → selecting the correct listener. Good luck, and feel free to loop back with any specific stumbling blocks!