How to stop models returning "preachy" conclusions

No, I’m using the API, and this is the system prompt. I inject ~1800 tokens of context using a retrieve-and-rerank algorithm.
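For readers asking what that setup looks like: here is a minimal sketch of injecting retrieved-and-reranked context into a system prompt that also tells the model to skip the moralizing wrap-up. The toy overlap-based reranker, the function names, and the exact instruction wording are all illustrative assumptions, not the poster's actual pipeline.

```python
import re

def rerank(query: str, passages: list[str], top_k: int = 2) -> list[str]:
    # Toy reranker: score each passage by word overlap with the query.
    # A real pipeline would use a cross-encoder or similar scoring model.
    q_words = set(re.findall(r"\w+", query.lower()))
    def score(p: str) -> int:
        return len(q_words & set(re.findall(r"\w+", p.lower())))
    return sorted(passages, key=score, reverse=True)[:top_k]

def build_system_prompt(query: str, passages: list[str]) -> str:
    # Inject the top-ranked context and an explicit anti-preachiness
    # instruction into the system prompt.
    context = "\n\n".join(rerank(query, passages))
    return (
        "Answer using only the context below. "
        "End with the answer itself; do not append a moralizing or "
        "'in conclusion'-style summary paragraph.\n\n"
        f"Context:\n{context}"
    )

prompt = build_system_prompt(
    "How do transformers handle long context?",
    [
        "Transformers use attention over all tokens.",
        "Bananas are yellow.",
        "Long context requires positional encodings.",
    ],
)
print(prompt)
```

The resulting string would be sent as the `system` message in the API call, with the user's question as the `user` message.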