Restrictive "Thinking" Limits Learning Potential

I was wondering: would a filtered offline LLM like Phi-3 even be capable of general computational tasks where the output isn't free-form prose, such as text classification? If certain answer tokens are "forbidden", the model isn't free to produce every possible result, and that filter would also constrain how freely it can be fine-tuned for such tasks.
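To make the concern concrete, here is a toy sketch (not using Phi-3 or any real filter) of how an LLM-style classifier picks a label by comparing label-token logits, and how masking a "forbidden" label changes the result. The logit values and label names are entirely hypothetical, chosen for illustration:

```python
import math

def softmax(scores):
    """Convert a dict of logits to a probability distribution."""
    m = max(scores.values())
    exps = {k: math.exp(v - m) for k, v in scores.items()}
    total = sum(exps.values())
    return {k: v / total for k, v in exps.items()}

# Hypothetical raw logits a model might assign to candidate class labels.
logits = {"spam": 2.0, "ham": 1.5, "abuse": 3.0}

# Unfiltered: the model is free to pick any label.
unfiltered = softmax(logits)
prediction = max(unfiltered, key=unfiltered.get)  # "abuse"

# A filter that forbids the "abuse" label removes it from the candidates,
# so probability mass is redistributed over the remaining labels.
allowed = {k: v for k, v in logits.items() if k != "abuse"}
filtered = softmax(allowed)
filtered_prediction = max(filtered, key=filtered.get)  # "spam"
```

The point of the sketch is just that forbidding an output token doesn't make the model abstain; it silently reassigns probability to the allowed labels, which is exactly the distortion the question worries about.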