All the prompt engineering techniques I've seen seem to focus on telling the model what to do, e.g. few-shot prompting.
Is there any value in giving the model examples of what not to do? Can you link me to any papers/techniques on the topic?
Example
I am building a bot to improve students' foreign language writing skills.
Bad output: Corrected spelling of 'heisse' to 'heiße' because 'heiße' is the correct spelling in German.
Better output: Corrected spelling of 'heisse' to 'heiße' because 'ss' can be combined to form 'ß' in German.

I could solve this specific problem with few-shot prompting. But really, I want to tell the model: "don't give answers like 'this is how it is done in German'; instead, explain what is being done and the reasons for it".
I may have answered my own question there... should I just put what I said above in the system prompt?
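For concreteness, here's a minimal sketch of that idea: a system prompt that states the rule explicitly and pairs a negative example with a positive one (sometimes called contrastive or negative few-shot prompting). The OpenAI client and model name are placeholders, not a recommendation; any chat API would work the same way.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = """\
You are a tutor helping students improve their German writing.
When you correct a mistake, explain the underlying rule,
never just assert that the correction is "how it is done in German".

Do NOT answer like this (no rule given):
  Corrected spelling of 'heisse' to 'heiße' because 'heiße' is the
  correct spelling in German.

DO answer like this (rule stated explicitly):
  Corrected spelling of 'heisse' to 'heiße' because 'ss' can be
  combined to form 'ß' in German.
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whichever model you have access to
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Ich heisse Anna und wohne in Berlin."},
    ],
)
print(response.choices[0].message.content)
```

Whether the negative example adds anything beyond the plain instruction plus the positive example is exactly what I'm asking about.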