
All the prompt engineering techniques I've seen seem to focus on telling the model what to do, e.g. few-shot prompting.

Is there any value in giving the model examples of what not to do? Can you link me to any papers/techniques on the topic?

Example

I am building a bot to improve students' foreign language writing skills.

Bad output: Corrected spelling of 'heisse' to 'heiße' because 'heiße' is the correct spelling in German.

Better output: Corrected spelling of 'heisse' to 'heiße' because 'ss' can be combined to form 'ß' in German.

I could solve this specific problem using few-shot prompting. But really, I want to tell the model "don't give answers like 'this is how it is done in German'; instead, explain what is being done and the reasons for it".

I may have answered my own question there... just put what I said above in the system prompt?
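For what it's worth, the "put it in the system prompt" approach might look like the sketch below, using the common chat-message format. The model name is omitted and the exact wording of the negative instruction is an assumption, not a tested recipe:

```python
# Sketch: a system prompt containing a negative instruction ("what not to do")
# alongside a positive restatement of the desired behaviour.
system_prompt = (
    "You are a German writing tutor. When correcting a student's text, "
    "explain the underlying rule behind each correction.\n"
    "Do NOT justify a correction by merely restating it "
    "(e.g. avoid \"'heiße' is the correct spelling in German\"). "
    "Instead, explain the rule, e.g. \"'ss' after a long vowel or "
    "diphthong is written 'ß' in German\"."
)

# Messages in the chat format accepted by most chat-completion APIs.
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Ich heisse Anna."},
]
```

Anecdotally, pairing the negative instruction with a positive alternative (as above) tends to work better than the prohibition alone, though I don't have a citation for that.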

codeananda
  • I think the question in the title is a bit vague and not very interesting. You can certainly create prompts where you ask the model not to do something. The more interesting question is - how effective are they and is there any research on the topic? So, I'd reformulate the title to be more specific. From my experience, if we ask the model not to be verbose, it can still be verbose. But this doesn't say much because if you ask the model to do something it might also not do it. – nbro Jul 13 '23 at 00:29
  • You can fine-tune davinci-003 for your case. With enough examples, it should start outputting the required information. – Chinmay Jul 13 '23 at 03:40
  • @nbro yes the idea of them being effective/whether research exists was the idea – codeananda Jul 13 '23 at 11:23
  • @Chinmay so you're saying that yes you can do negative prompting and that it is effective? – codeananda Jul 13 '23 at 11:23

2 Answers


The Super-NaturalInstructions paper tested including negative (i.e., incorrect) examples during evaluation and instruction-tuning, along with adding explanations of why each included few-shot example was correct or incorrect. It seems this doesn't improve accuracy, at least in their configuration (see Table 4).
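To make the setup concrete, here is a minimal sketch of a prompt that mixes positive and negative demonstrations with explanations, in the spirit of that paper's evaluation. The template below is my own illustration, not the paper's exact format:

```python
# Build a prompt containing positive demonstrations, negative (incorrect)
# demonstrations with explanations, and a final query to complete.
def build_prompt(instruction, positive, negative, query):
    parts = [f"Instruction: {instruction}", ""]
    for inp, out, why in positive:
        parts += [f"Input: {inp}", f"Output: {out}",
                  f"Explanation: {why}", ""]
    for inp, out, why in negative:
        parts += [f"Input: {inp}", f"Incorrect output: {out}",
                  f"Explanation: {why}", ""]
    parts += [f"Input: {query}", "Output:"]
    return "\n".join(parts)

prompt = build_prompt(
    "Correct the German sentence and explain the rule applied.",
    positive=[("Ich heisse Anna.",
               "Ich heiße Anna. ('ss' after the diphthong 'ei' is written 'ß'.)",
               "The correction cites the spelling rule.")],
    negative=[("Ich heisse Anna.",
               "Ich heiße Anna. ('heiße' is the correct spelling.)",
               "This merely restates the correction without giving the rule.")],
    query="Sie heisst Lena.",
)
```

The resulting string is then sent as an ordinary completion prompt; per Table 4 of the paper, don't expect the negative demonstrations to raise accuracy by themselves.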

That said, if you're using a model like ChatGPT, then, as @nbro mentioned, you can probably just prompt the model not to do something.


Does Negative Prompting Exist?

Yes, e.g. in some text-to-image generation models such as https://app.leonardo.ai/ai-generations:

[screenshot: the negative prompt field in Leonardo.ai's generation settings]

One can run such negative prompts on one's computer, e.g. with Automatic1111's Stable Diffusion WebUI (Windows 10/11 installer):

[screenshot: a negative prompt entered in the Stable Diffusion WebUI]
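If you prefer to script it, the WebUI also exposes negative prompts through its txt2img API. The sketch below assumes a WebUI instance started with the `--api` flag and listening on the default port; treat the endpoint and field names as assumptions to verify against your installed version:

```python
# Hedged sketch: a txt2img request payload for a locally running
# Automatic1111 Stable Diffusion WebUI, including a negative prompt.
import json

payload = {
    "prompt": "portrait photo of an astronaut, studio lighting",
    "negative_prompt": "blurry, low quality, extra fingers, watermark",
    "steps": 25,
}

# Uncomment to call an actual running WebUI instance (assumes --api):
# import requests
# r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
# images = r.json()["images"]  # base64-encoded PNGs

print(json.dumps(payload, indent=2))
```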

Franck Dernoncourt