
As far as I understand, ChatGPT has been trained on a vast array of data, and it does understand questions; but it seems never to ask them. Even where a person would ask clarifying questions (which I assume appear in the training set), ChatGPT doesn't, opting instead to invent context or just say "X depends on Y, Z"...

Not asking questions seems to be trained into the network, but I am not sure how one would go about training a model not to generate questions so thoroughly that it won't do so even in DAN mode. I understand that for toxic language GPT-3 uses human raters to build a training set and then optimizes for non-toxic behavior, but that approach seems too specific to apply to the general concept of questions.
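
To make concrete the rater-based pattern I mean, here is a minimal sketch of that setup applied to question-asking instead of toxicity. Everything here is hypothetical illustration, not OpenAI's actual pipeline: humans rate sampled completions, a reward model is fit to those ratings, and the policy is then optimized (or filtered) against that reward.

```python
# Hypothetical sketch of the rater-based setup described above, applied
# to question-asking instead of toxicity. Not OpenAI's actual pipeline;
# just the general pattern: 1) humans label sampled completions, 2) a
# reward model is fit to the labels, 3) the policy is optimized or
# filtered against that reward.

import random
from dataclasses import dataclass
from typing import Callable


@dataclass
class RatedSample:
    prompt: str
    completion: str
    human_score: float  # e.g. raters mark clarifying questions as undesirable


def heuristic_reward(completion: str) -> float:
    """Stand-in for a learned reward model: penalize question-like output.

    A real reward model would be a network trained on RatedSample data;
    the '?' check only illustrates the *shape* of the reward signal.
    """
    return -1.0 if "?" in completion else 1.0


def best_of_n(prompt: str,
              sample: Callable[[str], str],
              reward: Callable[[str], float],
              n: int = 4) -> str:
    """Rejection sampling: keep the completion the reward model likes best.

    This is the simplest way a reward signal can suppress a behavior, at
    inference time. RLHF instead bakes the signal into the weights via
    policy-gradient updates, which would explain why prompt tricks like
    DAN cannot fully recover the suppressed behavior.
    """
    candidates = [sample(prompt) for _ in range(n)]
    return max(candidates, key=reward)


if __name__ == "__main__":
    # Toy "model" that sometimes asks a clarifying question.
    def toy_model(prompt: str) -> str:
        return random.choice([
            "Which OS are you on?",           # clarifying question
            "It depends on your OS: X or Y."  # hedged non-question answer
        ])

    print(best_of_n("How do I install it?", toy_model, heuristic_reward))
```

Even granting that pattern, my question stands: a '?'-detector (or a learned equivalent) for "questions in general" seems far broader and blunter than a toxicity classifier, so I don't see how this approach scales to suppressing clarifying questions specifically.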

EmmanuelMess
