Simple answer: ChatGPT is actually human writers using some kind of autocomplete to speed things up.
This is standard practice for AI companies these days: a "fake it till you make it" approach in which humans fill the gaps in the AI, in the hope that down the road the humans can be automated out of the product. It is common enough that an academic paper has been written on the topic. So there is plenty of industry precedent for OpenAI using humans to help craft the responses.
Plus, technically OpenAI is not "faking" anything. It is the media and bloggers who think ChatGPT is a pure AI system. OpenAI has made no such claim itself, and the opposite is implied by its InstructGPT whitepaper:
Step 1: Collect demonstration data, and train a supervised policy. Our labelers provide demonstrations of the desired behavior on the input prompt distribution (see Section 3.2 for details on this distribution). We then fine-tune a pretrained GPT-3 model on this data using supervised learning.
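For concreteness, the supervised step quoted above amounts to ordinary fine-tuning of a language model on (prompt, human-written response) pairs. Below is a minimal sketch of that idea using the Hugging Face transformers library; the model name (GPT-2 as a stand-in for GPT-3), the toy demonstration data, and the hyperparameters are my own illustrative assumptions, not OpenAI's actual setup.

```python
# Sketch of "Step 1" supervised fine-tuning on human demonstrations.
# GPT-2, the toy data, and hyperparameters are illustrative assumptions only.
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")   # stand-in for GPT-3
model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

# Hypothetical labeler demonstrations: a prompt plus the desired response.
demonstrations = [
    {"prompt": "Explain photosynthesis to a child.",
     "response": "Plants use sunlight to turn air and water into food."},
]

class DemoDataset(torch.utils.data.Dataset):
    """Wraps (prompt, response) pairs for a standard causal-LM objective."""
    def __init__(self, examples):
        self.encodings = [
            tokenizer(ex["prompt"] + "\n" + ex["response"] + tokenizer.eos_token,
                      truncation=True, max_length=512, padding="max_length",
                      return_tensors="pt")
            for ex in examples
        ]
    def __len__(self):
        return len(self.encodings)
    def __getitem__(self, i):
        item = {k: v.squeeze(0) for k, v in self.encodings[i].items()}
        item["labels"] = item["input_ids"].clone()
        item["labels"][item["attention_mask"] == 0] = -100  # ignore padding in the loss
        return item

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sft-demo", num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=DemoDataset(demonstrations),
)
trainer.train()
```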
Additionally, ChatGPT is labeled a "research preview" on the website, which implies that humans are still training the system during the chats, as described in the quote above.
Final note: I find it amusing that no one considers this alternative plausible, as if it were somehow more complicated to have humans tweak chatbot responses than to create an AI with the apparent human-level understanding that ChatGPT exhibits.
UPDATE: ChatGPT confirms the OpenAI team curates its responses
It turns out ChatGPT is indeed human-curated, by its own open admission.
During this conversation, ChatGPT outright states that the OpenAI team filters and edits the GPT-generated responses:
...the response you are receiving is being filtered and edited by the OpenAI team, who ensures that the text generated by the model is coherent, accurate and appropriate for the given prompt.
Apparently, the fact that OpenAI actively curates ChatGPT's responses is also indirectly implied in the documentation here:
Human in the loop (HITL):
Wherever possible, we recommend having a human review outputs before they are used in practice. This is especially critical in high-stakes domains, and for code generation. Humans should be aware of the limitations of the system, and have access to any information needed to verify the outputs (for example, if the application summarizes notes, a human should have easy access to the original notes to refer back).
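As a concrete illustration of that recommendation, here is a minimal human-in-the-loop sketch: the model drafts a reply and a person approves or edits it before it is used. The model name and the surrounding review flow are assumptions for illustration, and the call follows the pre-1.0 openai Python package's ChatCompletion interface (newer client versions use client.chat.completions.create); this shows the documented pattern, not OpenAI's internal pipeline.

```python
# Minimal human-in-the-loop sketch: the model drafts, a human approves or edits.
# Assumes OPENAI_API_KEY is set in the environment; model name is illustrative.
import openai

def draft_reply(prompt: str) -> str:
    """Ask the model for a draft response."""
    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return completion.choices[0].message.content

def human_review(draft: str) -> str:
    """A person inspects the draft and may replace it before it is published."""
    print("--- model draft ---")
    print(draft)
    edited = input("Press Enter to approve, or type a corrected reply: ").strip()
    return edited or draft

if __name__ == "__main__":
    prompt = "Summarize the meeting notes in two sentences."
    final_text = human_review(draft_reply(prompt))
    print("Published:", final_text)
```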
So, that explains that :)