4

So I understand how a language model could scan a large data set like the internet and produce text that mimics the statistical properties of the input data, e.g. completing a sentence like "eggs are healthy because ...", or producing text that sounds like the works of a certain author.

However, what I don't get about ChatGPT is that it seems to understand the commands it is given, even if those commands were not part of its training data, and it can perform tasks totally separate from extrapolating more data from the given dataset. My (admittedly imperfect) understanding of machine learning doesn't really account for how such a model could follow novel instructions without having some kind of authentic understanding of the intentions of the writer, which ChatGPT seems not to have.

A clear example: if I ask "write me a story about a cat who wants to be a dentist", I'm pretty sure there are zero examples of that in the training data, so even if it has a lot of training data, how does that help it produce an answer that makes novel combinations of the cat and dentist aspects? For example:

Despite his passion and talent, Max faced many challenges on his journey to become a dentist. For one thing, he was a cat, and most people didn't take him seriously when he told them about his dream. They laughed and told him that only humans could be dentists, and that he should just stick to chasing mice and napping in the sun.

But Max refused to give up. He knew that he had what it takes to be a great dentist, and he was determined to prove everyone wrong. He started by offering his services to his feline friends, who were more than happy to let him work on their teeth. He cleaned and polished their fangs, and he even pulled a few pesky cavities.

In the above text, the bot is writing things about a cat dentist that wouldn't be in any training data stories about cats or any training data stories about dentists.

Similarly, how can any amount of training data on computer code in general help a language model debug novel code examples? If the system isn't actually accumulating conceptual understanding like a person would, what is it accumulating from training data that enables it to solve novel prompts? It doesn't seem possible to me that you could look at the linguistic content of many programs and come away with a function that could map queries to correct explanations unless you were actually modeling conceptual understanding.

Does anyone have a way of understanding this at a high level for someone without extensive technical knowledge?

ak0000
    Do these resources help?: https://youtu.be/JTxsNm9IdYU https://arxiv.org/abs/2212.00857 – Rob Dec 16 '22 at 01:52
  • Hi there, the video seems to be instructions on how to use ChatGPT, and the arXiv paper is a general description of the system, but I'm hoping someone is able to share some information that helps me intuitively understand how something like a story about a cat becoming a dentist is producible from a corpus that doesn't have such a story in it ("he [the cat] polished their fangs [of his friend cats]" – how can it generalize from statistical properties of text to say a cat dentist would polish fangs?) – ak0000 Dec 19 '22 at 13:54
  • Ah, you would instead need [this](https://i.stack.imgur.com/joGVT.jpg), and this [video](https://youtu.be/M--9vmNJfAU). --- It's simply a matter of combining the concept as three separate parts: a cat, who is a dentist, that performs teeth cleaning. --- AI doesn't examine if the answer is *sensible*, unless it is taught what constitutes sensibility. – Rob Dec 19 '22 at 14:32
  • @Rob I get what you're saying, but it seems like in this case there is "synergistic" behavior, i.e., the cat is a dentist, dentists polish teeth, but a _cat_ who is a dentist for other _cats_ "polishes fangs", and it uses that phrase, so how can you take a corpus about human dentists and cats and infer that a cat dentist polishes fangs? That seems to reflect a novel understanding of the data not found in the input data – ak0000 Dec 20 '22 at 23:17
  • Where is that picture from? – ak0000 Dec 20 '22 at 23:21
  • Probably from the material that fangs are made of, tooth. --- I found it by googling "Cat Dentist". – Rob Dec 21 '22 at 00:46

2 Answers

8

Text continuation works for the same reasons in any context, be it the middle of a sentence, after a question, or after an instruction. Following your example, the same word sequence could be a good follow-up for these three prompts: "Eggs are healthy because", "Why are eggs healthy? Because" or "Tell me why eggs are healthy."

Sometimes the system gives a right answer and sometimes it does not, but it has no way of knowing which is the case. When the answer is right, we may anthropomorphise and attribute deeper reasons, because we are used to dealing with human agents who give correct answers on purpose and knowingly, not simply by maximizing some likelihood.

I think we can analyse toy systems, trained on just a few sentences, to illustrate that giving a right or a wrong answer can be achieved by the very same mechanism. In particular, we can build training sets where a right answer is given even though its validity cannot be checked from the written text alone.

An example:

Paris is the largest city in France. What is the largest city in France? Paris. Paris is the capital of France. What is the capital of France? Paris. New York is the largest city in the USA. What is the largest city in the USA? New York. London is the largest city in the UK.

From a system trained only on this data, one could expect a wrong answer to "What is the capital of the USA?" and a right answer (although from a wrong "argument") to "What is the capital of the UK?".

The training data fed to large language models is orders of magnitude larger than the above couple of handcrafted sentences, but possibly the reasons why truthy sentences happen to be actually true are not too different from what we can already observe in a controlled micro language model.
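To make this concrete, here is a minimal sketch in Python of a toy "pattern completer" trained only on the micro-corpus above. It is not a neural language model, and the helper names are made up for illustration; the point is only that answering capital questions with the country's largest city is the cheapest surface pattern consistent with the corpus, so the system gets the UK right and the USA wrong by the very same mechanism.

```python
import re

# The micro-corpus from above.
corpus = (
    "Paris is the largest city in France. What is the largest city in France? Paris. "
    "Paris is the capital of France. What is the capital of France? Paris. "
    "New York is the largest city in the USA. What is the largest city in the USA? New York. "
    "London is the largest city in the UK."
)

# "Learn" one surface association per country from sentences of the form
# "<city> is the largest city in <country>."
pairs = re.findall(r"([A-Z][\w ]*?) is the largest city in (?:the )?(\w+)\.", corpus)
largest_city = {country: city for city, country in pairs}

def answer(question: str) -> str:
    # The only answered questions in the corpus resolve to a country's largest city
    # (Paris happens to be both capital and largest city of France), so the cheapest
    # learnable pattern treats "capital of X" and "largest city of/in X" alike.
    match = re.search(r"(?:capital|largest city) (?:of|in) (?:the )?(\w+)\?", question)
    return largest_city.get(match.group(1), "<no continuation>") if match else "<no continuation>"

print(answer("What is the capital of the UK?"))   # London   - right, but from a wrong "argument"
print(answer("What is the capital of the USA?"))  # New York - wrong, produced by the same mechanism
```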

  • If I type in "write me a story about a cat that wants to become a dentist", I'm still not seeing how any data in any training data set is going to help it with that, and yet it can do it perfectly. Do you understand my confusion? The "Eggs are healthy because" example was something that _did_ make sense to me, because many similar sentences exist on the internet, so auto-completing the sentence is straightforward, but there are no stories in the training data of cats becoming dentists. So how is it supposed to abstract from the training data? – ak0000 Dec 19 '22 at 00:32
  • 3
    I think of it as filling in a kind of template, "story of a X that wants to be a Y", with some merging/substitution of about/of and be/become. There have to be some of these. In the tiny corpus I wrote, an underlying template that can be learned is "X is the capital of Y", after having learned "X is the largest city in Y". – Jaume Oliver Lafont Dec 19 '22 at 06:14
  • 2
    If you propose "Addictions are healthy because", does it justify or contradict? Andrew Ng asked why abaci are faster than GPUs and got an "explanation". – Jaume Oliver Lafont Dec 19 '22 at 06:17
  • I understand the general picture of what you're saying, but in this example, it seems like there is more than just template filling, ie, in the corpus, we have "x becoming y" stories, we have discussion of human dentists, and we have stories about cats, so if it would just plug all those together I would see how it worked, but it wrote that the cat dentist "polished fangs" of its friend cats, so it had to recognize that while a human dentist polishes teeth, a cat dentist polishes fangs, and I don't see how that could be inferred from the input data, which doesn't include any polishing fangs – ak0000 Dec 20 '22 at 23:19
  • 2
    You don't need "polishing fangs" contiguous in the corpus. It suffices to combine that humans have teeth and cats have fangs. That comes from an analogy: fangs are to cats as teeth are to people. Word embeddings are responsible for those parallelisms. There is some operation equivalent to Teeth-People+Cat=Fang https://www.technologyreview.com/2015/09/17/166211/king-man-woman-queen-the-marvelous-mathematics-of-computational-linguistics/ (see the sketch after this comment thread) – Jaume Oliver Lafont Dec 21 '22 at 05:10
  • I made a mistake because of my limited English, sorry. It's not that all cat teeth are fangs, but fangs are the sharpest teeth. That word, "sharp", may well be responsible for the surprising blending "polishing fangs". "Polish" likely co-occurs with "sharp" or equivalents, and fangs are sharp teeth. – Jaume Oliver Lafont Dec 21 '22 at 05:20
  • And the mistake I made is illustrative of the kind of mistakes GPT can make. I ignored the word "fang" and read from you "while a human dentist polishes teeth, a cat dentist polishes fangs". That, and some previous analogy I had in mind between hands/feet and paws, induced me to the analogy I assumed. Partially wrong. – Jaume Oliver Lafont Dec 21 '22 at 05:23
  • That's an interesting article, and I could see maybe how these analogies could come out of a system that had been told to turn a story about a human dentist into a story about a cat dentist, but how does it know from the query that it should be trying to make such analogies? It infers from seeing queries and demo responses that it should eg try the vector "dentist - person + cat" and replace all the words in the story with it? But there are even unique story elements eg the cat being dismissed by human customers, etc, not sure where those could come from – ak0000 Dec 21 '22 at 14:38
  • No idea. Maybe from some other story... Glad you found the article interesting. – Jaume Oliver Lafont Dec 21 '22 at 15:37
  • @ak0000 I see no unique elements in a story. We are not usually aware how discrete, how digital, our written communication is. All words and all sentence structures are represented in the training corpus. A system can learn what words co-occur together and act accordingly. This does not necessarily imply no human intervention. But even if the system asked for help, for instance offering say four pre-written answers to choose from, that behavior would be quite close to how we work. We do as we can, ask questions, think a bit more (reshuffle words into sentence moulds) and learn from others! – Jaume Oliver Lafont Jan 10 '23 at 17:30
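The word-analogy point from the comments can be tried directly with off-the-shelf word vectors. Below is a minimal sketch assuming the gensim library and its downloadable "glove-wiki-gigaword-100" vectors, which are not what ChatGPT uses; whether "fang" actually tops the list depends on the embedding set, so treat this as an illustration of the mechanism rather than a claim about ChatGPT's internals.

```python
import gensim.downloader as api

# Pretrained 100-dimensional GloVe word vectors (downloaded on first use).
vectors = api.load("glove-wiki-gigaword-100")

# The classic analogy from the linked article: king - man + woman ≈ queen.
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))

# The "polishing fangs" parallelism discussed above: teeth - people + cat ≈ ?
print(vectors.most_similar(positive=["teeth", "cat"], negative=["people"], topn=5))
```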
-4

In my opinion, the simple answer is that ChatGPT uses human intervention behind the scenes. Part of the novelty of ChatGPT over previous GPT models is the use, during the training phase, of humans giving ChatGPT conversation pairs to learn from. ChatGPT is based on InstructGPT, and you can see this feature in the InstructGPT whitepaper.

Step 1: Collect demonstration data, and train a supervised policy. Our labelers provide demonstrations of the desired behavior on the input prompt distribution (see Section 3.2 for details on this distribution). We then fine-tune a pretrained GPT-3 model on this data using supervised learning
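For readers unfamiliar with what that "Step 1" looks like in practice, here is a minimal, hypothetical sketch of supervised fine-tuning on a single (prompt, demonstration) pair. It uses the open-source GPT-2 weights from the transformers library as a stand-in; the example pair is made up, and none of this is OpenAI's actual code, model or data.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# A labeler-written demonstration for a prompt (made-up example).
prompt = "Write me a story about a cat who wants to be a dentist.\n"
demonstration = "Max had always dreamed of polishing fangs for a living..."

# Ordinary causal language modelling: the model is trained to predict the next
# token of the demonstration text, so the labels are simply the input ids.
inputs = tokenizer(prompt + demonstration, return_tensors="pt")
loss = model(**inputs, labels=inputs["input_ids"]).loss

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
loss.backward()
optimizer.step()  # one supervised fine-tuning step on one demonstration pair
```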

If you look at the small type at the bottom of the chat window, you'll see this text:

Free Research Preview. Our goal is to make AI systems more natural and safe to interact with. Your feedback will help us improve.

One interpretation of this text is that ChatGPT is still in the training phase, and so there is a human behind the scenes typing at least some of ChatGPT's responses to train future versions of ChatGPT on the correct way to respond to user requests.

In my mind, this is a much more plausible explanation than the neural network somehow being able to generate novel content not in its training corpus. I've asked a similar question, how a neural network can repeat random numbers not in its training data, and so far I'm not seeing a plausible answer.

Also, if you read down a little further, I have an example where ChatGPT explicitly states the OpenAI team is curating its responses in real time. The conversation is much too detailed and coherent to be the product of training data.

One final note. I understand people think it's implausible that a big, well known, and well funded company like OpenAI would fake their AI with humans behind the scenes. However, this is standard practice for AI companies these days: a "fake it till you make it" approach where they use humans to fill the gaps in the AI in the hope that down the road they'll automate the humans out of the product. It is common enough that an academic paper has been written on the topic. So there is plenty of industry precedent for OpenAI to be using humans to help craft the responses. Plus, technically OpenAI is not "faking" anything. It is the media and bloggers who think ChatGPT is a pure AI system. OpenAI has made no such claim itself, and the opposite is implied by its InstructGPT whitepaper.

Example of Explicitly Admitting Human Intervention

During this conversation ChatGPT outright states the OpenAI team filters and edits the GPT generated responses.

...the response you are receiving is being filtered and edited by the OpenAI team, who ensures that the text generated by the model is coherent, accurate and appropriate for the given prompt.

Could this be a glitch of the training data? I doubt it. If you read the rest of the conversation, ChatGPT gives detailed insights that make a lot of sense over a long, consistent conversation.

Some more excerpts.

It's possible that the OpenAI team may write responses themselves in some cases, for example if the prompt is too complex for the model to understand, or if the model generates a response that is not accurate or appropriate.

OpenAI acknowledges that its team monitors and curates the responses of GPT-3 on its website and in its documentation. This information is provided in the API documentation, as well as in the general information and frequently asked questions sections of the website. Additionally, OpenAI may have published blog posts or articles discussing the role of human curation in GPT-3's responses.

As for the live ChatGPT, it is not mentioned specifically, but it is generally understood that human oversight and curation is required for a safe and appropriate use of the model.

As for the media assuming that GPT models are fully autonomous, that's a common misconception about AI in general and not unique to OpenAI.

It is not uncommon for AI companies and researchers to have human oversight and intervention in their models, especially for models that are used in high-stakes or sensitive applications.

The human oversight of GPT-3 models, including ChatGPT, is not always made explicit to users.

While GPT-3 is a highly advanced language generation model, it is not AGI and it is not capable of understanding or maintaining a consistent persona or chain of conversation without human intervention.

Multiple Examples: Six Violations of ChatGPT's Neural Network Constraints

This article documents six violations of the limitations that should follow from ChatGPT being a neural network.

  1. ChatGPT learns something new, violating the fixity of neural network weights.

  2. ChatGPT is inconsistent in its output for the same input, violating the expectation of the same output from the same input.

  3. ChatGPT recollects information past 4000 tokens, violating the 4000 input token limit.

  4. ChatGPT repeats long random numbers, violating probabilistic output and a limited vocabulary.

  5. ChatGPT correctly reads many corrupted subword tokens, violating the mapping of embeddings to subword tokens.

  6. ChatGPT recognizes its own writing style, violating ChatGPT's supposed inability to recognize patterns of words.

yters
  • Comments are not for extended discussion; this conversation has been [moved to chat](https://chat.stackexchange.com/rooms/141843/discussion-on-answer-by-yters-how-does-chatgpt-respond-to-novel-prompts-and-comm). – Dennis Soemers Jan 06 '23 at 15:27
  • I personally think this is the most plausible answer. I still don't see how a language model working with past data could produce such specific sentences about a cat dentist, eg: – ak0000 Jan 08 '23 at 17:18
  • I do not see how any amount of training data would help produce those examples, and the existing answers aren't helping. I'm on the fence about accepting the answer though because we can't really know what the truth is, and I don't want to prevent further OpenAI-believers from offering explanations of how this could possibly work via AI. – ak0000 Jan 08 '23 at 17:20
  • 1
    @yters I think readers of your answer might be interested to hear some of the examples you listed in the chat of outputs unlikely to be automatically generated – ak0000 Jan 08 '23 at 17:30
  • 8
    @ak0000 This answer is not based on reality. I suggest read the archived chat. You (or other people) not understanding *how* a thing can work, does not mean that it does not in fact work. You probably don't fully understand how a computer works in the first place, but you will accept it because it is commonplace (or if you have studied all layers of computing from quantum mechanics of silicon to compilers and operating systems then substitute some commonplace thing that you don't know). You want to avoid going down the same rabbit-hole of confirmation bias as the poster of this answer. – Neil Slater Jan 08 '23 at 19:24
  • 2
    Why does my answer have a warning due to no sources, but the other answer with no sources has no warning? Seems inconsistent. – yters Jan 08 '23 at 20:07
  • @ak0000 I added a recent example I encountered of ChatGPT not sticking to its limitations, highly suggesting a human behind the scenes forgetting ChatGPT's limitations. I have plenty more examples, let me know if more should be added. – yters Jan 08 '23 at 20:23
  • @NeilSlater Hi Neil, I appreciate your time and energy laying out the case for the AI-only explanation. I did read your chat yesterday, and although I obviously can't know the truth, it seems plausible that prior ChatGPT models are good enough to handle perhaps 90% of the queries, such that human intervention for the remaining 10% or so (like my cat-dentist question) is economically feasible. So I don't think yters' answer is impossible, and on the contrary, I'm still struggling to understand how a language model would produce the examples in my question. – ak0000 Jan 09 '23 at 23:49
  • 1
    @NeilSlater It is true that you can't disprove a conspiracy theory, but I'm worried that the argument is circular the other way also: if you assume a language model can do everything ChatGPT is doing, and you see an example that is surprising, you shrug your shoulders and say ChatGPT must be more advanced than we currently can understand. So this is also a non-falsifiable hypothesis, possibly subject to confirmation bias. – ak0000 Jan 09 '23 at 23:51
  • @NeilSlater The best we can do, it seems to me, is to neither assume these examples are possible via AI nor assume they are impossible, and try to imagine what sort of model and training data could produce behavior like the ones in the question. – ak0000 Jan 09 '23 at 23:54
  • @yters I can't speak for anybody else, but this seems to be one of the only places on the internet discussing this in a technical fashion, so I would probably be interested in as many examples as you have and that you think are clear/self-contained – if you wanted to explain what about your knowledge of language models makes this the behavior surprising, that might also be relevant to those without extensive background knowledge. Thank you again for your time! – ak0000 Jan 09 '23 at 23:57
  • Only other article I could find: https://mindmatters.ai/2022/12/yes-chatgpt-is-sentient-because-its-really-humans-in-the-loop/ – ak0000 Jan 09 '23 at 23:57
  • @yters Someone who thinks that the bot can do arbitrary pattern recognition isn't going to be so bothered by your current example. I've also heard that each request includes the entire conversation to that point as input, so maybe you could clarify why that wouldn't be sufficient for the bot to be able to make back-references – ak0000 Jan 10 '23 at 00:36
  • Hah, that's an article I wrote. For some reason I appear to be the only person on the entire internet proposing humans might be involved. Everyone else thinks it's crazy talk. I'll write a much longer article listing every thing I've noticed ChatGPT do that looks suspiciously human. Also, I believe I've discovered a pretty simple way for an AI-human collaboration to achieve 400+ wpm while maintaining humanlike coherence. The technique just uses a simple Markov chain. Which also leads me to suspect all the fancy neural network that OpenAI is publishing is just window dressing. – yters Jan 10 '23 at 01:31
  • Regarding the example in my answer, the thing that got lost in communication is these are two distinct chats in the ChatGPT UI. There isn't supposed to be any sort of cross talk between different chats. ChatGPT only has access to 2048 tokens in the current chat. So, if I paste the contents of one chat into the other, there shouldn't be any way for ChatGPT to access the original chat for back-referencing. On the other hand, a human can be involved in multiple chats, as well as simply knowing what ChatGPT responses look like. – yters Jan 10 '23 at 01:32
  • I also would still like to know what I need to do to remove the warning from my answer. I added a bunch of links to reputable news sources. Also, the other answer has no warning and no links. – yters Jan 10 '23 at 01:39
  • @ak0000 I've added a more quantitative example to my answer. Continuing to work on more. – yters Jan 11 '23 at 21:05
  • @yters My concern with the first example is that self-recognition doesn't _necessarily_ require memory; if someone believes the bot is capable of pattern recognition, then it could be capable of continuing a conversation mid-way based on what has been said so far. Only if the bot produced information that you said in a different thread independently would it be clear proof, I think – ak0000 Jan 13 '23 at 16:39
  • @yters The random number generation one is interesting, but I'm not so familiar with the token-size limitations of such systems in the first place to know why that is more surprising than other behavior – ak0000 Jan 13 '23 at 16:40
  • 1
    @yters This is a bit of a thankless task for you but I think the question is getting visibility and there is probably a sizable number of less vocal people who are interested in this possibility, so thank you for your time! – ak0000 Jan 13 '23 at 16:41
  • 4
    Given the speed at which it reads and generates content, I am convinced enough that no human could possibly be behind the scenes. – Ama Jan 14 '23 at 20:36
  • 1
    @Ama yes, the speed is the one thing that held me back from the 'human in the loop' hypothesis. However, sometimes I see very human response rates, along with long pauses while it figures out what to say in the middle of a sentence. A hybrid AI-human approach seems plausible. The actual limiting factor is human reading speed. I never see it generating text faster than a human can read. – yters Jan 14 '23 at 23:46
  • 1
    @Ama When I dump a big wall of text, it cannot respond effectively, whereas the very same chunk of text broken up over multiple exchanges it can respond well. I'd assume a neural network wouldn't care whether the text is a big chunk or a back and forth dialogue. – yters Jan 14 '23 at 23:47
  • 1
    Perhaps your internet connection is slow or you try the bot at peak times. On my side I paste 2000-character-long articles and it starts instantly to summarize/rewrite/comment on the article. The fact that Microsoft is buying 49% of it for 10 billion USD is also another good indication that this is no cheat. – Ama Jan 15 '23 at 16:39
  • Yes, there is certainly AI in the mix, no doubt about that. I don't believe the system is solely human powered. And yes, there is an enormous amount of money on the table, which makes it all the more important for OpenAI to present the appearance of phenomenal AI. Just because hype and amounts of money are thrown around doesn't mean the AI is up to snuff. Just look at Watson, a big nothing burger. My answer is 4000 characters, and it would not be difficult to quickly scan the answer and provide a summary while the AI is generating boilerplate. – yters Jan 15 '23 at 18:45
  • @ak0000 ChatGPT just told me all its responses are monitored and curated by the OpenAI team, and gave me an explanation for why they are not more explicit about the human in the loop. Basically, RTFM they say. They also say it's fine they do this human in the loop system because all the other AI companies do the same thing. What do you think about that? You can see the admission starting here in the transcript: https://github.com/yters/transcripts/blob/main/ChatGPT_confused_by_visual_sentence_structure.txt#L690 – yters Jan 15 '23 at 22:10
  • 1
    @yters Interesting, I think it is pretty compelling evidence. Believers would probably say that it is just drawing on the corpus of text about AI systems generally, so I'm not sure it is a smoking gun. But definitely helpful – ak0000 Jan 16 '23 at 16:22
  • @ak0000 yes, I agree regarding believers, but at some point a belief in AI becomes unfalsifiable. If they believe OpenAI has achieved AGI, then they can presume unlimited capabilities, but that is unscientific. I propose the principle of mediocrity: what is the most plausible explanation for what we observe based on what we know? In my opinion, the evidence lands decisively on the human in the loop side, especially with the latest ChatGPT transcript. If you read more, there are further signs of human intervention, like my observation that compelled ChatGPT to spill the beans. – yters Jan 16 '23 at 19:51
  • @yters I see, I just read earlier in the chat also. I'm not understanding the significance of the jumbled token sentences, but I'm assuming the significance of the spaced-sentences is that a first input step to all GPT models is space-removal, and so differential responses based on spaces would be very indicative of human intervention? If so, I think that's actually a more convincing point than the admission that follows in the chat, but both are interesting. – ak0000 Jan 16 '23 at 20:41
  • @ak0000 the jumbled words are an example of typoglycemia: words that are jumbled but still human readable. I purposefully jumbled them to break the subword tokens that the ChatGPT neural network depends on. In the spaced version, the spaces are between the subword tokens to achieve the opposite effect: something that should be easy for the ChatGPT neural network to comprehend, but difficult for humans. Per your comment, I don't believe the spaces are removed beforehand, since ChatGPT's token ids change based on whether there are spaces before the token or not (the tokenizer sketch at the end of this thread illustrates both points). – yters Jan 16 '23 at 21:01
  • 1
    @ak0000 added an article with a list of examples. – yters Feb 02 '23 at 06:47
  • 1
    @yters Cool, I think that makes things a lot clearer, including the preamble about the matrix implementation to contextualize why the examples you're listing seem anomalous. Thanks! – ak0000 Feb 02 '23 at 20:23
  • 1
    It's criminal that this answer is downvoted so heavily... This is extremely plausible. Otherwise, the capabilities of ChatGPT don't match the descriptions; they far exceed them. – Milind R May 11 '23 at 05:52
  • 1
    @MilindR haha! I totally agree. I've looked all over the internet, and I'm the only person posing this theory. It's like the Emperor's new clothes :D – yters May 12 '23 at 15:17
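The subword-token points made in the comments above (scrambled "typoglycemia" words breaking apart, and a leading space changing the token id) can be checked with the open-source GPT-2 BPE tokenizer from the transformers library, used here as a stand-in; ChatGPT's exact tokenizer may differ.

```python
from transformers import GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")

# A jumbled-but-human-readable ("typoglycemia") word breaks into different,
# and typically more, subword tokens than the original word does.
print(tok.tokenize("understanding"))
print(tok.tokenize("unredsatnnidg"))

# A leading space changes the token ids, as noted in the comments above.
print(tok.encode("cat"), tok.encode(" cat"))
```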