41

I asked ChatGPT (3.5 and 4) about the current date and (s)he answered correctly. In the subsequent conversation, (s)he was not able to explain how (s)he had this knowledge.

I always thought that the model only sees the conversation above and that a pretrained NN is used. How is the information about the current date injected into his/her knowledge?

edited by Robin van Hoorn
asked by Peter Franek
  • 38
    Why not use "it" instead of "(s)he"? – Rodrigo de Azevedo Mar 21 '23 at 18:22
  • 16
    @RodrigodeAzevedo For the same reason I say "please" and "thank you", to get on the good side of our future robotic overlords ;) – Alexander Mar 21 '23 at 22:34
  • 4
    @Alexander What if our future robotic overlords resent being anthropomorphized? ;-) – Rodrigo de Azevedo Mar 22 '23 at 12:48
  • 5
    I believe that ChatGPT is a pretrained transformer, but in some cases, you cannot be sure if it's not just a bunch of Indian students making a few $ / hour, in particular if they know today's date and then cannot explain how they know it. So I wanted to be polite to anyone. This will be my excuse on judgement day, @RodrigodeAzevedo – Peter Franek Mar 22 '23 at 13:43
  • 3
    @RodrigodeAzevedo I asked ChatGPT about preferred pronouns, and got the answer "You can use any pronouns you feel comfortable with when referring to me". – Federico Poloni Mar 23 '23 at 14:00
  • I saw somebody claim once that, in the early days (I think December or January), they got ChatGPT to tell the time by asking it to pretend to be a watch. I don't think I believe them, but it's pretty funny. – Brock Brown May 24 '23 at 18:20

2 Answers

51

For ChatGPT 3, the current date is inserted into a long pre-prompt, along with instructions like "this is a conversation between an AI chatbot and a human", "be nice" and "be truthful". These are part of the attempt to frame the next-word-predicting engine at the core of ChatGPT as a chatbot. OpenAI confirm this approach in their [chat model documentation](https://platform.openai.com/docs/guides/chat/instructing-chat-models).
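
A rough sketch of how such a pre-prompt might be assembled is below. The system-message wording follows the text that users have reportedly extracted from ChatGPT (see the comments); the function name and message structure are my assumptions, not a confirmed OpenAI implementation:

```python
from datetime import datetime, timezone

# Sketch only: assemble a system pre-prompt that injects the current date.
# The wording mirrors the reportedly leaked ChatGPT system message; the
# surrounding plumbing is invented for illustration.
def build_pre_prompt() -> str:
    today = datetime.now(timezone.utc).strftime("%Y-%m-%d")
    return (
        "You are ChatGPT, a large language model trained by OpenAI.\n"
        "Knowledge cutoff: 2021-09\n"
        f"Current date: {today}"
    )

messages = [
    {"role": "system", "content": build_pre_prompt()},
    {"role": "user", "content": "What is today's date?"},
]
```

On this view, the model "knows" the date only because it can read it in its context window, which also fits its inability to explain where the knowledge came from.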

Inherently, the core of ChatGPT - the GPT large language model - is not a chatbot. Conceptually, it has some resemblance to an image inpainting system: it predicts text that is likely, given the preceding text.
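
To make that concrete, generation from such a model is conceptually just a loop that scores candidate next tokens and appends one. The names below (`model`, `tokenize`, `detokenize`) are placeholders, not a real API:

```python
# Toy illustration of a next-word-predicting engine: repeatedly pick a
# likely next token given everything seen so far (greedy decoding here,
# for simplicity; real systems sample).
def generate(model, tokenize, detokenize, prompt: str, max_new_tokens: int = 20) -> str:
    tokens = tokenize(prompt)
    for _ in range(max_new_tokens):
        scores = model(tokens)  # one score per candidate next token
        best = max(range(len(scores)), key=scores.__getitem__)
        tokens.append(best)
    return detokenize(tokens)
```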

I expect the same is true of ChatGPT 4, but I have not seen any confirmation of this. It is feasible in principle to alter the architecture of the bots to have them reference external data sources, but I believe that for the current date, a pre-prompt will still be in use.
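
For contrast, referencing an external data source could look something like the sketch below. Nothing here is a confirmed ChatGPT mechanism; the sentinel string and `get_current_date` are invented purely to illustrate the idea:

```python
from datetime import date

def get_current_date() -> str:
    # Hypothetical "tool" the serving layer could call on the model's behalf.
    return date.today().isoformat()

def resolve_lookups(model_output: str) -> str:
    # Splice live data into the reply wherever the model asked for it.
    return model_output.replace("[LOOKUP:current_date]", get_current_date())

print(resolve_lookups("Today is [LOOKUP:current_date]."))
```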

edited by Volker Siegel
answered by Neil Slater
  • Why is ChatGPT capable of floating-point arithmetic and functions, like sin(1) or 60!, yet unable to multiply two 4-digit integers? – Anixx Mar 20 '23 at 12:39
  • "this is a conversation between an AI chatbot and a human" - haha. That's why ChatGPT refuses to talk to another AI while text-davinci-003 does it easily – Anixx Mar 20 '23 at 12:42
  • 7
    @Anixx There is already a question on the site about that. [Why is ChatGPT bad at math?](https://ai.stackexchange.com/questions/38220/why-is-chatgpt-bad-at-math) In short, ChatGPT is all about the text. It's actually a little surprising that it can do some maths, but what it can and cannot do with any technical subjects is a bit hit or miss. It doesn't somehow recognise you have asked a math question and load a maths module, or anything remotely like that. – Neil Slater Mar 20 '23 at 15:45
  • No. It can calculate elementary functions with very high precision, also factorials. – Anixx Mar 20 '23 at 19:12
  • 11
    @Anixx: It's very simple. If it's seen the answers in its training data, then it will reproduce them with high precision. If it hasn't seen the answers, it will guess based on context and similar calculations that it has seen. This is why it can do 60! - there are lots of webpages online with that expression and its result - but not, say, 318574 * 168649. – nneonneo Mar 20 '23 at 19:32
  • @nneonneo my impression is that it can do more than trivial floating-point functions. Currently ChatGPT is offline, but I will try to find proof. – Anixx Mar 20 '23 at 20:25
  • 5
    Do you have a reference for the 'pre-prompt' being present for ChatGPT? As far as I am aware, the details of the ChatGPT prompting schema etc. are still completely proprietary – Chris Cundy Mar 20 '23 at 20:28
  • 8
    @ChrisCundy there are quite a few folks who've been able to convince ChatGPT to leak its pre-prompt; see https://twitter.com/goodside/status/1598253337400717313?lang=en for one of the earliest examples – nneonneo Mar 20 '23 at 22:53
  • 10
    @Anixx it can, sometimes, but again this is a question of memorization. Somewhere in the terabytes of text it's been trained on are answers to quite a lot of mathematical problems. It's good enough at predicting and extrapolating that it may look as if it's good at math, but a bit of experimentation will quickly prove that it is simply superficial. This is very unlike, say, WolframAlpha, which is actually legitimately good at math thanks to a large amount of dedicated mathematical software and more "hardcoded" input parsing. – nneonneo Mar 20 '23 at 22:57
  • 2
    @Anixx it can also be convinced that 2+2=5, so I would take any mathematical answers it gives with a huge rock formation of salt. – Lawnmower Man Mar 20 '23 at 23:19
  • 1
    @Anixx: I just gave it a try with `sin`. Whilst it can quote some figures accurately - to the precision it decides to give - it will also get many simple ones wrong from the 2nd digit onwards. It will also give math-textbook explanations of $\sin(-x) = -\sin(x)$, which is cool, but it is clearly not calling out to some internal sin() function. There may be some very crude approximation that it has learned (because it helps to predict the text), but mostly it seems to be rote learning of some often-used values plus rough guesswork – Neil Slater Mar 20 '23 at 23:33
  • There are MASSIVE risks to referencing live data. Just look at Tay. Someone will figure out how to screw with it. – Nelson Mar 21 '23 at 00:39
  • 1
    @Nelson FWIW, the version in Bing can reference live data. It has been asked about new products on websites and seems to answer correctly. Apparently Bing is running GPT4 – slebetman Mar 21 '23 at 05:09
  • 3
    @ChrisCundy Just pointing out from the Twitter thread nneonneo linked: the prompt happens to be documented at https://platform.openai.com/docs/guides/chat/instructing-chat-models (screenshot for posterity at https://twitter.com/swyx/status/1634767357762740224). – Chortos-2 Mar 21 '23 at 18:35
  • 2
    P.S. re: math. I recently asked ChatGPT how to calculate the distance to the horizon from a given altitude. It would reply with the formula (correct!) but then always give the wrong answer to the actual question. Interestingly, it was always wrong by exactly half. I actually found it interesting that it was so consistently wrong by half and not just giving random results. But yeah, formulas are part of the linguistic learning; what the formulas mean and how to use them, not so much. – JamieB Mar 21 '23 at 18:54
  • @Chortos-2 That's very useful, will add the link to the answer, thanks – Neil Slater Mar 22 '23 at 11:38
  • I think it's actually much shorter, nothing about being polite and truthful: just "You are ChatGPT, a large language model trained by OpenAI, based on the GPT-4 architecture. Knowledge cutoff: 2021-09 Current date: ..." – Peter Franek Mar 23 '23 at 11:05
  • @PeterFranek: More is going on than that, it's just one of multiple components, that OpenAI have shared (and others have been able to extract). – Neil Slater Mar 23 '23 at 12:25
0

As an AI language model, ChatGPT does not have access to real-time information unless it is provided to it; its knowledge cutoff is September 2021. It therefore relies on the timestamp of the user's message to infer the current date, for example 2023-03-23 10:10:00 UTC.
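
If the mechanism this answer proposes were in place, it might look something like the sketch below; every name here is invented for illustration, and whether any such timestamp is visible to the model at all is questioned in the comments:

```python
from datetime import datetime, timezone

# Sketch of the proposed mechanism: derive "today" from a timestamp
# attached to the incoming user message (names invented for illustration).
def current_date_from_message(message_timestamp: datetime) -> str:
    return message_timestamp.astimezone(timezone.utc).date().isoformat()

print(current_date_from_message(datetime(2023, 3, 23, 10, 10, tzinfo=timezone.utc)))
# -> 2023-03-23
```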

  • 5
    Hi Dhruv, welcome to Artificial Intelligence Stack Exchange. Do you happen to have a source for this claim that this is how ChatGPT gets the current date? – Mithical Mar 23 '23 at 07:37
  • this seems unlikely – Peter Franek Mar 23 '23 at 11:01
  • It is correct. I asked ChatGPT (non-Plus) the same yesterday: My knowledge cutoff date is September 2021, which means that I was trained on a large dataset of text and information up until that time. However, as a language model, I do not have access to real-time or up-to-date information. While I can use my existing knowledge to answer questions and provide information, I may not be aware of recent events or developments that have occurred after my cutoff date. Therefore, I always recommend that users double-check any important information with reliable and up-to-date sources. – SQueek Mar 23 '23 at 14:55
  • For everyone who wants to ask ChatGPT, this is the question you can ask: "how current is your data?" – SQueek Mar 23 '23 at 14:57
  • there ain't any timestamps. The model just sees the pre-prompt and characters you sent and characters (s)he generated – Peter Franek Mar 23 '23 at 23:09