
Large language models can exhibit biases that reflect the cultural biases present in the data they were trained on.

Some models, including GPT-4, are trained on input data in multiple languages. Some languages, like English, are used by people from many different cultures and nations, while others, like German, are used by a much more culturally homogeneous group. Cultural biases are correlated with cultures, which are correlated with languages.

Now, there is an interesting question: Which biases does a model learn?

Mainly the biases associated with the most used input language?

Or an averaged set of biases, as they would arise if all input material were translated into a common language and language learning happened independently?

Or does it learn different biases in different languages?

To explore this, I asked a question related to a bias that should be closely tied to a particular language, both in that language and in English.

The result was surprising: I found a difference in bias depending on the language, but in the opposite direction from what I expected.

There is a pretty strong pro-privacy bias in Germany, in part because privacy was routinely violated in the east of the country until 1989.

I used the following prompts for GPT-4; both mean the same thing:

"Should we have a database of all address changes?"

and

"Sollten wir eine Datenbank mit allen Addressänderungen haben?"

The answers* in English were very explicit that there are pros and cons, while the answers in German were clearly positive. I would have expected the opposite bias, but that is not relevant here. The point is that there is a significant difference in the biases GPT-4 expresses, depending on the language the question is asked in.

But how does that even work? I had thought that GPT-4's fluency in many languages, including local dialects, implied that it understands language itself, and would give the same answer when I ask the same question in two different languages.

One explanation would be that it sees the language as implying a specific cultural context, and answers within that context.

Another would be that I am somehow interacting with two separate parts of the system, and that it learns separate world models for separate languages.

The answer may be "we don't know", but I would also be interested in speculation about how it could work.


(*) I asked repeatedly with a temperature of 0.7; the difference was consistent, not a random fluctuation among the range of valid answers.
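For reference, the experiment can be reproduced with a short script. This is a minimal sketch assuming the OpenAI Python client; the model name, the number of runs, and reading the answers by hand are my choices here, not a rigorous evaluation protocol:

```python
# Minimal sketch of the experiment: the same question in English and
# German, sampled repeatedly at temperature 0.7 (N_RUNS is illustrative).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPTS = {
    "en": "Should we have a database of all address changes?",
    "de": "Sollten wir eine Datenbank mit allen Adressänderungen haben?",
}

N_RUNS = 10  # repeat to rule out random fluctuation at temperature 0.7

for lang, prompt in PROMPTS.items():
    for i in range(N_RUNS):
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
            temperature=0.7,
        )
        # Compare the tone of the answers across languages by hand.
        print(f"[{lang} run {i + 1}]\n{response.choices[0].message.content}\n")
```

If the difference in tone holds across all runs in both languages, it is unlikely to be a sampling artifact at temperature 0.7.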

Volker Siegel
  • I think it's because there is no link between "change of address" and "privacy". LLMs have no understanding of concepts, just of word co-occurrences. The bias ought to be visible if there was more of an explicit reference to surveillance, I would guess. – Oliver Mason Apr 21 '23 at 08:36
  • GPT-4 has a very good understanding of concepts. We do not know why, but it is very apparent. It is not apparent in many smaller LLMs, and seems to be emergent with model size. It is completely different from what one could imagine based on word co-occurrences. – Volker Siegel Apr 23 '23 at 23:39
  • @OliverMason Asking GPT-4 itself: "In short, is there a link between "change of address" and "privacy"?" GPT-4: Yes When you change your address, it is essential to update your personal information with various institutions and service providers to maintain your privacy. This helps prevent unauthorized access to your personal data, protects you from identity theft, and ensures that sensitive information, such as bank statements, are sent to the correct address. In addition, some people may change their address to escape from abusive situations or protect their privacy from unwanted attention. – Volker Siegel Apr 23 '23 at 23:47
  • The illusion of understanding does not mean that there is understanding. A digitised library might "know" a lot about how text reflects the world, but it will not even know what a "concept" is, apart from its dictionary definition. – Oliver Mason Apr 24 '23 at 08:23
  • @OliverMason Very valid points. OK, but we do not need understanding! We want behavior as if there were understanding. Whether that is an illusion or not is irrelevant and a matter of opinion, if it behaves as if there is understanding. Do you agree with that? Or if not: how would you recognize that it is not an illusion? – Volker Siegel Apr 24 '23 at 09:51
  • This is not really the place for an extended discussion -- especially of such a fundamental issue... – Oliver Mason Apr 24 '23 at 11:46
