A prevailing sentiment online is that GPT-4 still does not understand what it talks about. We can argue semantics over what “understanding” truly means. I think it’s useful, at least today, to draw the line at whether GPT-4 has successfully modeled parts of the world. Is it just picking words and connecting them with correct grammar? Or does the token selection actually reflect parts of the physical world?
One of the most remarkable things I’ve heard about GPT-4 comes from an episode of This American Life titled “Greetings, People of Earth”.
They are making sense of a language without a Rosetta Stone. An English LLM learns to talk from English text.

Granted, building the corpus is a big undertaking. But still.
No, they learn English (or any other language) from humans. Translation requires a Rosetta Stone, and LLMs are still much worse at such tasks than dedicated translation programs.
Edit: I guess if you are suggesting that the LLM could become an LLM of the dead language and communicate only in said dead language, that is indeed possible. Since users would need to speak that dead language to communicate with it, though, I don’t understand the utility of such a thing (and that is certainly not what the author meant anyway).