• 0 Posts
  • 184 Comments
Joined 1 year ago
Cake day: June 12th, 2023


  • We aren’t trying to establish that neurons are conscious. The thought experiment presupposes that there is a consciousness, something capable of understanding, in the room. But there is no understanding because of the circumstances of the room. This demonstrates that the appearance of understanding cannot confirm the presence of understanding. The thought experiment can’t be formulated without a prior concept of what it means for a human consciousness to understand something, so I’m not sure it makes sense to say a human mind “is a Chinese room.” Anyway, the fact that a human mind can understand anything is established by completely different lines of thought.


  • This fails to engage with the thought experiment. The question isn’t if “the room is fluent in Chinese.” It is whether the machine learning model is actually comparable to the person in the room, executing program instructions to turn input into output without ever understanding anything about the input or output.






  • Yup, you’ll notice the only thing distinguishing C from R^(2) is that multiplication. That one definition has extremely broad implications.

    For fun, another definition is in terms of 2x2 matrices with real entries. The identity matrix

    1 0
    0 1
    

    is identified with the real number 1, and the matrix

     0 1
    -1 0
    

    is identified with i. Given this setup, the normal definitions of matrix addition and multiplication define the complex numbers.
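
    For concreteness, here's a minimal Python sketch (my own illustration; the names to_matrix and mat_mul are just labels I picked) checking that these matrices add and multiply like complex numbers:

    # Represent a + bi as the 2x2 real matrix
    #   [[ a, b],
    #    [-b, a]]
    # i.e. a times the identity plus b times the matrix identified with i.

    def to_matrix(a, b):
        return [[a, b],
                [-b, a]]

    def mat_mul(m, n):
        # Ordinary 2x2 matrix multiplication.
        return [
            [m[0][0]*n[0][0] + m[0][1]*n[1][0], m[0][0]*n[0][1] + m[0][1]*n[1][1]],
            [m[1][0]*n[0][0] + m[1][1]*n[1][0], m[1][0]*n[0][1] + m[1][1]*n[1][1]],
        ]

    # i * i should give -1, i.e. the matrix representing -1 + 0i.
    i_mat = to_matrix(0, 1)
    print(mat_mul(i_mat, i_mat))                      # [[-1, 0], [0, -1]]

    # (2 + 3i) * (4 + 5i) = -7 + 22i
    print(mat_mul(to_matrix(2, 3), to_matrix(4, 5)))  # [[-7, 22], [-22, -7]]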


  • One definition of the complex numbers is the set of tuples (x, y) in R^(2) with the operations of addition: (a,b) + (c,d) = (a+c, b+d) and multiplication: (a,b) * (c,d) = (ac - bd, ad + bc). Then defining i := (0,1) and identifying (x, 0) with the real number x, we can write (a,b) = a + bi.
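
    A quick Python sketch of this definition (my own example, not part of the original; add_c and mul_c are just names I chose), checking that i^2 = -1 under these rules:

    # Complex numbers as pairs (a, b) in R^2.

    def add_c(z, w):
        (a, b), (c, d) = z, w
        return (a + c, b + d)

    def mul_c(z, w):
        (a, b), (c, d) = z, w
        return (a*c - b*d, a*d + b*c)

    i = (0, 1)
    print(mul_c(i, i))            # (-1, 0), i.e. i^2 = -1
    print(add_c((1, 0), i))       # (1, 1), i.e. 1 + i
    print(mul_c((2, 3), (4, 5)))  # (-7, 22), matching (2 + 3i)(4 + 5i) = -7 + 22i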







  • It must have internal models of some things, or it wouldn’t be able to consistently produce coherent and mostly reasonable statements. But having a reasonable model of things like grammar and conversation doesn’t imply it has a good model of anything else, unlike a human, for whom a basic set of cognitive skills presumably transfers across domains. Still, the success of LLMs at their actual language-modeling objective is a promising indication that it’s feasible for an ML model to learn complex abstractions.


  • That’s not necessarily wrong, but I don’t think it’s the main factor here. The technical challenge of keeping ML models aligned with factual reality isn’t solved, so this isn’t an engineering decision. It’s more that AI is remarkably easy to market as being more capable than it actually is.