Some argue that bots should be entitled to ingest any content they see, because people can.

    • Amju Wolf · 10 months ago

      Prove to me, right now, that you're sentient. Or I won't talk to you.

      We don't even know what sentience is, FFS.

        • RickRussell_CA (OP) · 10 months ago

          There is a so-called "hard problem of consciousness", although I take exception to calling it a problem.

          The general problem is that you can't really prove to others that you have subjective experience, and neither can you determine whether others have it or merely act like they have it.

          But a somewhat obvious difference between AIs and humans is that an AI will never give you an answer that is not statistically derivable from its training dataset. You can give a human a book on a topic, ask them about it, and they can give you answers that seem to be "their own conclusions" that are not explicitly from the book. Whether this is because humans have randomness injected into their reasoning, or imperfect reasoning, or some genuine animus of "free will" and consciousness, we cannot rightly say. But it is a consistent difference between humans and AIs.
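
          A minimal sketch of the "statistically derivable" point, assuming a hypothetical toy bigram model (the corpus, counts, and names below are illustrative, not any real system): every word the model emits is sampled from frequencies observed in its training text, so it can never produce a continuation it has never seen.

              import random
              from collections import Counter

              # Hypothetical toy "training dataset".
              corpus = "the cat sat on the mat the cat ate the rat".split()

              # Count which word follows which (bigram statistics).
              follows: dict[str, Counter] = {}
              for prev, nxt in zip(corpus, corpus[1:]):
                  follows.setdefault(prev, Counter())[nxt] += 1

              def next_word(prev: str) -> str:
                  # Sample the continuation from observed frequencies only.
                  counts = follows[prev]
                  words, weights = zip(*counts.items())
                  return random.choices(words, weights=weights)[0]

              # The model can only ever emit a word that actually followed
              # "the" somewhere in the corpus: cat, mat, or rat.
              print(next_word("the"))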

          The Monty Hall problem discussed in the article – in which AIs are asked to answer the Monty Hall problem but are given explicit information that violates its usual assumptions – is a good example of something a human will tend to get right, through creativity, while an AI will tend to get wrong, due to statistical regression to the mean.
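
          For reference, the standard problem is easy to simulate. A minimal sketch (Python; the trial count is chosen arbitrarily) confirming that, under the classic assumptions, switching wins about 2/3 of the time:

              import random

              def monty_hall_trial(switch: bool) -> bool:
                  """One round of the classic game; returns True on a win."""
                  doors = [0, 1, 2]
                  car = random.choice(doors)
                  pick = random.choice(doors)
                  # Host opens a door that hides a goat and isn't the pick.
                  opened = random.choice(
                      [d for d in doors if d != pick and d != car])
                  if switch:
                      pick = next(d for d in doors
                                  if d != pick and d != opened)
                  return pick == car

              trials = 100_000
              for switch in (False, True):
                  wins = sum(monty_hall_trial(switch) for _ in range(trials))
                  print(f"switch={switch}: win rate ~ {wins / trials:.3f}")
              # Prints roughly 0.333 for staying and 0.667 for switching.

          Once the problem statement breaks those assumptions, the memorized 2/3 answer no longer applies, which is exactly what a statistically driven response tends to miss.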

          • @sugar_in_your_tea@sh.itjust.works · 10 months ago (edited)

            Why don't you like calling it a "problem"? That just means it's something we have questions about, not that it's problematic. It's like a math problem: a question we don't have an answer for.

            • RickRussell_CA (OP) · 10 months ago

              I hesitate to call it a problem because, by the way it's defined, subjective experience is innately personal.

              I've gotten into this question with others, and when I began to propose thought experiments (like: what if we could replicate sensory inputs? If you saw/heard/felt everything exactly as someone else did, would you have the same subjective conscious experience?), I'd get pushback: "that's not subjective experience, subjective experience is part of the MIND, you can't create it or observe it or measure it…".

              When push comes to shove, people define consciousness or subjective experience as that aspect of experience that CANNOT be shown or demonstrated to others. It's baked into the definition. As soon as you venture into what can be shown or demonstrated, you're out of bounds.

              So it's not a "problem", as such. It's a limitation of our ability to self-observe the operating state of our own minds. An interesting question, perhaps, but not a problem. Just a feature of the system.

              • That's just ridiculous, imo; it seems like they're afraid of the idea that maybe we're just automata with a different set of random inputs and flaws. And to me, that's exactly the kind of idea the problem of consciousness is trying to explore.

                But if you just say, "no, that's off limits," that's not particularly helpful. Science can give us a lot of insight into how thoughts work, how people react to the same stimuli compared with other organisms, and so on. It can be studied, and we can use the results of those studies to reason about the nature of consciousness. We can categorize life by its sophistication, and we can make inferences about the experiences each category of life has.

                So I think it's absolutely a problem that can and should be studied and reasoned about. Though I can see how that idea can be uncomfortable.

                • RickRussell_CA (OP) · 10 months ago

                  Well, it's a "problem" for philosophers. I don't think it's a "problem" for neurology or hard science; that's the only point I was trying to make.

          • Gormadt · 10 months ago

            Don't we humans derive our answers from our own training dataset: our lives?

            A human with no "training dataset" would have to be a newborn. But even then you run into an issue, as it's been shown that fetuses respond to audio stimulation while still in the womb.

            The question of consciousness is a really hard one, and we may never have an answer that everyone agrees on.

            Right now, we're in the infancy of AI.

            • RickRussell_CA (OP) · 10 months ago

              To be clear, I don't think the fundamental issue is whether humans have a training dataset. We do. And it includes copyrighted work. It also includes our unique sensory perceptions and lots of stuff that is definitely NOT the result of someone else's work. I don't think anyone would dispute that copyrighted text, pictures, and sounds are integrated into human consciousness.

              The question is whether it is ethical, and whether it should be legal, to feed copyrighted works into an AI training dataset and then use that AI to produce material that replaces, displaces, or competes with the copyrighted work used to train it. Should it be legal to distribute or publish that AI-produced material at all if the copyright holder objects to the use of their work in an AI training dataset? (I concede that these may be two separate, but closely related, questions.)