Independent thinker valuing discussions grounded in reason, not emotions.

Open to reconsider my views in light of good-faith counter-arguments but also willing to defend what’s right, even when it’s unpopular. My goal is to engage in dialogue that seeks truth rather than scoring points.

  • 0 Posts
  • 25 Comments
Joined 25 days ago
Cake day: August 25th, 2024






  • ContrarianTrail@lemm.ee to memes@lemmy.world · We are not the same

    We can’t pirate a company into bankruptcy because there are still people paying for the movies and games we download. If everyone pirated content, these companies would go bankrupt, and there would no longer be new content to pirate. Online pirates often justify their behavior by telling themselves a story about how they’re ‘sticking it to the man,’ but in reality, we’re just free riders enjoying the fruits of others’ labor. We’re leeches with no moral ground to stand on.


  • How do ranged weapons invalidate persistence hunting?

    Even with a modern bow, it’s still really difficult to sneak close enough to a deer to reliably make a kill shot. You’re not going to sneak close enough to poke it with a spear, and with game that size, throwing rocks isn’t an option either because that won’t kill it. An axis deer is quick enough to dodge even a modern arrow.

    The reality is that the animal will notice you, and it will out-sprint you too, but it won’t outrun a human over a long distance. When the animal is exhausted and no longer able to run, you can stick your spear in it.


  • It seems cold, but it makes sense, and I can’t really blame the USA for doing that.

    Another aspect to consider is that gradually increasing support blurs the moments when so-called ‘red lines’ are crossed. If we had provided Ukraine with Western cruise missiles, tanks, and jets from the start of the conflict, and allowed them to strike into Russia proper, there would have been a legitimate risk of major escalation. Instead, by slowly ramping up support, it’s much harder for Russia to pinpoint a specific moment when a line was crossed.


  • AI is not creating images in a vacuum. There is a person using it, and that person does have a mind. You could come up with a brand-new mythical creature right now; let’s call it AI-saurus. If you ask the AI to create a picture of AI-saurus, it wouldn’t be able to do so because it has no idea what it looks like. However, what you could do is describe it to the AI, and it’ll output something that more or less resembles what you had in mind. Whatever flaws you see in it you can correct for with a new, modified prompt, and you keep doing this until it produces something that matches the idea you had in mind. AI is like a police sketch artist; the outcome depends on how well you manage to describe the subject. The artist doesn’t need to know what the subject looks like. They have a basic understanding of human facial anatomy, and you’re filling in the blanks. This is what generative AI does as well.

    The people creating pictures of underage kids with AI are not asking it to produce CSAM. It would most likely refuse to do so and may even report them. Instead, they’re describing what they want the output to look like, arriving at the same end result by a different route.



  • You were attempting to prove it could generate things not in its data set and i have disproved your theory.

    I don’t understand how you could possibly imagine that pic somehow proves your claim. You’ve made no effort to explain yourself; you just keep dodging my questions when I ask you to do so. A shitty photoshop of a “corn dog” has nothing to do with how the image I posted was created. It’s a composite of corn and a dog.

    Generative AI, just like a human, doesn’t rely on having seen an exact example of every possible image or concept. During its training, it was exposed to huge amounts of data, learning patterns, styles, and the relationships between them. When asked to generate something new, it draws on this learned knowledge to create a new image that fits the request, even if that exact combination wasn’t in its training data.

    Cause we have actual instances and many where csam is in the training data.

    If the AI has been trained on actual CSAM, and especially if the output simulates real people, then that’s a whole other discussion to be had. That is, however, not what we’re talking about here.



  • Are you saying this person hasn’t committed a crime?

    Yes, and if the law is interpreted in a way that makes it illegal, and the person is punished for it, then that’s a moral injustice and the kind of senselessness we as humans should grow out of. The fact that this “crime” has no victim is the whole point of why punishing it makes no sense.

    CSAM is illegal for a very good reason: producing it without abusing children is by definition impossible. By searching for and viewing such content, a person becomes part of the causal chain that leads to it being produced in the first place. By criminalizing it, we attempt to deter people from looking for it, thus bringing down demand and disincentivizing production.

    Using AI that is not trained on such content is outside this loop. There is literally nobody being harmed if someone decides to use it to create depictions of such content. It’s not actual CSAM it’s producing; by the very definition it cannot be, any more than shooting a person in a video game is murder. CSAM stands for Child Sexual Abuse Material (I hate even saying that), in other words, proof of the crime having happened. AI-generated images are fiction. Nobody is being harmed. It’s just a more photorealistic version of a drawing. Treating it as actual CSAM in court is insanity.

    Now, if the AI has been trained on actual CSAM, and especially if the output simulates real people, then that’s a whole other discussion to be had. That is, however, not what we’re talking about here.


  • A person said that there is no victim in creating simulated CSAM with AI, just as there isn’t one in video games, to which you replied that the difference is intention: the intention in playing violent games is to play games, whereas with CSAM the intention is to view abuse material.

    Correct so far?

    Of course the intent is that. For what other reason would anyone want to see CSAM than to see CSAM? What kind of argument or conclusion is this supposed to be? How else am I supposed to interpret this than as you advocating for the criminalization of creating such content despite the fact that no one is being harmed? How is that not pre-emptively punishing people for crimes they’ve yet to commit? Nobody chooses to be born with such thoughts or desires, so I don’t see the point of punishing anyone for that alone.


  • I don’t think that’s fair. It could just as well be said that the purpose of violent games is to simulate real-life violence.

    Even if I grant you that the purpose of viewing CSAM is to see child abuse, it’s still less bad than actually abusing children, just as playing violent games is less bad than participating in real violence. Also, despite the massive increase in violent games and movies, actual rates of violence are going down, so the implication that viewing such content would increase cases of child abuse is an assumption I’m not willing to make either.




  • But you do know, because corn dogs as depicted in the picture do not exist, so there couldn’t have been photos of them in the training data, yet it was still able to create one when asked.

    This is because it doesn’t need to have seen one before. It knows what corn looks like and it knows what a dog looks like, so when you ask it to combine the two, it will gladly do so.



  • The core reason CSAM is illegal is not that we don’t want people to watch it but that we don’t want them to create it, which is synonymous with child abuse. Jailing someone for drawing a picture like that is absurd. While it might be in bad taste, there is no victim; no one was harmed. Using generative AI is the same thing: no matter how much simulated CSAM you create with it, not a single child is harmed in doing so. Jailing people for that is the very definition of a moral panic.

    Now, if actual CSAM was used in the training of that AI, then it’s a more complex question. However, it is a fact that such content doesn’t need to be in the training data for the model to create simulated CSAM, and as long as that is the case, it is immoral to punish people for creating something that only looks like it but isn’t.