• Monstrosity@lemm.ee · 7 days ago

    I kind of like AI, sorry.

    But it should all be freely available & completely open sourced since they were all built with our collective knowledge. The crass commercialization/hoarding is what’s gross.

    • JovialMicrobial@lemm.ee · 7 days ago

      I like what we could be doing with AI.

      For example, there’s an AI I read about a while back that was given data sets of all the known human diseases and the medications used to treat them.

      Then it was given data sets of all the known chemical compounds (or something like that, I can’t remember the exact wording).

      Then it was used to find new potential treatments for diseases, like new antibiotics. Basically, it gives medical researchers leads to follow.

      That’s fucking cool and beneficial to everyone. It’s a wonderful application of the tech. Do more of that please.
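      The compound-screening approach described above can be sketched very roughly as similarity search over molecular features. Everything below (the feature sets, compound names, and scores) is hypothetical toy data standing in for real molecular fingerprints, not the actual system mentioned:

```python
# Toy sketch: rank candidate compounds by similarity to known active drugs.
# The feature sets are made-up stand-ins for real molecular fingerprints.

def tanimoto(a: set, b: set) -> float:
    """Jaccard/Tanimoto similarity between two feature sets."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

known_antibiotics = {
    "drug_A": {"ring", "amide", "hydroxyl"},
    "drug_B": {"ring", "sulfonamide"},
}

candidates = {
    "cand_1": {"ring", "amide", "methyl"},
    "cand_2": {"ester", "methyl"},
}

# Score each candidate by its best match against the known actives;
# high-scoring candidates become leads for researchers to follow up on.
scores = {
    name: max(tanimoto(feats, ref) for ref in known_antibiotics.values())
    for name, feats in candidates.items()
}

for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(name, round(score, 2))
```

Real systems use learned models over much richer chemical representations, but the shape of the idea (score unknowns against knowns, surface the top hits) is the same.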

      • just_an_average_joe@lemmy.dbzer0.com · 7 days ago

        What you are talking about is machine learning, which is called AI. What the post is talking about is LLMs, which are also called AI.

        AI, by definition, means anything that exhibits intelligent behavior and is not natural in origin.

        So when you use GMaps to find the shortest path between 2 points, that’s also AI (specifically, a search problem solved with algorithms like A*).

        It is pointless to argue/discuss AI if nobody even knows which type they are specifically talking about.
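        The GMaps remark above can be sketched with classic graph search. This is a minimal Dijkstra’s-algorithm toy (the road graph, node names, and weights are all made up), one of the standard shortest-path techniques a mapping service builds on:

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm over an adjacency dict {node: [(neighbor, cost), ...]}."""
    dist = {start: 0}   # best known cost to each node
    prev = {}           # predecessor on the best path
    queue = [(0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(queue, (nd, nbr))
    # Walk predecessors back from the goal to reconstruct the route.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[goal]

roads = {
    "A": [("B", 5), ("C", 2)],
    "B": [("D", 1)],
    "C": [("B", 1), ("D", 7)],
    "D": [],
}
print(shortest_path(roads, "A", "D"))  # → (['A', 'C', 'B', 'D'], 4)
```

Production routing engines use heavily optimized variants (A* with heuristics, contraction hierarchies), but this is the textbook core.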

        • Allero@lemmy.today · 7 days ago

          The issue is, people tend to overgeneralize, and also get put off when a buzzword is repeated too much.

          So this negatively affects the entire field of AI.

        • JovialMicrobial@lemm.ee · 7 days ago

          I’m talking about AI in the context of this conversation.

          I’m sorry it upsets you that capitalism has yet again redefined another word to sell us something else, but nobody here is specifically responsible for the language we’ve been given to talk about LLMs.

          Perhaps writing to Merriam-Webster about the issue could reap the results you’re looking for.

          • Jesus_666@lemmy.world · 6 days ago

            LLMs are an instance of AI. There are many. Typically, the newest promising one is what the media will refer to as “AI” because the media don’t do subtlety.

            There was a time when expert systems were the one thing the media considered to be AI (and were overhyped to the point of articles wondering if they’d make jobs like general practitioner obsolete). Now it’s generative neural nets. In twenty years it’ll be something else.

    • blind3rdeye@lemm.ee · 5 days ago

      Yeah. I’ve been interested in AI for most of my life. I’ve followed AI developments, and tinkered with a lot of AI stuff myself. I was pretty excited when ChatGPT first launched… but that excitement turned very sour after about a month.

      I hate what the world has become. Money corrupts everything. We get the cheapest, most exploitative version of every possible idea, and when it comes to AI, that’s a pretty big net negative on the world.

    • mm_maybe@sh.itjust.works · edited · 6 days ago

      I mean, you’re technically correct from a copyright standpoint, since it would be easier to claim fair use for non-commercial research purposes. And bots built for one’s own amusement with open-source tools are way less concerning to me than black-box commercial chatbots that purport to contain “facts” when they are known to contain errors and biases, not to mention vast amounts of stolen copyrighted creative work. But even non-commercial generative AI has to reckon with its failure to recognize “data dignity”, that is, the right of individuals to control how data generated by their online activities is shared and used. Virtually nobody except maybe Jaron Lanier and the folks behind Brave is even thinking about this issue, but it’s at the core of why people really hate AI.

      • ClamDrinker@lemmy.world · edited · 6 days ago

        You had me in the first half, but you lost me in the second half with the claim of stolen material. There is no such material inside the AI, just the ideas that can be extracted from it. People hate having their ideas taken by others, but this happens all the time, even to the people who say that’s why they dislike AI. It’s somewhat of a rite of passage for your work to become so liked by others that they take your ideas, and every artist or creative person at that point has to swallow the tough pill that their ideas are not their property, even when their way of expressing them is. The alternative would be dystopian, since the same companies we all hate, which abuse current genAI as well, would hold the rights to every idea possible.

        If you publicize your work, your ideas being ripped from it is an inevitability. People learn from the works they see, and some try to understand why certain works are so interesting, extracting the ideas that make them so; that is what AI does as well. If you hate AI for this, you must also hate pretty much all creative people for doing the exact same thing. There’s even a famous quote about it from before AI was a thing: “Good artists copy, great artists steal.”

        I’d argue that the abuse of AI to replace artists and other working creatives, the spreading of misinformation, the simplifying of scams, the wasting of resources by using AI where it doesn’t belong, and any other unethical uses are far worse than AI tapping into the same freedom we all already enjoy. People actually using AI for good will not be pumping out cheap AI slop; instead they weave it into their process to the point that it isn’t even clear AI was used at the end. The two are not the same and should not be confused.

        • petrol_sniff_king@lemmy.blahaj.zone · 6 days ago

          > a rite of passage for your work to become so liked by others that they take your ideas,

          ChatGPT is not a person.

          > People learn from the works they see […] and that is what AI does as well.

          ChatGPT is not a person.

          It’s actually really easy: we can say that ChatGPT, which is not a person, is also not an artist, and thus cannot make art.

          The mathematical trick of putting real images into a blender and then outputting a Legally Distinct™ one does not absolve the system of its source material.

          > but are instead weaving it into their process to the point it is not even clear AI was used at the end.

          The only examples of AI in media that I like are ones that make it painfully obvious what they’re doing.