Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

    • gerikson@awful.systems · 13 points · 3 months ago · edited

      “Inexplicable Cimmerian Vibes” is the name of my next band.

      Bonus points if this turns out to be the output of an LLM trained by phrenologists.

    • BigMuffin69@awful.systems · 12 points · 3 months ago · edited

      omg, next time my wife asks me how she looks, I’m definitely dropping that “legible magyar admixture”

      Edit: Didn’t work. She started talking about how in the old country, the Hungarians chased her family out of the village for being religious minorities. I give this approach 0 bags of popcorn and a magen david.

      • Mike@awful.systems · 7 points · 3 months ago

        “Babe, you’re looking Haplogroup I-M437 tonight. No. Not M-437. Damn, girl, you’re an M-438.”

        • o7___o7@awful.systems · 9 points · 3 months ago · edited

          When you really want to confuse your astronomy post-doc partner.

          EDIT: I’ve been reliably informed that that’s too many Messier objects.

    • YourNetworkIsHaunted@awful.systems · 9 points · 3 months ago

      That’s certainly one approach to commenting on someone’s picture. Pretty sure it’s better to stick with the standard “Wow! 😍😍😍” but this certainly sticks out from the crowd?

        • skillissuer@discuss.tchncs.de · 9 points · 3 months ago

          In particular she felt like Anna, whom she’d been closest with, was being dishonest about what they hoped to achieve with this whole project. Sanje further alleged that Anna’s good standing largely stemmed from her incomprehensibility, because people don’t have a clue what this is actually all about. Possibly Anna doesn’t, either.

          most straightforward hegelian

        • blakestacey@awful.systems · 7 points · 3 months ago

          I got as far as “Dimes Square bohemians” in the fourth sentence before realizing that everything in that article I recognized, I would regret.

        • Amoeba_Girl@awful.systems · 6 points · 3 months ago

          Haela Hunt-Hendrix, the singer from the black metal band Litvrgy, was one of the principal organizers of this “symposium.” […] Besides making music, she seems to be interested in esoteric religious themes, numerology, and Orthodox iconography. In any case, Hunt-Hendrix claimed that Anna was stealing her ideas and twisting them in a “cryptofascist” manner.

          Oh no! How could she!

          • Amoeba_Girl@awful.systems · 6 points · 3 months ago

            Oh my god I only now notice the title is a reference to Tiqqun’s Preliminary Materials For a Theory of the Young-Girl, this is like a supercollision of dumbfuck cryptoreactionary nonsense I’ve obsessed over.

        • V0ldek@awful.systems · 5 points · 3 months ago

          Hegelian e-girls’ VIP “symposium”

          Excuse the rather formal philosophical latin but qvid in fvck?

          I tried looking through the post to find out what possibly they could have to do with Hegel and found

          Thankfully, Matthew shared the Googledoc the e-girls had sent him with their prepared remarks. My commentary over the next several paragraphs will only make sense if you read over them (they’re mercifully short), so I’d urge everyone to open up the hyperlink and give it a quick look.

          Okay, first of all, it’s like 5 pages, “mercifully short” lol, go take a hike. Second,

          Concrete philosophizing means applying insight to the alchemical transformation of everyday life.

          This is in the first paragraph. I feel like reading this would make me devolve into an entire day of incoherent screaming, and I have enough respect for my coworkers and loved ones not to subject them to that.

  • BigMuffin69@awful.systems · 15 points · 3 months ago

    The once and future king + ol’ muskrat give their most sensible total nuclear annihilation takes. Fellas, are we cooked?

    • Mii@awful.systems · 13 points · 3 months ago

      This has to be hands down the absolute dumbest take I’ve seen from Musk ever. Dude has the mental capacity of a boiled pear.

    • mountainriver@awful.systems · 10 points · 3 months ago

      They are both stupid men who repeat stuff they hear to make themselves look good. So the question is: who, this time, are the “very smart people” telling numbnuts like these two that nuclear war is survivable - and by extension winnable? Because if that is the US defense establishment, then yeah, we might be cooked.

      • BigMuffin69@awful.systems · 14 points · 3 months ago

        I cannot get over the fact that this man child who is so concerned with “the future of humanity” is both outright trying to buy the presidency and downplaying the very real weapons that can easily wipe out 70% of the Earth’s population in 2 hours. Remember y’all, the cost of microwaving the world is negligible compared to the power of spicy autocomplete.

    • imadabouzu@awful.systems · 9 points · 3 months ago

      Watching this election has been amazing! LIKE WOAH, what a fucking obviously self-destructive end to delusion. Can I be optimistic and hope that with EA leaning explicitly heavier into the hard-right Trump position, when it collapses and Harris takes it, maybe some of them will self-reflect on what the hell they think “Effective” means anyways.

    • Soyweiser@awful.systems · 9 points · 3 months ago · edited

      Considering they were saying this while having trouble doing internet radio at scale, a problem basically solved 20 years ago, I’m not sure we should listen to them.

      Related to Musk, Trump and all the other fools: PrimalPoly revealing just how shallow and culture-war-brainwormed a thinker he is.

      Image description.

      Musk: Happy to host Kamala on an 𝕏 Spaces too

      PrimalPoly: Suggested questions for Kamala:

      1. How do crypto blockchains work, & why are so many Americans skeptical of Central Bank Digital Currencies?

      2. How would you stop the US gov’t from colluding with Big Tech social media companies to censor Americans?

      3. What is the main cause of inflation?

      4. What is a woman?

      Description ends. Question I have for anybody with a screenreader: does this spoiler method work? And also, does the screenreader work properly with the letter X as used on Twitter, namely 𝕏?

      • V0ldek@awful.systems · 8 points · 3 months ago
        1. They don’t & yall are “skeptical” of ID cards because it’s the Mark of the Beast, so go figure
        2. Easy, regulate Big Tech to the ground until there’s only Small-to-Medium Tech left
        3. Air, most of the time.
        4. Your mom.

        Can I be a VP or at least Chief of Staff now

        • Soyweiser@awful.systems · 8 points · 3 months ago

          “Well, if you had read my paper on evolutionary psychology I did while looking at sex workers, accusations of weirdness are actually a sign of …”

      • flavia@lemmy.blahaj.zone · 2 points · 3 months ago

        Spoilers are an HTML element, so they should work everywhere. Mastodon just shows the text without spoiler or CW. Letters in a different typeface specified by Unicode (used here for emphasis) are announced the same as regular letters for this purpose, to the dismay of mathematicians who would want “double-struck X” to be announced.
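As an aside on how that 𝕏 behaves for software: Unicode gives the styled mathematical letters compatibility mappings back to plain ASCII, which NFKC normalization applies. A minimal Python check:

```python
import unicodedata

# U+1D54F MATHEMATICAL DOUBLE-STRUCK CAPITAL X, the glyph Twitter/X
# uses in its branding.
fancy_x = "\U0001D54F"  # 𝕏

# NFKC normalization applies the compatibility decomposition, folding
# the styled letter back to an ordinary "X" (this is what search and
# many text pipelines rely on; screen readers have their own logic).
print(unicodedata.normalize("NFKC", fancy_x))  # -> X
```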

    • saucerwizard@awful.systems · 7 points · 3 months ago

      The Bismarck Analysis crew were sneering at Sagan being a filthy peace activist so I would hazard that the era of ‘survivable nuclear war’ rides again.

  • gerikson@awful.systems · 14 points · 3 months ago

    This came up in a podcast I listen to:

    WaPo: "OpenAI illegally barred staff from airing safety risks, whistleblowers say"

    archive link https://archive.is/E3M2p

    OpenAI whistleblowers have filed a complaint with the Securities and Exchange Commission alleging the artificial intelligence company illegally prohibited its employees from warning regulators about the grave risks its technology may pose to humanity, calling for an investigation.

    While I’m not prepared to defend OpenAI here, I suspect this is just to shut up the most hysterical employees who still actually believe they’re building the P(doom) machine.

    • scruiser@awful.systems · 10 points · 3 months ago

      I mean, if you play up the doom to hype yourself, dealing with employees who take it seriously feels like a deserved outcome.

    • imadabouzu@awful.systems · 7 points · 3 months ago

      Short story: it’s smoke and mirrors.

      Longer story: This is how software releases work now, I guess. A lot is riding on OpenAI’s anticipated release of GPT-5. They have to keep promising enormous leaps in capability because everyone else has caught up and there’s no more training data. So the next trick is that for their next batch of models they have “solved” various problems that people say you can’t solve with LLMs, and they are going to be massively better without needing more data.

      But, as someone with insider info, it’s all smoke and mirrors.

      The model that “solved” structured data is empirically worse at other tasks as a result, and I imagine the solution basically just looks like polling multiple responses until the parser validates on the other end (so basically it’s a price optimization, afaik).
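The “poll until the parser validates” approach can be sketched as a resample loop; `call_model` here is a hypothetical deterministic stand-in for illustration, not any real OpenAI API:

```python
import itertools
import json

# Hypothetical stand-in for an LLM call: cycles through two malformed
# outputs before a valid JSON one, like an unconstrained model.
_canned = itertools.cycle([
    "Sure! Here's your JSON: {",   # chatty preamble, not parseable
    "{'single': 'quotes'}",        # almost-JSON, still rejected
    '{"letters": 3}',              # finally validates
])

def call_model(prompt: str) -> str:
    return next(_canned)

def sample_until_valid(prompt: str, max_tries: int = 10) -> dict:
    """Resample until the output parses; each failed try still costs a
    full completion, which is why this looks like a price optimization."""
    for _ in range(max_tries):
        try:
            return json.loads(call_model(prompt))
        except json.JSONDecodeError:
            continue  # pay for another completion and retry
    raise RuntimeError("no parseable completion")

print(sample_until_valid("Return letter counts as JSON"))  # -> {'letters': 3}
```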

      The next large model, launching with the new Q* change tomorrow, is “approaching agi because it can now reliably count letters”, but actually it’s still just agents (Q* looks to be just a cost optimization of agents on the backend, that’s basically it), because the only way it can count letters is to invoke agents and tool use to write a python program and feed the text into that. Basically, it is all the things that already exist independently, but wrapped up together. Interestingly, they’re so confident in this model that they don’t run the resulting python themselves. It’s still up to you or one of those LLM wrapper companies to execute the occasionally broken code to, um… checks notes… count the number of letters in a sentence.
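A toy illustration of that letter-counting workaround: the model emits a one-line program, and the wrapper (not the model) executes it. The generated program here is made up for illustration:

```python
# What the model allegedly can't do directly (tokenization hides the
# individual letters), but can do by emitting code for someone else to run:
generated_program = "sum(1 for c in 'strawberry' if c == 'r')"

# The wrapper company / end user runs the possibly-broken output.
# (eval of model output is itself a hazard, which is part of the joke.)
count = eval(generated_program)
print(count)  # -> 3
```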

      But, by rearranging what already exists and claiming it solved the fundamental issues, OpenAI can claim exponential progress, terrify investors into blowing more money into the ecosystem, and make true believers lose their mind.

      Expect more of this around GPT-5, which they promise “is so scary they can’t release it until after the elections”. My guess? It’s nothing different, but they have to create a story so that true believers will see it as something different.

      • gerikson@awful.systems · 6 points · 3 months ago

        Yeah, I’m not in any doubt that the C-level and marketing team are goosing the numbers like crazy to keep the bubble from bursting, but I also think they’re the ones who are most cognizant of the fact that ChatGPT is definitely not the Doom Machine. But I also believe they have employees, True Believers, whom they cannot fire because they would spread a hella lot of doomspeak if they did.

        • BlueMonday1984@awful.systems · 7 points · 3 months ago

          I also believe they have employees, True Believers, whom they cannot fire because they would spread a hella lot of doomspeak if they did.

          Part of me suspects they probably also aren’t the sharpest knives in OpenAI’s drawer.

          • imadabouzu@awful.systems · 8 points · 3 months ago

            It can be both. Like, OpenAI is probably kind of hoping that this story becomes widespread and is taken seriously, and has no problem suggesting, implicitly and explicitly, that their employees’ stock is tied to how scared everyone is.

            Remember when Altman almost got ousted and people got pressured not to walk? That their options were at risk?

            Strange hysteria like this doesn’t need just one reason. It just needs an input dependency and ambiguity; the rest takes care of itself.

        • imadabouzu@awful.systems · 4 points · 3 months ago

          Q*

          My understanding is that it was renamed or rebranded to Strawberry, which is itself nebulous marketing - maybe it’s the new larger model, or maybe it’s GPT-5, or maybe…

          it’s all smoke and mirrors. I think my point is, they made some cost optimizations and mostly moved around things that existed, and they’ll keep doing that.

          • froztbyte@awful.systems (OP) · 4 points · 3 months ago

            OH

            I first saw this then later saw the “openai employees tweeted 🍓” and thought the latter was them being cheeky dipshits about the former. admittedly I didn’t look deeper (because ugh)

            but this is even more hilarious and dumb

    • V0ldek@awful.systems · 14 points · 3 months ago

      ‘TESCREAL’ refers to a nonsense conspiracy theory that disparages people such as Nick Bostrom without citing any sources that are credible on the question of whether Nick Bostrom is an ‘evil eugenicist’ or whatever.

      WP:LOL. WP:LMAO even.

      • imadabouzu@awful.systems · 12 points · 3 months ago · edited

        I’m OK with this, because every time Nick Bostrom’s name is used publicly to defend anything, and then I show people what Nick Bostrom believes and writes, I robustly get a “What the fuck is this shit? And these people are associated with him? Fuck that.”

  • cornflake@awful.systems · 14 points · 3 months ago

    Post from July, tweet from today:

    It’s easy to forget that Scottstar Codex just makes shit up, but what the fuck “dynamic” is he talking about? He’s describing this like a recurring pattern and not an addled fever dream

    There’s a dynamic in gun control debates, where the anti-gun side says “YOU NEED TO BAN THE BAD ASSAULT GUNS, YOU KNOW, THE ONES THAT COMMIT ALL THE SCHOOL SHOOTINGS”. Then Congress wants to look tough, so they ban some poorly-defined set of guns. Then the Supreme Court strikes it down, which Congress could easily have predicted but they were so fixated on looking tough that they didn’t bother double-checking it was constitutional. Then they pass some much weaker bill, and a hobbyist discovers that if you add such-and-such a 3D printed part to a legal gun, it becomes exactly like whatever category of guns they banned. Then someone commits another school shooting, and the anti-gun people come back with “WHY DIDN’T YOU BAN THE BAD ASSAULT GUNS? I THOUGHT WE TOLD YOU TO BE TOUGH! WHY CAN’T ANYONE EVER BE TOUGH ON GUNS?”

    Embarrassing to be this uninformed about such a high-profile issue, no less one that you’re choosing to write about derisively.

    • maol@awful.systems · 8 points · 3 months ago

      Surely this is 3 or 4 different anti-gun control tropes all smashed together.

  • gerikson@awful.systems · 13 points · 3 months ago

    Who had Trump accusing the Harris campaign of using AI to inflate crowd size photos on their Election ‘24 bingo card? Anyway, I’m sure that being associated with fraud and fakes is Good For AI.

  • gerikson@awful.systems · 13 points · 3 months ago

    I found the most HN comment of all time:

    What sort of mating strategy are you optimizing for?

    • Soyweiser@awful.systems · 11 points · 3 months ago

      I used to wonder if they had thought about deadlock/livelock re: self-driving cars. Thanks to modern technology, I no longer have to wonder. Thanks!

    • YourNetworkIsHaunted@awful.systems · 9 points · 3 months ago

      Oh, this sounds like a dog I used to have as a kid! She needed more enrichment during the day or else she’d bark into the void all night and get super excited when another dog barked back.

      Have they tried taking the waymos out for walkies?

    • froztbyte@awful.systems (OP) · 6 points · 3 months ago

      saw a video of this yesterday, that “honk” title extremely understates how fucking dumb the problem is

      in the video I saw, those dumb-ass things are literally crawling forward and back in the parking lot, because the one in front of it is also doing it, because…

      yes, a multi-car movement deadlock, with a visually clear solution (which any human driver would be able to implement in seconds) that nonetheless still doesn’t happen because…? I guess waymo didn’t code in inter-car communication or something

      seriously, find a copy and watch. it’ll give a lovely kicker to your day :>
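For the record, the parking-lot scene is a textbook circular wait: every car’s progress is blocked on another car in the same cycle. A toy model (purely illustrative; it says nothing about Waymo’s actual code) shows how any asymmetric tie-break dissolves it:

```python
def tick(blocked_on, tie_break=False):
    """One time step: a car moves if its blocker already moved this
    tick, or (with tie_break) if it wins a simple lowest-ID rule."""
    moved = []
    for car, blocker in blocked_on.items():
        if blocker in moved or (tie_break and car < blocker):
            moved.append(car)
    return moved

# Two cars symmetrically blocked on each other - the deadlock cycle.
cars = {"A": "B", "B": "A"}
print(tick(cars))                  # -> []: nobody ever moves
print(tick(cars, tie_break=True))  # -> ['A', 'B']: A claims right of way, unblocking B
```

With no rule to break the symmetry, the cycle persists forever; any agreed asymmetry (lowest ID, rightmost car, whatever) lets one car move and the rest unwind behind it.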

    • hrrrngh@awful.systems · 13 points · 3 months ago

      “help artists with tasks such as animating a custom character or using the character as a model for clothing etc”

      The “deepfake” and “(uncensored)” in the repo description have me questioning that ever so slightly

  • hrrrngh@awful.systems · 11 points · 3 months ago · edited

    https://www.reddit.com/r/CharacterAI/comments/1eqsoom/guys_we_have_to_do_somthing_about_this_fiӏtеr/

    This community pops up on /r/all every so often and each time it scares me.

    Sometimes I see kids games (and all games really) have ultra-niche, super-online protests that are like “STOP Zooshacorp from DESTROYING K-Smog vs. Batboy Online”, and when I look closer it’s either even more confusing or it’s about something people didn’t like in the latest update. This is like that, but with an awful twist where it’s about people getting really attached to these AI girlfriend/sex roleplay apps. The spelling and sentences make it seem like it’s mostly kids, too.

    edit: here’s a terrible example!

    • FredFig@awful.systems · 9 points · 3 months ago

      It’s really funny that this was probably the closest thing to a killer app powered by genAI to exist.

      Wonder if they’re getting rid of this stuff because they realized it’s actually a liability to mine these ERP convos for data and they’re burning money on every conversation as it is.

    • froztbyte@awful.systems (OP) · 9 points · 3 months ago

      Yesterday I saw a link to some podcast/post float by, an interview with some genML company “discussing people falling in love with, having relations with, and even wanting to marry”, where the CEO is “okay with it”. Didn’t click because ugh, but will see if I can find it.

      and ofc all these weird fucking things will pop the moment their VC money runs out or OpenAI raises prices or whatever. Bet you they don’t have any therapy contingency for helping people with their AI partners suddenly getting VC-raptured.

      • Mii@awful.systems · 10 points · 3 months ago

        I remember 15 years ago when I read about a Japanese man marrying a character from a dating sim game (source, archive link).

        The internet clowned on him, but he was very serious, and it was the first time when I realized that these “anime waifu” people probably aren’t all just taking the piss.

        There’s a whole socio-economic angle there, of course, which I don’t think I wanna get into here, but to me this whole “AI girlfriend” market really seems like a low-effort take on “dating sim as a service” with as much game removed as possible but the exploitative nature turned up to fucking eleven.

        • imadabouzu@awful.systems · 8 points · 3 months ago

          The weird thing is, from my perspective, nearly every weird, cringy, niche internet addiction I’ve ever seen or partaken in myself has produced two kinds of people: people who live through it and their perspective widens, and people who don’t.

          Like, I look back at my days of spending 2 days at a time binge-playing World of Warcraft with a deep sense of cringe, but also a smirk, because I survived and I self-regulated, and honestly, made a couple of lifetime friends. Whatever response we have to anime waifus, I hope we still recognize the humanity in being a thing that wants to be entertained or satisfied.

      • froztbyte@awful.systems (OP) · 5 points · 3 months ago

        post still seems up here. try old.reddit if attempting to view from mobile (the “new” reddit site is remarkably rabid on mobile)

        • V0ldek@awful.systems · 5 points · 3 months ago

          There’s a link to what appears to have been a picture (reddit.com/gallery) but it’s dead. If you go through “new” Reddit it just says “removed by moderators”.

          I can’t really tell what this is about from “we HAVE to do something about this” when “this” is an empty space :/

          • hrrrngh@awful.systems · 9 points · 3 months ago · edited

            Oh whoops, I should have archived it.

            There were about 7 images posted of users roleplaying with bots, all ending with a bot response that cut off halfway with an error message that read “This content may violate our policies; blablabla; please use the report button if you believe this is a false positive and we will investigate.” The last one was some kind of parody image making fun of the warning.

            Most of them were some kind of romantic roleplay with bad spelling. One was like, “i run my hand down your arm and kiss you”, and the bot’s response triggered the warning. Another one was like, “*is slapped in the face* it’s okay, I still love you”, and the rest of the message generated a warning. There wasn’t enough context for that one, so the person might have been writing it playfully (?), but that subreddit has a lot of blatant sexual violence regardless.

    • 200fifty@awful.systems · 9 points · 3 months ago · edited

      Can AI companies legally ingest copyrighted materials found on the internet to train their models, and use them to pump out commercial products that they then profit from? Or, as the tech companies claim, does generative AI output constitute fair use?

      This is kind of the central issue to me honestly. I’m not a lawyer, just a (non-professional) artist, but it seems to me like “using artistic works without permission of the original creators in order to create commercial content that directly competes with and destroys the market for the original work” is extremely not fair use. In fact it’s kind of a prototypically unfair use.

      Meanwhile Midjourney and OpenAI are over here like “uhh, no copyright infringement intended!!!” as though “fair use” is a magic word you say that makes the thing you’re doing suddenly okay. They don’t seem to have very solid arguments justifying them other than “AI learns like a person!” (false) and “well, Google Books did something that’s not really the same at all, that one time”.

      I dunno, I know that legally we don’t know which way this is going to go, because the ai people presumably have very good lawyers, but something about the way everyone seems to frame this as “oh, both sides have good points! who will turn out to be right in the end!” really bugs me for some reason. Like, it seems to me that there’s a notable asymmetry here!

      • BlueMonday1984@awful.systems · 9 points · 3 months ago

        I dunno, I know that legally we don’t know which way this is going to go, because the ai people presumably have very good lawyers

        You’re not wrong on the AI corps having good lawyers, but I suspect those lawyers don’t have much to work with:

        If I were a betting man, I’d put my money on the trial being a bloodbath in the artists’ favour, and the resulting legal precedent being one which will likely kill generative AI as we know it.

        • 200fifty@awful.systems · 7 points · 3 months ago

          God, that would be the dream, huh? Absolutely crossing my fingers it all shakes out this way.

          • imadabouzu@awful.systems · 5 points · 3 months ago

            Stranger things have happened. But in either case, we should commit to supporting every effort. If one punch doesn’t work take another. Death by a million cuts.

      • imadabouzu@awful.systems · 5 points · 3 months ago

        Like, it seems to me that there’s a notable asymmetry here!

        I think that’s a great framing here.

    • froztbyte@awful.systems (OP) · 9 points · 3 months ago

      jesus, that’s telling. and I can 100% see that sentence forming in the heads of the types of people who fall over themselves to create something like these tools. so caught up in the math and the technical cool, they can’t appreciate other beauty

  • skillissuer@discuss.tchncs.de · 11 points · 3 months ago · edited

    yall might want to take notice of this thing https://discuss.tchncs.de/post/20460779

    https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2024-08-14/Recent_research

    STORM: AI agents role-play as “Wikipedia editors” and “experts” to create Wikipedia-like articles, a more sophisticated effort than previous auto-generation systems

    ai slop in extruded text form, now longer and worse! and burns extra square kilometers of rainforest

      • imadabouzu@awful.systems · 9 points · 3 months ago

        LLM, tell me the most obviously persuasive sort of science devoid of context. Historically, that’s been super helpful so let’s do more of that.

    • self@awful.systems · 8 points · 3 months ago · edited

      we propose the STORM paradigm for the Synthesis of Topic Outlines through Retrieval and Multi-perspective Question Asking

      oh come the fuck on

      The authors hail from Monica S. Lam’s group at Stanford, which has also published several other papers involving LLMs and Wikimedia projects since 2023 (see our previous coverage: WikiChat, “the first few-shot LLM-based chatbot that almost never hallucinates” – a paper that received the Wikimedia Foundation’s “Research Award of the Year” some weeks ago).

      from the same minds as STOTRMPQA comes: we constructed this LLM so it won’t generate a response unless similar text appears in the Wikipedia corpus and now it almost never entirely fucks up. award-winning!

  • gerikson@awful.systems · 10 points · 3 months ago · edited

    Babe, new AI doom vector just dropped: AGI will corrupt knitting sites so crafters make Langford visual hack patterns![1]

    https://www.zdnet.com/article/how-ai-scams-are-infiltrating-the-knitting-and-crochet-world/


    [1] doom scenario is my interpretation, not actually included in ZDnet article.

    Sadly, Langford hacks seem to have never achieved memetic takeoff. Having an internet legally enforced on pain of death to be text-only would probably be a good thing.

    comp.basilisk.faq

  • self@awful.systems · 10 points · 3 months ago

    this will probably become a NotAwfulTech post after I explore a bit more, but here’s a quick follow-up to my post last stubsack looking for language learning apps:

    the open source apps for the learning system I want to use do exist! that system is essentially an automation around reading an interesting text in Spanish (or any other language), marking and translating terms and phrases with a translation dictionary, and generating flash cards/training materials for those marked terms and phrases. there’s no good name for the apps that implement this idea as a whole so I’m gonna call them the LWT family for reasons that will become clear.

    briefly, the LWT family apps I’ve discovered so far are:

    • LWT (Learning With Texts) is the original open source system that implemented the learning system I described above (though LWT itself originated as an open source clone of LingQ with some ideas from other learning systems). the Hugo Fara fork is the most recently-maintained version of LWT, but it’s generally considered finished (and extraordinarily difficult to modify) software. I need to look into LWT more since it’s still in active use; I believe it uses an Anki exporter for spaced repetition training. it doesn’t seem to have a mobile UI, which might be a dealbreaker since I’ll probably be doing a lot of learning from my phone
    • Lute (Learning Using Texts) is a modernized LWT remake. this one is being developed for stability, so it’s missing features but the ones that exist are reputedly pretty solid. it does have a workable mobile UI, but it lacks any training framework at all (it may have an extremely early Anki plugin to generate flash cards)
    • LinguaCafe is a completely reworked LWT with a modern UI. it’s got a bunch of features, but it’s a bit janky overall. this is the one I’m using and liking so far! installing it is a fucking nightmare (you have to use their docker-compose file only, with docker not podman, and absolutely slaughter the permissions on your bind mounts, and no you can’t fire it up native) but the UI’s very modern, it works well on mobile (other than jank), and it has its own spaced repetition training framework as well as (currently essentially useless) Anki export. it supports a variety of freely available translation dictionaries (which it keeps in its own storage so they’re local and very fast) and utterly optional DeepL support I haven’t felt the need to enable. in spite of my nitpicks, I really am enjoying this one so far (but I’m only a couple days in)
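For context on the “spaced repetition training” these apps bolt on: the classic scheduling rule underneath Anki-style review is SM-2. A rough Python sketch (the real apps each use their own variants, so treat the exact numbers as illustrative):

```python
def sm2_review(quality, reps, interval, ef):
    """One SM-2 review step. quality: self-graded recall, 0 (blank)
    to 5 (perfect). Returns updated (reps, interval_days, easiness)."""
    if quality < 3:
        return 0, 1, ef  # failed recall: the card starts over tomorrow
    # Easiness factor drifts with how hard the recall felt, floored at 1.3.
    ef = max(1.3, ef + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    if reps == 0:
        interval = 1
    elif reps == 1:
        interval = 6
    else:
        interval = round(interval * ef)
    return reps + 1, interval, ef

# A new card (reps=0, default easiness 2.5) recalled perfectly three
# times spaces out to roughly two and a half weeks:
state = (0, 0, 2.5)
for q in (5, 5, 5):
    state = sm2_review(q, *state)
print(state[:2])  # -> (3, 17): third review lands ~17 days out
```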
    • sc_griffith@awful.systems · 8 points · 3 months ago

      you have to use their docker-compose file only, with docker not podman, and absolutely slaughter the permissions on your bind mounts, and no you can’t fire it up native

      yeah I have no idea what any of these words mean

        • froztbyte@awful.systems (OP) · 2 points · 3 months ago

          speaking of

          one of my endeavours the last few days (although heavily split into pieces between migraines and other downtimes) was to figure out how to segment containers into vlan splits (bc reasons), and doing this on podman

          the docs will (by omission or directly) lie to you so much. the execution boundaries of root vs rootless cause absolutely hilarious failure modes. things that are required for operation are Recommended packages (in the apt/dpkg sense)

          utter and complete clownshow bullshit. it does my head in to think how much human time has been wasted on falling arse-over-face to get in on this shit purely after docker ran a multi-year vc-funded pr campaign. and even more to see, at every fucking interaction with this shit, just how absolutely infantile the implementations of any of the ideas and tooling are
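For anyone attempting the same thing, the general shape of the setup (as far as podman’s macvlan driver goes; the interface names, subnets, and pre-existing VLAN subinterfaces below are assumptions, not a recipe from the comment above):

```shell
# Assumes VLAN subinterfaces eth0.10 / eth0.20 already exist on the host.
# macvlan networks need root - rootless podman can't attach to them,
# which is exactly the root-vs-rootless boundary that bites here.
sudo podman network create -d macvlan -o parent=eth0.10 \
    --subnet 10.0.10.0/24 --gateway 10.0.10.1 vlan10
sudo podman network create -d macvlan -o parent=eth0.20 \
    --subnet 10.0.20.0/24 --gateway 10.0.20.1 vlan20

# Containers then land on separate VLAN segments:
sudo podman run -d --network vlan10 --ip 10.0.10.50 --name svc-a alpine sleep inf
sudo podman run -d --network vlan20 --ip 10.0.20.50 --name svc-b alpine sleep inf
```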

          • self@awful.systems · 2 points · 3 months ago

            our entire industry will regret using Docker in the relatively near term, but nobody will learn a damn thing from the mistake