Basically a deer with a human face. Despite probably being some sort of magical nature spirit, his interests are primarily in technology and politics and science fiction.
Spent many years on Reddit and then some time on kbin.social.
My advice against getting too deeply invested applies to those companies and communities as well.
I once got permabanned from a politics subreddit (I think it was /r/canadapolitics) that had a “downvoting is not permitted” rule, because there was a guy getting downvotes and I offered him an explanation for why I thought he was getting them. That counted as evidence that I had downvoted him, I guess.
My response: I sent one message to the mods that was essentially “really?” And then when there was no response I unsubbed from that subreddit and moved on. I see no point in participating in subreddits with ridiculous rules and ridiculous enforcement.
Granted, unsubbing from politics subreddits is generally a good idea even when not banned. But eh.
The only other subreddit I’m banned in is /r/artisthate, which I never visited in the first place. Apparently they scan other subreddits for signs of users who don’t hate artificial intelligence enough and preemptively ban them. That was kind of hilarious.
Anyway, I guess my advice is don’t get too deeply “invested” in a community that can be so easily and arbitrarily taken away from you in the first place. And also manage your passwords better.
Also Library Genesis.
The IA is appealing the decision so they’re not out of the woods just yet.
Oh now suddenly you care about leaks?
A material witness can be prevented from leaving the country.
Note also that I said housed, not imprisoned.
There is a middle ground between these two extremes. I don’t see why they can’t be housed on shore. Give them a temporary visa or somesuch.
Even if you trained the AI yourself from scratch you still can’t be confident you know what the AI is going to say under any given circumstance. LLMs have an inherent unpredictability to them. That’s part of their purpose, they’re not databases or search engines.
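To make that concrete, here’s a toy sketch of temperature sampling, which is where a lot of that unpredictability comes from. The logits are made-up numbers rather than the output of a real model:

```python
# Toy sketch: LLMs *sample* the next token from a probability distribution
# rather than looking up a stored answer. These logits are invented; in a
# real model they come out of the network itself.
import torch

logits = torch.tensor([2.0, 1.5, 0.3, -1.0])  # scores for 4 hypothetical tokens
temperature = 0.8

probs = torch.softmax(logits / temperature, dim=0)
for _ in range(3):
    token = torch.multinomial(probs, num_samples=1).item()
    print("sampled token id:", token)  # can differ on every run
```

Raise the temperature and the distribution flattens, making rarer tokens more likely; that’s the knob between predictable and creative output.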
> if I were to download a pre-trained model from what I thought was a reputable source, but was man-in-the-middled and provided with a maliciously trained model
This is a risk for anything you download off the Internet; even source code could be MITMed to give you something with malicious stuff embedded in it. And no, I don’t believe you’d read and comprehend every line of it before you compile and run it. You need to verify checksums.
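For example, a minimal checksum check before loading a downloaded model could look something like this; the filename and expected hash are placeholders for whatever the publisher actually provides:

```python
# Minimal sketch: verify a downloaded model file against a published
# SHA-256 before using it. The path and hash below are hypothetical.
import hashlib

EXPECTED_SHA256 = "0123abcd..."  # copy this from the publisher's site

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

digest = sha256_of("model.safetensors")
if digest != EXPECTED_SHA256:
    raise RuntimeError(f"Checksum mismatch: got {digest}")
print("Checksum OK.")
```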
As I said above, the real security comes from the code that’s running the LLM model. If someone wanted to “listen in” on what you say to the AI, they’d need to compromise that code to have it send your inputs to them. The model itself can’t do that. If someone wanted to have the model delete data or mess with your machine, it would be the execution framework of the model that’s doing that, not the model itself. And so forth.
You can probably come up with edge cases that are more difficult to secure, such as a troubleshooting AI whose literal purpose is messing with your system’s settings and whatnot, but that’s why I said “99% of the way there” in my original comment. There’s always edge cases.
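A toy illustration of that trust boundary, with tiny made-up tensors standing in for a real model:

```python
# The "model" is just inert numbers; all behavior lives in the code that
# runs them. Shapes and values here are invented; a real LLM is the same
# idea at enormous scale.
import torch

weights = {"W": torch.randn(8, 8), "b": torch.randn(8)}  # the model: pure data

def run_model(weights, x):
    # The entire "execution framework": a pure function from input to output.
    # Any file or network access would have to be added here, in code you can
    # read; the tensors in `weights` can't perform I/O on their own.
    return torch.tanh(x @ weights["W"] + weights["b"])

out = run_model(weights, torch.randn(1, 8))
print(out.shape)  # input in, output out; nothing else happens
```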
And as I suspected would be the case, some other folks have responded to my comment with a bunch of additional “simple” suggestions for what to do in this case. Which have hidden exceptions of their own, which will have unexpected impacts and loopholes, which will then elicit further “simple” suggestions for how to fix them, and before you know it we’ve got a complex tax code again.
There’s an old quote of unclear provenance that I think applies here: “everything should be made as simple as possible, but no simpler.”
Taxes usually start out simple, since that appeals to people. Then over time they get more complicated as people discover more and more edge cases to exploit.
If you make it all income tax, well, what counts as “income”? Elon Musk just got “paid” $46 billion worth of stock in Tesla, for example. But it’s not actually 46 billion dollars. It’s a share in ownership of a company. Those shares can’t actually be sold for 46 billion dollars. Trying to sell them would cause their price to drop. He can’t actually sell them at all right away, for that matter - they’re restricted stock. He has to hold on to them for a while, as incentive to keep doing a good job as CEO.
So if he keeps doing a good job as CEO and the stock goes up in value by 10 billion dollars, was that rise in value income? What if it goes down by 10 billion instead?
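Just to put rough numbers on it (the figures and the tax rate here are entirely hypothetical, only meant to show why the definition matters):

```python
# Toy arithmetic: how much tax is owed depends entirely on what counts as
# "income" for a restricted stock grant. All numbers are made up.
grant_value = 46e9   # headline value of the grant
swing = 10e9         # later unrealized rise (or fall) in value
tax_rate = 0.37      # hypothetical marginal rate

# Option A: tax the paper value at grant time, even though the shares
# can't actually be sold for that amount right away.
tax_at_grant = grant_value * tax_rate

# Option B: tax only gains realized on an actual sale. Then the $10B
# unrealized swing, up or down, is no tax event at all.
tax_on_unrealized_swing = 0.0

print(f"If paper value is income: ${tax_at_grant / 1e9:.1f}B owed now")
print(f"If only realized gains count: ${tax_on_unrealized_swing:.0f} owed on the swing")
```

Neither option is obviously “correct,” which is exactly how the edge cases pile up.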
This stuff is inherently complicated. I’m not sure that any simple tax system is going to work.
Ironically, as far as I’m aware it’s based off of research done by some AI decelerationists over on the alignment forum who wanted to show how “unsafe” open models were in the hopes that there’d be regulation imposed to prevent companies from distributing them. They demonstrated that the “refusals” trained into LLMs could be removed with this method, allowing it to answer questions they considered scary.
The open LLM community responded by going “coooool!” and adapting the technique as a general tool for “training” models in various other ways.
That would be part of what’s required for them to be “open-weight”.
A plain old binary LLM model is somewhat equivalent to compiled object code, so redistributability is the main thing you can “open” about it compared to a “closed” model.
An LLM model is more malleable than compiled object code, though; as I described above, there are various ways you can mutate an LLM model without needing its “source code.” So it’s not exactly equivalent to compiled object code.
Fortunately, LLMs don’t really need to be fully open source to get almost all of the benefits of open source. From a safety and security perspective it’s fine because the model weights don’t really do anything; all of the actual work is done by the framework code that’s running them, and if you can trust that due to it being open source you’re 99% of the way there. The LLM model just sits there transforming the input text into the output text.
From a customization standpoint it’s a little worse, but we’re coming up with a lot of neat tricks for retraining and fine-tuning model weights in powerful ways. The most recent big development I’ve heard of is abliteration, a technique that lets you isolate a particular “feature” of an LLM and either enhance it or remove it. The first big use of it is to modify various “censored” LLMs to remove their ability to refuse to comply with instructions, so that all those “safe” and “responsible” AIs like Goody-2 can be turned into something that’s actually useful. A more fun example is MopeyMule, a LLaMA3 model that has had all of his hope and joy abliterated.
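For the curious, the core of the abliteration trick is pretty simple linear algebra. Here’s a toy sketch with random tensors standing in for real activations; actual implementations do this on a real model’s hidden states and weight matrices:

```python
# Toy version of abliteration: find an activation direction associated with
# a behavior, then project it out of a weight matrix. All tensors here are
# random stand-ins, just to show the math.
import torch

hidden = 16
W = torch.randn(hidden, hidden)  # stand-in for a transformer weight matrix

# Mean activations on two contrasting prompt sets (in practice: prompts the
# model refuses vs. prompts it answers; here, random data with an offset).
acts_refuse = torch.randn(100, hidden) + 2.0
acts_comply = torch.randn(100, hidden)

# The "feature" direction: difference of means, normalized.
d = acts_refuse.mean(0) - acts_comply.mean(0)
d = d / d.norm()

# Remove the direction from the layer's output: W' = W - d dT W, so the
# layer can no longer write anything along the feature direction.
W_ablated = W - torch.outer(d, d) @ W

x = torch.randn(1, hidden)
print((x @ W_ablated.T @ d).abs().item())  # ~0: the feature is gone
```

Enhancing a feature instead of removing it is the same idea with the component scaled up rather than zeroed out.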
So I’m willing to accept open-weight models as being “nearly as good” as a full-blown open source model. I’d like to see full-blown open source models develop more, sure, but I’m not terribly concerned about having to rely on an open-weight model to make an AI system work for the immediate term.
They’re not claiming it’s AGI, though. You’re missing a broad middle ground between dumb calculators and HAL 9000.
Ukraine: “You don’t seem to understand. I’m not trapped in here with you, you’re trapped in here with me!”
It’s similar to my own reaction to the people getting angry about Reddit data being used to train AIs. As someone who’s been commenting rather prolifically on Reddit for 13 years I’m actually quite pleased by the thought that my views and interests are being incorporated into the foundations of modern AI. The only downside is that all those people I argued with over that period are also getting in there. :)
The term “AI” has been in use since 1956 to describe a wide variety of computer algorithms and capabilities. Neural nets and large language models fall very firmly under the term’s umbrella.
What you’re talking about is a specific kind of AI, artificial general intelligence (AGI). Very few people believe that an LLM on its own can become AGI, and even fewer believe that current LLMs are AGI, so unfortunately you’re jousting with a strawman here.
And thus future AIs will have a bias toward having American attitudes because that’s where the data they’re built on comes from. A win for Europe?
Of course it is! We are simultaneously facing a labor shortage and mass unemployment. The important thing is to keep being angry and frightened, the specific subject you’re angry about at any given time is flexible.