No I will not fucking hear you out Jeremy
The median is an average. But it isn’t the mean, which is presumably what the other comment was using.
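To illustrate the distinction (the numbers here are made up): on a skewed sample the mean and median are both "averages" but can differ a lot.

```python
from statistics import mean, median

# One large outlier pulls the mean up, but the median
# stays at the middle element of the sorted data.
data = [1, 2, 3, 4, 100]

print(mean(data))    # 22
print(median(data))  # 3
```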
No thanks, I’ll stay home
Well he seems to survive every time so… Penultimate destination?
No they aren’t, the males just live in the hive and their only purpose is to fuck
Maco does have a nice ring to it if they take off. Punch a nazi, punch a maco.
How do you figure that? Also “hermaphrodite” is not the accurate term - that would refer to an organism which creates both gametes, which humans never do, even intersex ones.
What??
Oh, you thought that was a p? Haha! You fool! You imbecile! It was ρ! Look at this mathematical incompetent!
If you took all the racists and bigots in the world and put them in one country… It still wouldn’t be justified to wipe them out. I wouldn’t want to go to that country - it would certainly be among the worst places in the world - but I also wouldn’t suggest we invade and start murdering them.
Oho, but he didn’t stop there, the old goat, he didn’t just stop at one alphabet, did he? Because we use the Latin and Greek alphabets, and even the odd spicy Hebrew character and god knows what else in the darker corners of mathematics.
And let’s be honest, not because they were fascists.
AI models don’t resynthesize their training data. They use their training data to determine parameters which enable them to predict a response to an input.
Consider a simple model (too simple to be called AI, but really the underlying concepts are very similar) - a linear regression. In linear regression we produce a model which follows a straight line through the “middle” of our training data. We can then use this to predict values outside the range of the original data - albeit with less certainty about the likely error.
In the same way, an LLM can give answers to questions that were never asked in its training data - it’s not taking that data and shuffling it around, it’s synthesising an answer by predicting tokens. Also similarly, it does this less well the further outside the training data you go. Feed them the right gibberish and it doesn’t know how to respond. ChatGPT is very good at dealing with nonsense, but if you’ve ever worked with simpler LLMs you’ll know that typos can throw them off notably… They still respond OK, but things get weirder as they go.
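A minimal sketch of the regression point above, using made-up data: fit a least-squares line to training points, then ask it about an input far outside the training range. It still answers; we just have less reason to trust it out there.

```python
# Toy training data, roughly y = 2x (values invented for illustration).
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]

# Ordinary least-squares fit of y = slope * x + intercept.
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

def predict(x):
    return slope * x + intercept

# Interpolation: x = 2.5 sits inside the training range.
print(predict(2.5))
# Extrapolation: x = 100 is far outside it - the model still
# produces a number, it just wasn't "asked" anything like this
# during fitting.
print(predict(100))
```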
Now it’s certainly true that (at least some) models were trained on CSAM, but it’s also definitely possible that a model that wasn’t could still produce sexual content featuring children. Its training set need only contain enough disparate elements for it to correctly predict what the prompt is asking for. For example, if the training set contained images of children it will “know” what children look like, and if it contained pornography it will “know” what pornography looks like - conceivably it could mix these two together to produce generated CSAM. It would probably look odd, if I had to guess - like LLMs struggling with typos, and regression models being unreliable outside their training range, image generation of something totally outside the training set is going to be a bit weird, but it will still work.
None of this is to defend generating AI CSAM, to be clear, just to say that it is possible to generate things that a model hasn’t “seen”.
Depends on the size of screen, surely.
Overwatch is a particularly successful Team-Fortress-like
Wait, it’s all regex?
Always has been
It is normal, isn’t it? “Adult film actress” is a euphemism, I can’t imagine hearing anyone I know say it.
It’s interesting that when talking about films and TV we often use actor to describe both men and women, but in porn women are always actresses haha.
What do you mean? I loved playing through the adventures of Lihb the dairy farmer.
Pattern recognition is one thing that our brains do, it is a very long way away from the only thing our brains do.