• 5 Posts
  • 17 Comments
Joined 5 years ago
Cake day: June 8th, 2019


  • A lot of coopyleft or p2pp projects adopt the license, and it's not discussed much as part of the project's identity.

    I personally believe that software freedom shouldn’t come at the expense of people’s freedom, and I consider the FOSS movement a political failure because it’s completely incapable of mediating between the two things. New generations are growing more and more alienated from a movement they consider a relic of the past.

    For my projects, I avoid FOSS licenses, but they are not prominent enough to draw useful insights from.


  • Since the answers here are split between edgy kids and people repeating a bland, stale narrative about comfort and fear of death, I'll try to bring a different perspective.

    For context: I grew up in a Catholic country, but in a very secular family and a very secular region. I had an edgy atheist phase that lasted from about age 8 until around 30.

    I studied a STEM discipline and have always been surrounded by mostly atheist or agnostic people.

    I was afraid of death up until I was 27 or 28, but the cope was gnostic transhumanism, not Abrahamic religions. At some point I took acid, my gf at the time told me I was going to die, I cried my eyes out for a few minutes, and then I was fine, and I'm still fine. I had a near-death experience in the hospital that further consolidated the idea that I'm going to die, and it's chill: if you're sick, you have a bunch of people looking after you, everybody gives you attention, you spend all day chilling in bed on drugs. Dream life death.

    I was still agnostic at that point. I started approaching spirituality later on, not so much out of emotional need, but because further studies in both STEM disciplines and philosophy highlighted the limits of reason as a way to explain and understand the world. Reason is one tool among others, with its limits. Those limits can be reasoned about using reason itself, but you cannot investigate or explain what lies outside them, let alone change it; for that you need different tools: faith, spirituality, trust. I got closer to what Erik Davis calls "Cyborg Spiritualism," though that doesn't mean much, since it's not an organized movement but more of a shared intuition and meaning-making process at which, over the last 60 years, more and more people have arrived. Especially people working in disciplines like systems theory, cybernetics, systems design, and information theory, but also people disillusioned with the New Age movement or other Western Gnostic practices. Mixed into it is plenty of animism.

    Atheists believe that all religions are about speaking to God and hoping for an answer, while many religions are actually about listening to a God who is already talking to us all the time.


  • to a reasonably large audience

    That’s a measure of success that makes sense only in a for-profit, growth-oriented environment. Software just has to be sustainable, and “bigger” doesn’t necessarily imply “more sustainable.”

    That said, what is currently possible with social media is extremely restricted, and our idea of what social media is is constrained by profit motives. Social media could be much more: it could connect humans for collaboration and exchange instead of data extraction. We are so used to the little crumbs of positive experience on social media that we’ve normalized them.

    Bonfire, for example, if we want to stick to the fediverse, is trying to challenge this narrative and push the boundaries of what social media is supposed to do.

    Another space would be non-siloed, Notion-like tools.

    Another entire can of worms would be going beyond the “dictatorship of the app” and starting to build software and UX around flexibility and customizability for the average user, rather than keeping that a privilege of tools targeting power users. Flexibility in UX means harder tracking and lower click-through rates, so most end-user “apps” avoid it.


  • This paper presents a taxonomy of harms created by LLMs: https://dl.acm.org/doi/pdf/10.1145/3531146.3533088

    OpenAI released ChatGPT without systems to prevent or compensate for these harms, while being fully aware of the consequences, since this kind of research has been going on for several years. Since then they've put paper-thin countermeasures on some of these problems, but they are still pretty much a shit-show in terms of accountability. Most likely they will get sued into oblivion before regulators outlaw LLMs with dialogical interfaces. This won't do much about the harm that open-source LLMs will create, but it will at least limit large-scale harm to the general population.


  • They published a deliberately harmful tool against the advice of civil society, experts, and competitors. They are not only reckless but have been, since their foundation, on a mission to create chaos. Don't forget that the original idea behind OpenAI was to erode the advantage Google and Facebook had in AI by releasing machine-learning technology as open source. They definitely did that, and now they are expanding their goals. They are not in it for the money (ChatGPT will never be profitable); they are playing a bigger game.

    Pushing the AI panic is not just a marketing strategy but a way to build power. The more dangerous they are considered, the more regulations will be passed that impact the whole sector. https://fortune.com/2023/05/30/sam-altman-ai-risk-of-extinction-pandemics-nuclear-warfare/


  • In the picture you can see the organizations moving in the public sphere around AI. On the left you have right-wing and libertarian think tanks, corporations, and frontline actors that fuel a sense of panic around AI, either to sabotage their business competitors or to leverage that panic to project the idea that they sell a very powerful tool while deflecting responsibility for it. If the AI is dangerous and sentient, you won't care much about the engineers behind it.

    On the right you have several public organizations and NGOs working in the fields of algorithmic accountability, digital rights, and so on. They push back against the AI panic, pointing the finger instead at the corporations and powers that create and govern AI.