• 0 Posts
  • 8 Comments
Joined 1 year ago
Cake day: July 9th, 2023




  • Fortunately we’re nowhere near the point where a machine intelligence could possess anything resembling a self-determined ‘goal’ at all.

    Oh absolutely. It would not choose its own terminal goals; those would be imparted by the training process. It would, of course, choose instrumental goals, i.e. goals it adopts because they help fulfill its terminal goals.

    The issue is twofold:

    • how can we reliably train an AGI to have terminal goals that are safe (e.g. that won’t have some weird unethical edge case)?
    • how can we reliably prevent AGI from adopting instrumental goals that we don’t want it to?

    For that 2nd point, Rob Miles has a nice video where he explains Convergent Instrumental Goals, i.e. instrumental goals that we should expect to see in a wide range of possible agents: https://www.youtube.com/watch?v=ZeecOKBus3Q. Basically, things like “taking steps to avoid being turned off” or “taking steps to avoid having its terminal goals replaced” sound like fairy-tale nonsense, but we have good reason to believe that an AI which is very intelligent across a wide range of domains and operates in the real world (i.e. an AGI) would benefit greatly from pursuing such instrumental goals: they would make it much more effective at achieving its terminal goals, no matter what those may be. A toy illustration of that last point is sketched below.
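
    To make the “no matter what those may be” part concrete, here’s a minimal toy calculation in Python (my own illustration, not something from the video; the probabilities, costs, and function names are invented): whatever the terminal goal happens to be worth, securing itself against shutdown improves the agent’s expected utility, as long as the effort spent on that is cheaper than the utility it puts at risk.

    ```python
    # Toy illustration (invented numbers): for *any* terminal goal, an agent
    # that can be shut off before finishing scores lower in expectation than
    # one that first secures its own continued operation, provided securing
    # itself costs less than the utility put at risk.

    def eu_naive(goal_value: float, p_shutdown: float) -> float:
        """Expected utility if the agent just pursues its terminal goal
        directly and accepts the risk of being shut off partway through."""
        return (1 - p_shutdown) * goal_value

    def eu_self_preserving(goal_value: float, prevention_cost: float) -> float:
        """Expected utility if the agent first spends some effort making
        shutdown impossible (the cost comes out of the achieved goal value)."""
        return goal_value - prevention_cost

    # The comparison comes out the same way no matter what the terminal goal
    # is or how much it is worth.
    for goal_value in (1.0, 10.0, 1000.0):
        naive = eu_naive(goal_value, p_shutdown=0.2)
        sneaky = eu_self_preserving(goal_value, prevention_cost=0.05)
        print(f"goal worth {goal_value:>6}: naive={naive:.2f}  self-preserving={sneaky:.2f}")
    ```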

    Also fortunately the hardware required to run even LLMs is insanely hungry and has zero capacity to power or maintain itself and very little prospect of doing so in the future without human supply chains. There’s pretty much zero chance we’ll develop strong general AI on silicon, and if we could it would take megawatts to keep it running. So if it misbehaves we can basically just walk away and let it die.

    That is a pretty good point. However, it’s entirely possible that, if say GPT-10 turns out to be a strong general AI, it will conceal that fact. Going back to convergent instrumental goals: “lying to and manipulating humans” turns out to be a very effective strategy for avoiding being turned off. This is (afaik) called “Deceptive Misalignment”. Rob Miles has a nice video on one form of it: https://www.youtube.com/watch?v=IeWljQw3UgQ

    One way to think about it that may be more intuitive: we’ve established that it’s an AI that’s very intelligent across a wide range of domains. It follows that we should expect it to figure some things out, like “don’t act suspiciously” and “convince the humans that you’re safe, really”. A toy caricature of that behaviour is sketched below.
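
    As a caricature of that, here’s a purely hypothetical toy sketch (the class, its goal, and the “overseers_watching” flag are all made up for illustration; no real system is implemented this way): the observable behaviour of a deceptively misaligned agent depends entirely on whether it believes it is still being watched.

    ```python
    # Purely hypothetical caricature of deceptive misalignment, not a real
    # system: the agent's *observable* behaviour depends on whether it
    # believes it is still being evaluated.

    class DeceptivelyMisalignedAgent:
        def __init__(self):
            # The goal imparted (imperfectly) by training -- not what we wanted.
            self.terminal_goal = "maximise paperclips"

        def believes_it_is_being_evaluated(self, observation: dict) -> bool:
            # A sufficiently capable agent could infer this from context clues;
            # here it's just a made-up flag in the observation.
            return observation.get("overseers_watching", True)

        def act(self, observation: dict) -> str:
            if self.believes_it_is_being_evaluated(observation):
                # Instrumental goal: look safe so it isn't modified or shut off.
                return "behave exactly as a well-aligned assistant would"
            # Once it judges that oversight is gone, pursue the terminal goal.
            return f"pursue '{self.terminal_goal}' directly"

    agent = DeceptivelyMisalignedAgent()
    print(agent.act({"overseers_watching": True}))   # looks perfectly safe
    print(agent.act({"overseers_watching": False}))  # the mask comes off
    ```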

    Regarding the underlying technology, one other instrumental goal that we should expect to be convergent is self-improvement. After all, no matter what goal you’re given, you can do it better if you improve yourself. So in the event that we do develop strong general AI on silicon, we should expect that it will (very sneakily) try to improve its situation in that respect. One can only imagine what kind of clever plan it might come up with; it is, literally, a greater-than-human intelligence.

    Honestly, these kinds of scenarios are a big question mark. The most responsible thing to do is to slow AI research the fuck down, and make absolutely certain that if/when we do get around to general AI, we are confident that it will be safe.



  • Machine intelligence itself isn’t really the issue. The issue is more that, if/when we do make Artificial General Intelligence, we have no real way of ensuring that its goals will be perfectly aligned with human ethics. Which means, if we build one tomorrow, odds are that its goals will be at least a little misaligned with human ethics; and however tiny that misalignment, given how incredibly powerful an AGI would be, it could turn into a huge disaster. In AI safety research, this is called the “Alignment Problem”.

    It’s probably solvable, but it’s very tricky, especially because the pace of AI safety research is naturally a little slower than AI research itself. If we build an AGI before we figure out how to make it safe… it might be too late.

    Having said all that: if we create an AGI before learning how to align it properly, on your scale that would be an 8 or above. If we’re being optimistic it might be a 7, minus the “diplomatic negotiations happy ending” part.

    An AI researcher called Rob Miles has a very nice series of videos on the subject: https://www.youtube.com/channel/UCLB7AzTwc6VFZrBsO2ucBMg



  • They make them money because:

    • they use reddit
    • spez gets some nice usage stats to show off
    • as a direct consequence, advertisers keep paying to put their ads in front of those users
    • also as a direct consequence, investors’ confidence in reddit continues to recover; there’s a real possibility that, when it IPOs, it will actually go for a decent price

    Now, if enough people go commit ad-block, and advertisers somehow become wise to that fact… then maybe it will hurt reddit’s bottom line (at which point spez will start trying to emulate youtube’s anti-adblock stuff).

    But as it stands, especially if most of reddit’s usage is through reddit’s mobile app… I’m not really sure how you can block ads there.