

This is a good summary of half of the motive to ignore the real AI safety stuff in favor of sci-fi fantasy doom scenarios. (The other half is that the sci-fi fantasy scenarios are a good source of hype.) I hadn’t thought about the extent to which Altman’s plan is “hey morons, hook my shit up to fucking everything and try to stumble across a use case that’s good for something” (as opposed to the “we’re building a genie, and when we’re done we’re going to ask it for three wishes” line he hypes up); that actually makes more sense as a long-term plan…
That was literally the inflection point on my path to sneerclub. I had started to break from LessWrong before that, but I hadn’t reached the tipping point of saying it was all bs. And for SSC and Scott in particular, I had managed to overlook the real message buried under thousands of words of equivocating, bad analogies, and bad research in his earlier posts. But “You Are Still Crying Wolf” finally made me question what Scott’s real intent was.