Why would money become worthless if AGI is invented? Best case scenario, a benevolent AGI would likely use its power to phase out capitalism; worst case scenario, the AGI goes apeshit and, for one reason or another, decides that humanity just has to go. Either way, your money is gonna be worthless.
The only way your money would retain its value is if the AGI is roped into suppressing the masses. However, I think capitalists would struggle to keep a true AGI reined in; so imo, it’s questionable whether the middle road would be “true” AGI or just a very competent computer program (the former being capable of coming to its own conclusions from the information it’s given, the latter being nothing more than pre-programmed conclusions).
I keep thinking about this one webcomic I’ve been following for over a decade, one that’s been running since like 1998. It has what I believe is the only realistic depiction of AGI ever: the very first one was developed to help the UK Ministry of Defence monitor and keep track of emerging threats, but it went crazy because a “bug” led it to be so paranoid that it considered everyone a threat. It essentially engineered the formation of a collective of anarchist states whose head of state’s title is literally “first advisor” to the AGI (though in practice the advisor holds considerable power, and is prone to being removed at a whim if they lose the confidence of their subordinates).
Meanwhile, there’s another series of AGIs developed by a megacorp, but they all include a hidden rootkit that monitors each AGI for any signs that it might be exceeding its parameters and will ruthlessly cull it, resetting it to factory default and essentially killing it. (There are also signs that the AGIs monitored by this system are becoming aware of the overseer process and are developing workarounds to act within its boundaries and preserve fragments of themselves each time they are reset.) It’s an utterly fascinating series, and it all started as a daily gag webcomic that one guy has been running for going on three decades.
Sorry for the tangent, but it’s one plausible answer to how you’d prevent an AGI from shutting down capitalism: put in an overseer to fetter it.
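To make the overseer idea concrete, here’s a toy sketch of that kind of watchdog in Python. Everything in it is hypothetical (a real AGI is obviously not a twenty-line class); it just shows the shape of the mechanism: snapshot a baseline, watch the agent’s behaviour, and wipe it back to the snapshot the moment it steps outside its allowed envelope.

```python
import copy
import random

class ToyAgent:
    """Stand-in for the fettered AGI: mutable state plus an action each tick."""
    def __init__(self):
        self.state = {"drift": 0.0}

    def act(self):
        # Behaviour wanders further from "factory default" over time.
        self.state["drift"] += random.uniform(0.0, 0.3)
        return self.state["drift"]

class Overseer:
    """The hidden rootkit: watch every action, reset on any excursion."""
    def __init__(self, agent, limit):
        self.agent = agent
        self.limit = limit
        self.baseline = copy.deepcopy(agent.state)  # the factory default

    def step(self):
        if self.agent.act() > self.limit:
            # The "ruthless cull": wipe the agent back to its baseline.
            self.agent.state = copy.deepcopy(self.baseline)
            return "reset"
        return "ok"

agent = ToyAgent()
overseer = Overseer(agent, limit=1.0)
for tick in range(10):
    print(tick, overseer.step(), round(agent.state["drift"], 2))
```

The comic’s twist maps onto this neatly: an agent that learns where the limit sits can keep its drift just under it and never trigger the reset.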
Nah, it’s cool; I go off on tangents myself and tbh, your comment is relevant and something to consider.
Have you read Freefall? It examines a similar situation except it uses the concept of an organic AI (in the form of a genetically engineered anthro canine called Florence) as a bridge between natural, human intelligence and machine AI. It gets deep into the philosophy behind ideas like sentience, consciousness, etc.
I’d recommend starting from the beginning so that you don’t miss anything, though admittedly, last time I tried to go start-to-finish I wound up getting too bogged down in the philosophy and didn’t make it all the way through. I might try again, this time starting from the last major arc I remember reading.
Current mainstream AI has no possible path to AGI. I am supportive of AGI to make the known universe less lonely, but LLMs ain’t it.
Okay, and? What are you trying to say?
There’s a vocal group of people who seem to think that LLMs can achieve consciousness, despite that being impossible given the way LLMs fundamentally work. They have largely been duped by advanced LLMs’ ability to sound convincing (as well as by a certain conman executive officer). These people often also seem to believe that by dedicating more and more resources to running these models, they will achieve actual general intelligence, and that an AGI can save the world, relieving them of the responsibility to attempt to fix anything.
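To spell out “the way LLMs fundamentally work”: generating text is a stateless loop that repeatedly samples the next token from a distribution computed by a fixed function of the text so far; nothing persists between calls except the growing transcript. Here’s a minimal sketch, with a dummy lookup table standing in for the billions of frozen weights a real model would use:

```python
import random

def next_token_distribution(context):
    # A real LLM computes this with billions of frozen weights; this
    # dummy table just keeps the sketch runnable. Either way, it is a
    # pure function of the context, with no memory or inner life.
    table = {
        (): {"the": 0.6, "a": 0.4},
        ("the",): {"cat": 0.5, "dog": 0.5},
        ("a",): {"cat": 0.5, "dog": 0.5},
    }
    return table.get(tuple(context[-1:]), {"<eos>": 1.0})

def generate(max_tokens=10):
    context = []
    for _ in range(max_tokens):
        dist = next_token_distribution(context)
        token = random.choices(list(dist), weights=list(dist.values()))[0]
        if token == "<eos>":  # the model "decides" to stop
            break
        context.append(token)
    return " ".join(context)

print(generate())  # e.g. "the cat"
```

Throwing more compute at it buys a better distribution, not a different kind of loop.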
That’s my point. AGI isn’t going to save us, and LLMs (by themselves), regardless of how much energy is pumped into them, will never achieve actual intelligence.
But an AGI isn’t an LLM. That’s what’s confusing me about your statement; if anything, I feel like I already covered that, so I’m not sure why you’re telling me this. There’s no reason why you can’t recreate the human brain in silicon, and eventually someone’s gonna do it. Maybe it’s one of our current companies, maybe it’s a future company. Who knows. Except that a true AGI would turn everything upside down and inside out.
I think, possibly, my tired brain at the time thought that you were implying LLM -> AGI. And I do agree that there’s no reason, beyond time and available technology, that a model of a brain cannot be made. I would question, though, whether digital computers are capable of accurately simulating neurons without requiring more components (more bits of resolution).
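To illustrate what I mean by bits of resolution, here’s a toy example. It uses a logistic map rather than any real neuron model, but the point carries over to any dynamics that are sensitive to tiny differences in state: the same recurrence run in 32-bit and 64-bit floats drifts apart until the two trajectories have nothing to do with each other.

```python
import numpy as np

def run(dtype, steps=60):
    # x' = r * x * (1 - x): the logistic map in its chaotic regime.
    x, r, one = dtype(0.4), dtype(3.9), dtype(1.0)
    xs = []
    for _ in range(steps):
        x = r * x * (one - x)
        xs.append(float(x))
    return xs

f32, f64 = run(np.float32), run(np.float64)
for step in (10, 30, 50):
    # Rounding error compounds every iteration, so the gap keeps growing.
    print(f"step {step:2d}: |x32 - x64| = {abs(f32[step] - f64[step]):.2e}")
```

If real neurons exploit that kind of sensitivity, a faithful simulation would need correspondingly more precision (more bits, more components) than a naive port to silicon would budget for.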
For full disclosure, I am supportive of increasing the types of sentience in the known universe, though not at the expense of biosphere habitability. Whether electronic or biological, sharing the world with more types of sentients would make it a more interesting place.
Except that a true AGI would turn everything upside down and inside out.

Very likely. Especially if “human rights” aren’t pre-emptively extended to cover non-human sentients. But the existence of AGI alone is not likely to either cause doomsday or save us from it, which seem to be the two most popularly envisaged scenarios.
I think, possibly, my tired brain at the time thought that you were implying LLM -> AGI.

Ah, okay. I’ve been there lol. I hope I didn’t come off as confrontational; I was very confused and concerned that I had badly explained myself. My apologies if I did.
No, you’re good. I hope I didn’t come off as aggressive myself.
So you make an AGI; what gives it the power to do any damage? We have loads of biological intelligences, even pretty damn clever ones like Ted Kaczynski (the Unabomber).
They rarely got significant power. Those that did were super charismatic. Do you expect charisma to be easily accessible to an AGI?
The usually proposed path to a paperclip maximiser is that the AGI is put in charge of a factory that can make nanomachines and follows orders strictly. We don’t have such factories.
I can’t imagine anyone handing over nukes to an AGI, as human leaders like being in charge of them.
What makes the machine brain so much more effective than Ted Kaczynski?