We probably don’t want to use the current leader in cause of death for kids as a template for good policy.
If you do the search I suggested you will find relevant reviews immediately. If you add keywords based on my post text you will find the primary sources immediately.
https://www.cdc.gov/mmwr/volumes/66/wr/mm6630a6.htm
Teenage suicide rates were declining for over a decade, especially in males. Now they are increasing in both males and females. You would have to be a complete monster to not want to study, understand, and reverse this trend.
Lots of stretching here. The paper uses simulations of microtubules to show quantum effects when tryptophan residues are excited by UV light. It only simulated microtubules, and those simulations did not include the bends and the many dynein molecules found on real microtubules. This matters because researchers have been hitting every biomolecule with UV excitation for decades, including microtubules, and have never observed this effect.
A key finding missing from this video is that microtubules are dynamic. They are constantly disassembling and reassembling and recycling components. This occurs at very short timescales. Also, they do not bridge cell membranes. If information is passing through networks of microtubules, it is constantly disrupted and not affecting other cells. Synapses do handle cell-cell information transfer (where the role of microtubules is already well studied and not quantum in nature). Why would quantum microtubule information be limited to a single cell? Maybe it could influence coordinated assembly and disassembly at the termini, but the authors offer no evidence that there is any chemical effect of this quantum phenomenon, which would be required to change anything about how those enzymes behave.
We already know of a mechanism by which information is transported across microtubules: physical transport of signalling molecules. They are walked (quite literally, dynein is cool) along the microtubules to different sites in the cell. No quantum effects needed to explain this phenomenon.
Go to pubmed. Type “social media mental health”. Read the studies, or the reviews if you don’t have the time.
The average American teenager spends 4.8 hours/day on social media. Increased use of social media is associated with increased rates of depression, eating disorders, body image dissatisfaction, and externalizing problems. These studies don’t show causation, but guess what, we literally cannot show causation in most human studies because of ethics.
Social media drastically alters peer interactions, with negative interactions (bullying) associated with increased rates of self harm, suicide, internalizing and externalizing problems.
Mobile phone use alone is associated with sleep disruption and daytime sleepiness.
Looking forward to your peer-reviewed critiques of these studies claiming they are all “just vibes.”
This is a health issue, not a morality issue.
What do you mean by work? Do they stop everyone from doing stupid things? No. Do they have a measurable effect on behavior? Yes.
They sell CBD oil with these little droppers for dosing, but when you read the studies the effective dose is more like a mouthful of oil. It’s the exact opposite of the melatonin dosing problem.
My shower thoughts are always repeating cycles of “fuuuuuuuck this feels niiiiiiiiice” and “time to turn up the heat a smidge.” Am I doing it wrong?
I remember hearing this argument before…about the Internet. Glad that fad went away.
As it has always been, these technologies are being used to push us forward by teams of underpaid, unnamed researchers with no interest in profit. Meanwhile you focus on the scammers and capitalists and unload your wallets to them, all while complaining about the lack of progress as measured by the products you see in advertisements.
Luckily, when you get that cancer diagnosis or your child is born with some rare disease, that progress will attend to your needs despite your ignorance of it.
Read again. I have made no such claim. I simply scrutinized your assertion that LLMs lack any internal representations and challenged it with alternative hypotheses. You are the one who made the claim. I am perfectly comfortable with the conclusion that we simply do not know what is going on in LLMs with respect to human-like capabilities of the mind.
I have a different interpretation of those close calls: we were very very lucky and should not rely on defiance as a mechanism to avoid the apocalypse.
Nor can we assume that they cannot have the same emergent properties.
These cases are interesting tests of our First Amendment rights. “Real” CP requires abuse of a minor, and I think we can all agree that it should be illegal. But it gets pretty messy when we are talking about depictions of abuse.
Currently, we do not outlaw written depictions nor drawings of child sexual abuse. In my opinion, we do not ban these things partly because they are obvious fictions. But also I think we recognize that we should not be in the business of criminalizing expression, regardless of how disgusting it is. I can imagine instances where these fictional depictions could be used in a way that is criminal, such as using them to blackmail someone. In the absence of any harm, it is difficult to justify criminalizing fictional depictions of child abuse.
So how are AI-generated depictions different? First, they are not obvious fictions. Is this enough to cross the line into criminal behavior? I think reasonable minds could disagree. Second, is there harm from these depictions? If the AI models were trained on abusive content, then yes, there is harm directly tied to the generation of these images. But what if the training data did not include any abusive content, and these images really are purely depictions of imagination? Then the discussion of harms becomes pretty vague and indirect. Will these images embolden child abusers or increase demand for “real” images of abuse? Is that enough to criminalize them, or should they be treated like other fictional depictions?
We will have some very interesting case law around AI generated content and the limits of free speech. One could argue that the AI is not a person and has no right of free speech, so any content generated by AI could be regulated in any manner. But this argument fails to acknowledge that AI is a tool for expression, similar to pen and paper.
A big problem with AI content is that we have become accustomed to viewing photos and videos as trusted forms of truth. As we re-learn what forms of media can be trusted as “real,” we will likely change our opinions about fringe forms of AI-generated content and where it is appropriate to regulate them.
We do not know how LLMs operate. Similar to our own minds, we understand some primitives, but we have no idea how certain phenomena emerge from those primitives. Your assertion would be like saying we understand consciousness because we know the structure of a neuron.
You seem pretty confident that LLMs cannot have an internal representation simply because you cannot imagine how that capability could emerge from their architecture. Yet we have the same fundamental problem with the human brain and have no problem asserting that humans are capable of internal representation. LLMs adhere to grammar rules, present information with a logical flow, express relationships between different concepts. Is this not evidence of, at the very least, an internal representation of grammar?
We take in external stimuli and perform billions of operations on them. This is internal representation. An LLM takes in external stimuli and performs billions of operations on them. But the latter is incapable of internal representation?
And I don’t buy the idea that hallucinations are evidence that there is no internal representation. We hallucinate. An internal representation does not need to be “correct” to exist.
No. Human evolution is driven primarily by mate selection.
How do hallucinations preclude an internal representation? Couldn’t hallucinations arise from a consistent internal representation that is not fully aligned with reality?
I think you are misunderstanding the role of tokens in LLMs and conflating them with internal representation. Tokens are used to generate a state, similar to external stimuli. The internal representation, assuming there is one, is the manner in which the tokens are processed. You could say the same thing about human minds, that the representation is not located anywhere like a piece of data; it is the manner in which we process stimuli.
My thesis is that we are asserting the lack of human-like qualities in AIs that we cannot define or measure. Assertions should be made on data, not uneasy feelings arising when an LLM falls into the uncanny valley.
X lost half a billion dollars in the first quarter of 2023. Odd that the financial expert didn’t mention this even though it is literally in the same sentence as the “40% drop in revenue” statement in the article.