• 3 Posts
  • 141 Comments
Joined 10 months ago
cake
Cake day: January 25th, 2024

help-circle




  • To put it very simply, the ‘kernel’ has significant control over your OS as it essentially runs above everything else in terms of system privileges.

    It can (but not always) run at startup, so this means if you install a game with kernel-level anticheat, the moment your system turns on, the game’s publisher can have software running on your system that can restrict the installation of a particular driver, stop certain software from running, or, even insidiously spy on your system’s activity if they wished to. (and reverse-engineering the code to figure out if they are spying on you is a felony because of DRM-related laws)

    It basically means trusting every single game publisher with kernel-level anticheat in their games to have a full view into your system, and the ability to effectively control it, without any legal recourse or transparency, all to try (and usually fail) to stop cheating in games.



  • SBF’s case was completely different, since the legality of his actions was much more easily provable as a crime. Not only was every transaction on the actual blockchain, which is immutable and couldn’t have possibly been faked, but his actions didn’t exactly have any nuance that could be argued in court. There were funds, they weren’t his, but he used them. Case closed.

    Trump’s case involves not only a lot more possible statutes he could have violated, but also a lot of arbitrary actions that don’t perfectly fall into a rigid box of “this is legal” or “this is illegal.”

    Plus, if you have more money to draw out legal fights, you can keep them going for longer, regardless of your case. SBF had most of his assets confiscated since they were almost entirely from the fraud, so he didn’t have the same luxuries.




  • ArchRecord@lemm.eetoScience Memes@mander.xyzClever, clever
    link
    fedilink
    English
    arrow-up
    9
    ·
    15 days ago

    Computers are a fundamental part of that process in modern times.

    If you were taking a test to assess how much weight you could lift, and you got a robot to lift 2,000 lbs for you, saying you should pass for lifting 2000 lbs would be stupid. The argument wouldn’t make sense. Why? Because the same exact logic applies. The test is to assess you, not the machine.

    Just because computers exist, can do things, and are available to you, doesn’t mean that anything to assess your capabilities can now just assess the best available technology instead of you.

    Like spell check? Or grammar check?

    Spell/Grammar check doesn’t generate large parts of a paper, it refines what you already wrote, by simply rephrasing or fixing typos. If I write a paragraph of text and run it through spell & grammar check, the most you’d get is a paper without spelling errors, and maybe a couple different phrases used to link some words together.

    If I asked an LLM to write a paragraph of text about a particular topic, even if I gave it some references of what I knew, I’d likely get a paper written entirely differently from my original mental picture of it, that might include more or less information than I’d intended, with different turns of phrase than I’d use, and no cohesion with whatever I might generate later in a different session with the LLM.

    These are not even remotely comparable.

    Assuming the point is how well someone conveys information, then wouldn’t many people better be better at conveying info by using machines as much as reasonable? Why should they be punished for this? Or forced to pretend that they’re not using machines their whole lives?

    This is an interesting question, but I think it mistakes a replacement for a tool on a fundamental level.

    I use LLMs from time to time to better explain a concept to myself, or to get ideas for how to rephrase some text I’m writing. But if I used the LLM all the time, for all my work, then me being there is sort of pointless.

    Because, the thing is, most LLMs aren’t used in a way that conveys info you already know. They primarily operate by simply regurgitating existing information (rather, associations between words) within their model weights. You don’t easily draw out any new insights, perspectives, or content, from something that doesn’t have the capability to do so.

    On top of that, let’s use a simple analogy. Let’s say I’m in charge of calculating the math required for a rocket launch. I designate all the work to an automated calculator, which does all the work for me. I don’t know math, since I’ve used a calculator for all math all my life, but the calculator should know.

    I am incapable of ever checking, proofreading, or even conceptualizing the output.

    If asked about the calculations, I can provide no answer. If they don’t work out, I have no clue why. And if I ever want to compute something more complicated than the calculator can, I can’t, because I don’t even know what the calculator does. I have to then learn everything it knows, before I can exceed its capabilities.

    We’ve always used technology to augment human capabilities, but replacing them often just means we can’t progress as easily in the long-term.

    Short-term, sure, these papers could be written and replaced by an LLM. Long-term, nobody knows how to write papers. If nobody knows how to properly convey information, where does an LLM get its training data on modern information? How do you properly explain to it what you want? How do you proofread the output?

    If you entirely replace human work with that of a machine, you also lose the ability to truly understand, check, and build upon the very thing that replaced you.


  • ArchRecord@lemm.eetoScience Memes@mander.xyzClever, clever
    link
    fedilink
    English
    arrow-up
    17
    arrow-down
    1
    ·
    15 days ago

    Schools are not about education but about privilege, filtering, indoctrination, control, etc.

    Many people attending school, primarily higher education like college, are privileged because education costs money, and those with more money are often more privileged. That does not mean school itself is about privilege, it means people with privilege can afford to attend it more easily. Of course, grants, scholarships, and savings still exist, and help many people afford education.

    “Filtering” doesn’t exactly provide enough context to make sense in this argument.

    Indoctrination, if we go by the definition that defines it as teaching someone to accept a doctrine uncritically, is the opposite of what most educational institutions teach. If you understood how much effort goes into teaching critical thought as a skill to be used within and outside of education, you’d likely see how this doesn’t make much sense. Furthermore, the heavily diverse range of beliefs, people, and viewpoints on campuses often provides a more well-rounded, diverse understanding of the world, and of the people’s views within it, than a non-educational background can.

    “Control” is just another fearmongering word. What control, exactly? How is it being applied?

    Maybe if a “teacher” has to trick their students in order to enforce pointless manual labor, then it’s not worth doing.

    They’re not tricking students, they’re tricking LLMs that students are using to get out of doing the work required of them to get a degree. The entire point of a degree is to signify that you understand the skills and topics required for a particular field. If you don’t want to actually get the knowledge signified by the degree, then you can put “I use ChatGPT and it does just as good” on your resume, and see if employers value that the same.

    Maybe if homework can be done by statistics, then it’s not worth doing.

    All math homework can be done by a calculator. All the writing courses I did throughout elementary and middle school would have likely graded me higher if I’d used a modern LLM. All the history assignment’s questions could have been answered with access to Wikipedia.

    But if I’d done that, I wouldn’t know math, I would know no history, and I wouldn’t be able to properly write any long-form content.

    Even when technology exists that can replace functions the human brain can do, we don’t just sacrifice all attempts to use the knowledge ourselves because this machine can do it better, because without that, we would be limiting our future potential.

    This sounds fake. It seems like only the most careless students wouldn’t notice this “hidden” prompt or the quote from the dog.

    The prompt is likely colored the same as the page to make it visually invisible to the human eye upon first inspection.

    And I’m sorry to say, but often times, the students who are the most careless, unwilling to even check work, and simply incapable of doing work themselves, are usually the same ones who use ChatGPT, and don’t even proofread the output.



  • I completely get your point, and to an extent I agree, but I do think there’s still an argument to be made.

    For instance, if a theme park was charging an ungodly amount for admission, or maybe, say, charged you on a per-ride basis after you paid admission, slowly adding more and more charges to every activity until half your time was spent just handing over the money to do things, if everyone were to stop going in, the theme park would close down, because they did something that turned users away.

    Websites have continually added more and more ads, to the point that reading a news article feels like reading 50% ads, and 50% content. If they never see any pushback, then they’ll just keep heaping on more and more ads until it’s physically impossible to cram any more in.

    I feel like this is less of a dunk on the site by not using it in that moment, and more a justifiable way to show that you won’t tolerate the rapidly enshittified landscape of digital advertising, and so these sites will never even have a chance of getting your business in the future.

    If something like this happens enough, advertisers might start finding alternative ways to fund their content, (i.e. donation model, subscription, limited free articles then paywall) or ad networks might actually engage with user demands and make their systems less intrusive, or more private. (which can be seen to some degree with, for instance, Mozilla’s acquisition of Anonym)

    Even citing Google’s own research, 63% of users use ad blockers because of too many ads, and 48% use it because of annoying ads. The majority of these sites that instantly hit you with a block are often using highly intrusive ads that keep popping up, getting in the way, and taking up way too much space. The exact thing we know makes users not want to come back. It’s their fault users don’t want to see their deliberately maliciously placed ads.

    A lot of users (myself most definitely included) use ad blockers primarily for privacy reasons. Ad networks bundle massive amounts of surveillance technology with their ads, which isn’t just intrusive, but it also slows down every single site you go to, across the entire internet. Refusing that practice increases the chance that sites more broadly could shift to more privacy-focused advertising methods.

    Google recommends to “Treat your visitors with respect,” but these sites that just instantly slap up an ad blocker removal request before you’ve even seen the content don’t actually respect you, they just hope you’re willing to sacrifice your privacy, and overwhelm yourself with ads, to see content you don’t even know anything about yet. Why should I watch your ads and give up my privacy if you haven’t given me good reason to even care about your content yet?

    This is why sites with soft paywalls, those that say you have “x number of free articles remaining,” or those that say “you’ve read x articles this month, would you consider supporting us?” get a higher rate of users disabling adblockers or paying than those that just slap these popups in your face the moment you open the site.








  • I’m not a big expert on database technology, but I am aware of there being at least a few database systems (“In-Memory”) that use the RAM of the computer for transient storage, and since RAM doesn’t use files as a concept in the same way, the data stored there isn’t exactly inside a “file,” so to speak.

    That said, they are absolutely dwarfed by the majority of databases, which use some kind of file as a means to store the database, or the contents within it.

    Obviously, that’s not to say using files is bad in any way, but the possibilities for how database software could have developed, had we not used files as a core computing concept during their inception, are now closed off. We simply don’t know what databases could have looked like, because of “lock-in.”


  • ArchRecord@lemm.eetoFediverse@lemmy.worldWhy is Mastodon struggling to survive?
    link
    fedilink
    English
    arrow-up
    5
    arrow-down
    1
    ·
    edit-2
    1 month ago

    That’s what some databases are. Most databases you’ll see today still inevitably store the whole contents of the DB within a file with its own format, metadata, file extension, etc, or store the contents of the database within a file tree.

    The notion of “lock in” being used here doesn’t necessarily mean that alternatives don’t or can’t exist, but that comparatively, investment into development, and usage, of those systems, is drastically lower.

    Think of how many modern computing systems involve filesystems as a core component of their operation, from databases, to video games, to the structure of URLs, which are essentially usually just ways to access a file tree. Now think of how many systems are in use that don’t utilize files as a concept.

    The very notion of files as an idea is so locked-in, that we can rarely fathom, let alone construct a system that doesn’t utilize them as a part of its function.

    Regardless, the files example specifically wasn’t exactly meant to be a direct commentary on the state of microblogging platforms, or of all technology, but more an example for analogy purposes than anything else.

    What social media platforms don’t have some kind of character limit?

    What platforms don’t use a feed?

    What platforms don’t use a like button?

    What platforms don’t have some kind of hashtags?

    All of these things are locked-in, not necessarily technologically, but socially.

    Would more people from Reddit have switched to Lemmy if it didn’t have upvotes and downvotes? Are there any benefits or tradeoffs to including or not including the Save button on Lemmy, and other social media sites? We don’t really know, because it’s substantially less explored as a concept.

    The very notion of federated communities on Lemmy being instance-specific, instead of, say, instances all collectively downloading and redistributing any posts to a specific keyword acting as a sort of global community not specific to any one instance, is another instance of lock-in, adapted from the fediverse’s general design around instance-specific hosting and connection.

    In the world of social media, alternative platforms, such as Minus exist, that explore unique design decisions not available on other platforms, like limited total post counts, vague timestamps, and a lack of likes, but compared to all the other sites in the social media landscape, it’s a drop in the bucket.

    The broader point I was trying to make was just that the very way microblogging developed as a core part of social media’s design means that any shift away from it likely won’t actually gain traction with a mainstream audience, because of the social side of the lock-in.