• 0 Posts
  • 18 Comments
Joined 1 year ago
Cake day: June 24th, 2023






  • itsnotlupus@lemmy.world to Linux@lemmy.ml, in “raw man files?”
    27 points · edited · 1 year ago

    You can list every man page installed on your system with man -k . , or just apropos .
    But that’s a lot of random junk. If you only want “executable programs or shell commands”, grab only the man pages in section 1 with apropos -s 1 .

    You can get the path of a man page by using whereis -m pwd (replace pwd with your page name.)

    You can convert a man page to html with man2html (may require apt-get install man2html or whatever equivalent applies to your distro.)
    That tool adds a couple of useless lines at the beginning of each file, so we’ll want to pipe its output through tail -n +3 to get rid of them.

    Combine all of these together in a questionable incantation, and you might end up with something like this:

    mkdir -p tmp ; cd tmp
    apropos -s 1 . | cut -d' ' -f1 | while read page; do whereis -m "$page" ; done | while read id path rest; do man2html "$path" | tail -n +3 > "${id::-1}.html"; done
    

    List every command in section 1, extract the id only. For each one, get a file path. For each id and file path (ignore the rest), convert to html and save it as a file named $id.html.
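    As an aside on the ${id::-1} bit: whereis prefixes each line with the page name followed by a colon, so $id arrives with a trailing colon that has to be stripped before naming the output file. A quick sketch of what happens, using pwd as the example page:

    ```shell
    # whereis -m prints "name: /path/to/page", so `read id path rest`
    # captures $id with a trailing colon.
    id="pwd:"
    echo "${id::-1}.html"   # bash 4.2+ slice: drop the last character -> pwd.html
    echo "${id%:}.html"     # portable alternative: strip a trailing colon -> pwd.html
    ```

    The ${id%:} form only removes a colon (and only at the end), so it’s a little safer if a line ever comes through without one.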

    It might take a little while to run, but then you could run firefox . or whatever and browse the resulting mess.

    Or keep tweaking all of this until it’s just right for you.





  • More appropriate tools to detect AI generated text you mean?

    It’s not a thing. I don’t think it will ever be a thing. Certainly not reliably, and never as a 100% certainty tool.

    The punishment for a teacher deciding you cheated on a test or an assignment? I don’t know, but I imagine it sucks. Best case, you’d probably be at risk of failing the class and potentially the grade/semester. Worst case you might get expelled for being a filthy cheater. Because an unreliable tool said so and an unreliable teacher chose to believe it.

    If you’re asking what teachers should do to defend against AI generated content, I’m afraid I don’t have an answer. It’s akin to giving students math homework but demanding that they not use calculators. That might have been reasonable before calculators existed, but not anymore, so teachers no longer expect it to make sense and don’t impose those rules on students.




  • I was watching the network traffic sent by Twitter the other day, as one does, and apparently whenever you stop scrolling for a few seconds, whatever post is visible on screen at that time gets added to a little pile that then gets “subscribed to” because it generated “engagement”, no click needed.
    This whole insidious recommendation nonsense was probably a subplot in the classic sci-fi novel Don’t Create The Torment Nexus.

    Almost entirely unrelated, but I’ve been playing The Algorithm (part of the Tenet OST, by Ludwig Göransson) on repeat for a bit now. It’s also become my ring tone, and if I can infect at least one other hapless soul with it, I’ll be satisfied.



  • Running strange software grabbed from unknown sources will never not be a risky proposition.

    Uploading the .exe you just grabbed to virustotal and getting the all clear can indicate two very different things: It’s either actually safe, or it hasn’t yet been detected as malware.

    You should expect that malware writers had already uploaded some variant of their work to virustotal before seeding it to ensure maximum impact.
    Getting happy results from virustotal could simply mean the malware author tweaked their work until they saw those same results.

    Notice I said “yet” above. Malware tends to eventually get flagged as such, even when it has a head start of not being recognized correctly.
    You can use that to somewhat lower the odds of getting infected, by waiting. Don’t grab the latest crack that just dropped for the hottest game or whatever.
    Wait a few weeks. Let other people get infected first and let antivirus databases learn to recognize the new malware. Then maybe give it a shot.

    And of course, the fact that keygens will often be flagged as “bad” software by unhelpful antivirus tools just further muddies the waters, since it teaches you to ignore or altogether disable your antivirus in one of the riskiest situations you’ll put yourself into.

    Let’s be clear: There’s nothing safe about any of this, and if you do this on a computer that has access to anything you wouldn’t want to lose, you are living dangerously indeed.


  • There are a near infinity of those out there, many of which just grab other scanlation groups’ output and slap their ads on top of it.

    Mangadex is generally my happy place, but you’ll have to wander out and about for various specific mangas.

    Several of the groups that post on Mangadex also have their own website and you may find more stuff there.

    For example right now I’ve landed on asurascans.com, which has a bunch of Korean and Chinese long strips, with generally good quality translations.

    The usual sticking point with all those manga sites is the ability to track where you are in a series and continue where you left off when new chapters are posted.
    Even Mangadex struggles with that: their “Updates” page is the closest thing they have to doing that, and it’s still not very good.

    If you’re going to stick to one site for any length of time, and you happen to be comfortable with userscripts, I’d suggest you head over to greasyfork.org, search for the manga domain you’re using, and look for scripts that might improve your binging experience there.


  • One of my guilty pleasures is to rewrite trivial functions to be statement-free.

    Since I’d be too self-conscious to put those in a PR, I keep those mostly to myself.

    For example, here’s an XPath wrapper:

    const $$$ = (q,d=document,x=d.evaluate(q,d),a=[],n=x.iterateNext()) => n ? (a.push(n), $$$(q,d,x,a)) : a;
    

    Which you can use as $$$("//*[contains(@class, 'post-')]//*[text()[contains(.,'fedilink')]]/../../..") to get an array of matching nodes.

    If I was paid to write this, it’d probably look like this instead:

    function queryAllXPath(query, doc = document) {
        const array = [];
        const result = doc.evaluate(query, doc);
        let node = result.iterateNext();
        while (node) {
            array.push(node);
            node = result.iterateNext();
        }
        return array;
    }
    

    Seriously boring stuff.

    Anyway, since var/let/const are statements, I have no choice but to use optional parameters instead, and since loops are statements as well, recursion saves the day.
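    The same default-parameter trick generalizes to any pull-style source, not just XPath iterators. Here’s a hypothetical drain() helper (my own illustration, not from the wrapper above) built exactly the same way:

    ```javascript
    // Expression-only collector: all state (the accumulator array, the current
    // value) lives in default parameters, and the loop becomes recursion,
    // just like $$$ above.
    const drain = (next, a = [], n = next()) => n ? (a.push(n), drain(next, a)) : a;

    // Usage: a toy source that yields 3, 2, 1, then a falsy 0 to stop.
    let i = 3;
    const values = drain(() => i--);
    // values is [3, 2, 1]
    ```

    Same caveat as the original: a falsy value terminates the recursion, so this only works for sources where that’s an acceptable sentinel.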

    Would my quality of life improve if the lambda body could be written as => if n then a.push(n), $$$(q,d,x,a) else a ? Obviously, yes.


  • There have been efforts to build reputation systems that don’t rely on central servers, like early day bitcoin’s Web of Trust, which allowed folks to rate other folks with public key crypto, thus ensuring an accurate and fair trust rating for participants, without the possibility of a middle-man putting their thumb on the scale.

    One problem with it is that it was still perfectly practical for bad actors to accumulate good ratings, then cash out their hard-earned reputation in large scams, such as the “Bitcoin Savings & Trust” ($40 million in that particular case). That arguably made it worse than having no system at all, since it induced participants into making faulty judgments in the first place.

    I think the main practical value of something like reddit’s karma is as an indication of account age and activity, both of which can probably be measured in other, less gamified ways.