Just an explorer in the threadiverse.

  • 3 Posts
  • 75 Comments
Joined 1 year ago
Cake day: June 4th, 2023

  • Like helping to find a bug, discussing how to set up an application for a certain use case, or anything like that? Answering questions on Stack Overflow is an example, but is that the best way?

    Generally the best way to help out is to do a thing that’s needed and that you can figure out how to do. Your list includes a bunch of good options, and I’ve been thanked for doing all those things at one point or another. Some common growth paths include:

    1. Using the software
    2. Encountering bugs, problems, or small opportunities for improvement.
    3. Discussing those informally in forums and helping people find workarounds.
    4. Identifying some of those issues as common things other users experience as well, and filing bugs for them with clear explanations and links to related forum discussions.
    5. Reading source code to better understand bugs.
    6. Discussing potential fixes in developer bug threads (or in GitHub or whatever).
    7. Submitting small fixes for simple bugs as pull requests.

    Another path might be:

    1. Using the software and reading forums/docs for help.
    2. Answering basic questions on forums, drawing on old threads and relevant docs.
    3. Learning about common questions.
    4. Writing blogs or forum posts about common questions.
    5. Submitting improvements to official docs to clarify common areas of confusion.

    There are other paths as well; the main thing is to use a thing so you learn about it, and then use that knowledge to make it a little easier for the next person. Good luck!



  • I don’t think titles directly transfer between companies, and yet the industry allows it. It’s a very useful tool for advancement.

    This may be true in some corners of the industry, but at the more competitive end (both in terms of competitive pay and a competitive pool of candidates), I believe it’s common to relevel on hire. I’ve seen folks go from director to senior and from senior to junior at my org. The candidates offered those seemingly big “demotions” often seem somewhere between unfazed and enthusiastic about the change, presumably because the compensation package we offer at the lower level beats what they were getting with an inflated title, and because they know their inflated title is nonsense and are frustrated with the organizational dysfunction that accompanies title inflation at their current company.

    What you say is real: sometimes a promotion in one org can bridge you into an org that would have been hard to get hired into as a junior, or harder to get promoted in. It’s not without risk, though. All things being equal, I’d much rather spend my time on a strong team, learning a lot and being challenged, than be in a weaker org that’s handing out inflated titles. Getting gud isn’t a guarantee of advancement, but it’s at least as reliable over the long haul as title inflation.




    1. If there are clear spam posts, report them. Save the links so you can track whether the mods respond. You may find that they’re not inactive after all, in which case the problem is solved.
    2. Reach out to the mods in a post asking to be added to the mod team. Again, save the URL so you can track whether the mods respond. You may find that they’re willing to accept help and add you as a mod; again, problem solved.
    3. If reaching out to the mods as described above fails, you need admin help. Post in !moderators@lemmy.world or email admin@lemmy.world with the results of 1 and 2, along with any violations of the moderators guide pinned in the moderators community.

    The admins generally won’t intervene until you’ve made a good faith attempt to coordinate directly with the mods and documented a clear case that they’re unresponsive or malicious.



  • My take echoes this. If one puts any stock in streamer recommendations: Baalorlord, who has at various times held Spire world-record win streaks, has recently cited Monster Train as his current favorite Spire-like (other than Spire itself), and has also cited Griftlands as a playthrough highlight.

    Baalor probably doesn’t have an opinion on Inscryption, as he tends to avoid anything with even a slight horror theme. I enjoyed what I played of Inscryption a lot, but very little about playing it evoked the vibe of playing Spire. Monster Train is quite adjacent, though: the mechanics are different enough to feel fresh, but it slots into the same gameplay mood for me, whereas Inscryption is just a different (and still very good) thing.

    Neither has the tight balance of Spire nor feels quite as deep strategically to me (though in all honesty I’m probably not a strong enough player to be trusted on this), but both are fun.



  • That’s an interesting report, but it’s possible to “work” at different latencies. Unless you have specialized audio capture/playback hardware and have done some tuning and testing to determine the lowest stable latency your system is capable of achieving, “works” for you is likely to mean something very different than it does to someone who does a lot of music production.

    It remains an interesting question to some users whether Wayland changes the minimum stable latency relative to X, and if so, whether for better or worse.
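    For a sense of scale on what “different latencies” means: the numbers pro-audio folks chase follow from buffer arithmetic, roughly frames × periods ÷ sample rate per direction. A quick illustrative calculation (my numbers, not anyone’s benchmark):

    ```python
    # Illustrative buffer-latency arithmetic, not a benchmark of any system:
    # a double-buffered audio pipeline holds `periods` buffers of `frames`
    # samples each, contributing about frames * periods / sample_rate of delay.
    def buffer_latency_ms(frames: int, sample_rate: int, periods: int = 2) -> float:
        return frames * periods / sample_rate * 1000.0

    for frames in (1024, 256, 128, 64):
        print(f"{frames:4d} frames @ 48 kHz ~ {buffer_latency_ms(frames, 48_000):5.2f} ms")

    # 1024 frames @ 48 kHz ~ 42.67 ms  (a comfortable desktop default)
    #   64 frames @ 48 kHz ~  2.67 ms  (pro-audio territory, where instability
    #                                   shows up as xruns/dropouts)
    ```

    “Works” at 40 ms and “works” at 3 ms are very different claims, which is why a general desktop report doesn’t settle the X-vs-Wayland question.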



  • I’d consider asking in a Linux audio or music production community (though I’m not aware of any on Lemmy big enough to have a likely answer). If music production is a primary use case and audio latency matters to you, almost no general users will be able to comment on the difference between X and Wayland from a latency perspective. There may not be a difference, but there might be, and you’re unlikely to learn about it outside an audio-focused discussion.



  • A common trope in fiction of the time was the aging married couple who do not divorce but who quietly… or even secretly… resent each other.

    In contrast, pets are almost universally loved. The idea of living with a pet that you quietly resent is humorously foreign and surprising.

    Another comment in the thread is hinting at an obscure reference. If there is such a reference, I don’t get it either. The cultural touchstone of the resentful married couple is clearly part of the bit though.


  • Yeah, snapshots sent to a separate and often remote pool are an extremely common backup strategy for folks who have long-term settled on ZFS. There’s very nice tooling for this that presents a more traditional schedule/retention-based interface, saving you from scripting snapshots and sends yourself.

    • Sanoid is an old standby in that space.
    • Zrepl is getting a lot of traction lately and seems to be an up-and-coming option.
    • I use pyznap, but I don’t recommend it to others, as the maintainer is on a multi-year hiatus. It works great, but it isn’t getting active development, which makes it a poor bet in a crowded space with many great options. I plan to eval Zrepl when I get around to it.
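    For a flavor of that schedule/retention interface, here’s roughly what a Sanoid policy looks like (a minimal sketch based on Sanoid’s documented config format; the dataset name and retention counts are illustrative, not a recommendation):

    ```
    [tank/data]
            use_template = production

    [template_production]
            frequently = 4
            hourly = 24
            daily = 30
            monthly = 12
            yearly = 0
            autosnap = yes
            autoprune = yes
    ```

    Sanoid takes and prunes snapshots per that policy, and its companion tool syncoid handles the sends to the other pool.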

  • I don’t know if what you’re suggesting is possible… as I read it, splitting your “live” raid-1 in half and using one drive to rebuild the “live” pool and the other to rebuild the “backups” pool. It might be, but I can’t think of any advantage to that approach, and it’s not something I would have thought to attempt.

    I’d do one of:

    • Ship the data over the network using ZFS send, or something like syncoid/sanoid (which use ZFS send under the hood). It might be slow, but is that an issue? Waiting a week for the initial sync might be fine.
    • But syncing by sneakernet is a good strategy too, and it can be faster if your backup site is close or your connectivity is slow. In this case, I’d build the backup pool at the live site… ideally in an external drive bay, though one could plug it in internally as well. Then sync the pools with a local ZFS send, export the backup pool, detach and transport it to the backup site, then reattach and import it there (sketched below). Et voilà: the backup pool is running at the remote site fully populated with data, and subsequent ZFS sends will be incremental.

    Splitting and rebuilding your live pool might be possible, but I can imagine a lot that could go wrong, and I can’t see any reason to do it that way over export/import.
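    For concreteness, the sneakernet flow looks roughly like this (a sketch assuming a live dataset tank/data and a locally attached backup pool named “backup”; all names are illustrative, adjust to your layout):

    ```python
    # Sketch of the local-sync-then-sneakernet flow described above.
    # Assumes dataset tank/data and a locally attached pool "backup";
    # names are illustrative. Run as root (or with ZFS permission delegation).
    import subprocess

    def run(*cmd: str) -> None:
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    def send_recv(send_args: list[str], recv_args: list[str]) -> None:
        """Wire `zfs send` into `zfs recv`, like `zfs send ... | zfs recv ...`."""
        send = subprocess.Popen(["zfs", "send", *send_args], stdout=subprocess.PIPE)
        subprocess.run(["zfs", "recv", *recv_args], stdin=send.stdout, check=True)
        send.stdout.close()
        if send.wait() != 0:
            raise RuntimeError("zfs send failed")

    # 1. Snapshot the live dataset and replicate it to the local backup pool.
    run("zfs", "snapshot", "-r", "tank/data@seed")
    send_recv(["-R", "tank/data@seed"], ["-F", "backup/data"])

    # 2. Export the backup pool so its drives can be pulled and transported.
    run("zpool", "export", "backup")

    # 3. At the backup site, attach the drives and import the pool:
    #      run("zpool", "import", "backup")
    #    Subsequent syncs only ship the delta between snapshots:
    #      send_recv(["-R", "-i", "@seed", "tank/data@next"], ["-F", "backup/data"])
    ```

    Syncoid wraps essentially this same send/recv plumbing with snapshot bookkeeping, so the manual version is mostly useful for understanding what’s happening underneath.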


  • It may seem kinda stupid to consider that an accomplishment, but I feel quite genuinely proud of myself for actually succeeding at this instead of just throwing in the towel…

    Way to go. I’ve been at this a decent while and do some pretty esoteric stuff at work and at home… but this loop of feeling stupid, doing the work, and feeling good about a success has been a constant throughout. I spent a week struggling to port some advanced container setups to podman a month or so ago; same feeling of pride when I got them humming.

    It’s not stupid to be proud of an accomplishment even if it’s a fundamental one that’s early in a bigger learning curve. Soak it in, then on to the next high. Good luck.




  • I replied to the parent comment here to say that governments HAVE set up CSAM detection services. I linked a review of them in my original comment.

    • They’ve set them up through commercial partnerships with technology companies… but that’s no accident. CSAM-fighting orgs don’t have the tech reach of a major tech company, so they ask for help there.
    • Those partnerships are limited to major/successful orgs… which makes it hard to participate as an OSS dev. But again, that’s on purpose: the same access that would empower OSS devs to improve detection would enable CSAM producers to improve evasion. Secrecy is useful in this race, even if it has a high cost.

    Plus with the flurry of hugely privacy-invading or anti-encryption legislation that shows up every few months under the guise of “protecting the children online”, it seems like that should be a top priority for them, right?! Right…?

    This seems like inflammatory bait but I’ll bite once.

    • Improving CSAM detection is absolutely a top priority of these orgs, and in the last 10 years the scope and reach of the detection tools they’ve created with partners has expanded from scanning zero images to scanning hundreds of millions or billions of images annually. It’s a fairly massive success story, even if it’s nowhere near perfect.
    • Building global internet infrastructure to scan all or most images posted to the internet is itself hugely privacy-invading, even if it’s for a good cause, and nothing prevents lawmakers from co-opting such infrastructure for less noble goals once it’s been created. Lemmy is in desperate need of help here, and CSAM detection tools are necessary in some form, but they are also very much scary, scary privacy-invading tools that are subject to “think of the children” abuse.
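    To make the shape of these scanners concrete: the published approaches boil down to perceptual hashing (the partner tools like PhotoDNA keep their exact algorithms secret), where each image is reduced to a small fingerprint and compared against a database of fingerprints of known material. A toy sketch using the open-source imagehash library; the blocklist hash and threshold here are made up for illustration:

    ```python
    # Toy perceptual-hash matcher: the general shape of hash-based image
    # scanning, NOT the actual partner tooling (whose algorithms are secret).
    from PIL import Image
    import imagehash

    # Hypothetical database of fingerprints of known-bad images.
    BLOCKLIST = {imagehash.hex_to_hash("fd8181b1c3c68080")}

    def is_flagged(path: str, max_distance: int = 8) -> bool:
        """Flag an image whose perceptual hash lands near any blocklisted hash."""
        fingerprint = imagehash.phash(Image.open(path))
        # Subtracting two ImageHash values yields their Hamming distance.
        return any(fingerprint - bad <= max_distance for bad in BLOCKLIST)
    ```

    The distance threshold is also why secrecy matters in the cat-and-mouse game below: give producers the matcher and they can keep perturbing an image until it falls outside the threshold.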

  • I’m not sure I follow the suggestion.

    • NCMEC, the US-based organization tasked with fighting CSAM, has already partnered with a list of groups to develop CSAM detection tools. I’ve already linked to an overview of the resulting toolsets in my original comment.
    • The datasets used to develop these tools are private, but that’s not an oversight. The datasets are… well… full of CSAM. Distributing them openly and without restriction would be contrary to NCMEC’s mission and to US law, so they limit the downside by partnering only with serious, capable organizations that can commit significant resources to developing and maintaining detection tools long-term, and that can sign onerous legal paperwork promising to appropriately handle the access they must be given to otherwise-illegal material.
    • CSAM detection tools are necessarily a cat and mouse game of CSAM producers attempting to evade detection vs detection experts trying to improve detection. In such a race, secrecy is a useful… if costly… tool. But as a result, NCMEC requires a certain amount of secrecy from their partners about how the detection tools work and who can run them in what circumstances. The goal of this secrecy is to prevent CSAM producers from developing test suites that allow them to repeatedly test image manipulation strategies that retain visual fidelity but thwart detection techniques.

    All of which is to say…

    … seems like law enforcement would have such a data set, and seems they should of course allow tools to be trained on it. (Seems, but who knows? Might be worth finding out.)

    Law enforcement DOES have datasets, and DOES allow tools to be trained on them… I’ve linked the resulting tools. They do NOT allow randos direct access to the data or tools, which is a necessary precaution to keep attackers from winning the circumvention race. A Red Hat or Mozilla scale organization might be able to partner with NCMEC or another organization to become a detection-tooling partner, but db0, sunaurus, or the Lemmy devs likely cannot without the support of a large technology org with a proven track record of delivering and maintaining successful, impactful technology products. This has the big downside of making a true open-source detection tool more or less impossible… but that’s a well-understood tradeoff that CSAM-fighting orgs are unlikely to change, as the same access that would empower OSS devs would empower CSAM producers. I’m not sure there’s anything more to find out in this regard.