• 1 Post
  • 357 Comments
Joined 2 years ago
Cake day: July 7th, 2023


  • This happened in Sweden as well, and it has worked quite well for SD, sadly. The youth group that was dropped had some of the more openly radical members in it, which is why SD dropped it. They started a new party (AfS, Alternativ för Sverige) with somewhat more radical policies. SD could then use AfS to gauge the acceptance of more radical policies without having to lose face. “Can we start saying replacement theory stuff yet? How did that work out for AfS?”, “Can we go for mass deportations now? AfS has laid the groundwork there already”.





  • I’ve been thinking about a dreaming-like algorithm for neural networks (NNs) that I’ve wanted to try.

    When training an NN, you have a large set of inputs and corresponding desired outputs. You take random subsets of this data and, for each subset, adjust the NN so its outputs correspond more closely to the desired outputs. You do this over and over, and eventually your NN’s outputs are close to the desired ones (hopefully). This training takes a long time and is only done this one initial time. (This is a very simplified picture of the training.)
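
    To make that simplified picture concrete, here is a minimal sketch of such a training loop, assuming a PyTorch-style setup; the network shape, data and hyperparameters are made-up placeholders, not anything from the actual idea:

        # Minimal sketch of the initial mini-batch training described above.
        # Assumed PyTorch-style setup; net, inputs and targets are placeholders.
        import torch
        import torch.nn as nn

        net = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 4))
        inputs = torch.randn(1000, 8)    # the large set of inputs
        targets = torch.randn(1000, 4)   # the corresponding desired outputs

        opt = torch.optim.SGD(net.parameters(), lr=1e-2)
        loss_fn = nn.MSELoss()
        for step in range(5000):
            idx = torch.randint(0, len(inputs), (32,))   # random subset
            loss = loss_fn(net(inputs[idx]), targets[idx])
            opt.zero_grad()
            loss.backward()
            opt.step()                                   # nudge the NN towards the outputs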

    Now for the dreaming. When the NN is “awake” it accumulates new input/output entries. We want to adjust the NN to also incorporate these entries. But if we train on only these, we will lose some of the information the NN learned in the initial training. We might want to train on the original data plus the new data, but that is a lot of data, so no. Let’s assume we no longer even have the original data. We want to train on what we know and on what we have accumulated during the waking time. Here comes the dreaming:

    1. Get an “orthogonal” set of input/output pairs representing what the NN already knows (e.g. if the network outputs vectors, take some random input and save its output vector. Use a global optimization algorithm to find the next input such that its output vector is orthogonal to the first. Do this until you have a spanning set).
    2. Repeat point 1 until you have maybe one such set per newly accumulated input/output entry, or however many appear not to move you too far from the optimization extremum your NN is in – this set should still be a lot smaller than the original training set.
    3. Fine-tune your NN on the accumulated data plus this generated data. The generated data should act as an anchor, not allowing the NN to deviate too much from the optimization extremum, while the new data also gets incorporated.

    I see this as a form of dreaming since we have a wake and a sleep portion. During waking we accumulate new experiences. During sleeping we incorporate these experiences into what we already know by “dreaming”, that is, running small training sessions on our NN (a rough code sketch follows below).
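
    Here is a rough sketch of the dream step, again in PyTorch and reusing the placeholder net from the sketch above. For simplicity it skips the orthogonalization search from point 1 and just feeds random inputs to the frozen net, recording the net’s own outputs as the anchor set; a fuller version would swap generate_anchor_set out for the global-optimization search described above. All names are placeholders.

        def generate_anchor_set(net, n_anchors, in_dim):
            # Record the current net's own outputs on random inputs.
            # These pairs stand in for "what the NN already knows"
            # (point 1, minus the orthogonality search).
            net.eval()
            with torch.no_grad():
                anchor_x = torch.randn(n_anchors, in_dim)
                anchor_y = net(anchor_x)
            return anchor_x, anchor_y

        def dream(net, new_x, new_y, n_anchors=64, steps=200, lr=1e-3):
            # Fine-tune on the newly accumulated data plus the generated
            # anchors (point 3). The anchors pull the net back towards its
            # old behaviour while the new data gets incorporated.
            anchor_x, anchor_y = generate_anchor_set(net, n_anchors, new_x.shape[1])
            x = torch.cat([new_x, anchor_x])
            y = torch.cat([new_y, anchor_y])
            opt = torch.optim.Adam(net.parameters(), lr=lr)
            loss_fn = nn.MSELoss()
            net.train()
            for _ in range(steps):
                idx = torch.randint(0, len(x), (32,))
                loss = loss_fn(net(x[idx]), y[idx])
                opt.zero_grad()
                loss.backward()
                opt.step()
            return net

        # "Waking": accumulate new entries, then "sleep" and dream them in.
        new_x, new_y = torch.randn(50, 8), torch.randn(50, 4)
        net = dream(net, new_x, new_y)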






  • I guess the reasons differ from person to person. For me, I started using Ubuntu in 2005. When I was learning Linux, it was just not complete enough: you’d install another DE/WM to try it out, and stuff started to break. So I switched pretty quickly. I tried to return every now and then, because it had an environment of newer packages which I wanted/needed, but it was never worth it; this or that always broke when you tried to do something peculiar. I still use Ubuntu every now and then, but it is mostly no good. The issue is really just snap. Snap Firefox on the RPi, which is the default, is just trash and unusable. It is crazy that they made it the default. I have also had servers where snap services just eat too much CPU and the first thing I have to do is purge them. So, in summary, I don’t really trust them to provide a reliable system, and I am sceptical of their direction.








  • The arch wiki is a good substitute, but the gentoo wiki when it was still around and at its peak was amazing.

    But I agree… Gentoo is not quite keeping up with a lot of the details. When experimenting with refind, dracut and efistubs, I felt like I was in the dark a lot of the time. I still ended up making very few mistakes, because the distro is very good at handling special cases even when all the details are not explained. Still my favourite distro.