• supericy@alien.top · 2 points · 1 year ago

    It’s clickbait and doesn’t have any real impact. It just affects the amount of time each process has before being preempted for multitasking. You wouldn’t want that number to keep scaling up even on a machine with a large number of cores.

    The Linux kernel supports using more than 8 cores.
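
    For anyone curious what that scaling looks like, here’s a standalone sketch of the behavior being described, as I understand it from reading around - simplified, not the actual kernel code, and the 6 ms base latency plus the log2-of-CPUs factor capped at 8 are my assumptions about the CFS-era defaults:

        /*
         * Standalone sketch of the scaling being described above; simplified,
         * not kernel code. The 6 ms base latency and the log2 scaling capped
         * at 8 CPUs are assumptions about the CFS-era defaults.
         */
        #include <stdio.h>

        /* assumed default target latency, in nanoseconds (6 ms) */
        #define BASE_LATENCY_NS 6000000.0

        /* integer log2, e.g. ilog2(8) == 3 */
        static unsigned int ilog2(unsigned int x)
        {
            unsigned int log = 0;

            while (x >>= 1)
                log++;
            return log;
        }

        /* scaling factor: CPU count is capped at 8, then scaled logarithmically */
        static unsigned int scaling_factor(unsigned int online_cpus)
        {
            unsigned int cpus = online_cpus < 8 ? online_cpus : 8;

            return 1 + ilog2(cpus);
        }

        int main(void)
        {
            unsigned int counts[] = { 1, 2, 4, 8, 16, 64, 128 };

            for (unsigned int i = 0; i < sizeof(counts) / sizeof(counts[0]); i++) {
                unsigned int f = scaling_factor(counts[i]);

                printf("%3u CPUs -> factor %u -> target latency %.0f ms\n",
                       counts[i], f, f * BASE_LATENCY_NS / 1e6);
            }
            return 0;
        }

    Past 8 CPUs the factor stops growing, so the slice lengths stay the same whether you have 8 cores or 128 - which is the whole “limit” the article is about.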

    (*not an expert and this is just from reading around a bit)

    • Ordinary-Mistake-279@alien.top (OP) · 1 point · 1 year ago

      Yes, it shows you the cores, but that’s not the question. Does the scheduler actually use them all, or is it bouncing tasks between cores, which adds latency/overhead from the switching and from the L1/L2 caches in between? I’ve been running Linux for more than 5 years now, but it really would be a game changer if it turned out we’re not using the full potential of all the cores on our rigs … I really want to believe it, because that would have a great impact on multicore systems.

      • Plaidomatic@alien.top · 1 point · 1 year ago

        Did you read the article? It says it just caps the maximum runtime per thread, which they assume is suboptimal, but they don’t do any real analysis, provide any data, or anything. Could the scaling factor be changed and allow threads to run longer? Sure. Would it make a difference? Who knows, because this guy made a claim and provided no evidence.

        Will Linux use more than 8 cores? Of course it will. I’ve got 40 in my home server and I can saturate all of them with no issue. There are supercomputers running a single system image across thousands of cores on Linux without a problem.
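
        For what it’s worth, the relevant tunables can be read (and, as root, changed) on a running system, so anyone who believes longer slices would help can just try it. A rough probe - the paths are assumptions and depend on kernel version, with older kernels exposing them under /proc/sys/kernel/ and newer ones under /sys/kernel/debug/sched/:

            #include <stdio.h>

            /* Print one scheduler tunable, if we can open it on this kernel. */
            static void show(const char *path)
            {
                char buf[64];
                FILE *f = fopen(path, "r");

                if (!f) {
                    printf("%-45s (could not open - wrong kernel version or needs root)\n", path);
                    return;
                }
                if (fgets(buf, sizeof(buf), f))
                    printf("%-45s %s", path, buf);
                fclose(f);
            }

            int main(void)
            {
                /* assumed older sysctl locations */
                show("/proc/sys/kernel/sched_latency_ns");
                show("/proc/sys/kernel/sched_min_granularity_ns");
                /* assumed newer debugfs locations (usually root-only) */
                show("/sys/kernel/debug/sched/latency_ns");
                show("/sys/kernel/debug/sched/min_granularity_ns");
                return 0;
            }

        Whether bumping those values actually helps any real workload is exactly the data the article doesn’t provide.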

  • jayaram13@alien.top · 2 points · 1 year ago

    It’s a complete misunderstanding of what’s going on. The way the scheduler switches between applications to make sure they all get an equal share of CPU time was tuned with 8 cores in mind. All cores were always used.

  • BobTheSCV@alien.top · 1 point · 1 year ago

    It’s pretty sensible behavior. Things that are desirable on desktop-like multicore systems aren’t always desirable on manycore systems.

    The ability to thrash on 128 cores is probably not something anyone is missing.

  • anothercorgi@alien.top · 1 point · 1 year ago

    If this had been a real issue, it would have been detected quite a while ago. As far as I can tell, this limit only comes into play with very quick-running programs: say it takes only 5 ms to fork and run the program, then one may run into this 8-core issue.

    This would basically slow down poorly written shell scripts that constantly run subprograms - and only if those subprograms ran in parallel (which they don’t, unless multiple instances of the script are running at the same time). It also means fork bombs will create processes more slowly than expected on machines with more than 8 processors. I highly doubt anyone would worry about either case.
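
    If someone did want to poke at that scenario, a crude probe is to fork/exec a trivial command in a tight loop and time it on boxes with different core counts. A minimal sketch - the choice of "true" as the child command and the iteration count are arbitrary:

        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/wait.h>
        #include <time.h>
        #include <unistd.h>

        int main(void)
        {
            const int iterations = 1000;
            struct timespec start, end;

            clock_gettime(CLOCK_MONOTONIC, &start);

            for (int i = 0; i < iterations; i++) {
                pid_t pid = fork();

                if (pid == 0) {
                    /* child: run a near-instant program, like a shell script
                     * spawning subprograms in a loop would */
                    execlp("true", "true", (char *)NULL);
                    _exit(127);
                } else if (pid > 0) {
                    waitpid(pid, NULL, 0);
                } else {
                    perror("fork");
                    return 1;
                }
            }

            clock_gettime(CLOCK_MONOTONIC, &end);

            double elapsed = (end.tv_sec - start.tv_sec)
                           + (end.tv_nsec - start.tv_nsec) / 1e9;
            printf("%d fork+exec cycles in %.3f s (%.3f ms each)\n",
                   iterations, elapsed, elapsed * 1000.0 / iterations);
            return 0;
        }

    My guess is the fork/exec overhead itself dwarfs anything the slice scaling does here, but measuring beats speculating.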