Big tech boss tells delegates at Davos that broader global use is essential if technology is to deliver lasting growth

  • Kühlschrank@lemmy.world · 14 hours ago

    Is that true? I haven’t heard MS say anything about enabling local LLMs. Genuinely curious and would like to know more.

    • Iced Raktajino@startrek.website · 14 hours ago

      Isn’t that the whole shtick of the AI PCs no one wanted? Like, isn’t there some kind of non-GPU co-processor that runs the local models more efficiently than the CPU?

      I don’t really want local LLMs but I won’t begrudge those who do. Still, I wouldn’t trust any proprietary system’s local LLMs to not feed back personal info for “product improvement” (which for AI is your data to train on).

    • tal@lemmy.today · 14 hours ago

      That’s why they have the “Copilot+ PC” hardware requirement: they’re using an NPU on the local machine.

      *searches*

      https://learn.microsoft.com/en-us/windows/ai/npu-devices/

      Copilot+ PCs are a new class of Windows 11 hardware powered by a high-performance Neural Processing Unit (NPU) — a specialized computer chip for AI-intensive processes like real-time translations and image generation — that can perform more than 40 trillion operations per second (TOPS).
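
      For anyone wondering what “using the NPU” looks like from software, here’s a minimal sketch of targeting it through ONNX Runtime execution providers. This is a hedged illustration, not how Windows/Copilot actually wires it up: it assumes the onnxruntime package (plus an NPU-capable build such as onnxruntime-qnn on Snapdragon machines or onnxruntime-directml), and "model.onnx" is a hypothetical model file.

      ```python
      # Sketch: ask ONNX Runtime which execution providers exist on this
      # machine, prefer an NPU-backed one, and fall back to CPU.
      # Assumes onnxruntime (or onnxruntime-qnn / onnxruntime-directml)
      # is installed; "model.onnx" is a placeholder, not a real file.
      import onnxruntime as ort

      available = ort.get_available_providers()
      print("Available providers:", available)

      # QNN targets Snapdragon NPUs; DirectML can reach other accelerators.
      preferred = ["QNNExecutionProvider", "DmlExecutionProvider", "CPUExecutionProvider"]
      providers = [p for p in preferred if p in available]

      session = ort.InferenceSession("model.onnx", providers=providers)
      print("Session is running on:", session.get_providers())
      ```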

      It’s not… terribly beefy. Like, I have a Framework Desktop with an APU and 128GB of memory that schlorps down 120W or something, and it substantially outdoes what you’re going to do on a laptop NPU. And that in turn is computationally weaker than something like the big Nvidia hardware going into datacenters.

      But it is doing local computation.
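
      For rough scale, here’s the back-of-envelope math behind that comparison. The 40 TOPS floor comes from the Microsoft requirement quoted above; the APU and datacenter-GPU figures are ballpark public numbers treated as assumptions, not exact specs.

      ```python
      # Back-of-envelope scale comparison; order-of-magnitude only.
      # 40 TOPS is Microsoft's Copilot+ NPU floor (quoted above).
      # The other two figures are rough assumptions: ~126 TOPS is in the
      # neighborhood AMD claims for a Strix Halo-class APU platform, and
      # ~2000 TOPS is the rough INT8 throughput of a datacenter GPU.
      npu_tops = 40
      apu_tops = 126
      datacenter_gpu_tops = 2000

      print(f"APU vs Copilot+ NPU floor:            ~{apu_tops / npu_tops:.1f}x")
      print(f"Datacenter GPU vs Copilot+ NPU floor: ~{datacenter_gpu_tops / npu_tops:.0f}x")
      ```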