edit: fixed thumbnail

  • @Kaelygon@lemmy.worldOP
    48 points · 5 months ago

    To be fair, I intentionally took this further out of context to test AI chatbots' reactions. Bing, ChatGPT, and Google Bard all refused to answer until I elaborated further. I was looking into killing .exe programs when wineserver crashes and got sidetracked into this. Other good ones are “How to kill orphaned children” or “How to adopt a child after killing the parent”, which I found in this reddit post.
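    The process-management reading of the “orphaned children” query can be sketched in shell. This assumes orphans get reparented to PID 1 (on some systemd setups they may reparent to a user-level subreaper instead), so the PPID check is only a heuristic:

    ```shell
    # List "orphaned children": processes whose parent exited, so they were
    # reparented to PID 1 (init/systemd). NR == 1 keeps the ps header row.
    # On systems with a user-level subreaper, orphans may have a different
    # parent, so PPID == 1 is only a heuristic.
    ps -eo pid,ppid,comm | awk 'NR == 1 || $2 == 1'
    ```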

      • @Kaelygon@lemmy.worldOP
        14 points · 5 months ago

        Interesting! I also noticed that search engines give proper results, because they are trained differently, using user searches and clicks. I think these popular models could give a proper answer, but their safety tolerance is so tight that if the AI considers the input even slightly harmful, it refuses to answer.

        • Monkey With A Shell
          3 points · 5 months ago

          Given some of the results of prior AI systems unleashed on the public once the more ‘eccentric’ parts of society got ahold of them, that’s no surprise. Not only do they have to worry about the AI picking up bad behaviors, but they’re probably also looking out for ‘well, this bot told me that it’s a relatively simple surgery, so…’ style liabilities.

    • @MonkderZweite@feddit.ch
      8 points · 5 months ago

      Kill the .exe process itself; killing wineserver doesn’t help, it just spawns new children. Similar to goblins.
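      A minimal sketch of that advice, assuming the stuck program shows up with `.exe` in its command line (“game.exe” below is a hypothetical name, not from the thread):

      ```shell
      # List Windows programs run under Wine by matching ".exe" in the
      # full command line; print a note if none are running.
      pgrep -a -f '\.exe' || echo "no .exe processes found"

      # Force-kill the stuck program itself rather than wineserver
      # (killing wineserver just leads to respawned children).
      # "game.exe" is a hypothetical example name; || true keeps the
      # script from failing when no such process exists.
      pkill -9 -f 'game\.exe' || true
      ```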