Ilya tweet:

After almost a decade, I have made the decision to leave OpenAI. The company’s trajectory has been nothing short of miraculous, and I’m confident that OpenAI will build AGI that is both safe and beneficial under the leadership of @sama, @gdb, @miramurati and now, under the excellent research leadership of @merettm. It was an honor and a privilege to have worked together, and I will miss everyone dearly. So long, and thanks for everything. I am excited for what comes next — a project that is very personally meaningful to me about which I will share details in due time.

Jan tweet:

I resigned

this comes precisely 6mo after Sam Altman’s job at OpenAI was rescued by the Paperclip Maximiser. NYT: “Dr. Sutskever remained an OpenAI employee, but he never returned to work.” lol

orange site discussion: https://news.ycombinator.com/item?id=40361128

lesswrong discussion: https://www.lesswrong.com/posts/JSWF2ZLt6YahyAauE/ilya-sutskever-and-jan-leike-resign-from-openai

  • froztbyte@awful.systems · 6 months ago

    Reasons are unclear (as usual when safety people leave OpenAI).

    no, you fucking dipshit. the reason is crystal clear. he was on the team that attempted to oust sammyboi, and was on borrowed time from the moment it failed

    jfc how are these people this clueless

    • froztbyte@awful.systems · 6 months ago (edited)

      Cade Metz was the NYT journalist who doxxed Scott Alexander

      “bro if you just keep saying it it’ll become true. trust me bro I’ve done it hundreds of times bro”

      • Soyweiser@awful.systems · 6 months ago

        Also, all the weirdness re Metz comes from not understanding how media like that works (a thing which, iirc, the Sequences of all things warn about) and N=1. All these advanced, complex ideas rushing around in their minds, all that talk of being aware of your own biases, Bayes, bla bla, defeated by an N=1 perceived anti-grey-tribe event.

        Also lol how much they fall back into talking like robots when talking about this event:

        Enforcing social norms to prevent scapegoating also destroys information that is valuable for accurate credit assignment and causally modelling reality.

  • Soyweiser@awful.systems · 6 months ago

    The HN thread quickly devolves into ‘tech is great’ and ‘this is the next revolution’, and I’m again amazed at our collective inability to learn. Even if LLMs do great things, redesigning all your workflows around a company which could and will change its pricing and terms of service on a whim is a bad idea. Even more so in an era when near-zero-interest money lending is gone. Good luck when the LLM shit enshittifies so much that you need to divest from it and all your programmers can’t code without it (and Stack Exchange is also gone).

  • Deborah@hachyderm.io · 6 months ago (edited)

    JFC, I mostly avoid AGI boosters, so I’m always aghast when I’m reminded of what they believe. An HN commenter says (https://news.ycombinator.com/item?id=40365850) AGI will bring:

    Solve CO2 Levels
    End sickness/death
    Enhance cognition by integrating with willing minds.
    Safe and efficient interplanetary travel.
    End of violent conflicts
    Fair yet liberal resource allocation (if still needed), “from scarcity to abundance”

    • Mii@awful.systems · 6 months ago

      And we can do all of that by just scaling up autocomplete, which is basically already AGI (if you squint).

      How come the goal posts for AGI are always the best of what people can do?

      I can’t diagnose anyone, yet I have GI.

      But it shouldn’t surprise me that their benchmark of intelligence is basically that something can put together somewhat coherent-sounding technobabble while being unable to do things my five-year-old kindergartner can.

      Yup, basically AGI.