Heyho, I’m currently on an RTX 3070 but want to upgrade to an RX 7900 XT.

I see that AMD installers exist, but is it all smooth sailing? How well do AMD cards compare to NVidia in terms of performance?

I’d mainly use oobabooga but would also love to try some other backends.

Anyone here with one of the newer AMD cards that could talk about their experience?

EDIT: To clear things up a little bit: I am on Linux, and I’d say I am quite experienced with it. I know how to handle a card swap and I know where to get my drivers from. I know about the gaming performance difference between NVidia and AMD. Those are the main reasons I want to switch to AMD. Now I just want to hear from someone who ALSO has Linux + AMD what their experience with Oobabooga and Automatic1111 is when using ROCm, for example.

  • RandomLegend [He/Him]@lemmy.dbzer0.com (OP) · 1 year ago

    I’ve had AMD cards my whole life and only switched to NVidia 3 years ago, when that whole local LLM and image-AI thing wasn’t even on the table… Now I am just pissed that NVidia gives us so little VRAM to play with unless you pay the price of a used car -.-

    AMD drivers ship with the kernel, so yeah, I won’t be doing any driver downloads for AMD on Linux ^^
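    (For anyone following along: a minimal sketch, assuming a typical mainline-kernel setup, of how you could confirm the in-kernel amdgpu driver is actually loaded. The sysfs path is standard, but distros can differ; `check_amdgpu` is just an illustrative helper name.)

```shell
#!/bin/sh
# Sketch: check whether the in-kernel amdgpu driver is loaded.
# /sys/module/<name> exists for every loaded (or built-in) kernel module.

check_amdgpu() {
    if [ -d /sys/module/amdgpu ]; then
        echo "amdgpu kernel module is loaded"
    else
        echo "amdgpu kernel module not found"
    fi
}

check_amdgpu
```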

    Oobabooga and Automatic1111 are my main questions - I could actually live with a performance downgrade if I can then at least run the bigger models thanks to having way more VRAM. I can’t even run 17B models on my current 8GB VRAM card… and I can’t make 1024x1024 images in Auto1111 without getting issues either. If I can do those things, just a bit slower, that’s fine for me ^^

    • micheal65536@lemmy.micheal65536.duckdns.org · 1 year ago

      What sort of issues are you getting when trying to generate 1024x1024 images in Stable Diffusion? I’ve generated up to 1536x1024 without issue on a 1070 (although it takes a few minutes) and could probably go even larger - this was in img2img mode, which uses more VRAM as well (although at that size you usually won’t get good results with txt2img anyway). What model are you using?

      • RandomLegend [He/Him]@lemmy.dbzer0.com (OP) · 1 year ago

        That’s outside the scope of this post and not its goal.

        I don’t want to start troubleshooting my NVidia Stable Diffusion setup in an LLM post about AMD :D Thanks for trying to help, but this isn’t the right place for that.

        • micheal65536@lemmy.micheal65536.duckdns.org · 1 year ago

          Fair enough, but if your baseline for comparison is wrong, then you can’t make good assessments of the capabilities of different GPUs. And it’s possible that you don’t actually need a new GPU/more VRAM anyway, if your goal is to generate 1024x1024 images in Stable Diffusion and run a 13B LLM - both of which I can do with 8 GB of VRAM.

          • RandomLegend [He/Him]@lemmy.dbzer0.com (OP) · 1 year ago

            This is correct, yes. But I want a new GPU because I want to get away from NVidia…

            I CAN use 13B models and I can create 1024x1024 images, but not without issues: I have to make sure nothing else uses VRAM, and I still run out of memory quite often.

            I want to make it more stable, and open the door to using bigger models or making bigger images.

            • micheal65536@lemmy.micheal65536.duckdns.org · 1 year ago

              Yes, that makes more sense. I was concerned initially that you were looking to buy a new GPU with more VRAM for the sole reason of being unable to do something that you should already be able to do, that this would be an unnecessary spend of money and/or not actually fix the problem, and that you would be somewhat mad at yourself if you found out afterwards that “oh, I just needed to change this setting”.

              • RandomLegend [He/Him]@lemmy.dbzer0.com (OP) · 1 year ago

                Thanks for the concern, but no worries - I did my fair share of optimization on my config and I believe I got everything out of it… I will 100% switch to AMD, so my question basically just aims at: can I sell my 3070, or do I have to keep it and put it into a “server” on which I can run Stable Diffusion and oobabooga because AMD is still too wonky for that…

                That’s all. My decision doesn’t depend on whether this AI stuff works; it just speeds things up if AMD can run it, because then I can sell my old card and get the money sooner.

    • EddyBot@feddit.de · 1 year ago

      I’ve only ever used 7B large language models on my RX 6950 XT, but PyTorch had (or may still have) some nasty AMD VRAM bugs that kept it from fully utilizing all of my VRAM (more like only a quarter of it).

      It seems the sad truth is that high-performance inference/training of models just isn’t good on AMD cards as of now.
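      (If anyone wants to check whether they’re hitting the same thing, here is a small sketch of how you could see how much VRAM a ROCm build of PyTorch actually reports. ROCm builds expose AMD GPUs through the regular `torch.cuda` API; `report_vram` is just an illustrative helper name, not part of any library.)

```python
def report_vram() -> str:
    """Return a short summary of the VRAM PyTorch can actually see."""
    try:
        import torch
    except ImportError:
        return "PyTorch is not installed"
    # On ROCm builds of PyTorch, AMD GPUs are addressed via the
    # torch.cuda API, so is_available() is True there as well.
    if not torch.cuda.is_available():
        return "No GPU visible to PyTorch"
    props = torch.cuda.get_device_properties(0)
    total_gib = props.total_memory / 2**30
    return f"{props.name}: {total_gib:.1f} GiB visible to PyTorch"

if __name__ == "__main__":
    print(report_vram())
```

      Comparing that number against the card’s spec-sheet VRAM would show whether PyTorch is only seeing a fraction of it.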