• katy ✨@lemmy.blahaj.zone · +60/-1 · 6 days ago

    accessibility is honestly the first good use of ai. i hope they can find a way to make them better than youtube’s automatic captions though.

    • ℍ𝕂-𝟞𝟝@sopuli.xyz · +18 · 6 days ago

      There are other good uses of AI. Medicine. Genetics. Research, even into humanities like history.

      The problem was always the grifters who insist on calling any program more complicated than adding two numbers "AI", shoving random technologies into random products just to further their cancerous sales shell game.

      The problem is mostly CEOs and salespeople thinking they are software engineers and scientists.

    • jol@discuss.tchncs.de · +14 · 6 days ago

      The app Be My Eyes pivoted from crowd-sourced assistance for the blind to using AI, and it's just fantastic. AI is truly helping lots of people in certain applications.

    • hector@sh.itjust.works · +9 · 6 days ago

      While LLMs are truly impressive feats of engineering, it’s really annoying to witness the tech hype train once again.

    • yonder@sh.itjust.works · +12 · 6 days ago

      I know Jeff Geerling on YouTube uses OpenAI's Whisper to generate captions for his videos instead of relying on YouTube's. Apparently they're much better than YouTube's, nearly flawless. My guess is that Google wants to minimize the compute it spends processing videos to save money.
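
      (For context on tools like this: Whisper's Python package returns timestamped segments. Below is a hedged sketch, not anything from this thread, of turning those segments into an SRT file; the helper names are made up, and the commented-out transcription call assumes the open-source openai-whisper package.)

```python
# Sketch: convert Whisper-style segments into SRT subtitle text.
# Assumes segments shaped like Whisper's output: dicts with
# 'start'/'end' in seconds and 'text'.

def to_timestamp(seconds: float) -> str:
    """Format seconds as an SRT timestamp, HH:MM:SS,mmm."""
    ms = round(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def segments_to_srt(segments) -> str:
    """Number each segment and emit SRT cues separated by blank lines."""
    cues = []
    for i, seg in enumerate(segments, start=1):
        start, end = to_timestamp(seg["start"]), to_timestamp(seg["end"])
        cues.append(f"{i}\n{start} --> {end}\n{seg['text'].strip()}\n")
    return "\n".join(cues)

# The transcription step itself would look roughly like:
#   import whisper  # pip install openai-whisper
#   result = whisper.load_model("small").transcribe("video.mp4")
#   open("video.srt", "w").write(segments_to_srt(result["segments"]))
```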

      • Zetta@mander.xyz · +8 · edited · 6 days ago

        Spoiler: they will! I use the FUTO keyboard on Android; its speech-to-text uses an AI model, and it's amazing how well it works. The model it uses is absolutely tiny compared to what a PC could run, so VLC's implementation will likely be even better.

        • Landless2029@lemmy.world · +4 · 6 days ago

          I also use FUTO and it's great. But subtitles in a video are quite different from someone speaking clearly into a microphone. Even loud music will mess with a good speech-to-text engine, let alone [Explosions] and [Fighting Noises]. At the least, I hope it picks up speech well.

  • pastaPersona@lemmy.world · +52/-8 · 7 days ago

    I know AI has some PR issues at the moment but I can’t see how this could possibly be interpreted as a net negative here.

    In most cases, people will go for manually written subtitles over autogenerated ones, so the use case here would mostly be videos where no better, human-created subs are available.

    I just can’t see AI / autogenerated subtitles of any kind taking jobs from humans because they will always be worse/less accurate in some way.

    • x00z@lemmy.world · +22/-1 · 7 days ago

      Autogenerated subtitles are pretty awesome for subtitle editors I’d imagine.

        • glimse@lemmy.world · +8 · 7 days ago

          We started doing subtitling near the end of my time as an editor and I had to create the initial English ones (god forbid we give the translation company another couple hundred bucks to do it) and yeah…the timestamps are the hardest part.

          I can type at 120 wpm, but that's not very helpful when you can only write a sentence at a time.

          • Kazumara@discuss.tchncs.de · +3 · edited · 7 days ago

            and yeah…the timestamps are the hardest part.

            So, if you can tell us, how did the process work?

            Do you run the video and type the subtitles in some program at the same time, with it recording the time at which you typed each line so you can adjust the timing afterwards? Or did you manually note down timestamps from the start?

            • glimse@lemmy.world · +4 · 7 days ago

              We were an Adobe house, so I did it inside Premiere. I can't remember if it was built in or a plugin, but there were two ways depending on whether the shoot was scripted or ad-libbed. If it was scripted, I'd import a txt file into Premiere and break it apart as needed with markers on the timeline. It was tedious, but by far better than the alternative: manually typing it at each marker.

              I initially tried making the markers all first but I kept running into issues with the timing. Subtitles have both a beginning and an end timestamp and I often wouldn’t leave enough room to be able to actually read it.

              This was over a decade ago; I'll bet it's gotten easier. I know Premiere has a transcription feature that's pretty good.
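
              (The "enough room to read it" problem is usually budgeted in characters per second. A hypothetical helper follows; the 17 cps default is a common subtitling guideline, not something from this thread.)

```python
# Sketch: minimum on-screen duration for a subtitle cue, using a
# characters-per-second reading budget (17 cps is a common guideline).

def min_duration(text: str, cps: float = 17.0, floor: float = 1.0) -> float:
    """Seconds the cue should stay visible, never less than `floor`."""
    return max(floor, len(text) / cps)

def fits(start: float, end: float, text: str) -> bool:
    """True if the cue's time window leaves enough room to read it."""
    return (end - start) >= min_duration(text)
```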

              • Kazumara@discuss.tchncs.de · +4 · edited · 7 days ago

                That's interesting, thank you.

                I only did it once for a school project involving translation of a film scene (also over a decade ago) but we just manually wrote an SRT file, that was miserable 😄
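
                (For anyone who hasn't hand-written one: an SRT file is just numbered cues, each with a start/end timestamp pair and the text, separated by blank lines.)

```
1
00:00:01,000 --> 00:00:03,500
First line of dialogue.

2
00:00:04,000 --> 00:00:06,200
A second cue, shown a bit later.
```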

          • DerArzt@lemmy.world · +3 · 7 days ago

            Is there a cross-section of people who do live subtitles and people who have experience as stenographers? Asking because I'd imagine a stenographic keyboard would let them keep up with what's being said.

    • ArgentRaven@lemmy.world · +15/-2 · 7 days ago

      Yeah this is exactly what we should want from AI. Filling in an immediate need, but also recognizing it won’t be as good as a pro translation.

    • Kilgore Trout@feddit.it · +7/-1 · 7 days ago

      I can’t see how this could possibly be interpreted as a net negative here

      Not judging this as bad or good, but if the subtitles are generated offline, it will for sure bloat the size of the program.

  • Alice@beehaw.org · +33/-1 · 7 days ago

    My experience with generated subtitles is that they’re awful. Hopefully these are better, but I wish human beings with brains would make them.

    • lime!@feddit.nu · +26 · 7 days ago

      subtitling by hand takes sooooo fucking long :( people who do it really are heroes. i did community subs on youtube when that was a thing and subtitling + timing a 20 minute video took me six or seven hours, even with tools that suggested text and helped align it to sound. your brain instantly notices something is off if the subs are unaligned.

      • Alice@beehaw.org · +15 · 7 days ago

        Oh shit, I knew it was tedious but it sounds like I seriously underestimated how long it takes. Good to know, and thanks for all you’ve done.

        Sounds to me like big YouTubers should pay subtitlers, but that’s still a small fraction of audio/video content in existence. So yeah, I guess a better wish would be for the tech to improve. Hopefully it’s on the right track.

        • lime!@feddit.nu · +5 · 7 days ago

          i just did it for one video :P it really is tedious and thankless though so it would be a great application of ml.

      • Nate@programming.dev · +5 · 7 days ago

        I did this for a couple of videos too. It's actually still a thing; it was just so time-consuming, for no pay, that almost nobody did it, so creators don't check the box to allow people to contribute subs.

      • onnekas@sopuli.xyz · +2 · 6 days ago

        You can use tools like Whishper to pre-generate the subtitles. You'll get pretty accurate subtitles at the right times. Then you can edit the errors and maybe adjust the timings.

        But I guess this workflow will work with VLC in the future as well.
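
        ("Adjust the timings" is often just a constant offset. A hypothetical helper, not part of Whishper or VLC, that shifts every timestamp in an SRT file:)

```python
import re

# Sketch: shift every SRT timestamp (HH:MM:SS,mmm) by a fixed offset,
# clamping at zero so cues never go negative.

_TS = re.compile(r"(\d{2}):(\d{2}):(\d{2}),(\d{3})")

def shift_srt(srt_text: str, offset_seconds: float) -> str:
    def bump(match):
        h, m, s, ms = (int(g) for g in match.groups())
        total = (h * 3600 + m * 60 + s) * 1000 + ms
        total = max(0, total + round(offset_seconds * 1000))
        h, rem = divmod(total, 3_600_000)
        m, rem = divmod(rem, 60_000)
        s, ms = divmod(rem, 1000)
        return f"{h:02}:{m:02}:{s:02},{ms:03}"
    return _TS.sub(bump, srt_text)
```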

      • boomzilla@programming.dev · +1 · 6 days ago

        Yup. That should always be paid work; it takes forever. I tried to subtitle the first Always Sunny episode and got very nice results, especially when they talked over one another. But getting the timing perfect, so one line is hidden right as the next appears, was tedious af. All in all, those 25 minutes cost me about the same number of hours. It's just not feasible.

    • OsrsNeedsF2P@lemmy.ml · +29 · 7 days ago

      Iirc this is because of how they’ve optimized the file reading process; it genuinely might be more work to add efficient frame-by-frame backwards seeking than this AI subtitle feature.

      That said, jfc, please just add backwards seeking. It is so painful to use VLC for reviewing footage. I don't care how "inefficient" it is; my computer can handle any operation on a 100 MB file.

      • Feathercrown@lemmy.world · +11 · 7 days ago

        If you have time to read the issue thread about it, it’s infuriating. There are multiple viable suggestions that are dismissed because they don’t work in certain edge cases where it would be impossible for any method at all to work, and which they could simply fail gracefully for.

        • stevestevesteve@lemmy.world · +7 · 7 days ago

          That kind of attitude in development drives me absolutely insane. See also: support for DHCPv6 in Android. There’s a thread that has been raging for I think over a decade now

  • mlg@lemmy.world · +9/-1 · 6 days ago

    Still no live audio encoding without the CLI (unless you stream to yourself), so no plug-and-play with Dolby/DTS.

    Encoding params still max out at 512 kbps on every codec without the CLI.

    Can't switch audio backends live (minor inconvenience, tbh).

    Creates a barely usable, non-standard M3A format when saving a playlist.

    Those are about my only complaints with VLC. The default subtitles are solid, especially with multiple text boxes for signs. Playback has been solid for ages. It handles lots of tracks well, and it doesn't just wrap ffmpeg, so it's very useful for testing or debugging your setup against mplayer or mpv.

  • moosetwin@lemmy.dbzer0.com · +19/-5 · 7 days ago

    I don't mind the idea, but I'd be curious where the training data comes from. You can't just train it on users' (unsubtitled) videos, because you need subtitles to know whether the output is right or wrong. I checked their Twitter post, but it didn't seem to help.

      • nova_ad_vitum@lemmy.ca · +12 · 7 days ago

        They may have to give it some special training to be able to understand audio mixed by the Chris Nolan school of wtf are they saying.

        • MDCCCLV@lemmy.ca · +3 · 7 days ago

          No, if you have a center track you can just use that. Volume isn't a problem for a computer, since it isn't listening through the physical speakers.
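
          (Concretely: ffmpeg's pan audio filter can fold a 5.1 mix down to just the front-center channel before handing it to speech-to-text. The file names and wrapper function below are made up for illustration; the pan=mono|c0=FC filter string itself is real ffmpeg syntax.)

```python
import subprocess

# Sketch: build an ffmpeg command that extracts the front-center
# (dialogue) channel of a 5.1 mix into a mono file.

def center_channel_cmd(src: str, dst: str) -> list:
    return [
        "ffmpeg", "-i", src,
        # pan to a mono stream fed only by the front-center (FC) channel
        "-af", "pan=mono|c0=FC",
        dst,
    ]

# Running it (requires ffmpeg on PATH):
#   subprocess.run(center_channel_cmd("movie.mkv", "dialogue.wav"), check=True)
```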

          • leftytighty@slrpnk.net · +1 · 6 days ago

            I took the other comment as a joke but this is accurate and interesting additional information!

    • Warl0k3@lemmy.world · +9 · 7 days ago

      I hope they’re using Open Subtitles, or one of the many academic Speech To Text datasets that exist.

      • GolfNovemberUniform@infosec.pub · +19 · 7 days ago

        VLC doesn’t innovate. And it’s basically dead on Linux.

        Afaik that's false. It had a major update recently, and it's installed on a lot of Linux systems.

        • BB_C@programming.dev · +2/-15 · 7 days ago

          installed on a lot of Linux systems.

          Fake info. It would be just as fake if you made the opposite claim, because such data simply isn't available.

          VLC being an MPlayer clone with better branding has been a running half-joke for decades.

          The latest released version of VLC is not compatible with ffmpeg versions > 4.4 🤗. Some distros have actually considered dropping the package for that reason. Maybe some did, I don’t know. But if the situation doesn’t change, some definitely will.

          And VLC 4, which those who still care have for some reason been waiting on for years, is centered around libplacebo, a library that was factored out of mpv 😎.

          I’m not emotionally charged against VLC or anything. In fact, I occasionally use it on Android. But what’s stated above is just facts.

  • Not a replicant@lemmy.world · +8 · 6 days ago

    I've been waiting for gapless playback for a long time. Just play Dark Side of the Moon without breaks between tracks. Surely a single thread could look ahead and see the next track doesn't need a different codec; it's technically identical to the current track, so there's no need for a break. /rant