• AutoTL;DR@lemmings.world
    7 months ago

    This is the best summary I could come up with:


    OpenAI spokesperson Lindsay Held told The Verge in an email that the company curates “unique” datasets for each of its models to “help their understanding of the world” and maintain its global research competitiveness.

    The Times article says that the company exhausted supplies of useful data in 2021, and discussed transcribing YouTube videos, podcasts, and audiobooks after blowing through other resources.

    By then, it had trained its models on data that included computer code from GitHub, chess move databases, and schoolwork content from Quizlet.

    The new policy was reportedly intentionally released on July 1st to take advantage of the distraction of the Independence Day holiday weekend.

    Meta was also apparently limited in the ways it could use consumer data by privacy-focused changes it made in the wake of the Cambridge Analytica scandal.

    But the companies’ other option is to use whatever they can find, whether they have permission or not, and based on the multiple lawsuits filed in the last year or so, that route is, let’s say, more than a little fraught.


    The original article contains 650 words, the summary contains 171 words. Saved 74%. I’m a bot and I’m open source!