
AI video startup Runway reportedly trained on ‘thousands’ of YouTube videos without permission

A YouTube spokesperson pointed Engadget to a previous comment suggesting it would be a ‘clear violation’ of its terms.


AI company Runway reportedly scraped “thousands” of YouTube videos and pirated versions of copyrighted movies without permission. 404 Media obtained alleged internal spreadsheets suggesting the AI video-generating startup trained its Gen-3 model using YouTube content from channels like Disney, Netflix, Pixar and popular media outlets.

An alleged former Runway employee told the publication the company used the spreadsheet to flag lists of videos it wanted in its database. It would then download them using open-source proxy software to avoid detection. One sheet lists simple keywords like astronaut, fairy and rainbow, with footnotes indicating whether the company had found corresponding high-quality videos to train on. For example, the term “superhero” includes a note reading, “Lots of movie clips.” (Indeed.)

Other notes show Runway flagged YouTube channels for Unreal Engine, filmmaker Josh Neuman and a Call of Duty fan page as good sources for “high movement” training videos.

“The channels in that spreadsheet were a company-wide effort to find good quality videos to build the model with,” the former employee told 404 Media. “This was then used as input to a massive web crawler which downloaded all the videos from all those channels, using proxies to avoid getting blocked by Google.”

Screenshot of the Runway AI homepage. (Runway)

A list of nearly 4,000 YouTube channels, compiled in one of the spreadsheets, flagged “recommended channels” from CBS New York, AMC Theaters, Pixar, Disney Plus, Disney CD and the Monterey Bay Aquarium. (Because no AI model is complete without otters.)

In addition, Runway reportedly compiled a separate list of videos from piracy sites. A spreadsheet titled “Non-YouTube Source” includes 14 links to sources like an unauthorized online archive of Studio Ghibli films, anime and movie piracy sites, a fan site displaying Xbox game videos and the animated streaming site kisscartoon.sh.

In what could be viewed as a damning confirmation that the company used the training data, 404 Media found that prompting the video generator with the names of popular YouTubers listed in the spreadsheet produced results bearing an uncanny resemblance to those creators’ content. Crucially, entering the same names into Runway’s older Gen-2 model, which was trained before the data in the spreadsheets was allegedly collected, generated “unrelated” results like generic men in suits. Additionally, after the publication contacted Runway to ask about the YouTubers’ likenesses appearing in results, the AI tool stopped generating them altogether.

“I hope that by sharing this information, people will have a better understanding of the scale of these companies and what they’re doing to make ‘cool’ videos,” the former employee told 404 Media.

When contacted for comment, a YouTube representative pointed Engadget to an interview its CEO, Neal Mohan, gave to Bloomberg in April. In that interview, Mohan described training on its videos as a “clear violation” of the platform’s terms. “Our previous comments on this still stand,” YouTube spokesperson Jack Malon wrote to Engadget.

Runway did not respond to a request for comment by the time of publication.

At least some AI companies appear to be in a race to normalize their tools and establish market leadership before users (and courts) catch on to how the sausage is made. Training with permission through licensing deals is one approach, and it’s one that companies like OpenAI have recently adopted. But it’s a much sketchier (if not illegal) proposition to treat the entire internet, copyrighted material and all, as up for grabs in a breakneck race for profit and dominance.

404 Media’s excellent reporting is worth a read.