Artificial intelligence has been dabbling in art to increasing acclaim over the past few months, but a new study brings its uncanny proficiency to the far more complicated world of video.
Specifically, researchers have now found a way to apply the style of famous artists to entire videos, meaning that you can now watch Star Wars presented with your favorite artist's touch.
Researchers in the University of Freiburg's department of computer science tapped deep neural networks to accomplish their task.
"Given an artistic image, we transfer its particular style of painting to the entire video," explains the researchers' recently published paper, which aims to build on earlier related work on still images.
One problem with video is that processing each frame independently leads to flickering and false discontinuities, the researchers found. To preserve a smoother transition between frames, they added what they call a temporal constraint that takes optical flow into account.
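The core idea can be pictured as a penalty on how much each stylized frame deviates from the previous one, warped along the optical flow. The sketch below is illustrative only, assuming grayscale frames and a nearest-neighbor warp; the function names and details are not taken from the paper:

```python
import numpy as np

def warp(frame, flow):
    """Warp a frame along a dense optical-flow field.

    frame: (H, W) array; flow: (H, W, 2) array of (dy, dx) offsets.
    Nearest-neighbor sampling keeps the sketch short; real systems
    typically use bilinear interpolation.
    """
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(ys + flow[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + flow[..., 1]).astype(int), 0, w - 1)
    return frame[src_y, src_x]

def temporal_loss(curr_stylized, prev_stylized, flow, mask):
    """Penalize per-pixel deviation between the current stylized frame
    and the previous stylized frame warped along the flow. `mask` is 1
    where the flow is reliable (no occlusion, no motion-boundary error)
    and 0 elsewhere, so unreliable pixels contribute nothing.
    """
    warped = warp(prev_stylized, flow)
    return np.mean(mask * (curr_stylized - warped) ** 2)
```

With zero flow and identical consecutive frames the penalty is zero; any flicker between frames raises it, which is what pushes the optimizer toward temporally smooth results.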
They also needed a better way to handle objects or scenery that are hidden behind something in one part of the video but re-exposed in another, since frame-to-frame flow alone cannot keep such regions looking consistent across the gap.
"To solve this, we make use of long-term motion estimates," they said. "This allows us to enforce consistency of the synthesized frames before and after the occlusion."
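One plausible way to picture those long-term estimates is a set of per-frame reliability masks in which each pixel is matched against only one earlier frame, the nearest one where it was still visible. This is a rough, hypothetical sketch of that weighting idea, not the paper's exact formulation:

```python
import numpy as np

def long_term_masks(masks):
    """Combine reliability masks for a list of past frames, ordered
    from nearest to most distant. A pixel keeps its weight for a
    distant frame only if no nearer past frame already covers it,
    so each pixel is enforced against at most one earlier frame.
    (Illustrative weighting scheme, assumed rather than quoted.)
    """
    covered = np.zeros_like(masks[0])
    out = []
    for m in masks:
        out.append(np.maximum(m - covered, 0))
        covered = np.maximum(covered, m)
    return out
```

A region occluded in the nearest frame but visible several frames back thus gets its consistency constraint from the pre-occlusion frame, which is what lets the synthesized frames agree before and after the occlusion.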
Meanwhile, a multi-pass algorithm that processes the video in alternating directions, using both forward and backward optical flow, helps eliminate artifacts at the edges of the frame.
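The alternating-direction idea can be sketched as repeated sweeps over the sequence, each pass blending every frame with its already-updated neighbor from the current direction. This toy version skips the flow-based warping the real method would apply before blending, so it is a simplification, not the researchers' algorithm:

```python
import numpy as np

def multipass_smooth(frames, passes=4, blend=0.5):
    """Alternating-direction smoothing sketch.

    Even-numbered passes sweep forward (each frame pulls toward its
    predecessor); odd-numbered passes sweep backward (each frame pulls
    toward its successor). Information thus propagates in both
    directions instead of accumulating errors one way only.
    """
    out = [np.asarray(f, dtype=float).copy() for f in frames]
    n = len(out)
    for p in range(passes):
        if p % 2 == 0:  # forward sweep
            for i in range(1, n):
                out[i] = blend * out[i] + (1 - blend) * out[i - 1]
        else:  # backward sweep
            for i in range(n - 2, -1, -1):
                out[i] = blend * out[i] + (1 - blend) * out[i + 1]
    return out
```

After a few passes, the first and last frames have each influenced the other, which is the property that suppresses the one-sided boundary artifacts a single forward pass would leave behind.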