It just kinda makes no sense to me. How can you improve the framerate by predicting how the next frame should be rendered while keeping the overhead lower than what it already takes to render the scene normally? Even the simplest description of it sounds like pure magic. And yet… it’s real.


It doesn’t predict the next frame, it interpolates between two frames. It takes the most recent frame, the previous frame, and the motion vectors associated with them, and tries to produce an approximation that is close enough that your brain is deceived.
All that at the price of annoying input lag that is inherently one frame larger than normal.
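The idea described above (warp the previous frame along motion vectors, then blend toward the next real frame) can be sketched in a few lines. This is a toy illustration only, not how any real frame generator (DLSS, FSR, etc.) is implemented: real implementations use optical flow hardware, depth buffers, and learned models. The function name, nearest-neighbour warping, and simple linear blend are all assumptions made for brevity.

```python
import numpy as np

def interpolate_frame(prev_frame, next_frame, motion, t=0.5):
    """Toy in-between frame: warp prev_frame a fraction t of the way
    along per-pixel motion vectors, then blend with next_frame.

    prev_frame, next_frame: (H, W) grayscale images
    motion: (H, W, 2) per-pixel displacement (dy, dx) from prev to next
    t: interpolation point in [0, 1]
    """
    h, w = prev_frame.shape
    ys, xs = np.indices((h, w))
    # Backward warp: each output pixel samples the previous frame at the
    # position displaced by -t * motion (nearest-neighbour for simplicity).
    src_y = np.clip(np.rint(ys - t * motion[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(xs - t * motion[..., 1]).astype(int), 0, w - 1)
    warped = prev_frame[src_y, src_x]
    # Blend the warped guess with the real next frame to hide warp errors.
    return (1 - t) * warped + t * next_frame

# Usage: a single bright pixel moving 2 px to the right.
prev = np.zeros((5, 5)); prev[2, 2] = 1.0
next_ = np.zeros((5, 5)); next_[2, 4] = 1.0
motion = np.zeros((5, 5, 2)); motion[..., 1] = 2.0  # dx = +2 everywhere
mid = interpolate_frame(prev, next_, motion)  # bright spot lands near (2, 3)
```

Note why the latency penalty is unavoidable in this scheme: the interpolated frame can only be computed after the *next* real frame exists, so the next frame must be held back while the in-between frame is shown, which is exactly the extra frame of input lag mentioned above.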