It just kind of makes no sense to me. How can you improve the framerate by predicting what the next frame should look like, without that prediction costing more than just rendering the scene normally? Even the simplified concept of it sounds like pure magic. And yet… it’s real.

  • Ludicrous0251@piefed.zip · 18 hours ago

    Because it’s not actually reducing any overhead. What you get is fewer high-fidelity “real” frames each second, in exchange for roughly 2x (or more) low-fidelity “fake” frames.

    So a game that ran at 60 FPS before may run at 100 FPS after, which is really only 50 real FPS + 50 fake FPS: the generation pass itself takes time, so the GPU renders fewer real frames than it did with generation off.
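    To make that arithmetic concrete, here’s a toy calculation in Python; the overhead fraction is an assumed number for illustration, not a measured one:

    ```python
    # Toy numbers: generation doubles the frames shown, but the generation
    # pass eats into the render budget, so fewer real frames fit per second.
    real_fps_before = 60        # rendered FPS with frame generation off
    gen_cost_fraction = 1 / 6   # assumed share of frame time spent generating

    real_fps_after = real_fps_before * (1 - gen_cost_fraction)  # 50 real/s
    displayed_fps = real_fps_after * 2                          # 100 shown/s

    print(f"{displayed_fps:.0f} FPS displayed = "
          f"{real_fps_after:.0f} real + {real_fps_after:.0f} fake")
    ```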

    Also, some frame generation algorithms are tied to upscaling, so the game renders at a lower internal resolution and an algorithm reconstructs the detail that’s missing.
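    As a deliberately naive stand-in for upscaling (real upscalers like DLSS or FSR use motion vectors and trained models, not this), here’s the simplest possible version: render small, then blow the image up by repeating pixels. It’s fast, but it invents no real detail:

    ```python
    import numpy as np

    # Hypothetical 2x2 grayscale render at low internal resolution.
    low_res = np.array([[10, 200],
                        [60, 120]], dtype=np.uint8)

    # Nearest-neighbour 2x upscale: each rendered pixel becomes a 2x2 block.
    high_res = low_res.repeat(2, axis=0).repeat(2, axis=1)
    print(high_res.shape)  # (4, 4): more pixels, no new information
    ```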

    The more you let the computer guess what’s supposed to be there, the faster it runs, but the less accurate it gets.
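    You can see the accuracy cost even in the crudest possible fake frame: just averaging two real frames (again, a toy stand-in, not how the real algorithms work). Anything that moved between the frames smears into ghosts:

    ```python
    import numpy as np

    def fake_frame(prev: np.ndarray, nxt: np.ndarray) -> np.ndarray:
        # Blend the two neighbouring real frames 50/50; real frame
        # generation uses motion vectors and learned models instead.
        mixed = (prev.astype(np.uint16) + nxt.astype(np.uint16)) // 2
        return mixed.astype(np.uint8)

    # Hypothetical 4x4 frames: one bright pixel moves two columns right.
    frame_a = np.zeros((4, 4), dtype=np.uint8); frame_a[1, 0] = 255
    frame_b = np.zeros((4, 4), dtype=np.uint8); frame_b[1, 2] = 255

    mid = fake_frame(frame_a, frame_b)
    print(mid[1])  # [127 0 127 0]: two faint ghosts, not one bright pixel
    ```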