It just kinda makes no sense to me. How can you improve the framerate by predicting what the next frame should look like, when the prediction itself somehow costs less than just rendering the scene normally? Even the simplified version of the concept sounds like pure magic. And yet… it's real.

  • SharkAttak@kbin.melroy.org · 8 hours ago

    Nice explanation. Now call me old-fashioned, but I can't help but wonder if all the effort that went into the DLSS neural prediction engine and the "render at low res and then enlarge" approach would've been better spent on regular rendering engines…
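
    For scale, here's a rough back-of-the-envelope sketch of why "render low, then enlarge" can pay off, assuming shading cost scales roughly with pixel count. The resolutions below are just illustrative picks, not Nvidia's actual internal DLSS settings:

    ```python
    # Toy illustration (not the real DLSS pipeline): compare the pixel
    # work done at a lower internal resolution vs. native 4K output.

    def pixels(width, height):
        """Pixel count of a framebuffer."""
        return width * height

    native = pixels(3840, 2160)    # 4K output resolution
    internal = pixels(2560, 1440)  # hypothetical internal render resolution

    # Fraction of native shading work actually performed per frame.
    ratio = internal / native      # 4/9, i.e. about 44%

    # As long as the upscaling step costs less than the ~56% of shading
    # time saved, the whole frame gets cheaper to produce.
    print(f"internal render does {ratio:.0%} of the native pixel work")
    ```

    The upscaler's own cost is the catch, of course; the bet is that a fixed-cost network pass is cheaper than shading the missing pixels.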

    • Danitos@reddthat.com · 6 hours ago

      Each game usually has its own engine, while DLSS/FSR is built by Nvidia/AMD, and nearly everybody runs an Nvidia, AMD or Intel GPU. So for game devs, the development overhead of hand-optimizing their own engine is bigger than the overhead of just supporting DLSS/FSR.

      However, I agree with the sentiment. I wish game studios would invest more effort in actually optimizing their games instead of doing the lazy "let's just add DLSS/FSR support and call it a day".