It just kinda makes no sense to me. How can you improve the framerate by predicting how the next frame should look, without the prediction costing more than just rendering the scene normally? Even the simplified concept of it sounds like pure magic. And yet… it’s real.

  • Wimopy@feddit.uk · 19 hours ago

    Since it’s ELI5 I’ll keep it very simple. It’s not like I know the exact mechanics anyway. No guarantee of pedantic correctness. I’m sure if I get anything overly wrong then someone who wouldn’t comment otherwise will correct me (please and thank you).

    Let’s start from interpolation. It’s a simple maths idea: finding new points in between points you already know. Say you have two points. You could draw a line between them and take the middle point of that line. You’ve now introduced a new point.

    This concept is used a lot in physics or maths in general. Let’s say you are writing down the speed of a car over time. You have 1 speed value per second. But you’re interested in the speed at 23.33 seconds for some reason.

    Now you have a few options:

    • You could take the speeds at 23 and 24 seconds and do the same as before: draw an imaginary straight line between them, and read off the speed that line gives at 23.33.
    • You could also look at how the speed changed from 22 to 23 instead, especially if you don’t have the 24 s value written down.
    • You could look at more of the speed values and try to figure out how the car’s speed changes over time, since it’s unlikely to be linear. That gets you to more complex forms of interpolation. That’s what’s used to find a more descriptive equation of motion for objects.
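
    The straight-line option above can be sketched in a few lines of Python (the speed values here are made up purely for illustration):

```python
# Hypothetical speed samples, one per second (m/s at t = 22, 23, 24 s).
speeds = {22: 18.0, 23: 20.0, 24: 26.0}

def lerp(a, b, t):
    """Linear interpolation: t=0 gives a, t=1 gives b, t=0.5 the midpoint."""
    return a + (b - a) * t

# Speed at t = 23.33 s, read off the straight line between the 23 s and 24 s samples:
estimate = lerp(speeds[23], speeds[24], 23.33 - 23)
print(estimate)  # ≈ 21.98
```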

    That may have been a bit of a tangent, but it does get us back to frame generation. We are interpolating where each pixel is between frames. Or perhaps even saying: okay, this visual object moved from X to Y, what happened between them?

    The key part is: the renderer already has this information. It would be wasteful to recompute an entire scene from scratch every frame, so you track what needs updating and how. That means you know what changes from one frame to the next. So you take that information, do some simple maths to figure out the in-between step, and show that to the user as well.

    Performance-wise it’s not costly. The expensive calculation is the frame-to-frame update itself. It does take a bit of extra time though, introducing a small amount of lag in your display.

    Of course the actual frame-gen algorithms can take a lot more data into account, but the simple idea is: between point X and point Y there exists a point A which we can calculate relatively cheaply and display first.
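
    As a toy sketch of that idea (purely illustrative: real frame generation uses motion vectors, depth buffers and learned models, not a naive per-pixel average), treating two frames as small grayscale pixel grids:

```python
# Two hypothetical 2x3 grayscale frames (0 = black, 255 = white).
frame_x = [[0, 0, 255], [0, 0, 0]]
frame_y = [[255, 0, 0], [0, 255, 0]]

def midpoint_frame(fx, fy):
    """Synthesize an in-between frame by averaging each pixel pair.
    This is the crudest possible interpolation, shown only to illustrate
    the 'point A between X and Y' idea."""
    return [[(a + b) // 2 for a, b in zip(row_x, row_y)]
            for row_x, row_y in zip(fx, fy)]

frame_a = midpoint_frame(frame_x, frame_y)
print(frame_a)  # [[127, 0, 127], [0, 127, 0]]
```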

    • SharkAttak@kbin.melroy.org · 8 hours ago

      Nice explanation. Now call me old-fashioned, but I can’t help but wonder if all the effort put into the DLSS neural prediction engine and the “render at low res and then upscale” approach would’ve been better spent on a regular rendering engine…

      • Danitos@reddthat.com · 6 hours ago

        Each game usually has its own engine, while DLSS/FSR is implemented by Nvidia/AMD, and everybody uses an Nvidia, AMD or Intel GPU. So for game devs, the development overhead of optimizing the game themselves is bigger than that of supporting DLSS/FSR.

        However, I agree with the sentiment. I wish game studios would invest more effort in actually optimizing their games instead of doing the lazy “let’s just add DLSS/FSR support and call it a day”.