Friday 13 February 2009

pygame: decoupling rendering from game logic

The execution profile of almost any computer game looks basically the same. The devil is in the details, of course, but the outline is always something like this:
  • listen to user input, change game data appropriately
  • update game objects (move objects, do collision detection, etc.)
  • render world to the screen
Rinse and repeat that as fast as possible, and you have a game. However, there is a subtle problem with this approach: on faster computers, the game will run faster. If that seems desirable to you, think about networked games. Do we really want the fast computers to slow down just for the slow ones? And how exactly would we do that? And what about scenes that are harder to render? Do we have to slow the whole game down just for those? A minimal pygame version of this naive loop is sketched below.
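
Such a coupled loop might look roughly like this in pygame. This is only a sketch: the Game object and its handle_event, update, and render methods are placeholders, not part of any particular framework.

    import pygame

    def run(game):
        # naive main loop: input, updates and rendering are coupled, so the
        # speed of the game depends directly on how fast frames render
        pygame.init()
        screen = pygame.display.set_mode((640, 480))
        while game.running:
            for event in pygame.event.get():
                game.handle_event(event)   # listen to user input
            game.update()                  # move objects, do collision detection, ...
            game.render(screen)            # draw the world to the screen
            pygame.display.flip()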

We can see that it is, therefore, desirable to make your game run at the same speed no matter what happens, with one caveat: we always want to render as many frames as we possibly can. More frames make for a smoother experience, after all.

So we can see that it is in our interest to decouple the game logic updates from the rendering. The rendering should then proceed as fast as possible, but the game logic should proceed at a constant rate regardless of the frame rate.

The right way to do this, of course, is to look at the clock. First we must decide how many logic updates we want to do per second. For most games, about 20 to 25 will suffice; with 25 updates per second, the time between two updates should be 1/25 of a second. Then, on every pass through the rendering loop (which still runs as fast as possible) we check how much time has passed and, only if necessary, make an update to the game. Then we proceed with rendering. If the update is skipped, we need the renderer to interpolate between the last two logic updates, so that it does not just render the same world multiple times. This results in a smoother game experience on fast computers, but does not slow down the game on slower, older hardware.
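
As a rough sketch of this idea (not the actual gunge code; the Game object and its methods are again placeholders), using pygame's Clock to measure real time:

    import pygame

    UPDATES_PER_SECOND = 25
    UPDATE_LENGTH = 1.0 / UPDATES_PER_SECOND   # seconds of game time per logic update

    def run(game):
        pygame.init()
        screen = pygame.display.set_mode((640, 480))
        clock = pygame.time.Clock()
        latency = 0.0   # how far game time lags behind real time, in seconds

        while game.running:
            latency += clock.tick() / 1000.0   # real time elapsed since the last pass

            for event in pygame.event.get():
                game.handle_event(event)

            # only update the game world when enough real time has passed
            if latency > UPDATE_LENGTH:
                game.update()                  # advances game time by one fixed step
                latency -= UPDATE_LENGTH

            # how far we are between the previous update and the next one
            interpolation = min(1.0, latency / UPDATE_LENGTH)
            game.render(screen, interpolation)
            pygame.display.flip()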

In my gunge framework, there is a new Clock class in the experimental branch time-exp that handles this transparently for you. It keeps track of two things: real time, which advances continuously, and game time, which is advanced by 1/(updates_per_second) every time the game updates. It is worth noting that game time advances in discrete units, since the game world is updated in steps rather than continuously.
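
To make the idea concrete, a Clock of this kind could be sketched as follows. This is only an illustration of the concept, not the actual API of the class in the time-exp branch:

    import time

    class Clock(object):
        def __init__(self, updates_per_second=25):
            self.update_length = 1.0 / updates_per_second
            self.start = time.time()
            self.game_time = 0.0          # advances only in discrete steps

        @property
        def real_time(self):
            # real time advances continuously
            return time.time() - self.start

        @property
        def latency(self):
            # how far game time lags behind real time
            return self.real_time - self.game_time

        def update(self):
            # called once for every game logic update
            self.game_time += self.update_length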

To decide whether the game should update, the Clock has the concept of game latency: how far game time lags behind real time. The continuously advancing real time increases the game latency, and it is decreased by 1/(updates_per_second) whenever the game updates. There are two strategies for deciding when to update the game:
  • game_latency > 1/(updates_per_second) -- this updates the game if game time lags behind real time by more than one update_length
  • |game_latency - update_length| < |game_latency| -- this updates if it reduces the absolute value of the latency, i.e. if the game time comes closer to real time because of the update.
The caveat of the second method is that it allows game time to run ahead of real time. The advantage is that game time may get closer to real time than it could if it always had to lag behind, and it allows updates at moments when the first method must helplessly fall further behind real time. Its disadvantage is that the interpolation factor for renders becomes harder to calculate. Both conditions are sketched as code below.
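
Expressed as code, the two conditions might look like this (the function names are mine, not gunge's):

    def should_update_strict(latency, update_length):
        # strategy one: only update when game time lags behind real time
        # by more than one update_length, so game time never runs ahead
        return latency > update_length

    def should_update_closest(latency, update_length):
        # strategy two: update whenever doing so brings game time closer
        # to real time, even if that leaves game time slightly ahead of it
        # (i.e. a negative latency)
        return abs(latency - update_length) < abs(latency)

With the second condition, a negative latency simply means game time is running ahead of real time, which is exactly why the interpolation factor gets trickier to compute.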

3 comments:

Gecko said...

Uhh, what about threads?

Hugo Arts said...

Threads make it sound simple, but that is deceptive. Both updates and renders make use of the same data. While it is true that renders only need read access, if a render happens right in the middle of an update, the game will be in an inconsistent state (partially updated).

So you need a big lock around the update code, but that of course means that updates and renders can't run concurrently anyway, and there's not much point in running them as two threads.

Anonymous said...

what about a clock timer with a custom event?