An old article I wrote for the Linux Game Development Center, copied from http://web.archive.org/web/20010629110435/http://lgdc.sunsite.dk/articles/22.html


Programming: Making the game world independent of CPU speed

Written 2000-08-13 UTC by Erik Greenwald (Last Change: 2001-06-08 UTC)

The game world in modern games appears to run at a speed that does not depend on the speed of the CPU. This is critical for a game that involves network interaction between two computers of different speeds, and it also lets the game provide the best possible experience the player's machine can give. Frames per second, or FPS, is a common way of benchmarking a system's performance, and is the inverse of the time delta. The time delta is the amount of time it takes to get from one frame to the next. Once the time delta is calculated, the physics and controls can be scaled against it, giving the consistency necessary for a modern game. It is extremely important that you set the time delta only once per full frame, or your physics and input models will be inaccurate. Two of the most common languages for developing action-oriented video games are C and C++, so we will explore how to implement a couple of methods of finding the time delta and FPS using these languages. The first four examples each consist of three parts.

The last demo is an OpenGL demonstration of how to glue it all together. Several popular game development SDKs provide their own time handling functions. These functions are intended to provide a uniform interface to the operating system's time facilities, and the way to use them should be approximately the same as the methods presented here. In a project where accuracy and portability are important, a cross-platform SDK with a timer function or facility should be used. One SDK that does this is SDL.
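
For example, a delta time function much like the ones below can be built on SDL's millisecond timer. This is only a sketch, under the assumption that SDL has already been initialized with SDL_Init(); the function name and the static bookkeeping are my own, only SDL_GetTicks() comes from SDL itself.

#include <SDL.h>

/* Returns the time, in seconds, since the previous call.
   Assumes SDL_Init() has already been called. */
double sdl_delta_time(void)
{
    static Uint32 oldticks = 0;
    Uint32 newticks = SDL_GetTicks();   /* milliseconds since SDL was initialized */
    double dt = (newticks - oldticks) / 1000.0;

    oldticks = newticks;
    return dt;
}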

Calculating the FPS is extremely easy once the delta time function is in place. The average FPS can be calculated by dividing the total number of frames displayed by the total elapsed time in seconds. By keeping a count of the frames drawn, and storing the time the display was started, a single computation gives the program's average FPS. Most of the time, you want to know the instantaneous FPS, not the average. Since the delta time function returns seconds per frame, the reciprocal is frames per second; the easiest way to calculate this is 1/delta_time. The formula to calculate the frame rate over several frames is frames/delta_time, where delta_time is the time those frames took. Make sure that you do not update the time delta when computing frames per second, or it will skew your FPS and your time handling. The timestamp should only be updated once per frame.
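
In code, the two calculations look something like this. This is only a sketch: delta_time() is whichever implementation from below you end up using, and frames_drawn, start_time and current_time() are hypothetical bookkeeping maintained elsewhere in the program.

/* Instantaneous FPS: the reciprocal of the last frame's time delta. */
double dt  = delta_time();
double fps = 1.0 / dt;

/* Average FPS: total frames drawn divided by total elapsed seconds. */
double avg_fps = frames_drawn / (current_time() - start_time);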

One way of calculating the time delta is by using the clock() function. This function does not return a real time value, but real lengths of time can be calculated by using the CLOCKS_PER_SEC constant defined in time.h.

oldtime = newtime;
newtime = clock();
/* cast to double, or the integer division truncates to whole seconds */
delta_time = (double)(newtime - oldtime) / CLOCKS_PER_SEC;

A bonus to this function is that it is ANSI standard, and therefore portable. The disadvantages are that the time resolution is only .01 seconds and that it is not a true measure of time. Having a ceiling of 100 frames per second may sound acceptable, but that is not the real problem with the time resolution. A very notable instance is when two samples are taken in such quick succession that the system rounds the difference to 0. When this happens, your frame rate jumps to infinity (assuming we disallow negative time). Another issue is the quantization of frame rates. Since the time resolution is .01 seconds, the fastest frame can happen in .01 seconds, yielding 100 FPS. The second fastest will be 50 FPS, the third fastest 33 FPS. While game play may be fine with this going on, if you display an FPS counter this will be painfully obvious. One way to compensate is to calculate the frame rate for a number of frames at a time. Instead of finding the FPS every frame, find it once every 10 frames, tracking the time for the 10 frames, then dividing 10 by the time those 10 frames took. I think most games track a clump of frames; otherwise the display would constantly flicker with different numbers. The second disadvantage can be a bit more serious. When your game is the only CPU-hungry program running on the system, and it does not use any sleep() or usleep() calls, it runs fine. If the CPU is shared, or if there are sleep() calls, a time skew is introduced, because clock() measures the processor time your program has consumed rather than wall-clock time. Software running on Microsoft Windows commonly uses this method because the software is written to squeeze out as high a framerate as possible (no sleep calls) and the computer the game is being played on is rarely doing any real multitasking. If other programs are running, they are usually suspended waiting for input from the user, and therefore use no CPU time. The following is an implementation using a set interval of time instead of a set interval of frames.

    /* called once, during the init phase */
time_per_fps_disp = CLOCKS_PER_SEC * .5; /* update the counter every .5 seconds */
last_fps_time = newtime = clock();
framecount = 0;

    /* called every frame */
oldtime = newtime;
newtime = clock();
framecount++;
if ((newtime - last_fps_time) > time_per_fps_disp)
{
    update_fps_counter(((double)framecount * CLOCKS_PER_SEC) / (newtime - last_fps_time));
    framecount = 0;
    last_fps_time = newtime; /* restart the interval, or the counter never updates again */
}
Full listings: clock.c (C) and clock.cc (C++).
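
Put together as a self-contained function, the clock() approach might look like the following sketch; the function name and the static bookkeeping are my own, only clock() and CLOCKS_PER_SEC come from the C library.

#include <time.h>

/* Returns the time, in seconds, since the previous call. */
double clock_delta_time(void)
{
    static clock_t oldtime = 0;
    clock_t newtime = clock();
    double dt = (double)(newtime - oldtime) / CLOCKS_PER_SEC;

    oldtime = newtime;
    return dt;
}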

Another method is to use the gettimeofday() function. The function fills in a struct you pass to it, and the struct's members are then used to get the time in seconds and microseconds. gettimeofday() conforms to SVr4 and BSD 4.3, so most UNIX systems and variants support it. MS-DOS, Windows 9x, MacOS, and I think NT do not support it. The advantages are that it has a much higher time resolution, the current time of day can be extrapolated from it, and it is accurate even if your program sleeps or shares processor time. The disadvantage is that lesser operating systems do not support it.

#include <sys/time.h>

    /* global (or object wide); initialize t2 with gettimeofday(&t2, NULL)
       during the init phase so the first delta is sane */
struct timeval t1, t2;

    /* the delta time function */
double delta_time(void)
{
    double dt;

    gettimeofday(&t1, NULL);
    dt = (t1.tv_sec - t2.tv_sec) + (t1.tv_usec - t2.tv_usec) / 1000000.0; /* 1000000 microseconds in a second */
    t2 = t1;    /* remember this sample for the next call */
    return dt;
}
Full listings: gettimeofday.c (C) and gettimeofday.cc (C++).

One thing that often complicates the issue and confuses people is that when time scaling is introduced, EVERYTHING must be scaled: not just movement, but input as well. When this is not done, the sensitivity of the controls can differ drastically between computer systems. This only affects input devices whose effect depends on how long they are held, such as the keyboard or a joystick; mice are usually not susceptible to this. Note that even though the time delta being used to scale the physics and input is actually the length of the last drawn frame, the results are still very accurate. Large changes in the time a frame takes to draw are incredibly infrequent, and any quirks that show up will be corrected in the next frame, because the timestamp is of the real time, either in clocks since the program was started or in seconds and microseconds since 1970.

double roll, throttle;  /* doubles, so small per-frame changes are not truncated away */
double dt = delta_time();
double max_degrees_per_second = 180; /* notice it is per second, not per frame */
/* various other declarations, like min/max throttle */
double throttle_change = 75;
int key = getkey();     /* find which key is pressed */

switch (key)
{
        case CURSOR_RIGHT:
                roll = fmod(roll + (dt * max_degrees_per_second), 360); /* fmod() is in math.h */
                break;
        case CURSOR_LEFT:
                roll = fmod(roll - (dt * max_degrees_per_second), 360);
                break;
        case CURSOR_UP:
                throttle = throttle + (dt * throttle_change);
                if (throttle > max_throttle)
                        throttle = max_throttle;
                break;
        case CURSOR_DOWN:
                throttle = throttle - (dt * throttle_change);
                if (throttle < min_throttle)
                        throttle = min_throttle;
                break;
}

Network synchronization is not magically fixed just by using this method, but there are some things that can be done to help normalize the state of the game world over a network. One way would be to keep a completely separate time-counting system for networking that works just like the graphics/physics counter. The more common method, I think, is to pick a 'tame' update rate to shoot for and transmit packets at that regular interval. Receiving is then simply a matter of polling the socket for updates. A common method for compressing this data is to send delta information frequently, so latency stays good, and to occasionally send a full state packet to reduce the amount of drift in the game world. Implementation and introspection of network interaction is beyond the scope of this document.

Some concern was raised about the quantization that comes from calculating the physics and input in an iterative manner. One example where this would be a problem: a missile is flying towards a jet in a flight simulator, one frame is drawn just before the missile strikes the jet, and the next frame places the missile on the other side of the jet. The result is that the missile seems to pass through the jet without striking it. The maximum safe delta time can be calculated by comparing minimum-sized bounding boxes and maximum velocities. Another artifact of quantization appears if a player has a machine gun that can fire 20 rounds per second and the FPS drops below 20. The input is then queried fewer than 20 times a second, so the rate of the machine gun slows to the FPS. Again, the minimum FPS required can easily be calculated for the game world. There are two ways to deal with the issue. One is to interpolate between frames, which can be computationally expensive. The other is to avoid addressing the issue in software and design the elements of the game world to function correctly at an 'acceptable' frame rate. For example, if you have a machine gun and you expect people may be playing with as little as 10 frames per second, reduce the gun's firing rate to 10 shells a second. If the quantization makes it impossible to leap from one platform to another, move the platforms closer together.
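
One way to limit the damage from a large delta is to chop an oversized delta into several smaller physics steps, so a fast object cannot skip over a thin one. This is only a sketch of that idea, not part of the original demo; MAX_STEP and update_world() are hypothetical names.

#define MAX_STEP 0.05   /* largest physics step we trust, in seconds (hypothetical) */

void update_world(double dt); /* hypothetical: advances physics and input by dt seconds */

void advance_world(double dt)
{
    /* Run the simulation in several small steps if the frame took too long,
       so a fast missile cannot jump from one side of a jet to the other. */
    while (dt > MAX_STEP) {
        update_world(MAX_STEP);
        dt -= MAX_STEP;
    }
    update_world(dt);   /* whatever is left over */
}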

Here is a demo pulling it all together into a simple GLUT application. It uses the clock() method, and is portable. The speed of rotation is controlled with the + and - keys, space toggles the rotation, and esc quits. Click here for a browsable version of the source, and here for the compilable version. You can get all the sources for this tutorial, with the tutorial itself and a makefile, in this tarball.

This document and the pertinent source code can be found at http://math.smsu.edu/~br0ke/gamedev/timer/.