
Auto-scalability – Smart adaptation to PC capabilities 

In my previous article I described scalability systems in general. If you are not familiar with them, I highly recommend reading that article first to fully understand the dependencies. Auto-scalability applies to PCs because they are significantly more diverse than consoles and you cannot predict the end user's hardware. PCs may use different CPUs (32-bit vs. 64-bit, frequency, number of cores, etc.), GPUs (number of cores, memory, vendor-specific features, etc.), and amounts of memory. These factors affect how fast the game can run and how many resources it can allocate. Players expect an optimal balance between performance and visual quality. They may never change the default settings and simply judge the quality based on their initial impressions. Neither a low level of detail on high-end hardware nor a low frame rate on weaker hardware is acceptable. Providing a sensible initial configuration is essential, which means your game needs robust auto-scalability to cater to a broad range of hardware.

Challenging topic 

Each project presents unique challenges, and you should be ready for some work to get an auto-scalability system in place. Unfortunately, this is also the reason I will not go into too much detail. There are some obvious cases where you can prioritize quality (e.g. during cutscenes) or frame rate (during gameplay). However, a fast-paced FPS has different performance priorities than a turn-based strategy. Below, I briefly describe the factors to consider when determining the best settings for the end user specific to your game.

Known hardware configurations 

During development, it is common for team members to have varied hardware setups, which can be leveraged to test initial scalability settings. You should be able to check these machines and apply the settings that result in satisfying performance and visuals. However, it is not possible to check all hardware configurations. Preparing a short list mapping hardware (especially GPUs) to specific scalability levels is definitely feasible. Exercise caution when inferring performance from hardware names alone, as naming conventions can be misleading and evolve over time. At the time of writing this article, the latest and most powerful GPUs from NVIDIA come from the GeForce RTX 40 Series, e.g. the RTX 4090. However, among similarly named older models we had, sorted by performance: GeForce GT 640, GeForce GTX 660M, GeForce GTX 660 and GeForce GTX 660 Ti. If you think you are close to finding a pattern somehow related to the number, keep in mind that more than 20 years ago this vendor released another GPU, the GeForce FX 5200. Even if you collect historical data, you cannot predict future hardware naming.
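As an illustration, here is a minimal C++ sketch of such a curated list. The QualityLevel enum, the entries, and the exact-name matching are all assumptions for demonstration; a production version would normalize the reported names and contain far more entries, based on hardware the team has actually tested.

```cpp
#include <cstdint>
#include <string>
#include <unordered_map>

// Hypothetical quality presets, reused by the later sketches in this article.
enum class QualityLevel : std::uint8_t { Low, Medium, High, Ultra };

// Curated GPU-name -> preset mapping. Entries are illustrative; a real list
// would be built from hardware the team has actually tested.
QualityLevel PresetForGpuName(const std::string& gpuName)
{
    static const std::unordered_map<std::string, QualityLevel> kKnownGpus = {
        { "NVIDIA GeForce RTX 4090", QualityLevel::Ultra  },
        { "NVIDIA GeForce GTX 660",  QualityLevel::Medium },
        { "NVIDIA GeForce GTX 660M", QualityLevel::Low    },
    };

    const auto it = kKnownGpus.find(gpuName);
    if (it != kKnownGpus.end())
        return it->second;

    // Unknown hardware: fall through to other heuristics (memory, benchmark).
    return QualityLevel::Medium;
}
```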

GPU memory 

For our in-house engine, we also check available GPU memory. If you are running the game on a laptop with an integrated GPU, it will not be capable of running at maximum quality. These GPUs feature less memory, which typically requires additional optimizations and savings in graphical fidelity. One approach is to entirely skip loading the highest LODs when there is more than one. In other words, instead of defaulting to an 8000-polygon model, a 3000-polygon one may be the highest quality loaded. Quality suffers from it, but the frame rate improves.
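On Windows, dedicated video memory can be read through DXGI. The sketch below shows that query plus an illustrative LOD-skipping rule; the 2 GiB threshold and the HighestLodToLoad helper are assumptions for demonstration, not recommendations (link against dxgi.lib).

```cpp
#include <cstddef>
#include <dxgi.h>
#include <wrl/client.h>

// Reads dedicated video memory of the primary adapter (returns 0 on failure).
std::size_t QueryDedicatedVideoMemoryBytes()
{
    Microsoft::WRL::ComPtr<IDXGIFactory1> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return 0;

    Microsoft::WRL::ComPtr<IDXGIAdapter1> adapter;
    if (FAILED(factory->EnumAdapters1(0, &adapter)))
        return 0;

    DXGI_ADAPTER_DESC1 desc = {};
    adapter->GetDesc1(&desc);
    return desc.DedicatedVideoMemory;
}

// Illustrative rule: on low-VRAM hardware, skip the most detailed LOD
// (index 0) and treat the next one as the highest quality to load.
int HighestLodToLoad(std::size_t vramBytes, int lodCount)
{
    const std::size_t kLowVramThreshold = 2ull << 30; // 2 GiB, an assumption
    if (vramBytes < kLowVramThreshold && lodCount > 1)
        return 1;
    return 0;
}
```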

Performance benchmarks 

Adjusting settings based on benchmark and test results allows for dynamic optimization tailored to the current hardware's capabilities. However, relying solely on performance benchmarks can be misleading due to their inherent variability under different conditions. Even if you have a high-end gaming laptop that could run the game at the maximum quality level, when that laptop is running on battery in power-saving mode, the benchmark result will not reflect the capabilities of the hardware. Similarly, you may have a decent machine, but running the benchmark under heavy load may affect the results.

While implementing performance benchmarks, you should ensure that you run them with the same set of parameters. If you measure rendering performance, run each test under identical parameters (such as resolution) to avoid variability. There is no need to display the running test on-screen; benchmarks can run in the background to minimize disruptions. When such a benchmark takes more than a second, a reasonable trade-off is to trigger it on a player action, e.g. when the player clicks the reset-to-default-settings option in the game menu.
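A minimal sketch of such a background benchmark follows. RenderBenchmarkScene is a hypothetical engine hook, the fixed resolution and frame count keep runs comparable, and the thresholds (reusing the QualityLevel enum from the earlier sketch) are purely illustrative.

```cpp
#include <chrono>

// Hypothetical engine hook: renders one frame of a fixed test scene to an
// off-screen target at the given resolution.
void RenderBenchmarkScene(int width, int height);

// Renders a fixed number of frames at a fixed resolution and returns the
// average frame time in milliseconds, so every run is comparable.
double MeasureAverageFrameMs(int frames = 120)
{
    constexpr int kWidth = 1920, kHeight = 1080; // fixed test parameters

    const auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < frames; ++i)
        RenderBenchmarkScene(kWidth, kHeight);

    const std::chrono::duration<double, std::milli> elapsed =
        std::chrono::steady_clock::now() - start;
    return elapsed.count() / frames;
}

// Illustrative thresholds: ~7 ms = 144 FPS, ~16.6 ms = 60 FPS, ~33.3 ms = 30 FPS.
QualityLevel PresetFromBenchmark(double avgFrameMs)
{
    if (avgFrameMs < 7.0)  return QualityLevel::Ultra;
    if (avgFrameMs < 16.6) return QualityLevel::High;
    if (avgFrameMs < 33.3) return QualityLevel::Medium;
    return QualityLevel::Low;
}
```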

It is risky to run the benchmark silently more than once. During the first game startup (between loading screens), players will probably not notice it. However, because of the fragility of this approach, each game startup may produce a slightly different result, and if users notice visual differences, they may perceive them as bugs. It is better to run the performance benchmark once and save the results to load later. Also, beware of where you save them. If they end up in a location automatically synchronized between the player's different PCs, an old laptop may share its results with a more powerful machine. Similarly, quality settings set manually should be machine-specific.
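For example, on Windows, %LOCALAPPDATA% stays on one machine, unlike %APPDATA%, which may roam between the player's PCs. The sketch below stores the result there; the folder and file names are illustrative.

```cpp
#include <cstdlib>
#include <filesystem>
#include <fstream>

// Saves the benchmark result under %LOCALAPPDATA%, which stays on this
// machine (unlike %APPDATA%, which may roam between the player's PCs).
// Folder and file names are illustrative.
void SaveBenchmarkResult(double avgFrameMs)
{
    const char* localAppData = std::getenv("LOCALAPPDATA");
    if (!localAppData)
        return;

    const std::filesystem::path dir =
        std::filesystem::path(localAppData) / "MyGame";
    std::filesystem::create_directories(dir);

    std::ofstream out(dir / "benchmark.txt");
    out << avgFrameMs << '\n';
}
```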

Display resolution 

I described using display resolution to control the quality of the game in my previous article (see the "Different settings based on the current game state" section). If only full HD monitors are attached to your PC, there is no point in adjusting the settings to achieve 60 FPS at 4K resolution; the game will only ever run at 1080p. This means you can spend that power somewhere else, for instance on more particles for special effects. If the players decide to change the monitor, they can guess that they need to adjust the game settings.

Keep in mind that 4K contains four times more pixels than 1080p. However, not all render passes use the full display resolution. The render resolution may even differ between frames with techniques such as dynamic resolution. Finally, you may boost performance by upscaling the rendering results; technologies like DLSS and FSR are designed specifically for this purpose, utilizing deep learning and sophisticated interpolation methods, respectively. All these methods affect how much weight display resolution should carry in the default quality settings, and you should consider taking them into account.
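As a rough illustration, the sketch below weights quality decisions by render pixel count relative to 1080p; the renderScale parameter is an assumed stand-in for the internal render resolution of dynamic resolution or an upscaler.

```cpp
// Cost of the current render resolution relative to 1080p. A 4K display at
// renderScale 1.0 yields 4.0; an upscaler rendering internally at 0.5 scale
// brings the same display back down to 1.0.
double ResolutionCostFactor(int width, int height, double renderScale = 1.0)
{
    constexpr double kBasePixels = 1920.0 * 1080.0;
    const double renderPixels = (width * renderScale) * (height * renderScale);
    return renderPixels / kBasePixels;
}
```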

Corner cases 

When developing auto-scalability for our in-house engine, we faced many corner cases. They were related, for instance, to build machines (machines building game executables) and hot desks (shared PCs we connect to remotely), which often present unique challenges due to their unconventional hardware, permission and software setups. These machines very often have no monitor attached, so you cannot check the currently connected display resolution. Operating systems also report hardware a bit differently in such remote scenarios.
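A defensive sketch for such headless setups might look like this (Win32; the 1080p fallback is an assumption for illustration):

```cpp
#include <windows.h>

// Falls back to a fixed resolution on headless machines (build agents,
// remote hot desks) instead of querying a display that is not there.
void GetTargetResolution(int& width, int& height)
{
    if (GetSystemMetrics(SM_CMONITORS) == 0)
    {
        width  = 1920; // illustrative fallback
        height = 1080;
        return;
    }
    width  = GetSystemMetrics(SM_CXSCREEN);
    height = GetSystemMetrics(SM_CYSCREEN);
}
```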

User choices 

When you have both scalability and auto-scalability functionality, you need to ensure that they are ready for different user choices. If the game guesses the performance of the end user's hardware, but the user still prefers better visuals, even at the cost of frame rate, the game must respect that. In other words, the game should not override user settings. A performance benchmark, if present, should be re-run when the user resets to the default values.

If your game uses the same graphics settings constantly, the situation is quite simple. However, it becomes more difficult to manage when you dynamically update them during gameplay while also letting the players change the settings. What if the player forces shadows to a specific level, but based on the auto-scalability settings they should be disabled during a specific game state? You should collect the values computed by your scalability system and then override them with user choices, as described in the last touches section of my first article about scalability systems. Applying user settings last ensures that the game always respects them. The drawback is that the user sets 'global' settings, used for every game state, so no better visuals during cutscenes and so on.
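A minimal sketch of that resolution order, assuming an illustrative Settings struct and reusing the QualityLevel enum from earlier: auto-computed per-state values first, explicit user overrides last, so the user always wins.

```cpp
#include <optional>

// Illustrative settings snapshot; reuses QualityLevel from the first sketch.
struct Settings
{
    QualityLevel shadows   = QualityLevel::Medium;
    QualityLevel particles = QualityLevel::Medium;
};

// An empty optional means "let the game decide".
struct UserOverrides
{
    std::optional<QualityLevel> shadows;
    std::optional<QualityLevel> particles;
};

// Auto-computed per-state values first, explicit user choices last.
Settings ResolveSettings(const Settings& autoComputed, const UserOverrides& user)
{
    Settings result = autoComputed;
    if (user.shadows)   result.shadows   = *user.shadows;
    if (user.particles) result.particles = *user.particles;
    return result;
}
```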

Summary 

To summarize, auto-scalability lets you automatically configure the game so the player experiences a good balance between performance and visual quality. Developers should view auto-scalability as a dynamic tool, not a set-it-and-forget-it feature: continual refinement based on player feedback and new hardware releases will keep your game optimally balanced. If you already have auto-scalability functionality, it will speed up your development significantly. Depending on your project timeline and budget, you may extend it or only adjust its settings. If you use Unreal Engine, you can start your research here.

See also: Scalability. Adjusting visual quality for all platforms 

Author

Wiktor Ławski

Senior Software Engineer

Wiktor has worked in software development for more than 10 years. He focuses on rendering and supporting different platforms.
