Constrain used GPU memory?

Himbeertoni

Constrain used GPU memory?

Hi,
I use osgEarth in an environment with relatively low-end GPUs. I noticed, at least on Linux with the Nvidia driver, that the whole system becomes unusable when GPU memory is over-subscribed, i.e. when the total GPU memory allocated by all processes is close to (or maybe exceeds?) the maximum available dedicated GPU memory.

Is there a way in osgEarth to restrict the amount of allocated GPU resources, e.g. the number of currently bound tile textures? For example, I would like osgEarth to use no more than 1 GB of GPU memory, and once it reaches that limit it should start paging out tiles until it is back below that threshold.

Does that sound feasible? Thanks
gwaldron

Re: Constrain used GPU memory?

Hello,

There is a little-known OSG option that lets you create and use a texture pool; you could try that:

https://groups.google.com/g/osg-users/c/yUY8zl67GLY/m/4qIxNX6Pei4J

Hope this helps.
Glenn Waldron / Pelican Mapping
Himbeertoni

Re: Constrain used GPU memory?

Thank you Glenn for this valuable advice, I did not know about the texture pool concept in OSG. I just set the texture pool size to 512 MB via osg::DisplaySettings::instance()->setMaxTexturePoolSize(512 * 1024 * 1024). Performance now feels a lot more consistent, and the frame drops/hitches when tile textures are loaded are much less noticeable, but I still have to do more extensive profiling.
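For reference, this is roughly what my startup code looks like now (a trimmed sketch; "mymap.earth" stands in for my actual earth file, and the osgEarth initialization and error handling are omitted):

#include <osg/DisplaySettings>
#include <osgDB/ReadFile>
#include <osgViewer/Viewer>

int main()
{
    // Cap the OSG texture object pool at 512 MB (the value is in bytes).
    // This has to be set before the graphics context is realized.
    osg::DisplaySettings::instance()->setMaxTexturePoolSize(512u * 1024u * 1024u);

    // The same limit can also be set without code changes through the
    // OSG_TEXTURE_POOL_SIZE environment variable.

    osgViewer::Viewer viewer;
    viewer.setSceneData(osgDB::readNodeFile("mymap.earth")); // placeholder earth file
    return viewer.run();
}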

However, I'm not sure whether setting the max texture pool size really puts a hard upper bound on texture allocation; if the pool has no memory left, are additional textures simply allocated outside the pool, as if it were not being used?

Besides, do you think osgEarth would also benefit from setting the OSG buffer object pool size via osg::DisplaySettings::setMaxBufferObjectPoolSize()?

Do you know of any heuristics or back-of-the-envelope calculations one could use to estimate the amount of GPU memory required for a given set of map data (maybe based on the number of layers, number of LODs, tile resolution, ...)?
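To give an idea of what I mean, something even as crude as this would already help me; the tile size, texel format and resident tile count below are of course just assumptions for my setup, not osgEarth defaults:

#include <cstdio>

int main()
{
    // Very rough, order-of-magnitude estimate of tile texture memory.
    // All of the numbers below are assumptions for my map, not osgEarth defaults.
    const double texelsPerTile  = 256.0 * 256.0; // tile texture resolution
    const double bytesPerTexel  = 4.0;           // RGBA8, uncompressed (DXT would be much smaller)
    const double mipmapOverhead = 4.0 / 3.0;     // a full mipmap chain adds ~33%
    const int    imageLayers    = 2;             // textured image layers in the map
    const int    residentTiles  = 600;           // tiles kept resident across all LODs

    const double bytes = texelsPerTile * bytesPerTexel * mipmapOverhead
                       * imageLayers * residentTiles;

    std::printf("~%.0f MB of tile texture data\n", bytes / (1024.0 * 1024.0));
    // With these numbers: 256*256 * 4 * 4/3 * 2 * 600 = ~400 MB.
    return 0;
}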

Is there a way to configure osgEarth to use GL texture compression? I noticed there is an option called "compressNormalMaps" in the TerrainOptions class, but I don't use normal maps, just offline TMS image/elevation layers without lighting.
Also, this class has an option "expirationThreshold", which I set to 0. The way I interpret this is that it should also reduce GPU memory, since unused tiles are expired immediately instead of being kept around, but I did not notice any effect on memory.

gwaldron

Re: Constrain used GPU memory?

You can set the buffer object pool size too, but the BOs are generally much smaller than the textures, so it won't have as critical an impact. Can't hurt though? I really don't know much at all about the implementation.
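Something along these lines, analogous to the texture pool (untested, and the 64 MB figure is just an example):

#include <osg/DisplaySettings>

// Value is in bytes; like the texture pool size, set it before the
// graphics context is realized.
osg::DisplaySettings::instance()->setMaxBufferObjectPoolSize(64u * 1024u * 1024u);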

To enable texture compression by default, you can set this in your earth file:
<map>
  <options>
    <terrain texture_compression="auto" ... />
  </options>
</map>



Glenn Waldron / Pelican Mapping
Himbeertoni

Re: Constrain used GPU memory?

Thanks for the suggestion. I set the texture compression to "gpu", since "auto" and "cpu" did not work for me; those options caused a crash in the FastDXTProcessor.

I also tweaked the memory sizes for both the texture pool and the buffer object pool, but I have not noticed any difference yet. I will have to study the code to understand how the pooling works.