Enterprise Postgres 18 Knowledge Data Management Feature User's Guide

5.2.4 Resource Control

With this feature, the inference server loads the model into memory and uses CPU resources to perform inference. Because the database and the inference server share the resources of the same machine, consider the impact of their combined memory, CPU, and disk usage on the main operations, and control resources appropriately. In addition, consider the points below for each resource so that the resources used by this feature do not run short.
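One common way to keep the inference server from starving the database is to cap its CPU and memory at the operating-system level. The fragment below is a minimal sketch using standard systemd resource-control directives; the unit name "tritonserver.service" and the limit values are assumptions to be adapted to your environment.

```ini
# /etc/systemd/system/tritonserver.service.d/limits.conf
# Illustrative systemd drop-in capping the inference server's resources.
# CPUQuota=200% allows up to two full CPU cores; MemoryMax hard-limits RSS.
[Service]
CPUQuota=200%
MemoryMax=8G
```

After placing the drop-in, reload systemd (`systemctl daemon-reload`) and restart the service for the limits to take effect. Equivalent limits can also be applied via container runtime options when the inference server runs in a container.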

The memory used by this feature can be controlled with the following parameters.
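As general context (these are standard PostgreSQL server parameters, not the feature-specific parameters referenced above), the database side's memory footprint is bounded by settings such as the following; the values shown are examples only.

```ini
# postgresql.conf (illustrative values)
shared_buffers = 2GB          # shared memory used for caching data pages
work_mem = 16MB               # memory per sort/hash operation, per backend
maintenance_work_mem = 256MB  # memory for maintenance operations such as VACUUM
```

When sizing these, account for the memory the inference server needs to hold its loaded models, so that the two workloads together fit within physical memory.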

For the resources used by Triton Inference Server, check and follow the official Triton Inference Server documentation.

https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/onnxruntime_backend/README.html