Common properties of AI components
Model
AI components that use locally hosted models require you to specify the model you want to use. The choice of model determines the capabilities of the component.
Models are not distributed as part of the core product and need to be downloaded and deployed to CloverDX Server by the admin before use. You can provide your own model (some configuration required; for experienced users) or choose from a curated set of free, ready-to-use models available in our online CloverDX Marketplace (recommended).
Models from the CloverDX Marketplace are provided as libraries you can install on your CloverDX Server. Once installed, the model is available to the components via the Server model property; no further configuration is required.
- Server model: (Recommended) In a Server project, select one of the pre-configured models available from libraries installed on the Server. Go to the CloverDX Marketplace to download the models you need and install them on the Server.
- Classification model directory: (for experienced users) The model can also be specified as the URI of its directory. For details, see Machine Learning Models.
Models downloaded from the CloverDX Marketplace and selected via the Server model property automatically configure the following model properties.
Model name is a read-only property that shows the name from the model's configuration files.
Device determines whether the model runs on the processor (CPU) or the graphics card (GPU). Processing on a GPU is much faster, but requires specialized hardware.
Model arguments, Tokenizer arguments and Translator arguments let you modify model behavior. They are model-dependent.
Input/output parameters
Fields to classify specifies the fields to be analyzed.
Token/text classes and thresholds let you define the classes whose scores are computed. The threshold specifies the minimum score at which a class is included in the output.
Classification output field sets the output field that will store the analysis results. The field must be of variant type. If it already contains an analysis, the analyses are merged, so you can chain several AI components and use their combined output.
Batch size sets the number of records the model processes in a single batch.
Error handling
Token overflow policy determines what happens when an input field value cannot be encoded because it exceeds the model-specific maximum length. The strict policy causes the component to fail, while the lenient policy only logs a warning and truncates the input.
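For illustration, the two policies behave roughly as in the following sketch. The token limit and the whitespace tokenizer are hypothetical stand-ins, not the CloverDX implementation:

```java
import java.util.ArrayList;
import java.util.List;

public class TokenOverflowDemo {

    static final int MAX_TOKENS = 512; // model-specific limit (assumed value)

    static List<String> encode(String value, boolean strict) {
        List<String> tokens = new ArrayList<>(List.of(value.split("\\s+")));
        if (tokens.size() > MAX_TOKENS) {
            if (strict) {
                // strict: the component fails
                throw new IllegalStateException(
                        "Input exceeds the maximum length of " + MAX_TOKENS + " tokens");
            }
            // lenient: log a warning and truncate to the maximum length
            System.err.println("Warning: input truncated to " + MAX_TOKENS + " tokens");
            tokens = tokens.subList(0, MAX_TOKENS);
        }
        return tokens;
    }

    public static void main(String[] args) {
        System.out.println(encode("a short input", false).size()); // prints 3
    }
}
```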
Advanced
Transform lets you control which units are used to generate output records: a separate record can be created for each input record, for each sequence-class pair, or both.
Cache
DJL_CACHE_DIR is a system property or environment variable you can set to change the global cache location. Changing this variable changes the location of both model files and engine native files.
ENGINE_CACHE_DIR is a system property or environment variable you can set to change the engine cache location.
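For illustration, both locations can also be set programmatically as JVM system properties before any model is loaded; the paths below are placeholders, and in practice you would typically pass them as -D options or environment variables at server startup:

```java
public class CacheLocationSetup {
    public static void main(String[] args) {
        // Relocates the whole cache (model files and engine native files).
        System.setProperty("DJL_CACHE_DIR", "/opt/cloverdx/ai-cache");
        // Relocates only the engine native files.
        System.setProperty("ENGINE_CACHE_DIR", "/opt/cloverdx/engine-cache");
    }
}
```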
AI Component Execution and Model Caching
This feature is designed to prevent native memory leaks that may occur when AI models are repeatedly loaded and unloaded. By caching models for the lifetime of the Java process and controlling prediction execution via a dedicated thread pool, the system ensures stable and predictable memory usage during inference.
Model Caching
Once a model is loaded and used by an AI component, it is cached in memory and remains there for the lifetime of the Java process (i.e., it is never unloaded).
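The pattern is essentially a process-lifetime cache. A minimal sketch, assuming a hypothetical Model type and loader rather than the CloverDX internals:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public final class ModelCache {

    interface Model { }  // hypothetical stand-in for a loaded model

    private static final ConcurrentMap<String, Model> CACHE = new ConcurrentHashMap<>();

    private ModelCache() { }

    public static Model get(String modelId) {
        // The model is loaded at most once per JVM and never removed, so the
        // native resources behind it are allocated exactly once.
        return CACHE.computeIfAbsent(modelId, ModelCache::loadModel);
    }

    private static Model loadModel(String modelId) {
        return new Model() { };  // real code would load the model from disk
    }
}
```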
Prediction Thread Pool
Inference tasks are executed in a fixed-size thread pool. This pool isolates prediction work to dedicated threads. The default pool size is 4, which limits the number of AI components that can run concurrently.
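A minimal sketch of the pattern, assuming a hypothetical predict() task rather than the CloverDX internals:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PredictionPoolDemo {

    // Inference runs only on the pool's 4 threads, so at most 4 predictions
    // execute concurrently regardless of how many components submit work.
    private static final ExecutorService PREDICTION_POOL = Executors.newFixedThreadPool(4);

    public static void main(String[] args) throws Exception {
        Future<double[]> result = PREDICTION_POOL.submit(PredictionPoolDemo::predict);
        System.out.println(result.get().length);
        PREDICTION_POOL.shutdown();
    }

    private static double[] predict() {
        return new double[]{0.92, 0.08}; // stand-in for a model inference call
    }
}
```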
Configuration via System Properties
Default behaviors can be customized using system properties.
Key | Description | Default Value
---|---|---
cloverdx.ai.caching | When true, models are cached and remain in memory for the duration of the JVM to avoid native memory leaks and improve performance. | true
 | Enables a dedicated thread pool for running predictions. If false, predictions run in the component thread. | true
 | Maximum number of threads in the prediction thread pool, limiting how many AI components can run in parallel. | 4
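As an illustration of how such a switch is typically resolved, a component might read the property and fall back to its documented default when the property is unset. This is a sketch, not the actual CloverDX code:

```java
public class AiPropertyDemo {
    public static void main(String[] args) {
        // Falls back to the documented default of true when the property is unset.
        boolean cachingEnabled = Boolean.parseBoolean(
                System.getProperty("cloverdx.ai.caching", "true"));
        System.out.println("Model caching enabled: " + cachingEnabled);
    }
}
```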