FlexiCapture Verification Station slow when opening/closing tasks


When using the Verification Station it takes a long time to open/close tasks, open batches, or navigate the batches list.


This might be caused by one of the following factors:

  • The FlexiCapture installation is not properly distributed;
  • The total system resources allocated to FlexiCapture are insufficient for the scale of production.


The Scaling guide from FlexiCapture's Performance Guide states the following:

In commercial projects, the Processing Station should never be installed on a computer hosting FlexiCapture servers or Database server, because it hogs up all resources and server performance deteriorates.

In a setup where the components are installed on one machine, FlexiCapture components may "compete for resources" with each other during processing, causing a general slowdown of the whole complex. So the first thing to check in this case is the system distribution: the Processing Server, the Processing Station(s), and the SQL Server should be moved onto separate machines within the environment.

This might also be resolved by adding extra Processing Stations or allocating extra CPU cores to the existing station(s). A Processing Station actively works on at most the number of tasks set by its "Maximum number of processes" option, leaving the rest in a "Pending" status; only when a task completes can the station pick up another one, always staying at or below that cap. Adding capacity therefore increases the number of tasks in processing at any given moment and decreases the "Pending" backlog.

Not all tasks necessarily consume all the dedicated computing power or other system resources. Tasks are executed on a per-core basis: one CPU core can run two FlexiCapture processes (tasks) at a given moment. Some tasks, e.g. recognition and training, are CPU-heavy and use all of a core's processing power (loading the core to 100%), while others are less resource-consuming and do not. This is why overall CPU usage rarely shows 100%: while some cores perform recognition and/or training at full load, the remaining cores work on different, smaller and "easier" tasks that load their respective cores to only 20-60%, so the aggregate figure never rises above acceptable values. CPU usage therefore cannot be the determining factor when analyzing the cause of a bottleneck, and the hypothetical assumption "if the CPU usage is not at 100%, there are sufficient resources for processing" would be incorrect.
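The capacity rules described above (two processes per core, a per-station "Maximum number of processes" cap, and the remainder sitting in "Pending") can be sketched as a back-of-the-envelope estimate. This is an illustrative calculation only, not an ABBYY API; the function names and example numbers are assumptions for demonstration:

```python
# Illustrative capacity estimate based on the rules in the text:
# one CPU core runs two FlexiCapture processes (tasks) at a time,
# and a Processing Station works on at most "Maximum number of
# processes" tasks; everything beyond that stays "Pending".

def max_concurrent_tasks(cores_per_station, stations, max_processes=None):
    """Upper bound on tasks in processing at any given moment."""
    per_station = cores_per_station * 2  # two processes per core
    if max_processes is not None:
        per_station = min(per_station, max_processes)
    return per_station * stations

def pending_tasks(queued, cores_per_station, stations, max_processes=None):
    """Tasks that must wait because all processing slots are busy."""
    capacity = max_concurrent_tasks(cores_per_station, stations, max_processes)
    return max(0, queued - capacity)

# Example: 50 queued tasks, one 4-core station capped at 8 processes:
# only 8 run at once and 42 sit in "Pending". A second identical
# station shrinks the backlog.
print(pending_tasks(50, 4, 1, 8))  # 42
print(pending_tasks(50, 4, 2, 8))  # 34
```

The same arithmetic explains why adding stations or cores reduces the "Pending" count: it raises the number of concurrent processing slots, not the speed of any single task.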

More information on the matter is available in the following documentation articles:
Estimating the number of CPU cores required by the Application Server
How to calculate the number of Processing Stations
How to tune up the performance of a Processing Station
Performance Guide

Enabling the "Wait for all documents of a batch" parameter for the workflow stages might also help mitigate this issue, since it decreases the number of tasks created within FlexiCapture. Each "split" batch creates additional tasks that fill up the processing queue, potentially causing the issue at hand.
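The effect of that parameter on queue length can be illustrated with a toy calculation. The numbers and the one-task-per-split model below are assumptions for illustration, not measured FlexiCapture behavior:

```python
# Hypothetical illustration: if a workflow stage creates one task per
# batch "split", the task count grows with the number of splits;
# waiting for all documents of a batch collapses this to one task
# per complete batch at that stage.

def tasks_created(batches, splits_per_batch, wait_for_all_documents):
    if wait_for_all_documents:
        return batches                      # one task per complete batch
    return batches * splits_per_batch       # one task per split

print(tasks_created(10, 5, wait_for_all_documents=False))  # 50
print(tasks_created(10, 5, wait_for_all_documents=True))   # 10
```

Under this toy model, 10 batches split five ways produce 50 queued tasks without the parameter and only 10 with it, which is why fewer, larger tasks can ease pressure on the queue.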
