I have noticed that when I manually draw a region for a non-found field, the recognition accuracy can be much lower than if the same field were found automatically.
Is it because full-text recognition is used in this case?
In Vantage, when a skill has a non-found field and you click on the value on the image, Vantage copies the corresponding part of the full-text OCR result into the field. The full-text OCR may be of lower quality because no field-specific OCR settings are applied to it.
On the other hand, when a field is found automatically (i.e., the skill is trained), the text in the field is an OCR result that takes the field-type-specific settings into account, e.g., Amount of Money, Date, etc.
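To illustrate why this matters, here is a small conceptual sketch (not Vantage's actual API, and the candidate data is invented): a field-type constraint such as "Amount of Money" can restrict the recognition alphabet to digits and separators, which lets the recognizer reject look-alike characters such as the letter "O" versus the digit "0" that a generic full-text pass might prefer.

```python
# Hypothetical per-character OCR candidates with confidences, as a generic
# engine might produce them for an image containing the text "1,024.50".
candidates = [
    [("1", 0.90), ("l", 0.85)],
    [(",", 0.95)],
    [("O", 0.60), ("0", 0.55)],   # generic OCR slightly prefers the letter O
    [("2", 0.99)],
    [("4", 0.97)],
    [(".", 0.96)],
    [("5", 0.98)],
    [("O", 0.70), ("0", 0.65)],   # same confusion at the end of the amount
]

# Field-type constraint: an amount field only contains digits and separators.
AMOUNT_ALPHABET = set("0123456789.,")

def recognize(cands, alphabet=None):
    """Pick the top candidate per position, optionally restricted to an alphabet."""
    out = []
    for position in cands:
        allowed = [c for c in position if alphabet is None or c[0] in alphabet]
        # Fall back to the unconstrained best if the constraint filters everything.
        best = max(allowed or position, key=lambda c: c[1])
        out.append(best[0])
    return "".join(out)

print(recognize(candidates))                   # generic full-text OCR -> "1,O24.5O"
print(recognize(candidates, AMOUNT_ALPHABET))  # field-specific OCR    -> "1,024.50"
```

The same per-character evidence yields a wrong value without the constraint and the correct amount with it, which mirrors the accuracy gap between a manually drawn region (full-text OCR) and a trained, automatically found field.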