How to detect the bold or specific color fonts while fetching texts through OCR in iOS?
I want to implement an iOS feature that extracts questions and answers from a photo taken with the phone's camera. For example, http://prntscr.com/kvnbca — I need to fetch the questions and answers from that image, and the main focus is detecting the bold answer. So I would like to know: is there any way to detect bold or red-colored fonts through OCR while scanning and extracting the text?
Sorry for the possible delay.
For your usage scenario we recommend using the documentConversion profile and exporting the recognition results to XML with the xml:writeFormatting property enabled, for example:
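As a minimal sketch, the recommended settings above can be passed as query parameters of a processImage request; the endpoint and parameter names follow the Cloud OCR SDK Web API, and authentication (application ID/password via HTTP Basic auth) is omitted here:

```python
from urllib.parse import urlencode

# Recognition settings recommended above: the documentConversion
# profile plus XML export with formatting information enabled.
params = {
    "profile": "documentConversion",
    "exportFormat": "xml",
    "xml:writeFormatting": "true",
}

# The image bytes go in the body of a POST request to processImage;
# the settings travel as URL query parameters.
url = "https://cloud.ocrsdk.com/processImage?" + urlencode(params)
print(url)
```

The image itself is sent as the POST body, not as a parameter; only the recognition settings appear in the URL.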
The xml:writeFormatting property specifies whether paragraph and character styles should be written to the output XML file. Bold text should also be detected, provided the image quality is sufficient for OCR (please see the Source Image Recommendations article for details).
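To illustrate how the bold answers could then be picked out of the exported XML, here is a small sketch; the sample fragment and the exact element and attribute names (formatting, bold) are assumptions based on the FineReader XML schema and may differ in your schema version:

```python
import xml.etree.ElementTree as ET

# Illustrative fragment of the kind of XML the service returns when
# xml:writeFormatting is enabled (structure is an assumption, see above).
sample = """
<document>
  <page>
    <par><formatting bold="false">What is 2 + 2?</formatting></par>
    <par><formatting bold="true">4</formatting></par>
  </page>
</document>
"""

root = ET.fromstring(sample)
# Collect the text of every span whose formatting is marked bold.
bold_spans = [
    f.text
    for f in root.iter("formatting")
    if f.get("bold") in ("true", "1")
]
print(bold_spans)  # → ['4']
```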
Your screenshot is not processed as expected because of its low resolution (only 96 dpi). If we increase the resolution to just 200 dpi, all of the text is extracted accurately, including the bold answers. Please see the attachments.
It's okay. Thank you for looking into my question and considering my request. I really appreciate your solution.
I have tried this solution, but I cannot figure out where the image is supposed to be passed. Is it passed as a parameter? Is 'processImage' the parameter name, or something else? Can you give me a hint?
Let me explain how Cloud OCR SDK works. It is an online OCR service that provides a Web API; it is a solution for developers rather than ready-to-use software, so some programming skills are needed. As for the programming aspects, Cloud OCR SDK follows the REST software architecture principles and is accessed through the API by HTTP or HTTPS requests. It works as follows:
You upload your images together with the recognition parameters to our server. Note that the images must be loaded directly from the computer: the image is transmitted in the request body of the corresponding processing method (in your case, the processImage method), and the recognition settings are specified as parameters of that method.
Your images are then processed. You can check the status of your task using the getTaskStatus method.
Once the status of your task is "Completed", the service response contains links (the resultUrl, resultUrl2 and resultUrl3 attributes) to the output files. Using these URLs you can download the recognition results in the corresponding export formats.
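The polling step above can be sketched as follows; the `<response>`/`<task>` structure with the status and resultUrl attributes follows the Cloud OCR SDK task-status response, while the sample body and the helper name are illustrative:

```python
import xml.etree.ElementTree as ET

def parse_task_status(xml_text):
    """Extract the task status and result URLs from a getTaskStatus
    (or processImage) response body."""
    task = ET.fromstring(xml_text).find("task")
    status = task.get("status")
    urls = [task.get(k) for k in ("resultUrl", "resultUrl2", "resultUrl3")]
    return status, [u for u in urls if u]

# Illustrative response body for a finished task.
sample = ('<response><task id="123" status="Completed" '
          'resultUrl="https://example.com/result.xml"/></response>')

status, urls = parse_task_status(sample)
print(status, urls)  # → Completed ['https://example.com/result.xml']
```

In practice you would POST the image to processImage, poll getTaskStatus with the returned task id until the status is "Completed", and then GET each result URL.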
Also, to help you get started, we recommend studying the following articles published on our site:
Hope this info is helpful.