Instead of using AVCaptureSession, I am creating a CMSampleBuffer from an array of UIImage and passing it to the textCaptureService.add(sample) method to have the text extracted. Unfortunately, textCaptureService doesn't do anything: I can see that the engine and the service are created successfully, but the service never processes the buffer I pass in. Is there any way to force the service to start after adding the sample buffer? Or is the only way to make the service process the buffer to call add() from the captureOutput method of AVCaptureVideoDataOutputSampleBufferDelegate? Below is the snippet of my code where I create the buffer manually and add it to the service:
// timingInfo (CMSampleTimingInfo), pixBuf (CVPixelBuffer made from a UIImage),
// videoInfo (CMVideoFormatDescription?) and sample (CMSampleBuffer?) are declared earlier.
for i in 1...10 {
    // Simulate a 1 fps stream: frame i is presented at second i (timescale 600).
    timingInfo.presentationTimeStamp = CMTimeMake(Int64(600 * i), 600)
    timingInfo.duration = CMTimeMake(1, 1)

    // Wrap the pixel buffer in a video format description...
    let err = CMVideoFormatDescriptionCreateForImageBuffer(kCFAllocatorDefault, pixBuf!, &videoInfo)
    print("CMVideoFormatDescriptionCreateForImageBuffer status: \(err)")
    print("Format description: \(videoInfo.debugDescription)")

    // ...then build a ready-to-use sample buffer from it.
    let ret = CMSampleBufferCreateReadyWithImageBuffer(kCFAllocatorDefault, pixBuf!, videoInfo!, &timingInfo, &sample)
    print("CMSampleBufferCreateReadyWithImageBuffer status: \(ret)")
    print("Sample buffer: \(sample.debugDescription)")

    self.textCaptureService?.add(sample)
}
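For reference, this is roughly how I build pixBuf from a UIImage (a minimal sketch; the helper name makePixelBuffer(from:) is my own, not part of any SDK):

import UIKit
import CoreVideo

func makePixelBuffer(from image: UIImage) -> CVPixelBuffer? {
    guard let cgImage = image.cgImage else { return nil }
    let width = cgImage.width
    let height = cgImage.height

    // Ask for a BGRA buffer that Core Graphics can draw into.
    let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
                 kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
    var buffer: CVPixelBuffer?
    guard CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                              kCVPixelFormatType_32BGRA, attrs, &buffer) == kCVReturnSuccess,
          let pixelBuffer = buffer else { return nil }

    CVPixelBufferLockBaseAddress(pixelBuffer, [])
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }

    // Render the image into the pixel buffer's backing memory.
    guard let context = CGContext(data: CVPixelBufferGetBaseAddress(pixelBuffer),
                                  width: width, height: height, bitsPerComponent: 8,
                                  bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue
                                            | CGBitmapInfo.byteOrder32Little.rawValue) else { return nil }
    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    return pixelBuffer
}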
Comments
As described in the developer's documentation for the free mobile OCR SDK, you need to implement a delegate that adopts the AVCaptureVideoDataOutputSampleBufferDelegate protocol. Then instantiate an AVCaptureSession object, add a video input and a video output, and set the video output's delegate. When the delegate receives a video frame via the captureOutput:didOutputSampleBuffer:fromConnection: method, pass this frame on to the text capture service by calling the addSampleBuffer: method. A minimal sketch of that wiring is shown below.
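This sketch assumes a text capture service that has already been created from the engine; the RTRTextCaptureService type name follows the ABBYY Real-Time Recognition SDK headers, and the session setup itself is standard AVFoundation:

import AVFoundation

class CaptureController: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    var textCaptureService: RTRTextCaptureService?  // created from the engine elsewhere

    func start() {
        // Add the camera as the session's video input.
        guard let camera = AVCaptureDevice.default(for: .video),
              let input = try? AVCaptureDeviceInput(device: camera),
              session.canAddInput(input) else { return }
        session.addInput(input)

        // Add a video data output and make ourselves its delegate.
        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "video.capture"))
        guard session.canAddOutput(output) else { return }
        session.addOutput(output)

        session.startRunning()
    }

    // Called once per captured frame; forward each frame to the service.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        textCaptureService?.add(sampleBuffer)
    }
}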
If you would like to process a still image, please take a look at a similar thread.