Capturing images with AVFoundation on iOS
I use this code to capture images:
#pragma mark - image capture

// Create and configure a capture session and start it running
- (void)setupCaptureSession
{
    NSError *error = nil;

    // Create the session
    AVCaptureSession *session = [[AVCaptureSession alloc] init];

    // Configure the session to produce lower resolution video frames, if your
    // processing algorithm can cope. We'll specify medium quality for the
    // chosen device.
    session.sessionPreset = AVCaptureSessionPresetMedium;

    // Find a suitable AVCaptureDevice
    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    // Create a device input with the device and add it to the session.
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device
                                                                        error:&error];
    if (!input) {
        NSLog(@"PANIC: no media input");
    }
    [session addInput:input];

    // Create a VideoDataOutput and add it to the session
    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    [session addOutput:output];

    // Configure your output.
    dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
    [output setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);

    // Specify the pixel format
    output.videoSettings = [NSDictionary dictionaryWithObject:
                                [NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                            forKey:(id)kCVPixelBufferPixelFormatTypeKey];

    // If you wish to cap the frame rate to a known value, such as 15 fps, set
    // minFrameDuration.

    // Start the session running to start the flow of data
    [session startRunning];

    // Assign session to an ivar.
    [self setSession:session];
}

// Delegate routine that is called when a sample buffer was written
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    NSLog(@"captureOutput: didOutputSampleBufferFromConnection");

    // Create a UIImage from the sample buffer data
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];

    //< Add your code here that uses the image >
    [self.imageView setImage:image];
    [self.view setNeedsDisplay];
}

// Create a UIImage from sample buffer data
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    NSLog(@"imageFromSampleBuffer: called");

    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);

    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);

    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
        bytesPerRow, colorSpace,
        kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);

    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);

    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    // Free up the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    // Create an image object from the Quartz image
    UIImage *image = [UIImage imageWithCGImage:quartzImage];

    // Release the Quartz image
    CGImageRelease(quartzImage);

    return image;
}

- (void)setSession:(AVCaptureSession *)session
{
    NSLog(@"setting session...");
    self.captureSession = session;
}
The capture code works. But! I need to change a few things:
- show the camera's video stream in a view;
- get an image from it every so often (for example, every 5 seconds).
Please help, how do I do this?
Add the following line
output.minFrameDuration = CMTimeMake(5, 1);
below the comment
// If you wish to cap the frame rate to a known value, such as 15 fps, set // minFrameDuration.
but above
[session startRunning];
EDIT
Use the following code to preview the camera output.
AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
UIView *aView = self.view;
CGRect videoRect = CGRectMake(0.0, 0.0, 320.0, 150.0);
previewLayer.frame = videoRect;  // Assume you want the preview layer to fill the view.
[aView.layer addSublayer:previewLayer];
EDIT 2: OK.
Apple provides a way of setting the minFrameDuration.
So now, use the following code to set the frame duration:
AVCaptureConnection *conn = [output connectionWithMediaType:AVMediaTypeVideo];

if (conn.supportsVideoMinFrameDuration)
    conn.videoMinFrameDuration = CMTimeMake(5, 1);
if (conn.supportsVideoMaxFrameDuration)
    conn.videoMaxFrameDuration = CMTimeMake(5, 1);
Be careful: the callback from AVCaptureOutput is posted on the dispatch queue you specified. I see you're performing UI updates from this callback, and that's wrong. You should perform them only on the main queue. For example:

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    NSLog(@"captureOutput: didOutputSampleBufferFromConnection");

    // Create a UIImage from the sample buffer data
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];

    dispatch_async(dispatch_get_main_queue(), ^{
        //< Add your code here that uses the image >
        [self.imageView setImage:image];
        [self.view setNeedsDisplay];
    });
}
Here is a Swift version of the imageFromSampleBuffer function:

func imageFromSampleBuffer(sampleBuffer: CMSampleBuffer!) -> UIImage {
    let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
    CVPixelBufferLockBaseAddress(imageBuffer, 0)

    let baseAddress = CVPixelBufferGetBaseAddress(imageBuffer)
    let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer)
    let width = CVPixelBufferGetWidth(imageBuffer)
    let height = CVPixelBufferGetHeight(imageBuffer)

    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let bitmapInfo: CGBitmapInfo = [.ByteOrder32Little,
        CGBitmapInfo(rawValue: CGImageAlphaInfo.PremultipliedFirst.rawValue)]
    let context = CGBitmapContextCreate(baseAddress, width, height, 8,
        bytesPerRow, colorSpace, bitmapInfo.rawValue)
    let quartzImage = CGBitmapContextCreateImage(context)

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0)

    let image = UIImage(CGImage: quartzImage!)
    return image
}
The following video settings worked for me:

videoDataOutput = AVCaptureVideoDataOutput()
videoDataOutput?.videoSettings = [kCVPixelBufferPixelFormatTypeKey: Int(kCVPixelFormatType_32BGRA)]
videoDataOutput?.setSampleBufferDelegate(self, queue: queue)