
iOS: capture the image from the front camera

I'm building an application in which I'd like to capture an image from the front-facing camera, without presenting a capture screen of any kind. I want to take a picture entirely in code, without any user interaction. How would I do this for the front camera?




How to capture an image using the front camera with AVFoundation:

Development warnings:

  • Check your application and image orientation settings carefully (see the sketch just after this list).
  • AVFoundation and its associated frameworks are nasty behemoths and very difficult to understand/implement. I've made my code as lean as possible, but please check out this excellent tutorial for a better explanation (the website is no longer available; open it through archive.org): http://www.benjaminloulier.com/posts/ios4-and-direct-access-to-the-camera
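
As a minimal sketch of the first warning (my addition, not part of the original answer), this is the kind of orientation/mirroring adjustment you would apply to the preview connection once the session below is configured; the configureOrientation(for:) helper name is hypothetical:

import AVFoundation

// Hypothetical helper: align capture orientation/mirroring with the UI.
// Assumes a session and preview layer configured as in the answer below.
func configureOrientation(for previewLayer: AVCaptureVideoPreviewLayer) {
    guard let connection = previewLayer.connection else { return }
    if connection.isVideoOrientationSupported {
        // Match this to the interface orientations your app supports.
        connection.videoOrientation = .landscapeRight
    }
    if connection.isVideoMirroringSupported {
        // Front-camera output is usually expected to be mirrored;
        // opt out of the automatic behavior before setting it manually.
        connection.automaticallyAdjustsVideoMirroring = false
        connection.isVideoMirrored = true
    }
}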

ViewController.h

// Frameworks
#import <CoreVideo/CoreVideo.h>
#import <CoreMedia/CoreMedia.h>
#import <AVFoundation/AVFoundation.h>
#import <UIKit/UIKit.h>

@interface CameraViewController : UIViewController <AVCaptureVideoDataOutputSampleBufferDelegate>

// Camera
@property (weak, nonatomic) IBOutlet UIImageView* cameraImageView;
@property (strong, nonatomic) AVCaptureDevice* device;
@property (strong, nonatomic) AVCaptureSession* captureSession;
@property (strong, nonatomic) AVCaptureVideoPreviewLayer* previewLayer;
@property (strong, nonatomic) UIImage* cameraImage;

@end

ViewController.m

#import "CameraViewController.h" @implementation CameraViewController - (void)viewDidLoad { [super viewDidLoad]; [self setupCamera]; [self setupTimer]; } - (void)setupCamera { NSArray* devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]; for(AVCaptureDevice *device in devices) { if([device position] == AVCaptureDevicePositionFront) self.device = device; } AVCaptureDeviceInput* input = [AVCaptureDeviceInput deviceInputWithDevice:self.device error:nil]; AVCaptureVideoDataOutput* output = [[AVCaptureVideoDataOutput alloc] init]; output.alwaysDiscardsLateVideoFrames = YES; dispatch_queue_t queue; queue = dispatch_queue_create("cameraQueue", NULL); [output setSampleBufferDelegate:self queue:queue]; NSString* key = (NSString *) kCVPixelBufferPixelFormatTypeKey; NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA]; NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key]; [output setVideoSettings:videoSettings]; self.captureSession = [[AVCaptureSession alloc] init]; [self.captureSession addInput:input]; [self.captureSession addOutput:output]; [self.captureSession setSessionPreset:AVCaptureSessionPresetPhoto]; self.previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession]; self.previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill; // CHECK FOR YOUR APP self.previewLayer.frame = CGRectMake(0, 0, self.view.frame.size.height, self.view.frame.size.width); self.previewLayer.orientation = AVCaptureVideoOrientationLandscapeRight; // CHECK FOR YOUR APP [self.view.layer insertSublayer:self.previewLayer atIndex:0]; // Comment-out to hide preview layer [self.captureSession startRunning]; } - (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection { CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); CVPixelBufferLockBaseAddress(imageBuffer,0); uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer); size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer); size_t width = CVPixelBufferGetWidth(imageBuffer); size_t height = CVPixelBufferGetHeight(imageBuffer); CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst); CGImageRef newImage = CGBitmapContextCreateImage(newContext); CGContextRelease(newContext); CGColorSpaceRelease(colorSpace); self.cameraImage = [UIImage imageWithCGImage:newImage scale:1.0f orientation:UIImageOrientationDownMirrored]; CGImageRelease(newImage); CVPixelBufferUnlockBaseAddress(imageBuffer,0); } - (void)setupTimer { NSTimer* cameraTimer = [NSTimer scheduledTimerWithTimeInterval:2.0f target:self selector:@selector(snapshot) userInfo:nil repeats:YES]; } - (void)snapshot { NSLog(@"SNAPSHOT"); self.cameraImageView.image = self.cameraImage; // Comment-out to hide snapshot } @end

Hook this up to a UIViewController with a UIImageView for the snapshot, and it will work! Snapshots are taken programmatically at 2.0-second intervals without any user intervention. Comment out the marked lines to remove the preview layer and the snapshot display, respectively.

Any more questions/comments, please let me know!



I converted the code above from Objective-C to Swift 3, in case anyone is still looking for a solution in 2017.

import UIKit
import AVFoundation

class CameraViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {

    @IBOutlet weak var cameraImageView: UIImageView!

    var device: AVCaptureDevice?
    var captureSession: AVCaptureSession?
    var previewLayer: AVCaptureVideoPreviewLayer?
    var cameraImage: UIImage?

    override func viewDidLoad() {
        super.viewDidLoad()

        setupCamera()
        setupTimer()
    }

    func setupCamera() {
        // Pick the front-facing wide-angle camera
        let discoverySession = AVCaptureDeviceDiscoverySession(deviceTypes: [.builtInWideAngleCamera],
                                                               mediaType: AVMediaTypeVideo,
                                                               position: .front)
        device = discoverySession?.devices[0]

        let input: AVCaptureDeviceInput
        do {
            input = try AVCaptureDeviceInput(device: device)
        } catch {
            return
        }

        let output = AVCaptureVideoDataOutput()
        output.alwaysDiscardsLateVideoFrames = true

        let queue = DispatchQueue(label: "cameraQueue")
        output.setSampleBufferDelegate(self, queue: queue)
        output.videoSettings = [kCVPixelBufferPixelFormatTypeKey as AnyHashable: kCVPixelFormatType_32BGRA]

        captureSession = AVCaptureSession()
        captureSession?.addInput(input)
        captureSession?.addOutput(output)
        captureSession?.sessionPreset = AVCaptureSessionPresetPhoto

        previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        previewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
        previewLayer?.frame = CGRect(x: 0.0, y: 0.0, width: view.frame.width, height: view.frame.height)
        view.layer.insertSublayer(previewLayer!, at: 0)

        captureSession?.startRunning()
    }

    func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
        let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
        CVPixelBufferLockBaseAddress(imageBuffer!, CVPixelBufferLockFlags(rawValue: 0))

        let baseAddress = UnsafeMutableRawPointer(CVPixelBufferGetBaseAddress(imageBuffer!))
        let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer!)
        let width = CVPixelBufferGetWidth(imageBuffer!)
        let height = CVPixelBufferGetHeight(imageBuffer!)

        let colorSpace = CGColorSpaceCreateDeviceRGB()
        let newContext = CGContext(data: baseAddress,
                                   width: width,
                                   height: height,
                                   bitsPerComponent: 8,
                                   bytesPerRow: bytesPerRow,
                                   space: colorSpace,
                                   bitmapInfo: CGBitmapInfo.byteOrder32Little.rawValue | CGImageAlphaInfo.premultipliedFirst.rawValue)
        let newImage = newContext!.makeImage()
        cameraImage = UIImage(cgImage: newImage!)

        CVPixelBufferUnlockBaseAddress(imageBuffer!, CVPixelBufferLockFlags(rawValue: 0))
    }

    func setupTimer() {
        _ = Timer.scheduledTimer(timeInterval: 2.0, target: self, selector: #selector(snapshot), userInfo: nil, repeats: true)
    }

    func snapshot() {
        print("SNAPSHOT")
        cameraImageView.image = cameraImage
    }
}

Also, I found a shorter solution for getting the image out of the CMSampleBuffer:

func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    let myPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
    let myCIimage = CIImage(cvPixelBuffer: myPixelBuffer!)
    let videoImage = UIImage(ciImage: myCIimage)
    cameraImage = videoImage
}
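
One caveat with this shortcut (my note, not the original poster's): a UIImage created with init(ciImage:) has no backing CGImage, so calls such as UIImagePNGRepresentation will return nil for it. If that matters, render the frame through a CIContext first; a minimal sketch, where makeRenderedImage(from:) is a hypothetical helper:

import UIKit
import CoreImage

// Reuse one CIContext; creating it per frame is expensive.
let sharedCIContext = CIContext()

// Hypothetical helper: convert a CIImage-backed frame into a
// CGImage-backed UIImage that behaves like a normal image.
func makeRenderedImage(from ciImage: CIImage) -> UIImage? {
    guard let cgImage = sharedCIContext.createCGImage(ciImage, from: ciImage.extent) else {
        return nil
    }
    return UIImage(cgImage: cgImage)
}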


Converting the code above to Swift 4:

import UIKit
import AVFoundation

class CameraViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {

    @IBOutlet weak var cameraImageView: UIImageView!

    var device: AVCaptureDevice?
    var captureSession: AVCaptureSession?
    var previewLayer: AVCaptureVideoPreviewLayer?
    var cameraImage: UIImage?

    override func viewDidLoad() {
        super.viewDidLoad()

        setupCamera()
        setupTimer()
    }

    func setupCamera() {
        // Pick the front-facing wide-angle camera
        let discoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera],
                                                                mediaType: AVMediaType.video,
                                                                position: .front)
        device = discoverySession.devices[0]

        let input: AVCaptureDeviceInput
        do {
            input = try AVCaptureDeviceInput(device: device!)
        } catch {
            return
        }

        let output = AVCaptureVideoDataOutput()
        output.alwaysDiscardsLateVideoFrames = true

        let queue = DispatchQueue(label: "cameraQueue")
        output.setSampleBufferDelegate(self, queue: queue)
        output.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]

        captureSession = AVCaptureSession()
        captureSession?.addInput(input)
        captureSession?.addOutput(output)
        captureSession?.sessionPreset = AVCaptureSession.Preset.photo

        previewLayer = AVCaptureVideoPreviewLayer(session: captureSession!)
        previewLayer?.videoGravity = AVLayerVideoGravity.resizeAspectFill
        previewLayer?.frame = CGRect(x: 0.0, y: 0.0, width: view.frame.width, height: view.frame.height)
        view.layer.insertSublayer(previewLayer!, at: 0)

        captureSession?.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
        CVPixelBufferLockBaseAddress(imageBuffer!, CVPixelBufferLockFlags(rawValue: 0))

        let baseAddress = UnsafeMutableRawPointer(CVPixelBufferGetBaseAddress(imageBuffer!))
        let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer!)
        let width = CVPixelBufferGetWidth(imageBuffer!)
        let height = CVPixelBufferGetHeight(imageBuffer!)

        let colorSpace = CGColorSpaceCreateDeviceRGB()
        let newContext = CGContext(data: baseAddress,
                                   width: width,
                                   height: height,
                                   bitsPerComponent: 8,
                                   bytesPerRow: bytesPerRow,
                                   space: colorSpace,
                                   bitmapInfo: CGBitmapInfo.byteOrder32Little.rawValue | CGImageAlphaInfo.premultipliedFirst.rawValue)
        let newImage = newContext!.makeImage()
        cameraImage = UIImage(cgImage: newImage!)

        CVPixelBufferUnlockBaseAddress(imageBuffer!, CVPixelBufferLockFlags(rawValue: 0))
    }

    func setupTimer() {
        _ = Timer.scheduledTimer(timeInterval: 2.0, target: self, selector: #selector(snapshot), userInfo: nil, repeats: true)
    }

    @objc func snapshot() {
        print("SNAPSHOT")
        cameraImageView.image = cameraImage
    }
}
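
One note from me (not part of the original answer): on iOS 10 and later, camera access also requires an NSCameraUsageDescription entry in Info.plist, and it's worth gating setup on the authorization status. A minimal sketch using the standard AVCaptureDevice authorization API; the withCameraPermission(_:) helper name is my own:

import AVFoundation

// Assumed usage (my addition): run session setup only once camera
// permission is granted. Requires NSCameraUsageDescription in Info.plist.
func withCameraPermission(_ onAuthorized: @escaping () -> Void) {
    switch AVCaptureDevice.authorizationStatus(for: .video) {
    case .authorized:
        onAuthorized()
    case .notDetermined:
        AVCaptureDevice.requestAccess(for: .video) { granted in
            if granted {
                DispatchQueue.main.async(execute: onAuthorized)
            }
        }
    default:
        break // .denied or .restricted: no camera access
    }
}

For example, call withCameraPermission { self.setupCamera() } from viewDidLoad.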


There is a method called takePicture in the docs for the UIImagePickerController class. It says:

Use this method in conjunction with a custom overlay view to initiate the programmatic capture of a still image. This supports taking more than one picture without leaving the interface, but requires that you hide the default image picker controls.
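
A rough sketch of that approach (my illustration; the class name is hypothetical): present a front-camera picker with the default controls hidden behind a custom overlay, then call takePicture() whenever you want a frame. In practice you may need to delay takePicture() until the camera is ready:

import UIKit

// Assumes a view controller conforming to the two delegate protocols
// that UIImagePickerController requires.
class HiddenPickerViewController: UIViewController,
        UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    let picker = UIImagePickerController()

    func startCamera() {
        guard UIImagePickerController.isSourceTypeAvailable(.camera) else { return }
        picker.sourceType = .camera
        picker.cameraDevice = .front
        picker.showsCameraControls = false   // hide the default capture UI
        picker.cameraOverlayView = UIView()  // your custom overlay goes here
        picker.delegate = self
        present(picker, animated: false) {
            self.picker.takePicture()        // capture programmatically
        }
    }

    func imagePickerController(_ picker: UIImagePickerController,
                               didFinishPickingMediaWithInfo info: [String: Any]) {
        let image = info[UIImagePickerControllerOriginalImage] as? UIImage
        // use `image`...
    }
}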


You'll probably need to use AVFoundation to capture the video stream/images without displaying them. Unlike UIImagePickerController, it doesn't work out of the box. Look at Apple's AVCam sample as an example to get you started.
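
For completeness (my addition, newer than the answers above): on iOS 11 and later, a one-shot, no-UI capture can also be done with AVCapturePhotoOutput. A minimal sketch, with a hypothetical SilentPhotoTaker class:

import AVFoundation
import UIKit

// Hypothetical sketch: grab a single still from the front camera, no UI.
class SilentPhotoTaker: NSObject, AVCapturePhotoCaptureDelegate {

    let session = AVCaptureSession()
    let photoOutput = AVCapturePhotoOutput()
    var completion: ((UIImage?) -> Void)?

    func takeFrontPhoto(completion: @escaping (UIImage?) -> Void) {
        self.completion = completion
        guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                   for: .video, position: .front),
              let input = try? AVCaptureDeviceInput(device: camera),
              session.canAddInput(input), session.canAddOutput(photoOutput) else {
            completion(nil)
            return
        }
        session.addInput(input)
        session.addOutput(photoOutput)
        session.startRunning()
        photoOutput.capturePhoto(with: AVCapturePhotoSettings(), delegate: self)
    }

    // iOS 11+ delegate callback with the finished photo
    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        session.stopRunning()
        completion?(photo.fileDataRepresentation().flatMap(UIImage.init(data:)))
    }
}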