iOS camera face tracking (Swift 3, Xcode 8)

I am trying to make a simple camera app in which the front camera can detect faces. This should be straightforward:
Create a CameraView class that inherits from UIImageView and place it in the UI. Make sure it implements AVCaptureVideoDataOutputSampleBufferDelegate so it can process frames from the camera in real time:
class CameraView: UIImageView, AVCaptureVideoDataOutputSampleBufferDelegate
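The snippets that follow reference several stored properties and imports that the question doesn't show. A plausible set of declarations, assuming the optional types implied by the ?-chaining used later (these names are inferred from the code below, not from any shown declaration):

// At the top of the file
import UIKit
import AVFoundation

// Inside class CameraView: stored properties assumed from their usage below
var camera: AVCaptureDevice?
var session: AVCaptureSession?
var input: AVCaptureDeviceInput?
var output: AVCaptureVideoDataOutput?
var outputQueue: DispatchQueue?
var previewLayer: AVCaptureVideoPreviewLayer?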
In the handleCamera function, called when the CameraView instance is created, set up an AVCaptureSession and add the front camera as its input:
override init(frame: CGRect) {
    super.init(frame: frame)
    handleCamera()
}

// Required when a UIView subclass overrides init(frame:); route
// storyboard/nib initialization through the same camera setup
required init?(coder aDecoder: NSCoder) {
    super.init(coder: aDecoder)
    handleCamera()
}

func handleCamera() {
    camera = AVCaptureDevice.defaultDevice(withDeviceType: .builtInWideAngleCamera,
                                           mediaType: AVMediaTypeVideo,
                                           position: .front)
    session = AVCaptureSession()

    // Set the recovered camera as an input device for the capture session
    do {
        try input = AVCaptureDeviceInput(device: camera)
    } catch _ as NSError {
        print("ERROR: Front camera can't be used as input")
        input = nil
    }

    // Add the input from the camera to the capture session
    if session?.canAddInput(input) == true {
        session?.addInput(input)
    }
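One thing the question doesn't mention: on iOS 10 the app is terminated as soon as the capture device is accessed unless the app's Info.plist contains an NSCameraUsageDescription entry. It can also help to verify authorization explicitly; a minimal sketch:

// iOS 10 kills the app on camera access if Info.plist lacks
// NSCameraUsageDescription; request/verify permission up front
AVCaptureDevice.requestAccess(forMediaType: AVMediaTypeVideo) { granted in
    if !granted {
        print("ERROR: Camera access was not granted")
    }
}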
Create the output, along with a serial output queue on which the data is delivered to the AVCaptureVideoDataOutputSampleBufferDelegate (in this case, the class itself). Add the output to the session:
output = AVCaptureVideoDataOutput()
output?.alwaysDiscardsLateVideoFrames = true
outputQueue = DispatchQueue(label: "outputQueue")
output?.setSampleBufferDelegate(self, queue: outputQueue)

// Add the front camera output to the session for use and modification
if session?.canAddOutput(output) == true {
    session?.addOutput(output)
} else {
    // Front camera can't be used as output; handle the error
    print("ERROR: Output not viable")
}
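Not something the question does, but many CIDetector pipelines also pin the output pixel format so Core Image receives frames in a predictable layout. An optional addition (32BGRA is a common choice, not a requirement):

// Optionally request BGRA frames; Core Image handles this layout predictably
output?.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)]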
Set up the camera preview view and run the session:
// Set up the camera preview with the session input
previewLayer = AVCaptureVideoPreviewLayer(session: session)
previewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
previewLayer?.connection.videoOrientation = AVCaptureVideoOrientation.portrait
previewLayer?.frame = self.bounds
self.layer.addSublayer(previewLayer!)

// Process the camera feed and run it onto the preview
session?.startRunning()
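A side note the question doesn't raise: startRunning() is a blocking call, and Apple recommends invoking it off the main queue so the UI doesn't stall while the session starts. A minimal sketch:

// startRunning() blocks until the session is live; keep it off the main thread
DispatchQueue.global(qos: .userInitiated).async { [weak self] in
    self?.session?.startRunning()
}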
In the captureOutput function, called through the delegate, convert the received sample buffer to a CIImage in order to detect faces, and give feedback when a face is found:
func captureOutput(_ captureOutput: AVCaptureOutput!, didDrop sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
    let cameraImage = CIImage(cvPixelBuffer: pixelBuffer!)

    let accuracy = [CIDetectorAccuracy: CIDetectorAccuracyHigh]
    let faceDetector = CIDetector(ofType: CIDetectorTypeFace, context: nil, options: accuracy)
    let faces = faceDetector?.features(in: cameraImage)

    for face in faces as! [CIFaceFeature] {
        print("Found bounds are \(face.bounds)")

        let faceBox = UIView(frame: face.bounds)
        faceBox.layer.borderWidth = 3
        faceBox.layer.borderColor = UIColor.red.cgColor
        faceBox.backgroundColor = UIColor.clear
        self.addSubview(faceBox)

        if face.hasLeftEyePosition {
            print("Left eye bounds are \(face.leftEyePosition)")
        }
        if face.hasRightEyePosition {
            print("Right eye bounds are \(face.rightEyePosition)")
        }
    }
}
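Separately from the detection problem itself, two details in this loop are worth flagging: UIKit calls such as addSubview must run on the main thread rather than on outputQueue, and the faceBox views are never removed, so they pile up frame after frame. A sketch of the same loop with both addressed (the tag value 1001 is an arbitrary marker chosen here for illustration, and face.bounds would still need converting from CIImage coordinates into the view's coordinate space):

DispatchQueue.main.async { [weak self] in
    guard let strongSelf = self else { return }

    // Remove boxes drawn for the previous frame before adding new ones
    strongSelf.subviews.filter { $0.tag == 1001 }.forEach { $0.removeFromSuperview() }

    for face in faces as! [CIFaceFeature] {
        // NOTE: face.bounds is in CIImage coordinates and still needs to be
        // mapped into this view's coordinate space before it lines up on screen
        let faceBox = UIView(frame: face.bounds)
        faceBox.tag = 1001
        faceBox.layer.borderWidth = 3
        faceBox.layer.borderColor = UIColor.red.cgColor
        faceBox.backgroundColor = UIColor.clear
        strongSelf.addSubview(faceBox)
    }
}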
My problem: I can get the app to run, but despite the mass of different code from all over the internet that I have tried, I have never been able to get captureOutput to detect a face. Either the app never enters the function, or it crashes because of a variable that doesn't work, most often with the sampleBuffer variable being nil. What am I doing wrong?
I actually found the problem with help from an iOS lab and forgot to update the question. That was indeed all that was missing. Thanks for passing by; I hope this helps someone else. – KazToozs
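For anyone else who lands here: the detail most often pointed out about this exact code, and consistent with the symptoms above, is the delegate method itself. captureOutput(_:didDrop:from:) only fires for frames the session discards, and those sample buffers carry no image data; in Swift 3, delivered frames arrive in captureOutput(_:didOutputSampleBuffer:from:) instead. A minimal sketch of the corrected callback:

// Swift 3 signature for frames that are actually delivered.
// The didDrop variant above is only called for frames the session discards.
func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
    let cameraImage = CIImage(cvPixelBuffer: pixelBuffer)
    // ... run the same CIDetector face detection as in the question on cameraImage ...
}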