2017-04-26

I am implementing the example given in the google-vision face tracker, and I want to crop the detected face from the CameraSource. My MyFaceDetector class:

public class MyFaceDetector extends Detector<Face> { 
    private final Detector<Face> mDelegate; 

    MyFaceDetector(Detector<Face> delegate) { 
        mDelegate = delegate; 
    } 

    // Delegate detection to the wrapped FaceDetector. 
    @Override 
    public SparseArray<Face> detect(Frame frame) { 
        return mDelegate.detect(frame); 
    } 

    @Override 
    public boolean isOperational() { 
        return mDelegate.isOperational(); 
    } 

    @Override 
    public boolean setFocus(int id) { 
        return mDelegate.setFocus(id); 
    } 
} 

The FaceTrackerActivity class:

private void createCameraSource() { 

    imageView = (ImageView) findViewById(R.id.face); 

    FaceDetector faceDetector = new FaceDetector.Builder(this).build(); 
    myFaceDetector = new MyFaceDetector(faceDetector); 
    myFaceDetector.setProcessor(new MultiProcessor.Builder<>(new GraphicFaceTrackerFactory()) 
      .build()); 
    mCameraSource = new CameraSource.Builder(this, myFaceDetector) 
      .setRequestedPreviewSize(640, 480) 
      .setFacing(CameraSource.CAMERA_FACING_FRONT) 
      .setRequestedFps(60.0f) 
      .build(); 

    if (!myFaceDetector.isOperational()) { 
        Log.w(TAG, "Face detector dependencies are not yet available."); 
    } 
} 

I need to crop the face and put it into the ImageView. I cannot achieve this in my custom detector: frame.getBitmap() always returns null inside detect(Frame frame). How can I accomplish this?


Have a look at https://stackoverflow.com/questions/32299947/mobile-vision-api-concatenate-new-detector-object-to-continue-frame-processing/32314136#32314136 – George

Answers


frame.getBitmap() will only return a value if the frame was originally created from a bitmap. CameraSource supplies image information as ByteBuffers rather than bitmaps, so that is the image information that is available.

frame.getGrayscaleImageData() will return the image data.

frame.getMetadata() will return metadata such as the image dimensions and the image format.
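To illustrate what setImageData() expects for the NV21 format used below: NV21 stores a full-resolution Y (luma) plane followed by an interleaved, 2x2-subsampled V/U plane, so the buffer is width*height*3/2 bytes for even dimensions. A minimal sketch in plain Java (the Nv21 class and bufferSize helper are hypothetical names, not part of any Android API):

```java
// Hypothetical helper: expected byte size of an NV21 buffer.
public class Nv21 {
    public static int bufferSize(int width, int height) {
        int ySize = width * height;                              // full-resolution luma plane
        int vuSize = 2 * ((width + 1) / 2) * ((height + 1) / 2); // interleaved V/U, 2x2 subsampled
        return ySize + vuSize;
    }

    public static void main(String[] args) {
        // The 640x480 preview requested above needs 460800 bytes.
        System.out.println(Nv21.bufferSize(640, 480)); // 460800
    }
}
```

This is also a quick sanity check if you ever build a Frame yourself: a ByteBuffer of the wrong size for the stated dimensions is a common source of garbled or rejected frames.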


You are right! I will post the code below in case anyone else is looking for something similar. – Andro


This goes right in CameraSource.java:

Frame outputFrame = new Frame.Builder() 
        .setImageData(mPendingFrameData, mPreviewSize.getWidth(), 
                mPreviewSize.getHeight(), ImageFormat.NV21) 
        .setId(mPendingFrameId) 
        .setTimestampMillis(mPendingTimeMillis) 
        .setRotation(mRotation) 
        .build(); 

int w = outputFrame.getMetadata().getWidth(); 
int h = outputFrame.getMetadata().getHeight(); 
SparseArray<Face> detectedFaces = mDetector.detect(outputFrame); 
// Fallback: a blank bitmap for frames where no face is detected. 
Bitmap bitmap = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888); 

if (detectedFaces.size() > 0) { 
    // The NV21 image data backing the frame. 
    ByteBuffer byteBufferRaw = outputFrame.getGrayscaleImageData(); 
    byte[] byteBuffer = byteBufferRaw.array(); 
    YuvImage yuvimage = new YuvImage(byteBuffer, ImageFormat.NV21, w, h, null); 

    // Bounding box of the first detected face. 
    Face face = detectedFaces.valueAt(0); 
    int left = (int) face.getPosition().x; 
    int top = (int) face.getPosition().y; 
    int right = (int) face.getWidth() + left; 
    int bottom = (int) face.getHeight() + top; 

    // Compress just the face region to JPEG, then decode it into a bitmap. 
    ByteArrayOutputStream baos = new ByteArrayOutputStream(); 
    yuvimage.compressToJpeg(new Rect(left, top, right, bottom), 80, baos); 
    byte[] jpegArray = baos.toByteArray(); 
    bitmap = BitmapFactory.decodeByteArray(jpegArray, 0, jpegArray.length); 
} 
((FaceTrackerActivity) mContext).setBitmapToImageView(bitmap);
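One caveat with the snippet above: the bounding box reported by the detector can extend past the edges of the frame (negative left/top, or right/bottom beyond the preview size), and an out-of-range Rect passed to compressToJpeg() can fail. A small clamping sketch in plain Java that could be applied before building the Rect (the CropBounds class and clamp helper are hypothetical names, not from any library):

```java
// Hypothetical helper: restrict a face bounding box to the frame dimensions.
public class CropBounds {
    // Returns {left, top, right, bottom} clamped into [0, w] x [0, h].
    public static int[] clamp(int left, int top, int right, int bottom, int w, int h) {
        int l = Math.max(0, Math.min(left, w));
        int t = Math.max(0, Math.min(top, h));
        int r = Math.max(l, Math.min(right, w));  // keep r >= l
        int b = Math.max(t, Math.min(bottom, h)); // keep b >= t
        return new int[] {l, t, r, b};
    }

    public static void main(String[] args) {
        // A face partially outside a 640x480 frame, near the top-right corner.
        int[] rect = CropBounds.clamp(600, -10, 700, 200, 640, 480);
        System.out.println(java.util.Arrays.toString(rect));
    }
}
```

With the clamped values, the Rect handed to yuvimage.compressToJpeg() is guaranteed to lie inside the image, so the crop degrades gracefully instead of failing when a face sits at the frame border.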