Building a "Minimum Viable Product" (MVP) with Face recognition and AR in Android @ Droidcon London 2017

Droidcon London 2017 talk, given on the 27th of October. An introduction to a use case for building a "Minimum Viable Product" (MVP), covering machine learning (face recognition) on Android devices with augmented reality (AR). You will learn how to plan these kinds of projects, the process to follow, and future work for them, addressing the "how", the "what" and the "why".

Raul Hernandez Lopez

October 27, 2017
Transcript

  1. Raúl Hernández López, Software Engineer focused on Android. Building a “Minimum Viable Product” (MVP) with Face Recognition and AR in Android. @RaulHernandezL, droidcon London 2017, raulh82vlc, raul.h82, #droidconUK
  2. Brainstorm: ...also a smoking pipe, and why not a smart moustache and a monocle to impress!
  3. How do I start to build my detection system? Choose a machine learning library.
  4. Face detection. Detection model (cascade classifier): Haar Cascade vs Local Binary Pattern Histograms (LBPH). https://github.com/raulh82vlc/Image-Detection-Samples
  5-6. OpenCV face detection with a Haar Cascade classifier on the grayscale image (opencv.domain.FDInteractorImpl):

    @NonNull
    private MatOfRect startDetection() {
        if (absoluteFaceSize == 0) {
            int height = matrixGray.rows();
            if (Math.round(height * RELATIVE_FACE_SIZE) > 0) {
                absoluteFaceSize = Math.round(height * RELATIVE_FACE_SIZE);
            }
        }
        MatOfRect faces = new MatOfRect();
        if (detectorFace != null && matrixGray.height() > 0) {
            detectorFace.detectMultiScale(matrixGray, faces, 1.1, 2, 2,
                    new Size(absoluteFaceSize, absoluteFaceSize), new Size());
        }
        return faces;
    }
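The detector ignores faces smaller than a minimum size derived once from the frame height and a relative factor. A minimal pure-Java sketch of just that computation (the 0.2 value for RELATIVE_FACE_SIZE is an assumption; the slide does not show the constant):

```java
// Sketch of the minimum-face-size computation done before detectMultiScale.
// RELATIVE_FACE_SIZE = 0.2f is an assumed value for illustration.
public class FaceSizeSketch {
    static final float RELATIVE_FACE_SIZE = 0.2f;

    // Returns the absolute minimum face size (in pixels) for a frame height,
    // or 0 when the frame is too small to yield a positive size.
    public static int absoluteFaceSize(int frameHeight) {
        int size = Math.round(frameHeight * RELATIVE_FACE_SIZE);
        return size > 0 ? size : 0;
    }
}
```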
  7-8. OpenCV draws the initial eye areas on the colour image (opencv.domain.EyesDetectionInteractorImpl and opencv.render.FaceDrawerOpenCV):

    // computing eye areas as well as splitting it
    Rect leftEyeArea = new Rect(
            face.x + face.width / 16 + (face.width - 2 * face.width / 16) / 2,
            (int) (face.y + (face.height / 4.5)),
            (face.width - 2 * face.width / 16) / 2,
            (int) (face.height / 3.0));
    FaceDrawerOpenCV.drawEyesRectangles(rightEyeArea, leftEyeArea, matrixRGBA);

    public static void drawIrisRectangle(Rect eyeTemplate, Mat matrixRgba) {
        Imgproc.rectangle(matrixRgba, eyeTemplate.tl(), eyeTemplate.br(),
                new Scalar(255, 0, 0, 255), 2);
    }
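The eye-area arithmetic above is easier to see without OpenCV types: the face rectangle is trimmed by a 1/16 margin on each side and the remainder is split into two halves at roughly eye height. A pure-Java sketch (the right-eye computation is an assumption; the slide only shows the left one):

```java
// Eye-region geometry from the slide, using int[]{x, y, width, height}
// instead of OpenCV's Rect so it can run anywhere.
public class EyeRegionSketch {
    // Left eye area: right half of the trimmed face rectangle.
    public static int[] leftEyeArea(int[] face) {
        int margin = face[2] / 16;                  // 1/16 of the face width
        int halfWidth = (face[2] - 2 * margin) / 2; // half of the trimmed width
        return new int[] {
                face[0] + margin + halfWidth,
                (int) (face[1] + face[3] / 4.5),    // eyes sit below the forehead
                halfWidth,
                (int) (face[3] / 3.0)               // eye band is a third of the face height
        };
    }

    // Mirrored on the other half (assumption: not shown on the slide).
    public static int[] rightEyeArea(int[] face) {
        int margin = face[2] / 16;
        int halfWidth = (face[2] - 2 * margin) / 2;
        return new int[] {
                face[0] + margin,
                (int) (face[1] + face[3] / 4.5),
                halfWidth,
                (int) (face[3] / 3.0)
        };
    }
}
```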
  9-11. OpenCV eyes detection (opencv.domain.EyesDetectionInteractorImpl): eye templates are learnt during the first frames, then reused for template matching:

    String methodForEyes;
    if (learnFrames < LEARN_FRAMES_LIMIT) {
        templateRight = buildTemplate(rightEyeArea, IRIS_MIN_SIZE, matrixGray,
                matrixRGBA, detectorEye);
        templateLeft = buildTemplate(leftEyeArea, IRIS_MIN_SIZE, matrixGray,
                matrixRGBA, detectorEye);
        learnFrames++;
        methodForEyes = "building Template with Detect multiscale, frame: " + learnFrames;
    } else {
        // Learning finished, use the new templates for template matching
        matchEye(rightEyeArea, templateRight, matrixGray, matrixRGBA);
        matchEye(leftEyeArea, templateLeft, matrixGray, matrixRGBA);
        methodForEyes = "match eye with Template, frame: " + learnFrames;
    }
    notifyEyesFound(methodForEyes);
  12-14. OpenCV template matching with minMaxLoc, and drawing the matched eye (opencv.domain.EyesDetectionInteractorImpl and opencv.render.FaceDrawerOpenCV):

    Imgproc.matchTemplate(submatGray, builtTemplate, outputTemplateMat,
            Imgproc.TM_SQDIFF_NORMED);
    Core.MinMaxLocResult minMaxLocResult = Core.minMaxLoc(outputTemplateMat);
    // depending on the matching method, the best match is the max or the min value
    matchLoc = minMaxLocResult.minLoc;
    Point matchLocTx = new Point(matchLoc.x + area.x, matchLoc.y + area.y);
    Point matchLocTy = new Point(matchLoc.x + builtTemplate.cols() + area.x,
            matchLoc.y + builtTemplate.rows() + area.y);
    FaceDrawerOpenCV.drawMatchedEye(matchLocTx, matchLocTy, matrixRGBA);

    public static void drawMatchedEye(Point matchLocTx, Point matchLocTy, Mat matrixRgba) {
        Imgproc.rectangle(matrixRgba, matchLocTx, matchLocTy,
                new Scalar(255, 255, 0, 255));
    }
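What matchTemplate plus minMaxLoc do can be illustrated with a brute-force pure-Java version: slide the template over the image, score each offset with the squared difference, and keep the offset with the minimum score (for SQDIFF methods the best match is the minimum, which is why the slide picks minMaxLocResult.minLoc; correlation methods would use the maximum instead). This is an unnormalized sketch of the idea, not OpenCV's implementation:

```java
// Brute-force TM_SQDIFF-style template matching on row-major grayscale arrays.
public class TemplateMatchSketch {
    // Returns {x, y} of the offset with the minimum squared-difference score.
    public static int[] bestMatch(int[][] image, int[][] template) {
        int bestX = 0, bestY = 0;
        long bestScore = Long.MAX_VALUE;
        for (int y = 0; y <= image.length - template.length; y++) {
            for (int x = 0; x <= image[0].length - template[0].length; x++) {
                long score = 0;
                for (int ty = 0; ty < template.length; ty++) {
                    for (int tx = 0; tx < template[0].length; tx++) {
                        long d = image[y + ty][x + tx] - template[ty][tx];
                        score += d * d; // squared difference, as in TM_SQDIFF
                    }
                }
                if (score < bestScore) { bestScore = score; bestX = x; bestY = y; }
            }
        }
        return new int[] { bestX, bestY };
    }
}
```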
  15. How do I start? Native camera detection: Camera API or Camera 2 API. https://github.com/raulh82vlc/Image-Detection-Samples
  16-18. Camera2 face detection (camera2.presentation.FDCamera2Presenter): the capture callback triggers detection on every capture result:

    private CameraCaptureSession.CaptureCallback captureCallback =
            new CameraCaptureSession.CaptureCallback() {
        …
        @Override
        public void onCaptureProgressed(@NonNull CameraCaptureSession session,
                @NonNull CaptureRequest request, @NonNull CaptureResult captureResult) {
            detectFaces(captureResult);
        }

        @Override
        public void onCaptureCompleted(@NonNull CameraCaptureSession session,
                @NonNull CaptureRequest request, @NonNull TotalCaptureResult captureResult) {
            detectFaces(captureResult);
        }
    };
  19-22. Camera2 face detection (camera2.presentation.FDCamera2Presenter): faces are read from the capture result statistics and mapped back to the View:

    private void detectFaces(CaptureResult captureResult) {
        Integer mode = captureResult.get(CaptureResult.STATISTICS_FACE_DETECT_MODE);
        if (isViewAvailable() && mode != null) {
            android.hardware.camera2.params.Face[] faces =
                    captureResult.get(CaptureResult.STATISTICS_FACES);
            if (faces != null) {
                Log.i(TAG, "faces : " + faces.length + " , mode : " + mode);
                for (android.hardware.camera2.params.Face face : faces) {
                    Rect faceBounds = face.getBounds();
                    // Once processed, the result is sent back to the View
                    presenterView.onFaceDetected(mapCameraFaceToCanvas(faceBounds,
                            face.getLeftEyePosition(), face.getRightEyePosition()));
                }
            }
        }
    }
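mapCameraFaceToCanvas is only referenced here, not shown. The usual idea behind such a helper is that Camera2 reports face bounds in sensor coordinates, which must be scaled into view coordinates before drawing. A hypothetical sketch of that mapping (all names and the plain linear scaling are assumptions; a real mapper would also handle rotation and mirroring):

```java
// Hypothetical coordinate mapping from camera sensor space to view space.
public class FaceMapperSketch {
    // bounds = {left, top, right, bottom} in sensor space; returns view-space bounds.
    public static int[] mapToCanvas(int[] bounds, int sensorW, int sensorH,
                                    int viewW, int viewH) {
        float sx = (float) viewW / sensorW; // horizontal scale factor
        float sy = (float) viewH / sensorH; // vertical scale factor
        return new int[] {
                Math.round(bounds[0] * sx), Math.round(bounds[1] * sy),
                Math.round(bounds[2] * sx), Math.round(bounds[3] * sy)
        };
    }
}
```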
  23-24. Camera2 face rendering (camera2.render.FaceDrawer):

    public class FaceDrawer extends View {
        ...
        @Override
        protected void onDraw(Canvas canvas) {
            super.onDraw(canvas);
            if (face != null) {
                Square faceShape = face.getFaceShape();
                int w = faceShape.getWidth();
                int h = faceShape.getHeight();
                drawFaceMarker(canvas, faceShape, w, h);
                // Bitmap AR Super Sayan
                drawBitmapAR(canvas, drawableStore.get(KEY_BITMAP_HEAD), w, h,
                        (int) faceShape.getStart().getxAxis() - (w / 2),
                        (int) faceShape.getStart().getyAxis() - (int) (h * 1.5));
            }
        }
  25-26. Camera2 face rendering (camera2.render.FaceDrawer):

    private void drawBitmapAR(Canvas canvas, Drawable drawable, int w, int h,
            int x, int y) {
        if (drawable != null) {
            int widthGraphic = drawable.getIntrinsicWidth();
            int heightGraphic = drawable.getIntrinsicHeight();
            setTransformation(w, h, x, y, widthGraphic, heightGraphic);
            canvas.setMatrix(transformation);
            drawable.draw(canvas);
        }
    }

    private void setTransformation(int w, int h, int x, int y,
            int widthGraphic, int heightGraphic) {
        MeasuresUI measures = TransformationsHelper.calcMeasures(w, h,
                viewWidth, viewHeight, x, y, widthGraphic, heightGraphic);
        transformation.setScale(measures.getScale(), measures.getScale());
        transformation.postTranslate(measures.getDx(), measures.getDy());
        transformation.postConcat(transformation);
    }
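TransformationsHelper.calcMeasures is not shown on the slides. A plausible minimal version would scale the AR drawable so its width matches the detected face width and translate it to the face position; everything below is an assumption for illustration, not the project's actual helper:

```java
// Hypothetical measures computation for the AR overlay transformation:
// uniform scale to fit the graphic to the face width, translation to the
// face origin.
public class MeasuresSketch {
    // Returns {scale, dx, dy} for Matrix.setScale / postTranslate.
    public static float[] calcMeasures(int faceW, int faceX, int faceY,
                                       int graphicW, int graphicH) {
        float scale = (float) faceW / graphicW; // fit graphic width to face width
        return new float[] { scale, faceX, faceY };
    }
}
```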
  27. Camera2 face validation: AR marker (Paint) with Canvas on a View.
  28. Can you see any commonalities? Clear and separated features, a defined architecture, UI driven.
  29. Extensions for detection & rendering: detecting the nose, mouth or other shapes to render; improving the graphics user experience (UX), for instance with a Kalman filter, or with tracking algorithms like "good features to track"...
  30. Deep learning: from image to label. Adapted from the original image by Max Tegmark (MIT), "Connections between physics and deep learning", on YouTube.
  31. Recognition Cloud APIs: a. Google Cloud Vision API (detection only); b. Amazon Rekognition API; c. Microsoft Azure Cognitive Services (Face API).
  32. Using a pre-trained model: transfer learning. (Pic 1 from the Tate museum; Pic 2 from Mirror celebrity news; Pic 3 from the Eccles photographs blog.)
  33. Using a pre-trained model: Daj's "Using a pre-trained TensorFlow model on Android". Original diagram from the Jalammar blog, "Supercharging Android apps using TensorFlow".
  34. Using a pre-trained model: TensorFlow now bundles the native binaries and the Java interface (JAR: libandroid_tensorflow_inference_java.jar; native binaries: libtensorflow_inference.so).

    dependencies {
        compile 'org.tensorflow:tensorflow-android:1.2.0'
    }
  35. Using a pre-trained model: add to app/src/main/assets the model graph (.pb) and the graph labels (.txt).
  36-40. Using a pre-trained model (.facerecognition.TensorFlowFaceClassifier, adapted from AndroidTensorFlowMNISTExample). numClasses is the output layer size, and outputs will hold the confidence results:

    public static TensorFlowFaceClassifier buildFaceClassifier(AssetManager assetManager,
            String modelFileName, String labelFileName, int inputSize,
            String inputName, String outputName) {
        TensorFlowFaceClassifier faceClassifier = new TensorFlowFaceClassifier();
        …
        // Set input and output names
        // For each line read from labelFileName, add its label into memory
        faceClassifier.addLabel(lineOnFile); // String values
        …
        faceClassifier.inferenceInterface =
                new TensorFlowInferenceInterface(assetManager, modelFileName);
        // The shape of the output is [N, NUM_CLASSES], where N is the batch size.
        int numClasses = (int) faceClassifier.inferenceInterface.graph()
                .operation(outputName).output(0).shape().size(1);
        faceClassifier.inputSize = inputSize;
        faceClassifier.outputNames = new String[]{outputName};
        faceClassifier.outputs = new float[numClasses];
        return faceClassifier;
    }
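The label-loading step is elided on the slide ("for each line read from labelFileName, add its label into memory"). A plain-Java sketch of that step, reading one label per line from any Reader (on Android the Reader would typically wrap assetManager.open(labelFileName); the helper name is an assumption):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;
import java.io.UncheckedIOException;
import java.util.ArrayList;
import java.util.List;

// Hypothetical label loader: one class label per line of the labels file.
public class LabelLoaderSketch {
    public static List<String> readLabels(Reader source) {
        List<String> labels = new ArrayList<>();
        try (BufferedReader reader = new BufferedReader(source)) {
            String line;
            while ((line = reader.readLine()) != null) {
                labels.add(line.trim()); // each line is one class label
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return labels;
    }
}
```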
  41-45. Using a pre-trained model (.facerecognition.TensorFlowFaceClassifier, adapted from AndroidTensorFlowMNISTExample). Each Recognition carries an id, the label associated with the recognition, and the extracted confidence value:

    public List<Recognition> recognizeFaceFromImage(final float[] pixels) {
        // Copy the input data into TensorFlow
        inferenceInterface.feed(inputName, pixels, new long[]{inputSize * inputSize});
        // Run the inference call
        inferenceInterface.run(outputNames, runStatus);
        // Copy the output Tensor back into the output array
        inferenceInterface.fetch(outputName, outputs);
        // Find the best classifications
        for (int i = 0; i < outputs.length; ++i) {
            if (outputs[i] > THRESHOLD) { // > 0.1f
                // the following priority queue is sorted by resulting confidence
                priorityQueue.add(new Recognition("" + i,
                        labels.size() > i ? labels.get(i) : "unknown", outputs[i], null));
            }
        }
        return getRecognitionsListFromQueue(priorityQueue);
    }
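The thresholding-and-ranking step at the end of recognizeFaceFromImage can run without TensorFlow: every output above THRESHOLD becomes a Recognition, and a priority queue ordered by descending confidence yields the best classifications first. A self-contained sketch of that step (the Recognition class here is simplified; the real one takes an extra argument):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;

// Threshold and rank raw model outputs by confidence, as on the slide.
public class RecognitionRanking {
    static final float THRESHOLD = 0.1f; // as on the slide

    public static class Recognition {
        public final String id;
        public final String label;
        public final float confidence;
        public Recognition(String id, String label, float confidence) {
            this.id = id;
            this.label = label;
            this.confidence = confidence;
        }
    }

    public static List<Recognition> rank(float[] outputs, List<String> labels) {
        // Highest confidence first.
        PriorityQueue<Recognition> queue = new PriorityQueue<>(
                (a, b) -> Float.compare(b.confidence, a.confidence));
        for (int i = 0; i < outputs.length; i++) {
            if (outputs[i] > THRESHOLD) {
                queue.add(new Recognition("" + i,
                        labels.size() > i ? labels.get(i) : "unknown", outputs[i]));
            }
        }
        List<Recognition> result = new ArrayList<>();
        while (!queue.isEmpty()) {
            result.add(queue.poll());
        }
        return result;
    }
}
```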
  46. Face Recognition. Expected status: the face is recognized. Use case image taken using the Qualeams "Android Face Recognition with Deep Learning" test framework.
  47. Conclusions: making something functional means we have enough insights, time and lessons learnt to build something ready to go.
  48. Conclusions: making something extendable means we have made an effort to introduce best practices and create a maintainable product.
  49. References:
    Image Detection & AR presentation samples: https://github.com/raulh82vlc/Image-Detection-Samples
    OpenCV tutorial: http://www.learnopencv.com/image-recognition-and-object-detection-part1/
    OpenCV face-detection sample: https://github.com/opencv/opencv/tree/master/samples/android/face-detection
    OpenCV for Secret Agents, Joseph Howse: https://www.packtpub.com/application-development/opencv-secret-agents
    Camera2 basic sample: https://github.com/googlesamples/android-Camera2Basic
    Camera2 reference: https://developer.android.com/reference/android/hardware/camera2/package-summary.html
    Camera2 introduction on DevBytes: https://youtu.be/Xtp3tH27OFs
  50. References:
    Android face recognition with deep learning library (Qualeams): https://github.com/Qualeams/Android-Face-Recognition-with-Deep-Learning-Library
    Mindorks MNIST example: https://github.com/MindorksOpenSource/AndroidTensorFlowMNISTExample
    ML LetterPredictor Android: https://github.com/mccorby/ML-LetterPredictor-Android
    TensorFlow image retraining: https://www.tensorflow.org/tutorials/image_retraining
    Applied TensorFlow for Android apps: https://speakerdeck.com/daj/applied-tensorflow-for-android-apps
    Building Mobile Apps with TensorFlow, Pete Warden
    Deep Learning in a Nutshell (what it is, how it works, why care?): https://www.kdnuggets.com/2015/01/deep-learning-explanation-what-how-why.html
    Transfer learning with Keras: https://medium.com/towards-data-science/transfer-learning-using-keras-d804b2e04ef8
    Neural networks and deep learning: https://medium.com/machine-learning-for-humans/neural-networks-deep-learning-cdad8aeae49b