

Tutorial: Creation of AR Coloring with OpenCV and ARCore

A description of how to create an AR coloring app using OpenCV, ARCore, and Unity.

TakashiYoshinaga

May 12, 2019


Transcript

  1. Download and Installation ①Sample Data for the Tutorial http://arfukuoka.lolipop.jp/nurie/sample.zip ②ARCoreSDK(v1.8.0)

    https://github.com/google-ar/arcore-unity-sdk/releases/tag/v1.8.0 ③Unity2017.4.15f1 or later https://unity3d.com/jp/unity/qa/lts-releases?version=2017.4 ④Android SDK https://developer.android.com/studio ※Please finish setting up the Android build environment in Unity beforehand.
  2. ARCore A new markerless AR platform available for Android

    devices. 【Features】 (1) Motion Tracking based on SLAM (2) Environmental Understanding (3) Light Estimation (4) Augmented Image (5) Cloud Anchor (6) Augmented Faces
  3. OpenCV Plus Unity • Asset of image processing based on

    OpenCV3. • OpenCVSharp adapted to the Unity environment. • Available on Windows/Mac/Android/iOS.
  4. Check Point & Next Step 【Check Point】 The appearance can be

    changed just by replacing the texture file, as long as a material that uses the texture is applied to the 3D model. 【Next Step】 • Create a texture from the inner area of the square frame. • Write a C# script to replace the texture automatically.
  5. Installation of ARCore SDK ①Assets ②Import Package ③Open SDK from

    Custom Package arcore-unity-sdk-v1.8.0.unitypackage
  6. Run

  7. Setting Up the View of Unity Editor Game This operation

    makes it easier to adjust the application's UI layout in the Unity Editor.
  8. Setting Up the View of Unity Editor ①Click Free Aspect

    ②Click + This operation makes it easier to adjust the application's UI layout in the Unity Editor.
  9. Using Sample UI The UI has been added to the Scene. The UI

    might not be visible from the viewpoint you are using now, but that's not a problem! Please see the next page.
  10. Importing OpenCV and UnityEngine.UI using UnityEngine; using UnityEngine.UI; using OpenCvSharp;

    using OpenCvSharp.Demo; public class ColoringScript : MonoBehaviour { // Start is called once for initialization void Start () { } // Update is called every frame void Update () { } }
  11. Declaration of Variables public MeshRenderer target; //Rendering Setting of Cube

    public GameObject canvas; //Canvas which involves UI public RawImage viewL, viewR; //Result viewer UnityEngine.Rect capRect; //Region of screenshot Texture2D capTexture; //Texture of screenshot image Texture2D colTexture; //Result of image processing (color) Texture2D binTexture; //Result of image processing (gray) void Start () { } void Update () { } canvas viewL viewR colTexture binTexture
  12. Preparation of Screen Capture public MeshRenderer target; public GameObject canvas;

    public RawImage viewL, viewR; UnityEngine.Rect capRect; Texture2D capTexture; Texture2D colTexture; Texture2D binTexture; void Start () { int w = Screen.width; int h = Screen.height; //Definition of capture region as (0,0) to (w,h) capRect = new UnityEngine.Rect(0, 0, w, h); //Creating texture image of the size of capRect capTexture = new Texture2D(w, h, TextureFormat.RGB24, false); } width height (0,0)
  13. Making Function of Image Processing void Start () { /*Code

    was omitted in the slide.*/ } IEnumerator ImageProcessing() { canvas.SetActive(false);//Making UIs invisible yield return new WaitForEndOfFrame(); capTexture.ReadPixels(capRect, 0, 0);//Starting capturing capTexture.Apply();//Apply captured image. /*Setting texture on the coloring target object (cube)*/ target.material.mainTexture = capTexture; canvas.SetActive(true);//Making UIs visible. } public void StartCV() { StartCoroutine(ImageProcessing());//Calling coroutine. } Write!
  14. Clipping Image of Inside the ROI void Start () {

    int w = Screen.width; int h = Screen.height; //Setting up position/size of ROI int sx = (int)(w * 0.2); //Start of X int sy = (int)(h * 0.3); //Start of Y w = (int)(w * 0.6); //Width of ROI h = (int)(h * 0.4); //Height of ROI //Create a Rect that holds the capture region capRect = new UnityEngine.Rect(0, 0, w, h); capTexture = new Texture2D(w, h, TextureFormat.RGB24, false); } capRect = new UnityEngine.Rect(sx, sy, w, h); Replace!
  15. Refactoring (1/2) IEnumerator ImageProcessing() { canvas.SetActive(false); yield return new WaitForEndOfFrame();

    //Wait until rendering finishes capTexture.ReadPixels(capRect, 0, 0);//Start capturing capTexture.Apply();//Apply each pixel's color to the texture /*The next line attaches the texture to the object*/ target.material.mainTexture = capTexture; canvas.SetActive(true); } void CreateImage() { /*Cut & Paste Code of Image Creation*/ } void ShowImage() { /*Cut & Paste Code of Image Visualization*/ } Image Creation Visualization
  16. Refactoring (2/2) IEnumerator ImageProcessing() { canvas.SetActive(false); yield return new WaitForEndOfFrame();

    CreateImage(); //Image Creation ShowImage(); //Image Visualization canvas.SetActive(true); } void CreateImage() { capTexture.ReadPixels(capRect, 0, 0); capTexture.Apply(); } void ShowImage() { target.material.mainTexture = capTexture; }
  17. Binarization of a Grayscale Image Binarization

    means splitting grayscale values (0–255) into 0 or 255 by a threshold. It is a very important technique for defining which pixels should be processed.
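The rule above can be sketched in plain, engine-free C# (the `Binarize` helper and the sample pixel values are illustrative, not part of the tutorial code; slide 19 instead lets OpenCV's Otsu method pick the threshold automatically):

```csharp
using System;

public static class BinarizeDemo
{
    // The per-pixel rule of binary thresholding:
    // values above the threshold become 255, the rest become 0.
    public static byte[] Binarize(byte[] gray, byte threshold)
    {
        var result = new byte[gray.Length];
        for (int i = 0; i < gray.Length; i++)
            result[i] = (byte)(gray[i] > threshold ? 255 : 0);
        return result;
    }

    public static void Main()
    {
        byte[] pixels = { 10, 200, 100, 255 };
        Console.WriteLine(string.Join(",", Binarize(pixels, 100))); // 0,255,0,255
    }
}
```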
  18. Preparation of Using Image with OpenCV Texture2D colTexture; Texture2D binTexture;

    //Mat: Format of image for OpenCV //bgr is for color image, bin is for binarized image Mat bgr, bin; void Start () { int w = Screen.width; int h = Screen.height; int sx = (int)(w * 0.2); int sy = (int)(h * 0.3); w = (int)(w * 0.6); h = (int)(h * 0.4); capRect = new UnityEngine.Rect(sx, sy, w, h); capTexture = new Texture2D(w, h, TextureFormat.RGB24, false); }
  19. Binarization void CreateImage() { capTexture.ReadPixels(capRect, 0, 0); capTexture.Apply(); //Conversion of Texture2D

    to Mat bgr = OpenCvSharp.Unity.TextureToMat(capTexture); //Conversion Color Image to Gray Scale Image bin = bgr.CvtColor(ColorConversionCodes.BGR2GRAY); //Binarization of image with Otsu’s method. bin = bin.Threshold(100, 255, ThresholdTypes.Otsu); Cv2.BitwiseNot(bin, bin); } Color Image Gray Scale Binary Inverse
  20. Visualization of Results void ShowImage() { //Releasing memory of the old textures.

    if (colTexture != null) { DestroyImmediate(colTexture); } if (binTexture != null) { DestroyImmediate(binTexture); } //Conversion of Mat to Texture2D colTexture = OpenCvSharp.Unity.MatToTexture(bgr); binTexture = OpenCvSharp.Unity.MatToTexture(bin); //Attaching texture to RawImage for visualization. viewL.texture = colTexture; viewR.texture = binTexture; //Apply the screenshot image to the model target.material.mainTexture = colTexture; }
  21. Releasing Memory Allocated for Mats IEnumerator ImageProcessing() { canvas.SetActive(false); yield

    return new WaitForEndOfFrame(); CreateImage(); ShowImage(); //Releasing memory allocated for the two Mats bgr.Release(); bin.Release(); canvas.SetActive(true); }
  22. Preparation of Square Frame Detection IEnumerator ImageProcessing() { canvas.SetActive(false); yield

    return new WaitForEndOfFrame(); CreateImage(); Point[] corners; //The 4 corners of the square will be stored FindRect(out corners); //Square Frame Detection ShowImage(); //Display images bgr.Release(); bin.Release(); canvas.SetActive(true); } void FindRect(out Point[] corners) { /*Code will be described from next page.*/ }
  23. Contour Detection //Initialization of corners corners = null; //contour points

    and hierarchy Point[][] contours; HierarchyIndex[] h; //Contour detection bin.FindContours(out contours, out h, RetrievalModes.External, ContourApproximationModes.ApproxSimple); //Find the contour with the largest area. double maxArea = 0; for(int i = 0; i < contours.Length; i++) { double area = Cv2.ContourArea(contours[i]); if (area > maxArea) { maxArea = area; corners = contours[i]; } }
  24. Visualization of the Result void FindRect(out Point[] corners) { /*Code

    is omitted in this slide.*/ double maxArea = 0; for (int i = 0; i < contours.Length; i++) { double area = Cv2.ContourArea(contours[i]); if (area > maxArea) { maxArea = area; corners = contours[i]; } } //Draw the contour with the largest area on bgr. if (corners != null) { bgr.DrawContours( new Point[][] { corners }, 0, Scalar.Red, 5); } }
  25. Run

  26. Polygon Approximation void FindRect(out Point[] corners) { /*Code is omitted

    in this slide*/ double maxArea = 0; for(int i = 0; i < contours.Length; i++) { //Calculate the length of the contour line. double length = Cv2.ArcLength(contours[i], true); //Polygon Approximation. Point[] tmp = Cv2.ApproxPolyDP( contours[i], length * 0.01f, true); double area = Cv2.ContourArea(contours[i]); //If the number of corners is 4. if (area > maxArea) { maxArea = area; corners = contours[i]; } } /*Continued on the next page*/ if (tmp.Length == 4 && area > maxArea) corners = tmp;
  27. Visualization of Corners void FindRect(out Point[] corners) { /*Code is

    omitted in this slide.*/ if (corners != null) { bgr.DrawContours( new Point[][] { corners }, 0, Scalar.Red, 5); //Draw a circle at each corner point. for(int i = 0; i < corners.Length; i++) { bgr.Circle(corners[i], 20, Scalar.Blue, 5); } } }
  28. Before seeing next step… void FindRect(out Point[] corners) { /*Code

    is omitted in this slide.*/ //Comment out the visualization code. /*if (corners != null) { bgr.DrawContours( new Point[][] { corners }, 0, Scalar.Red, 5); //Draw a circle at each corner point for(int i = 0; i < corners.Length; i++) { bgr.Circle(corners[i], 20, Scalar.Blue, 5); } }*/ }
  29. Perspective Transformation (0, 0) (255, 0) (0, 255) (255, 255)

    [0] [1] [2] [3] • Deform the distorted square to a front view by computing a perspective transformation matrix.
  30. Perspective Transformation IEnumerator ImageProcessing() { canvas.SetActive(false); yield return new WaitForEndOfFrame();

    CreateImage(); Point[] corners; FindRect(out corners); TransformImage(corners); //Deform distorted square. ShowImage(); //Display images bgr.Release(); bin.Release(); canvas.SetActive(true); } void TransformImage(Point[] corners) { /*Code will be described from the next page.*/ }
  31. Perspective Transformation void TransformImage(Point[] corners) { //Do nothing if square

    wasn’t found. if (corners == null) return; //Input detected corners. Point2f[] input = { corners[0], corners[1], corners[2], corners[3] }; //Define corners of square image. Point2f[] square = { new Point2f(0, 0), new Point2f(0, 255), new Point2f(255, 255), new Point2f(255, 0) }; //Calculation of transformation matrix. Mat transform = Cv2.GetPerspectiveTransform(input, square); //Deform image as front view square. Cv2.WarpPerspective(bgr,bgr,transform, new Size(256, 256)); } (0, 0) (255, 0) (0, 255) (255, 255) [0] [1] [2] [3]
  32. • The local position of corner [0] depends on the rotation

    of the square. • Sorting is necessary to obtain an upright front view of the image. [0] [1] [2] [3] [0] [1] [2] [3] Succeeded Failed
  33. Sorting Corner Points void TransformImage(Point[] corners) { if (corners ==

    null) return; //Sorting SortCorners(corners); Point2f[] input = { corners[0], corners[1], corners[2], corners[3] }; Point2f[] square = { new Point2f(0, 0), new Point2f(0, 255), new Point2f(255, 255), new Point2f(255, 0) }; Mat transform = Cv2.GetPerspectiveTransform(input, square); Cv2.WarpPerspective(bgr,bgr,transform, new Size(256, 256)); } void SortCorners(Point[] corners) { /*Code of sorting is described in the next page.*/ }
  34. Sorting Corner Points void SortCorners(Point[] corners) { System.Array.Sort(corners, (a, b)

    => a.X.CompareTo(b.X)); if (corners[0].Y > corners[1].Y) { corners.Swap(0, 1); } if (corners[3].Y > corners[2].Y) { corners.Swap(2, 3); } } [0] [1] [2] [3] [2] [0] [1] [3] [3] [0] [1] [2] Sort by X axis Sort [2][3] by Y axis
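The sorting above can be tried outside Unity with a plain C# sketch (value tuples and inline swaps stand in for OpenCvSharp's `Point` and the deck's `Swap` helper; all names here are illustrative):

```csharp
using System;

public static class CornerSortDemo
{
    // Order corners as [0]=top-left, [1]=bottom-left,
    // [2]=bottom-right, [3]=top-right, mirroring slide 34:
    // sort by X, then fix each vertical pair by Y.
    public static (int X, int Y)[] SortCorners((int X, int Y)[] corners)
    {
        Array.Sort(corners, (a, b) => a.X.CompareTo(b.X));
        if (corners[0].Y > corners[1].Y)
            (corners[0], corners[1]) = (corners[1], corners[0]);
        if (corners[3].Y > corners[2].Y)
            (corners[2], corners[3]) = (corners[3], corners[2]);
        return corners;
    }

    public static void Main()
    {
        var corners = new (int X, int Y)[] { (100, 0), (0, 10), (90, 120), (5, 130) };
        foreach (var c in SortCorners(corners))
            Console.WriteLine(c); // (0, 10) (5, 130) (90, 120) (100, 0)
    }
}
```

This matches the target order of slide 31, where the square's corners map to (0, 0), (0, 255), (255, 255), and (255, 0) respectively.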
  35. Clip an Image Inside the Square Frame void TransformImage(Point[] corners)

    { if (corners == null) return; SortCorners(corners); Point2f[] input = { corners[0], corners[1], corners[2], corners[3] }; Point2f[] square = { new Point2f(0, 0), new Point2f(0, 255), new Point2f(255, 255), new Point2f(255, 0) }; Mat transform = Cv2.GetPerspectiveTransform(input, square); Cv2.WarpPerspective(bgr,bgr,transform, new Size(256, 256)); int s = (int)(256*0.05);//Line width of frame is 5% of square int w = (int)(256*0.9);//Width of clipping area is 90% of square OpenCvSharp.Rect innerRect = new OpenCvSharp.Rect(s, s, w, w); bgr = bgr[innerRect]; }
  36. Applying Texture to 3D Object void ShowImage() { //Delete colTexture if it already exists. if

    (colTexture != null) { DestroyImmediate(colTexture); } if (binTexture != null) { DestroyImmediate(binTexture); } //Convert Mat to Texture2D colTexture = OpenCvSharp.Unity.MatToTexture(bgr); binTexture = OpenCvSharp.Unity.MatToTexture(bin); //Display the clipped image on RawImage viewL.texture = colTexture; viewR.texture = binTexture; //Applying texture to target 3D object target.material.mainTexture = capTexture; //Show Canvas again. canvas.SetActive(true); } target.material.mainTexture = colTexture; Replace
  37. Run