
Super Resolution with CoreML @ try! Swift Tokyo 2018

The 'super resolution' technique is used for converting a low-resolution image into a high-resolution one, which reduces the amount of image data that needs to be transferred. In this talk, I'd like to show you an implementation of super resolution with CoreML and Swift, and compare the results with conventional methods.

Video:
https://www.youtube.com/watch?v=E65lXzau_0Y

https://www.tryswift.co/events/2018/tokyo/en/#coreml
https://github.com/DeNA/SRCNNKit

kenmaz

March 02, 2018

Transcript

  1. SR Method
     • SRCNN
     • An SR method with Deep Learning technology
     • Chao Dong, Chen Change Loy, Kaiming He, Xiaoou Tang, 'Image Super-Resolution Using Deep Convolutional Networks' (2015)
  2. Overview: SR on CoreML
     • Server: resizes the original 800x1200 WebP (200 KB) to 400x600 WebP (50 KB), so the transferred data size is 1/4
     • Client: downloads the 400x600 image, applies SR on CoreML, and displays it at 800x1200
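
To make the client side of this flow concrete, here is a minimal sketch. It assumes the downloaded data can be decoded into a UIImage (WebP decoding may require a separate library), and superResolve is a hypothetical placeholder for the patch-based CoreML pipeline shown on the later slides.

    import UIKit

    // Hypothetical placeholder for the patch-based CoreML pipeline
    // shown on the later slides.
    func superResolve(_ image: UIImage) -> UIImage? {
        // ... split into patches, run the SRCNN model, stitch the result ...
        return image // placeholder
    }

    func showImage(from url: URL, in imageView: UIImageView) {
        // Download the 400x600 WebP (about 50 KB) instead of the
        // 800x1200 original (about 200 KB).
        URLSession.shared.dataTask(with: url) { data, _, _ in
            guard let data = data, let small = UIImage(data: data) else { return }
            // Upscale on-device, then display on the main queue.
            let upscaled = superResolve(small) ?? small
            DispatchQueue.main.async {
                imageView.image = upscaled
            }
        }.resume()
    }
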
  3. How to prepare the MLModel file
     (A) Use a public MLModel
     (B) Train your own model (with Manga images)
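
Either way you end up with an .mlmodel file that is loaded on-device. A minimal loading sketch under stated assumptions: SRCNN is the Xcode-generated class for a model added to the project, and downloadedURL is a hypothetical local file URL for a model file fetched at runtime.

    import CoreML

    // (A) A model added to the Xcode project is compiled at build time and
    //     exposed through a generated Swift class (SRCNN in this talk).
    let bundledModel = SRCNN()

    // (B) A model file obtained at runtime can be compiled and loaded
    //     on-device. downloadedURL is a hypothetical local file URL of a
    //     downloaded .mlmodel file.
    let downloadedURL = URL(fileURLWithPath: "/path/to/SRCNN.mlmodel")
    if let compiledURL = try? MLModel.compileModel(at: downloadedURL),
       let runtimeModel = try? MLModel(contentsOf: compiledURL) {
        print(runtimeModel.modelDescription)
    }
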
  4. Training Environment
     • Training data are Manga image files
     • 340,000 patch images
     • AWS EC2 GPU instance (p3.2xlarge)
  5. Run Super Resolution process
     let model = SRCNN()
     for patch in patches {
         let res = try! model.prediction(image: patch.buff)
         outs.append(res)
     }
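
For context around that loop, here is a hedged sketch of a full pass over one image. Patch, makePatches, and the SRCNNOutput output type name are assumptions for illustration; only SRCNN and prediction(image:) come from the slide.

    import UIKit
    import CoreML
    import CoreVideo

    // Assumed type for illustration: a Patch pairs the pixel data with
    // its position so the outputs can be stitched back together.
    struct Patch {
        let rect: CGRect          // where the patch sits in the source image
        let buff: CVPixelBuffer   // pixel data fed to the model
    }

    // Hypothetical helper: crop the source image into fixed-size tiles
    // (e.g. 200x200; see the performance figures on the next slide).
    func makePatches(from image: UIImage, size: Int) -> [Patch] {
        // ... crop image.cgImage into size x size tiles and convert each
        //     tile to a CVPixelBuffer ...
        return []
    }

    // Full pass over one image: predict each patch and collect the outputs.
    // SRCNNOutput is assumed to be the Xcode-generated output type.
    func runSuperResolution(on image: UIImage) throws -> [SRCNNOutput] {
        let model = SRCNN()
        var outs: [SRCNNOutput] = []
        for patch in makePatches(from: image, size: 200) {
            let res = try model.prediction(image: patch.buff)
            outs.append(res)
        }
        return outs
    }
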
  6. Performance
     Patch size   Device     Time
     32x32        iPhone X   10.89 sec
     112x112      iPhone X    2.39 sec
     200x200      iPhone X    1.04 sec
     200x200      iPhone 7    1.21 sec
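
The figures above are the speaker's measurements. As an illustrative sketch only, one way such timings can be taken is to wrap the patch loop in wall-clock timestamps; this assumes the model input is a CVPixelBuffer, as in the earlier code.

    import CoreML
    import CoreVideo
    import QuartzCore

    // Wall-clock timing of the whole patch loop for one image.
    // SRCNN is the generated model class from the talk.
    func measureSuperResolution(model: SRCNN, patches: [CVPixelBuffer]) -> TimeInterval {
        let start = CACurrentMediaTime()
        for buffer in patches {
            _ = try? model.prediction(image: buffer)
        }
        return CACurrentMediaTime() - start
    }
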
  7. let imageView: UIImageView = …
     let image: UIImage = …
     imageView.setSRImage(image) // Super Resolution
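
The setSRImage(_:) API shown above comes with the SRCNNKit library linked at the top. Below is a sketch of how such an extension might be structured (the actual SRCNNKit implementation may differ): run the heavy work off the main thread, then update the view on the main queue; superResolve is again a hypothetical placeholder.

    import UIKit

    // Hypothetical placeholder for the patch-based pipeline from slide 5.
    func superResolve(_ image: UIImage) -> UIImage? {
        // ... split into patches, run SRCNN, stitch the outputs ...
        return image // placeholder
    }

    // Sketch of how a setSRImage(_:) extension could be structured; the
    // real SRCNNKit implementation may differ.
    extension UIImageView {
        func setSRImage(_ image: UIImage) {
            DispatchQueue.global(qos: .userInitiated).async { [weak self] in
                let upscaled = superResolve(image) ?? image
                DispatchQueue.main.async {
                    self?.image = upscaled
                }
            }
        }
    }
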
  8. Recap
     • Reduced image file size with CoreML + SRCNN
     • You only need Swift skills (if you have a model)
     • CoreML is a good building block for apps
     • I feel CoreML has great potential for the future