

Imparting privacy to face images: designing semi-adversarial neural networks for multi-objective function optimization

A talk on protecting privacy in face images, i.e., confounding soft biometric facial attributes using a novel deep neural network architecture: semi-adversarial networks (SANs). This talk was given at the Applied Machine Learning Conference 2018 in Charlottesville, VA.

Sebastian Raschka

April 12, 2018


Transcript

  1. Imparting privacy to face images: designing semi-adversarial neural networks for

    multi-objective function optimization Sebastian Raschka, Ph.D. Researcher at MSU / Assistant Professor of Statistics, UW Madison (starting summer 2018) https://sebastianraschka.com Applied Machine Learning Conference 2018 Charlottesville, VA 12 Apr 2018
  2. Mirjalili, Raschka, Namboodiri, Ross "Semi-adversarial networks: Convolutional autoencoders for imparting

    privacy to face images." The 11th IAPR International Conference on Biometrics, Gold Coast, Queensland, Australia (Feb 20th-23rd, 2018). [manuscript version: https://arxiv.org/abs/1712.00321] Best Paper Award @ ICB2018 Imparting privacy to face images: designing semi-adversarial neural networks for multi-objective function optimization
  3. Biometric (face) recognition A. Identification Determine identity of an unknown

    person 1-to-n matching ... B. Verification Verify claimed identity of a person 1-to-1 matching (CelebA dataset) (MUCT dataset)
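The 1-to-1 verification vs. 1-to-n identification distinction on the slide above can be sketched on top of fixed-length face embeddings. This is an illustrative sketch, not code from the talk; the function names `verify`/`identify`, the Euclidean distance, and the threshold value are all assumptions.

```python
import numpy as np

def verify(probe, claimed, threshold=0.8):
    """1-to-1 matching: accept if the probe embedding is close enough
    to the embedding of the claimed identity."""
    return np.linalg.norm(probe - claimed) < threshold

def identify(probe, gallery):
    """1-to-n matching: return the index of the closest gallery embedding."""
    dists = np.linalg.norm(gallery - probe, axis=1)
    return int(np.argmin(dists))

rng = np.random.default_rng(0)
gallery = rng.normal(size=(5, 128))                # 5 enrolled identities
probe = gallery[3] + 0.01 * rng.normal(size=128)   # noisy sample of identity 3

assert identify(probe, gallery) == 3   # identification finds the right entry
assert verify(probe, gallery[3])       # verification accepts the true claim
```

In both cases the privacy question is the same: the embedding (or image) that enables matching may also leak soft biometric attributes such as gender.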
  4. Soft biometric attributes: issues and concerns 1. Identity theft: combining soft biometric info with publicly available data

    2. Profiling: e.g., gender/race-based profiling 3. Ethics: extracting data without users’ consent
  5. [Diagram: each face image is reduced to a binary feature vector; the

    vectors feed a Gender classifier, which outputs P(male), and a Face matcher, which outputs P(same person)]
  6. [Diagram: same pipeline — binary feature vectors feed the Gender classifier

    and the Face matcher] Goal: • perturbing gender • retaining matching utility
  7. General architecture of the semi-adversarial network Objective 1: Realistic images

    Objective 3: Confound gender Objective 2: Retain matching utility
  8. Semi-adversarial network Objective 1: Realistic images (not adversarial) Objective 3: Confound gender (adversarial)

    Objective 2: Retain matching utility (not adversarial)
  9. Objective 1: Realistic images Objective 3: Confound gender Objective 2:

    Retain matching utility General architecture of the semi-adversarial network
  10. Gender prototypes P_male: average of all male images P_female: average

    of all female images P_neutral: weighted average of P_male and P_female
  11. Gender prototypes Class labels y ∈ {0, 1},

    where 0 = female, 1 = male Same-gender prototype: [equation] Opposite-gender prototype: [equation]
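The prototype definitions on the preceding slides (pixel-wise averages per gender, plus a weighted blend) can be sketched as follows. This is an illustrative sketch, not code from the talk; the function name `gender_prototypes` and the `alpha` blending weight are assumptions for illustration.

```python
import numpy as np

# Hypothetical sketch: gender prototypes as pixel-wise averages,
# following the slide's definitions of P_male, P_female, P_neutral.
def gender_prototypes(images, labels, alpha=0.5):
    """images: (n, H, W) array; labels: 0 = female, 1 = male (slide 11)."""
    images = np.asarray(images, dtype=float)
    labels = np.asarray(labels)
    p_female = images[labels == 0].mean(axis=0)   # average of all female images
    p_male = images[labels == 1].mean(axis=0)     # average of all male images
    p_neutral = alpha * p_male + (1 - alpha) * p_female  # weighted average
    return p_male, p_female, p_neutral
```

The same-gender/opposite-gender prototypes on slide 11 would then be selected by indexing these averages with the class label y or 1 − y.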
  12. Architecture described in Parkhi, O. M., Vedaldi, A., Zisserman, A., “Deep

    Face Recognition”, BMVC, 2015. Face matcher architecture
  13. Training a face matcher: Triplet loss Anchor Positive Anchor Negative

    Want encodings to be very different (large distance) Want encodings to be very similar (small distance)
  14. Anchor Positive Anchor Negative d(A, P) ≤ d(A, N), i.e.,

    ‖f(A) − f(P)‖₂² ≤ ‖f(A) − f(N)‖₂² Training a face matcher: Triplet loss
  15. Anchor Positive Anchor Negative d(A, P) + α ≤ d(A, N), i.e.,

    ‖f(A) − f(P)‖₂² + α ≤ ‖f(A) − f(N)‖₂² Training a face matcher: Triplet loss
  16. Anchor Positive Anchor Negative d(A, P) + α ≤ d(A, N):

    ‖f(A) − f(P)‖₂² + α ≤ ‖f(A) − f(N)‖₂², i.e., ‖f(A) − f(P)‖₂² + α − ‖f(A) − f(N)‖₂² ≤ 0 Training a face matcher: Triplet loss
  17. Anchor Positive Anchor Negative L(A, P, N) = max(

    ‖f(A) − f(P)‖₂² + α − ‖f(A) − f(N)‖₂², 0) Training a face matcher: Triplet loss
  18. Anchor Positive Anchor Negative L(A, P, N) = max(

    ‖f(A) − f(P)‖₂² + α − ‖f(A) − f(N)‖₂², 0) Training a face matcher: Triplet loss Side note: shortcoming of the triplet loss (as noted by Yann LeCun & Alfredo Canziani)
  19. Objective 1: Realistic images Objective 3: Confound gender Objective 2:

    Retain matching utility General architecture of the semi-adversarial network
  20. Cost function for semi-adversarial learning 1. Pixel-wise similarity term •

    Only used during the pre-training of the autoencoder: J_D(X, X'_SM) = Σ_{i=1}^{224×224} ce(X_i, X'_SM,i) 2. Loss term related to the gender attribute • Correctly predict the gender of X'_SM • Flip the gender prediction of X'_OP: J_G(X'_SM, X'_OP, y; φ_G) = J(y, φ_G(X'_SM)) + J(1 − y, φ_G(X'_OP)) 3. Loss related to matching: J_M(X, X'_SM; φ_M) = ‖φ_M(X'_SM) − φ_M(X)‖₂²
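The three loss terms above can be sketched as plain functions. This is a hedged sketch, not the authors' implementation: `cross_entropy` is a stand-in for the ce/J terms, and `phi_G` (auxiliary gender classifier) and `phi_M` (auxiliary face matcher) are passed in as callables whose concrete forms are not specified here.

```python
import numpy as np

def cross_entropy(y, p, eps=1e-12):
    """Binary cross-entropy; stand-in for the ce/J terms on the slide."""
    return -(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def san_losses(x, x_sm, x_op, y, phi_G, phi_M):
    """x: input image; x_sm / x_op: outputs perturbed with the same-/opposite-
    gender prototype; y: gender label (0 = female, 1 = male)."""
    # 1. Pixel-wise similarity term (autoencoder pre-training only)
    j_d = np.mean(cross_entropy(x, x_sm))
    # 2. Gender term: keep gender on X'_SM, flip it on X'_OP
    j_g = cross_entropy(y, phi_G(x_sm)) + cross_entropy(1 - y, phi_G(x_op))
    # 3. Matching term: keep the matcher's embedding close to the original
    j_m = np.sum((phi_M(x_sm) - phi_M(x)) ** 2)
    return j_d, j_g, j_m
```

Note that only the gender term is adversarial with respect to the auxiliary classifier; the pixel and matching terms pull the output back toward the original image.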
  21. Visual results (improved) [Figure: input and perturbed face images with gender-classifier

    scores: Female: 69%, Male: 99%, Female: 71%, Female: 58%, Male: 99%, Female: 98%, Male: 97%, Male: 100%]
  22. Datasets [1] [2] [3] [4]

    [1] Liu, Ziwei, et al. "Deep learning face attributes in the wild." Proceedings of the IEEE International Conference on Computer Vision, 2015. [2] Milborrow, Stephen, John Morkel, and Fred Nicolls. "The MUCT landmarked face database." Pattern Recognition Association of South Africa, 2010. [3] Huang, Gary B., et al. "Labeled faces in the wild: A database for studying face recognition in unconstrained environments." Technical Report 07-49, University of Massachusetts, Amherst, 2007. [4] Martinez, Aleix M. "The AR face database." CVC Technical Report 24, 1998.
  23. IntraFace gender classifier performance [Figure: curves of "Female classified as Male" vs.

    "Male classified as Male" for (a) CelebA-test, (b) MUCT, (c) LFW, (d) AR-face; curves: Before, After (SM), After (NT), After (OP)]
  24. IntraFace gender classifier performance [Figure: same panels as slide 23 — (a) CelebA-test,

    (b) MUCT, (c) LFW, (d) AR-face — with an additional curve, After Ref [1]] [1] A. Othman and A. Ross. "Privacy of facial soft biometrics: Suppressing gender but retaining identity." In European Conference on Computer Vision Workshops, pages 682–696. Springer, 2014.
  25. G-COTS gender classifier performance [Figure: curves of "Female classified as Male" (log scale) vs.

    "Male classified as Male" for (a) CelebA-test, (b) MUCT, (c) LFW, (d) AR-face; curves: Before, After (SM), After (NT), After (OP)]
  26. M-COTS face matcher performance [Figure: True Matching Rate vs. False Matching Rate

    (log scale) for (a) MUCT, (b) LFW, (c) AR-face; same image: before vs. after (OP)]
  27. M-COTS face matcher performance, multi-subject comparisons [Figure: True Matching Rate vs.

    False Matching Rate (log scale) for (a) MUCT, (b) LFW, (c) AR-face; curves: Before, After (SM), After (NT), After (OP), After Ref [1]] [1] A. Othman and A. Ross. "Privacy of facial soft biometrics: Suppressing gender but retaining identity." In European Conference on Computer Vision Workshops, pages 682–696. Springer, 2014.