
CSC570 Lecture 08

Applied Affective Computing
Eye Tracking Hands-on Lab
(202504)


Javier Gonzalez-Sanchez

April 25, 2023

Transcript

  1. Dr. Javier Gonzalez-Sanchez [email protected] www.javiergs.info office: 14-227

    CSC 570 Current Topics in Computer Science Applied Affective Computing Lecture 08. Eye Tracking Hands-on Lab
  2. (

  3. Clustering • Unsupervised Learning • Clustering is the task of

    dividing a population (data points) into a number of groups such that data points in the same group are similar to one another
  4. Algorithms • K-Means - distance between points. Minimize square-error criterion.

    • DBSCAN (Density-Based Spatial Clustering of Applications with Noise) - distance between nearest points. 
 • Simple EM (Expectation Maximization) - finds the likelihood (probability) of an observation belonging to a cluster. Maximizes the log-likelihood criterion.
  5. Similarity • One of the simplest ways to

    calculate the distance between two feature vectors is to use Euclidean distance. • Other options: Minkowski distance, Manhattan distance, Hamming distance, Cosine distance, …
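These distance measures are small enough to compute by hand. Below is a minimal Java sketch of Euclidean and Manhattan distance between two feature vectors; the class and method names are illustrative choices, not part of the lab code.

// Minimal sketch of two of the distance measures listed above.
// Class and method names are illustrative, not from the lab code.
public class Distance {

    // Euclidean distance: square root of the sum of squared coordinate differences.
    public static double euclidean(double[] a, double[] b) {
        double sum = 0.0;
        for (int i = 0; i < a.length; i++) {
            double diff = a[i] - b[i];
            sum += diff * diff;
        }
        return Math.sqrt(sum);
    }

    // Manhattan distance: sum of absolute coordinate differences.
    public static double manhattan(double[] a, double[] b) {
        double sum = 0.0;
        for (int i = 0; i < a.length; i++) {
            sum += Math.abs(a[i] - b[i]);
        }
        return sum;
    }

    public static void main(String[] args) {
        double[] p = {1.0, 2.0};
        double[] q = {4.0, 6.0};
        System.out.println(euclidean(p, q)); // 5.0
        System.out.println(manhattan(p, q)); // 7.0
    }
}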
  6. Algorithm: K-means • K-Means begins with k

    randomly placed centroids. Centroids are the center points of the clusters. • Iteration: • Assign each existing data point to its nearest centroid. • Move each centroid to the average location of the points assigned to it. • Repeat iterations until the assignments stop changing between consecutive iterations.
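That iteration is compact enough to sketch in plain Java. The following is a minimal sketch under stated assumptions (initial centroids are sampled from the data, distances are squared Euclidean, and convergence means the assignment vector stops changing); it is not the lab's implementation.

import java.util.Arrays;
import java.util.Random;

// Minimal K-means sketch following the steps above.
public class KMeansSketch {

    public static int[] cluster(double[][] points, int k, long seed) {
        Random rnd = new Random(seed);
        int n = points.length, dim = points[0].length;

        // 1. Start with k centroids sampled from the data points.
        double[][] centroids = new double[k][];
        for (int c = 0; c < k; c++) {
            centroids[c] = points[rnd.nextInt(n)].clone();
        }

        int[] assignment = new int[n];
        Arrays.fill(assignment, -1);      // -1 = not assigned yet
        boolean changed = true;
        while (changed) {
            changed = false;

            // 2. Assign each point to its nearest centroid (squared Euclidean distance).
            for (int i = 0; i < n; i++) {
                int best = 0;
                double bestDist = Double.MAX_VALUE;
                for (int c = 0; c < k; c++) {
                    double d = 0.0;
                    for (int j = 0; j < dim; j++) {
                        double diff = points[i][j] - centroids[c][j];
                        d += diff * diff;
                    }
                    if (d < bestDist) { bestDist = d; best = c; }
                }
                if (assignment[i] != best) { assignment[i] = best; changed = true; }
            }

            // 3. Move each centroid to the average location of its assigned points.
            double[][] sums = new double[k][dim];
            int[] counts = new int[k];
            for (int i = 0; i < n; i++) {
                counts[assignment[i]]++;
                for (int j = 0; j < dim; j++) sums[assignment[i]][j] += points[i][j];
            }
            for (int c = 0; c < k; c++) {
                if (counts[c] > 0) {
                    for (int j = 0; j < dim; j++) centroids[c][j] = sums[c][j] / counts[c];
                }
            }
        }
        return assignment;
    }

    public static void main(String[] args) {
        double[][] data = {{1, 1}, {1.5, 2}, {8, 8}, {8.5, 9}};
        System.out.println(Arrays.toString(cluster(data, 2, 42)));
    }
}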
  7. K-means Problems • K-Means clustering may

    cluster loosely related observations together. Every observation eventually becomes part of some cluster, even if the observations are scattered far away in the vector space. • Clusters depend on the mean value of the cluster elements; each data point plays a role in forming the clusters, so a slight change in the data points might affect the clustering outcome. • Another challenge with K-means is that you need to specify the number of clusters ("k") in order to use it. Much of the time, we will not know what a reasonable k value is a priori.
  8. DBSCAN • The algorithm proceeds by arbitrarily

    picking a point in the dataset. • If there are at least N points within a radius of E of that point, then we consider all these points to be part of the same cluster. • Repeat until all points have been visited.
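A plain-Java sketch of that procedure is below. The parameter names eps (the radius E) and minPts (the minimum number of points N) are conventional; the brute-force neighborhood search and the class name are illustrative simplifications, not the lab code.

import java.util.ArrayList;
import java.util.List;

// Minimal DBSCAN sketch: pick an unvisited point, find all points within eps;
// if there are at least minPts, grow a cluster by following core points;
// otherwise mark the point as noise (it may still be claimed by a later cluster).
public class DbscanSketch {

    static final int NOISE = -1, UNVISITED = 0;

    public static int[] cluster(double[][] points, double eps, int minPts) {
        int n = points.length;
        int[] labels = new int[n];   // 0 = unvisited, -1 = noise, >0 = cluster id
        int clusterId = 0;

        for (int i = 0; i < n; i++) {
            if (labels[i] != UNVISITED) continue;
            List<Integer> neighbors = regionQuery(points, i, eps);
            if (neighbors.size() < minPts) {
                labels[i] = NOISE;
                continue;
            }
            clusterId++;
            labels[i] = clusterId;
            // Expand the cluster: visit every point reachable through core points.
            for (int idx = 0; idx < neighbors.size(); idx++) {
                int j = neighbors.get(idx);
                if (labels[j] == NOISE) labels[j] = clusterId;  // border point
                if (labels[j] != UNVISITED) continue;
                labels[j] = clusterId;
                List<Integer> jNeighbors = regionQuery(points, j, eps);
                if (jNeighbors.size() >= minPts) neighbors.addAll(jNeighbors); // j is a core point
            }
        }
        return labels;
    }

    // All indices within eps of points[i], using Euclidean distance (brute force).
    static List<Integer> regionQuery(double[][] points, int i, double eps) {
        List<Integer> result = new ArrayList<>();
        for (int j = 0; j < points.length; j++) {
            double d = 0.0;
            for (int k = 0; k < points[i].length; k++) {
                double diff = points[i][k] - points[j][k];
                d += diff * diff;
            }
            if (Math.sqrt(d) <= eps) result.add(j);
        }
        return result;
    }

    public static void main(String[] args) {
        double[][] data = {{1, 1}, {1.2, 1.1}, {0.9, 1}, {8, 8}, {8.1, 8.2}, {50, 50}};
        // Expected: two clusters plus one noise point (the outlier at 50,50).
        System.out.println(java.util.Arrays.toString(cluster(data, 1.0, 2)));
    }
}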
  9. K-means vs. DBSCAN • weka.clusterers: These are

    clustering algorithms, including K-means, CLOPE, Cobweb, DBSCAN, hierarchical clustering, and FarthestFirst.
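For the hands-on part, running one of the weka.clusterers algorithms from Java typically looks like the sketch below. The ARFF file name gaze.arff is a hypothetical placeholder for your eye-tracking feature file, and k = 3 is an arbitrary choice; note that in recent Weka releases DBSCAN and OPTICS ship as an optional package rather than in the core jar, while SimpleKMeans and EM are built in.

import weka.clusterers.SimpleKMeans;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

// Minimal sketch of clustering a feature file with weka.clusterers.SimpleKMeans.
// "gaze.arff" is a hypothetical file name; adjust the path and k for your data.
public class WekaClusteringDemo {

    public static void main(String[] args) throws Exception {
        // Load the feature vectors (e.g., fixation x, y, duration) from an ARFF file.
        Instances data = new DataSource("gaze.arff").getDataSet();

        SimpleKMeans kmeans = new SimpleKMeans();
        kmeans.setNumClusters(3);   // "k" has to be chosen up front
        kmeans.setSeed(42);         // fixed seed so centroid initialization is reproducible
        kmeans.buildClusterer(data);

        // Print the cluster assigned to each instance.
        for (int i = 0; i < data.numInstances(); i++) {
            System.out.println(i + " -> cluster " + kmeans.clusterInstance(data.instance(i)));
        }
        System.out.println(kmeans); // centroids and within-cluster sum of squared errors
    }
}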
  10. )

  11. CSC 570 Applied Affective Computing Javier Gonzalez-Sanchez, Ph.D. [email protected] Spring

    2025 Copyright. These slides can only be used as study material for the class CSC 570 at Cal Poly. They cannot be distributed or used for another purpose.