Inami. Position Paper: Brain Teasers - Toward Wearable Computing that Engages Our Mind. In Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing Adjunct Publication (UbiComp2014). September 2014.
[Figure: activation during the memory game versus load, for 1-back, 2-back, 3-back, 4-back and n-back tasks.]
Shoya Ishimaru, Jens Weppner, Kai Kunze, Andreas Bulling, Koichi Kise, Andreas Dengel and Paul Lukowicz. In the Blink of an Eye - Combining Head Motion and Eye Blink Frequency for Activity Recognition with Google Glass. In Proceedings of the 5th Augmented Human International Conference (AH2014). March 2014.
[Figure: recognition accuracy for the five activities (reading, watching, solving, sawing, talking) using blinks alone versus blinks combined with head motion.]
State on the Basis of Physical and Social Activities. In Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing Adjunct Publication (UbiComp2015). September 2015.
Dengel. The Wordometer 2.0 - Estimating the Number of Words You Read in Real Life using Commercial EOG Glasses. In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing Adjunct Publication (UbiComp2016). September 2016.
[Figure: gaze distributions over the Introduction, Definitions and Applications sections of a physics text, panels (a)-(f) showing reading and solving for Novice, Intermediate and Expert participants.]
[4] Shoya Ishimaru, Syed Saqib Bukhari, Carina Heisel, Jochen Kuhn, and Andreas Dengel. Towards an Intelligent Textbook: Eye Gaze Based Attention Extraction on Materials for Learning and Instruction in Physics. In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing Adjunct Publication (UbiComp2016). September 2016.
… a video, solving a mathematical problem and sawing. Distinguishing between these activities involves not only recognizing physical actions (which can easily be captured using, for example, on-body motion sensors) but also a cognitive component, which is what we hypothesize eye blink frequency and head motion correlate with. We evaluate our method on a data set containing eight participants, demonstrating an average classification accuracy of 67% using blink features only and 82% using blink and motion features.

Related Work
There is a large corpus of work on recognizing human activities. A variety of physical activities can be recognized using body-mounted sensors [5]. On the other hand, some researchers focus on cognitive activities. Bentivoglio et al. studied the relation between sitting activities and blink patterns [3] and reported that the blink rate changes when participants are reading, talking or resting. Acosta et al. showed that working with computers causes a reduction in blinking [1]. Haak et al. described that emotion, especially stress, affects blink frequency [9]. Blink patterns should therefore be an important feature for recognizing our activities. Some researchers have applied an image processing method [6] and an eye tracking approach [4] to detect blinks. As far as we know, we are the first to use a simple proximity sensor embedded in a commercial wearable computing system for activity recognition and to combine it with head motion patterns.

APPROACH
We believe that blink patterns can give many insights into the user's mental state (drowsiness etc.) and the user's activity. To show this we use the infrared proximity sensor on Google Glass (see Figure 1). It monitors the distance between the Google Glass and the eye. Figure 2 shows the raw values of the sensor. While the main function of this sensor is to detect whether the user is wearing the device, a peak appears in its signal when the user blinks, due to the movement of the eyelid and eyelashes. Our algorithm consists of two stages. The first stage is the pre-processing of the raw sensor signal, which extracts the times of blinks. We validate the pre-processing results against ground-truth blink information.

[Figure 3: blink detection by calculating the peak value, i.e. the distance from the window center p5 to the average of the surrounding points.]

Secondly, the main part of our algorithm calculates features based on the detected blinks. Reading raw data from the infrared proximity sensor on Google Glass is not supported in an official way. We rooted (customized) our Glass following a Glass hacking tutorial [7] and installed our own logging application [8] for the experiment.

Blink detection
During pre-processing, blinks are detected from the raw infrared proximity sensor signal. We move a sliding window over the sensor data stream and check whether the center point of each window is a peak according to the following definition. We calculate the distance from the sensor value of the center point in the window (p5 in Figure 3) to the average value of the other points (p1, p2, p3, p7, p8 and p9). The points immediately preceding and following the center (p4 and p6) are excluded from the average because their sensor values are often affected by the center point. If the distance is larger than a threshold, ranging from 3.0 to 7.0, we define the center point as a blink.
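A minimal sketch of this sliding-window peak test, assuming the readings arrive as a plain Python list of floats with matching timestamps in seconds; the function name detect_blinks and its argument names are ours, not from the paper:

def detect_blinks(timestamps, samples, threshold):
    """Return the timestamps whose window center (p5) is a peak, i.e. a blink."""
    blinks = []
    # Nine-sample window p1..p9; the candidate peak is the center point p5.
    for i in range(4, len(samples) - 4):
        center = samples[i]                               # p5
        # Average of p1, p2, p3 and p7, p8, p9; p4 and p6 are skipped because
        # their values are usually pulled up by the peak itself.
        others = samples[i-4:i-1] + samples[i+2:i+5]
        baseline = sum(others) / len(others)
        if center - baseline > threshold:                 # per-user threshold, 3.0 - 7.0
            blinks.append(timestamps[i])
    return blinks

In practice several adjacent window positions can exceed the threshold for a single blink, so detections closer together than a short refractory interval would typically be merged into one.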
Because the shape of the face and the position of the eye vary, the best threshold for the peak detection differs from user to user. Figure 2, which uses the same scale for every sub-graphic, also shows different signal variations for different users. We calculate the best threshold (in 0.1 steps over the range 3.0 to 7.0) by evaluating the accuracy against the ground-truth information. This approach is only applicable in off-line evaluation. For on-line usage, a few seconds of calibration are needed before detection: during this calibration phase, Glass prompts the user to blink at given moments, and we use the recorded sensor values together with the actual blink timings to determine the best threshold.

Blink frequency based activity recognition
As the output of the pre-processing step we obtain the timestamps of blinks, from which we compute a three-dimensional feature vector. The first feature is the mean blink frequency, i.e. the number of blinks in a period divided by the length of that period. The two other features are based on the distribution of blinks, which can be visualized as a histogram of blink frequencies. Figure 5 shows five such histograms for a period of 5 minutes. The x-axis is the blink frequency (0.0 - 1.0 Hz) and the y-axis is the number of blinks at each frequency. Each histogram has 20 bins, giving a resolution of 0.05 Hz. The frequency value of a blink is calculated as the inverse of the interval to the previous blink. The second and third features are the x-center of mass and the y-center of mass of this histogram.
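A minimal sketch of the per-user threshold search and of these three features, assuming NumPy, the detect_blinks helper sketched above, a list of ground-truth blink times, and that a detection within 0.2 s of a true blink counts as a match; the paper only says the accuracy is evaluated against ground truth, so the F1 score used here is a stand-in, and all function and parameter names are ours:

import numpy as np

def select_threshold(timestamps, samples, true_blinks, tolerance=0.2):
    """Grid-search the peak threshold from 3.0 to 7.0 in 0.1 steps against ground truth."""
    best_thr, best_score = 3.0, -1.0
    for thr in np.arange(3.0, 7.01, 0.1):
        detected = detect_blinks(timestamps, samples, thr)
        # A true blink counts as found if some detection lies within the tolerance
        # (a simple, slightly optimistic matching).
        hits = sum(any(abs(d - t) <= tolerance for d in detected) for t in true_blinks)
        precision = hits / len(detected) if detected else 0.0
        recall = hits / len(true_blinks) if true_blinks else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        if f1 > best_score:
            best_thr, best_score = thr, f1
    return best_thr

def blink_features(blink_times, period_length):
    """Mean blink frequency plus the x- and y-center of mass of the frequency histogram."""
    mean_freq = len(blink_times) / period_length
    # Frequency assigned to each blink: inverse of the interval to the previous blink.
    intervals = np.diff(np.sort(np.asarray(blink_times, dtype=float)))
    freqs = 1.0 / intervals[intervals > 0]
    # 20 bins over 0.0 - 1.0 Hz, i.e. a resolution of 0.05 Hz.
    counts, edges = np.histogram(freqs, bins=20, range=(0.0, 1.0))
    centers = (edges[:-1] + edges[1:]) / 2.0
    total = counts.sum()
    if total == 0:
        return mean_freq, 0.0, 0.0
    x_com = float((counts * centers).sum() / total)
    # Treat every bar as a rectangle of mass = count; its centroid sits at count / 2.
    y_com = float((counts * (counts / 2.0)).sum() / total)
    return mean_freq, x_com, y_com

The y-center of mass is computed here by treating each histogram bar as a rectangle whose mass is its count, which is one plausible reading of the description above; blink frequencies above 1 Hz simply fall outside the histogram range.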
What is an academic paper?
• What do you want to clarify?
• Why would it be valuable to know that?
• What makes it stand out compared to prior work?
• How do you go about clarifying it?
• How did you verify that it is effective?
• What is the contribution of this research?
A paper is a document formatted so that this information can be read off at high density.

"How to Write a Great Research Paper." https://www.microsoft.com/en-us/research/academic-program/write-great-research-paper/
• To convey a discovery to many people: a conference talk reaches only the people who happen to be in the room, while a paper published in a journal can be read by anyone in the world.
• To convey research results across time: "Papers are far more durable than programs."