

Abstract: How people look at visual information reveals fundamental information about them; their interests and their states of mind. Previous studies showed that a scanpath, i.e., the sequence of eye movements made by an observer exploring a visual stimulus, can be used to infer observer-related (e.g., task at hand) and stimuli-related (e.g., image semantic category) information. However, eye movements are complex signals and many of these studies rely on limited gaze descriptors and bespoke datasets. Here, we provide a turnkey method for scanpath modeling and classification. This method relies on variational hidden Markov models (HMMs) and discriminant analysis (DA). HMMs encapsulate the dynamic and individualistic dimensions of gaze behavior, allowing DA to capture systematic patterns diagnostic of a given class of observers and/or stimuli. We test our approach on two very different datasets. Firstly, we use fixations recorded while viewing 800 static natural scene images, and infer an observer-related characteristic: the task at hand. We achieve an average of 55.9% correct classification rate (chance = 33%). We show that correct classification rates positively correlate with the number of salient regions present in the stimuli. Secondly, we use eye positions recorded while viewing 15 conversational videos, and infer a stimulus-related characteristic: the presence or absence of the original soundtrack. We achieve an average 81.2% correct classification rate (chance = 50%). HMMs allow us to integrate bottom-up, top-down, and oculomotor influences into a single model of gaze behavior. We release SMAC with HMM, a Matlab toolbox freely available to the community under an open-source license agreement. This synergistic approach between behavior and machine learning will open new avenues for simple quantification of gazing behavior.
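The classification idea sketched in the abstract — model each class of scanpaths with its own HMM, then assign a new scanpath to the class whose model explains it best — can be illustrated in miniature. The sketch below is a simplified, self-contained assumption-laden illustration in Python/NumPy: it uses hand-set parameters and discrete (quantized) fixation locations, whereas the paper's actual method fits variational Gaussian-emission HMMs and classifies with discriminant analysis. All names here (`forward_loglik`, `classify`, the two toy classes) are illustrative, not the SMAC toolbox API.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the forward algorithm and per-step scaling for stability.
    obs: sequence of symbol indices; pi: initial state probabilities;
    A: state transition matrix; B: emission matrix (states x symbols)."""
    alpha = pi * B[:, obs[0]]          # forward probabilities at t = 0
    s = alpha.sum()
    loglik = np.log(s)
    alpha = alpha / s
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate, then weight by emission
        s = alpha.sum()
        loglik += np.log(s)
        alpha = alpha / s              # rescale to avoid underflow
    return loglik

def classify(obs, models):
    """Maximum-likelihood classification: pick the class whose HMM
    assigns the highest log-likelihood to the observed scanpath."""
    scores = {c: forward_loglik(obs, *m) for c, m in models.items()}
    return max(scores, key=scores.get)

# Two hypothetical viewing styles over two quantized screen regions:
# a "focused" observer dwells within a region, a "scanning" observer
# alternates between regions. Parameters are hand-set for illustration.
pi = np.array([0.5, 0.5])
B = np.array([[0.9, 0.1],   # state 0 mostly emits region 0
              [0.1, 0.9]])  # state 1 mostly emits region 1
A_focus = np.array([[0.9, 0.1], [0.1, 0.9]])  # sticky transitions
A_scan  = np.array([[0.1, 0.9], [0.9, 0.1]])  # alternating transitions
models = {"focused": (pi, A_focus, B), "scanning": (pi, A_scan, B)}

print(classify([0, 0, 0, 0, 1, 1, 1, 1], models))  # long dwells
print(classify([0, 1, 0, 1, 0, 1, 0, 1], models))  # rapid alternation
```

In the full method, the per-class HMMs are learned from data rather than hand-set, and DA operates on the fitted HMM parameters; the maximum-likelihood rule above is the simplest stand-in for that decision step.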
