Thanks to my friend Pablo Munoz I got the chance to share my interest in facial expression recognition, computer vision, motion capture and machine learning with a computer science class at Poly Prep. The students were great to work with — I only wish I had more time to let them try the Faceshift demo.
This presentation reviews the work done for my summer 2013 internship at VicarVision, working with their FaceReader automated expression recognition software. A 3D model generated using Faceshift was modified and scripted in Maya to conform to the standard Action Units used in coding facial expressions, and is driven by the data output by FaceReader.
For the Open Apereo Conference in 2013, I assisted New York University’s Chief Digital Officer, David Ackerman, in presenting a research study that I designed and executed. We examined the market penetration of the top learning management systems at U.S. 4-year colleges and universities.
A 3D Model of Infant Facial Expressions
Senior project for BA in Applied General Studies
My B.A. degree required a capstone project or internship. I decided to work on a project that applied design and programming to the infant facial expression fieldwork studies I had been pursuing since 2011. It centers on the development of a three-dimensional[…]
Baby Facial Action Coding System
Based on Paul Ekman’s FACS System
As part of my work in Dr. Harriet Oster‘s psychology lab at New York University SCPS McGhee, I’ve been developing a series of images detailing the facial muscles involved in generating prototypical expressions for the Baby FACS manual. The illustration shown here is one of[…]