Using K-means clustering and Principal Component Analysis (PCA), we attempt
to perform feature reduction directly on the observation matrix of a
POMDP. The reduced observation matrices allow for faster planning
of larger POMDPs, with minimal loss in the expected reward.
Professor:
Prof. Joelle Pineau
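As a minimal sketch of the idea (assuming the observation model is available as a dense states-by-observations matrix; the data, sizes, and use of scikit-learn here are purely illustrative):

import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Placeholder observation matrix: rows are states, columns hold P(o | s).
rng = np.random.default_rng(0)
obs = rng.random((50, 200))
obs /= obs.sum(axis=1, keepdims=True)

# PCA: project each state's observation distribution onto a low-dimensional basis.
pca = PCA(n_components=10)
obs_pca = pca.fit_transform(obs)              # 50 x 10 reduced representation

# K-means: group similar observations and merge their columns,
# yielding a smaller but still valid observation matrix.
labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(obs.T)
obs_merged = np.stack([obs[:, labels == k].sum(axis=1) for k in range(10)], axis=1)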
While working at BBN Research Labs, I contributed to the development of their Byblos speech recognition systems. This involved building systems to recognize languages such as English, Arabic, Spanish, and Mandarin for delivery to a client, as well as running experiments for the yearly evaluations.
Gesture recognition is becoming a more common interface for new
systems. GT2k attempts to provide researchers and system designers
with a toolkit to create the gesture recognition component of larger
systems. By using existing recognition technology to provide training
and recognition tools, researchers and designers can focus on the more
important issues of the system. GT2k builds on top of HTK, an existing
hidden Markov model toolkit for speech recognition.
Professor:
Prof. Thad Starner
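To illustrate the kind of train-then-recognize workflow such a toolkit wraps, here is a rough sketch using hmmlearn's Gaussian HMMs in place of HTK, with random feature sequences standing in for real gesture data:

import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)

def fake_gesture(offset, n_seq=10, length=30, dim=2):
    # Stand-in feature sequences; a real system would use tracked hand features.
    seqs = [rng.normal(offset, 1.0, size=(length, dim)) for _ in range(n_seq)]
    return np.concatenate(seqs), [length] * n_seq

# Train one HMM per gesture class.
models = {}
for name, offset in [("wave", 0.0), ("point", 3.0)]:
    X, lengths = fake_gesture(offset)
    m = hmm.GaussianHMM(n_components=4, covariance_type="diag", n_iter=20)
    m.fit(X, lengths)
    models[name] = m

# Recognition: score a new sequence under each model and pick the best.
test, _ = fake_gesture(3.0, n_seq=1)
print(max(models, key=lambda g: models[g].score(test)))   # expect "point"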
By using simple on-body sensors such as microphones and
accelerometers, it is possible to determine the actions of
a user. In this experiment, the actions consisted of workshop
activities such as hammering or sawing. LDA was used to model
the audio data, while HMMs were used to model the accelerometer
data. By using properties of how sound travels, the audio
can be used to help partition the continuous data,
reducing continuous recognition to an isolated recognition problem.
The goal of the project was to perform continuous recognition
of the user's actions, and thus context, based on these simple sensors.
Professor:
Prof. Thad Starner
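As a sketch of the audio side of this approach, assuming per-frame audio feature vectors have already been extracted (the features, class means, and use of scikit-learn here are stand-ins, not the original setup):

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)

# Placeholder audio feature frames for two workshop activities;
# real frames would come from the on-body microphone.
hammering = rng.normal(0.0, 1.0, size=(200, 16))
sawing = rng.normal(1.5, 1.0, size=(200, 16))
X = np.vstack([hammering, sawing])
y = np.array([0] * 200 + [1] * 200)

# LDA fits class-conditional Gaussians with a shared covariance
# and labels new frames by the most likely class.
lda = LinearDiscriminantAnalysis().fit(X, y)

new_frame = rng.normal(1.5, 1.0, size=(1, 16))
print(lda.predict(new_frame))   # expected: [1], i.e. sawing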
A joint project between NRL, CMU, Swarthmore, and Metrica involved
developing GRACE for the AAAI robot competition. GRACE is an RWI B21r
robot equipped with VIKIA (a virtual face displayed on an LCD),
speech recognition, and speech generation. Our task in the project
was to use human-robot interaction to get GRACE from the door of the
registration hall to the registration desk. This involved locating
humans, asking for directions, and acting upon those directions.
Professor: Alan Schultz
The goal of this project is to perform
robot localization based on an omnicam mounted on top of a
robot. The omnicam returns a 360-degree view around the robot.
Different methods were used to localize. A simpler method
involved placing fiducials around the building and using
a Kalman filter to maintain the position estimate. A more complex method involved
applying PCA to the raw images.
Professor:
Prof. Frank Dellaert
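A minimal sketch of the simpler, fiducial-based method, assuming each fiducial sighting can be turned into a noisy position fix (the constant-velocity model and noise values below are illustrative):

import numpy as np

dt = 0.1
# State [x, y, vx, vy] with a constant-velocity motion model.
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],                # a fiducial sighting yields a
              [0, 1, 0, 0]], dtype=float)  # direct (noisy) position measurement
Q = np.eye(4) * 1e-3                       # process noise
R = np.eye(2) * 1e-2                       # measurement noise

def kalman_step(x, P, z):
    # Predict forward one time step.
    x, P = F @ x, F @ P @ F.T + Q
    # Correct with the fiducial-derived position z.
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    return x + K @ (z - H @ x), (np.eye(4) - K @ H) @ P

x, P = np.zeros(4), np.eye(4)
x, P = kalman_step(x, P, np.array([0.5, 0.2]))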
Using Neural Networks, we are trying
to learn low-level control for a physical pinball machine. The
pinball machine is interfaced to the computer through a circuit which
controls the flippers. A camera tracks the ball in play. This
forces us to deal with issues such as real-time processing, noise
from the environment, and noise from the sensors, as well as
learning a complex space.
Professor:
Prof. Sven Koenig
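As a sketch of the kind of mapping being learned (ball state from the camera in, flipper decision out), using a small scikit-learn network and synthetic data in place of logged play:

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)

# Synthetic training data: ball state (x, y, vx, vy) from the tracker and
# whether firing the flipper at that instant kept the ball in play.
X = rng.normal(size=(1000, 4))
y = (X[:, 1] + 0.5 * X[:, 3] < 0).astype(int)   # toy labeling rule

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
net.fit(X, y)

# Inside the control loop: decide each frame whether to fire the flipper.
ball_state = np.array([[0.1, -0.3, 0.0, -1.2]])
fire_flipper = bool(net.predict(ball_state)[0])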
During a summer internship at the Naval Research Lab, I worked on applying the
continuous localization technique to a robot to be used in outdoor
environments. Continuous localization is a map-based localization
technique initially developed on Nomads. Current work is in porting it
to an ATRV Jr, a robot designed for outdoor environments. This involved
integrating various sensors to help with localization, experimentation in
small, high-resolution indoor environments, and experimentation in large,
low-resolution outdoor environments.
Professor: Alan Schultz, William Adams
I
implemented a gesture recognition system for the Perceptive Workbench. The
Perceptive Workbench is a desk with a camera beneath it which can observe the
top of the desk. By using HMMs (Hidden Markov Models), gestures could be trained on the
table and then used to control applications.
Professor:
Prof. Thad Starner