You can view some of the projects I have worked on by selecting a link below.
Class Project: "Simulated Annealing and the Study of Protein Stability"

Introduction:

Protein stability is largely characterized by non-covalent intramolecular interactions between amino acid side chains. Studying the forces that lead to stability and correct folding is an essential component of drug design research and biopharmaceutical production. One of the most common methods used for structure prediction of solids is simulated annealing, in which ab initio calculations are performed at each stage of a global search. Ab initio protein modeling builds 3D protein models from physical principles. Because the native structure of a protein corresponds to the global minimum of its potential energy, the structure can be determined by minimizing the Lennard-Jones potential energy function. MTMM, a MATLAB Toolbox for Macromolecular Modeling, was used to run the molecular modeling simulations and to read atom coordinates from protein database files.
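To make the energy function concrete, here is a minimal sketch of the Lennard-Jones pair potential and a total-energy sum over atom pairs. This is an illustration only, not the MTMM toolbox code; the function names `lj_potential` and `total_energy` and the reduced units (epsilon = sigma = 1) are my own assumptions.

```python
import math

def lj_potential(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones pair potential: V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

def total_energy(coords, epsilon=1.0, sigma=1.0):
    """Sum the pair potential over all distinct atom pairs."""
    e = 0.0
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            r = math.dist(coords[i], coords[j])  # Euclidean distance between atoms i and j
            e += lj_potential(r, epsilon, sigma)
    return e
```

The potential crosses zero at r = sigma and reaches its minimum of -epsilon at r = 2^(1/6) * sigma, which is why minimizing this sum drives atoms toward their equilibrium separations.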

Implementation:

The simulated annealing algorithm outputs a new set of coordinates for each atom once the potential energy has reached a global minimum. The overall stability of a protein may be dominated by a particular secondary or tertiary structural element. A primary goal of this simulation was to identify the component of a given protein that undergoes the most significant change during the annealing process and is therefore the least robust. The protein used in the routine was the Trp-cage protein, a miniprotein commonly used in studies of protein stability, protein folding, and 3D structure. Comparing the atom coordinates before and after the annealing process identified which region of the protein is the least robust.
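The annealing loop itself can be sketched as Metropolis acceptance with geometric cooling. This is a generic illustration of the technique, not the MATLAB/MTMM routine used in the project; the function name `anneal`, the cooling schedule, and the move size are assumed defaults.

```python
import math
import random

def anneal(coords, energy_fn, t0=1.0, cooling=0.95, steps_per_t=100, t_min=1e-3, step=0.1):
    """Simulated annealing on atomic coordinates with Metropolis acceptance."""
    current = [list(p) for p in coords]
    e_cur = energy_fn(current)
    best, e_best = [list(p) for p in current], e_cur
    t = t0
    while t > t_min:
        for _ in range(steps_per_t):
            # Perturb one randomly chosen atom by a small random displacement.
            i = random.randrange(len(current))
            trial = [list(p) for p in current]
            trial[i] = [x + random.uniform(-step, step) for x in trial[i]]
            e_trial = energy_fn(trial)
            delta = e_trial - e_cur
            # Always accept downhill moves; accept uphill moves with
            # probability exp(-delta/T), which shrinks as T cools.
            if delta < 0 or random.random() < math.exp(-delta / t):
                current, e_cur = trial, e_trial
                if e_cur < e_best:
                    best, e_best = [list(p) for p in current], e_cur
        t *= cooling  # geometric cooling schedule
    return best, e_best
```

Comparing the returned `best` coordinates against the starting coordinates, atom by atom, is one way to locate the region that moved most during annealing.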

National Science Foundation Research:  Binghamton, NY
Refactoring:
  • Determine which refactoring methods are energy-efficient by running the refactored applications and measuring their energy and power consumption.
Sparse Matrix Multiplication:
  • Determine which algorithm consumes the least power; consumption is influenced by the number of CPU instructions and the number of L1, L2, and L3 cache misses.
  • Observe how the choice of data structures influences these parameters.
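One of the effects studied, loop interchange, can be shown with the two classic loop orders for dense matrix multiplication. This is an illustrative sketch, not the benchmarked C code; the function names are my own, and the cache argument applies to row-major layouts as in C.

```python
def matmul_ijk(a, b, n):
    """i-j-k order: the inner loop walks down a column of b,
    a strided access pattern that causes more cache misses in row-major storage."""
    c = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                c[i][j] += a[i][k] * b[k][j]
    return c

def matmul_ikj(a, b, n):
    """Interchanged i-k-j order: the inner loop scans rows of b and c
    sequentially, improving spatial locality without changing the result."""
    c = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):
            aik = a[i][k]
            for j in range(n):
                c[i][j] += aik * b[k][j]
    return c
```

Both orders compute the same product; they differ only in memory access pattern, which is exactly the parameter (cache misses) the power model tracks.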
Project Abstract:

Matrix multiplication using a standard dense matrix structure and the naïve multiplication approach is slow and consumes a large amount of memory. Compressing the input matrices and decreasing the number of matrix operations improves run-time performance and decreases power consumption. Existing fast sparse matrix multiplication algorithms take advantage of sparse matrix structure through incremental compressed row storage (ICRS) and row-indexed sparse storage (RIS) data structures. A linear power model that depends on the number of CPU instructions and L1, L2, and L3 cache misses was derived for a CPU frequency of 1.6 GHz. We implemented and compared the performance and power consumption of four different sparse matrix multiplication algorithms: algorithms that use either CRS or ICRS data structures, an algorithm that requires an input matrix to be transposed, and algorithms that demonstrate the effect of loop interchange on performance parameters. An implementation described in “Numerical Recipes in C”, which uses the ICRS data structure, consistently had the lowest number of cache misses and the lowest running time (up to 90% faster), and therefore consumed the least power for 100 × 100 and 1000 × 1000 matrices with 10% density.
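To show why compressed storage reduces work, here is a minimal sketch of plain compressed row storage (CRS) and a matrix-vector product that touches only the nonzero entries. This is an illustration of the CRS idea, not the project's C benchmark code or the ICRS/RIS variants; the function names `to_crs` and `crs_matvec` are assumptions.

```python
def to_crs(dense):
    """Convert a dense 2D list into CRS arrays: (values, col_idx, row_ptr)."""
    values, col_idx, row_ptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)   # nonzero entries, row by row
                col_idx.append(j)  # column of each nonzero
        row_ptr.append(len(values))  # row i spans values[row_ptr[i]:row_ptr[i+1]]
    return values, col_idx, row_ptr

def crs_matvec(values, col_idx, row_ptr, x):
    """Multiply a CRS matrix by a dense vector, visiting only nonzeros."""
    y = []
    for i in range(len(row_ptr) - 1):
        s = 0.0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            s += values[k] * x[col_idx[k]]
        y.append(s)
    return y
```

For a 10%-sparse matrix, the inner loop runs over roughly a tenth of the entries a dense product would visit, which is the source of both the run-time and power savings the abstract reports.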

Developed an RTS game using the Panda3D game engine with a team of four. Stay tuned!