TensorFlow
Organized and dependable engineer and researcher, skilled at managing multiple priorities with a positive attitude and willing to take on added responsibilities to meet team goals. Experienced in designing and developing ML compilers for parallel accelerator architectures. Extensive background in high-performance computing (HPC) research, focusing on methodologies and tools for performance reasoning and automated optimization of scientific applications, while improving the usability of HPC tools and libraries and increasing developer productivity. Most past projects employed both supervised and unsupervised machine learning methods. Developed multiple open-source software packages and coauthored over 100 peer-reviewed publications on topics including performance modeling, compiler-based performance optimization (autotuning), embedding domain-specific languages into legacy codes, source-transformation-based automatic differentiation, adaptive algorithms for HPC, and component-based software engineering.
Leading the design and development of the Luminous parallelizing compiler infrastructure for compiling TensorFlow and PyTorch models, including support for eager-mode PyTorch
Led the performance engineering group in the Mathematics and Computer Science Division and conducted research in performance analysis and optimization, automatic differentiation, and component-based software engineering.
Advisor: Prof. Michael T. Heath
Thesis title: An Environment for Interactive Parallel Numerical Computing
Compiler design and implementation
TensorFlow
PyTorch
Linux
MPI
Compilers
Domain-specific programming languages
Performance analysis and optimization
Machine learning
Software development productivity