Karthik Garimella

I'm an ECE PhD student at New York University advised by Brandon Reagen. I also work with Siddharth Garg. I'm broadly interested in machine learning, systems, and security. Currently, my research focuses on privacy-enhanced computation (multi-party computation and homomorphic encryption) and machine learning security & privacy.

Previously, I received an MS in Computer Engineering from Washington University in St. Louis, where I was advised by Xuan Zhang and worked on adversarial machine learning for autonomous vehicles. I hold a BA in Physics from Hendrix College.

Outside of research, I enjoy playing tennis, cooking, reading, and biking around NYC.

Google Scholar  /  GitHub  /  Resume  /  kvgarimella AT nyu.edu

Teaching
Head Graduate Teaching Assistant - Deep Learning Spring 2023 @ NYU ECE
Research
Characterizing and Optimizing End-to-End Systems for Private Inference
Karthik Garimella, Zahra Ghodsi, Nandan Kumar Jha, Siddharth Garg, Brandon Reagen.
ASPLOS 2023

We provide a detailed characterization of end-to-end private inference using state-of-the-art hybrid protocols, accounting for both the offline and online phases. We show that the offline phase lies on the critical path when multiple inferences are performed over a period of time, and we propose optimizations to mitigate its system-level effects, including a method for parallelizing linear-layer evaluations under homomorphic encryption. A toy sketch of the offline-phase bottleneck follows the links below.

[arxiv] [code]
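
A minimal sketch of that offline-phase bottleneck, assuming a toy model in which clients arrive at a fixed rate and precomputed material (e.g., garbled circuits or Beaver triples) is generated serially in the background. The function name, timings, and parameters are illustrative assumptions, not measurements or code from the paper.

# Illustrative toy model only; all timings are made-up numbers.
def observed_latencies(num_requests, arrival_gap_s, offline_s, online_s):
    """Simulate clients arriving every `arrival_gap_s` seconds.

    Each inference consumes one bundle of offline material that takes
    `offline_s` seconds to precompute, plus an online phase that takes
    `online_s` seconds. Offline generation runs serially in the background,
    so once `offline_s` exceeds `arrival_gap_s` it lands on the critical
    path and observed latency grows with every request.
    """
    latencies = []
    offline_ready_at = 0.0  # time the next offline bundle is finished
    server_free_at = 0.0    # time the online executor becomes free
    for i in range(num_requests):
        arrival = i * arrival_gap_s
        offline_ready_at += offline_s  # serial precomputation
        start = max(arrival, offline_ready_at, server_free_at)
        finish = start + online_s
        server_free_at = finish
        latencies.append(finish - arrival)
    return latencies

if __name__ == "__main__":
    # Offline slower than the arrival rate: latency grows with each query.
    print(observed_latencies(5, arrival_gap_s=5.0, offline_s=10.0, online_s=1.0))
    # Offline faster than the arrival rate: latency settles near the online cost.
    print(observed_latencies(5, arrival_gap_s=5.0, offline_s=2.0, online_s=1.0))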
CryptoNite: Revealing the Pitfalls of End-to-End Private Inference at Scale
Karthik Garimella, Nandan Kumar Jha, Zahra Ghodsi, Siddharth Garg, Brandon Reagen.
arXiv preprint, 2021

We investigate the end-to-end system characteristics of private inference protocols and optimization techniques and find that the current understanding of private inference performance is overly optimistic.

[arxiv]
Sisyphus: A Cautionary Tale of Using Low-Degree Polynomial Activations in Privacy-Preserving Deep Learning
Karthik Garimella, Nandan Kumar Jha, Brandon Reagen.
Privacy Preserving Machine Learning Workshop @ ACM CCS, 2021

In this work, we ask: Is it feasible to substitute all ReLUs with low-degree polynomial activation functions when building deep, privacy-friendly neural networks? A minimal sketch of the substitution follows the links below.

[arxiv] [code]
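
A minimal PyTorch sketch of the substitution the question above refers to: replace every ReLU in a model with a learnable degree-2 polynomial. The coefficient initialization and the polyfy helper are illustrative choices rather than the paper's exact formulation, and the paper's cautionary point is that this naive swap can break down in deep networks.

import torch
import torch.nn as nn

class QuadraticActivation(nn.Module):
    """Low-degree polynomial stand-in for ReLU: a*x^2 + b*x + c.

    The learnable-coefficient parameterization here is an illustrative
    choice, not the one prescribed by the paper.
    """
    def __init__(self):
        super().__init__()
        self.a = nn.Parameter(torch.tensor(0.25))
        self.b = nn.Parameter(torch.tensor(0.5))
        self.c = nn.Parameter(torch.tensor(0.0))

    def forward(self, x):
        return self.a * x * x + self.b * x + self.c

def polyfy(model: nn.Module) -> nn.Module:
    """Recursively swap every nn.ReLU in `model` for the polynomial activation."""
    for name, child in model.named_children():
        if isinstance(child, nn.ReLU):
            setattr(model, name, QuadraticActivation())
        else:
            polyfy(child)
    return model

if __name__ == "__main__":
    net = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
    print(polyfy(net))  # the ReLU is now a QuadraticActivation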
Attacking Vision-based Perception in End-to-End Autonomous Driving Models
Adith Boloor, Karthik Garimella, Xin He, Christopher Gill, Yevgeniy Vorobeychik, Xuan Zhang.
Journal of Systems Architecture, 2020

We employ Bayesian Optimization to efficiently traverse the search space of simple, physically realizable attacks on deep neural network models for end-to-end autonomous driving control within the CARLA simulator. A minimal sketch of the optimization loop follows the links below.

[arxiv] [code]
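
A minimal sketch of that search loop, assuming scikit-optimize for the Bayesian Optimization and a smooth toy function standing in for a full CARLA episode; the parameter names, ranges, and objective are illustrative assumptions, not the paper's setup.

from skopt import gp_minimize  # pip install scikit-optimize
from skopt.space import Real

def attack_objective(params):
    """Stand-in for one simulator episode.

    In the real pipeline this would paint a pattern parameterized by
    (start position, width, rotation) onto the road, run the end-to-end
    driving model in CARLA, and return a value to minimize, e.g. the
    negative of the vehicle's lane deviation. A smooth toy function plays
    that role here so the sketch runs on its own.
    """
    position, width, angle = params
    deviation = (1.0
                 - (position - 20.0) ** 2 / 400.0
                 - (width - 1.0) ** 2
                 - (angle - 10.0) ** 2 / 2000.0)
    return -deviation  # gp_minimize minimizes, so negate the deviation

# Search space for a physically realizable pattern (illustrative ranges).
space = [
    Real(0.0, 50.0, name="position_m"),   # where the pattern starts
    Real(0.1, 2.0, name="width_m"),       # how wide it is
    Real(-45.0, 45.0, name="angle_deg"),  # its rotation on the road
]

# The Gaussian-process surrogate proposes the next attack parameters to try,
# so far fewer simulator episodes are needed than with grid or random search.
result = gp_minimize(attack_objective, space, n_calls=25, random_state=0)
print("best attack parameters:", result.x, "score:", result.fun)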
CARLA Autonomous Driving Challenge 2019
Adith Boloor, Karthik Garimella, Jinghan Yang, Christopher Gill, Yevgeniy Vorobeychik, Ayan Chakrabarti, Xuan Zhang.
Invited talk at CVPR, 2019

We train an end-to-end CNN model to predict vehicle controls and use a hazard model to obey traffic rules and yield to pedestrians. A toy version of this control pipeline is sketched below.

[CVPR 2019 workshop] [code]
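
A toy PyTorch version of the pipeline above: a small CNN maps a camera frame to steering and throttle, and a hazard flag overrides the throttle with a full stop. The architecture sizes and the apply_hazard_model helper are illustrative stand-ins, not the challenge entry itself.

import torch
import torch.nn as nn

class DrivingNet(nn.Module):
    """Toy end-to-end controller: RGB frame -> (steering, throttle)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)  # outputs [steering, throttle]

    def forward(self, frame):
        return self.head(self.features(frame).flatten(1))

def apply_hazard_model(controls, hazard_detected):
    """Zero the throttle when a hazard (red light, pedestrian) is flagged;
    otherwise pass the network's controls through unchanged."""
    steering, throttle = controls.unbind(dim=-1)
    throttle = torch.where(hazard_detected, torch.zeros_like(throttle), throttle)
    return torch.stack([steering, throttle], dim=-1)

if __name__ == "__main__":
    net = DrivingNet()
    frame = torch.rand(1, 3, 88, 200)  # one camera frame
    hazard = torch.tensor([True])      # e.g. a pedestrian ahead
    print(apply_hazard_model(net(frame), hazard))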
Source stolen from here