Research Projects
Improving SoC Security
The general trend in computing systems these days is toward increased integration: add more cores and more software/firmware into a system-on-chip (SoC)! While the SoC approach provides new ways to achieve application-specific requirements through customization, the use of third-party IPs and the increasing overall complexity can lead to potential security threats. In this line of work, I am broadly interested in coming up with new design flows and architectures that improve security. Naturally, nothing is free --- so working out how to specify security objectives and achieve them while also satisfying other requirements is the name of the game.
Robustifying Deep Learning for EDA/CAD
Machine learning (ML), and deep learning (DL) especially, have taken off in recent years and shown a lot of promise in many domains. Electronic design automation (EDA) is one such domain: it involves many processes that iteratively transform designs into lower and lower levels of abstraction. DL offers a way to speed up the design flow through state-of-the-art prediction and classification performance. However, DL models have also been shown to be vulnerable in certain situations. What does this mean for DL in EDA? In this line of research, I investigate whether ML/DL models are robust in EDA settings, as well as what the potential risks are.
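To make the "vulnerable" claim concrete, consider the classic fast gradient sign method (FGSM): a tiny, gradient-guided perturbation of a model's input can flip its prediction, which is worrying if that model is estimating, say, routability or timing inside an EDA flow. Below is a minimal PyTorch sketch; the model, loss, and tensor shapes are hypothetical placeholders, not taken from any particular EDA tool.

```python
import torch

def fgsm_perturb(model, x, y, loss_fn, eps=0.01):
    """Return an FGSM-perturbed copy of input x.

    model, x, y, loss_fn are placeholders: any differentiable
    predictor, its input tensor, the target, and a loss function.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, bounded by eps.
    return (x_adv + eps * x_adv.grad.sign()).detach()

# If model(x) and model(x_adv) disagree for a small eps,
# the predictor is brittle under small input changes.
```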
Hardware Security + Machine Learning
Hardware lies at the foundation of all computing systems -- processors, accelerators, memories -- so securing hardware from attackers is paramount. There are several problems in hardware security, including hardware Trojan detection, intellectual property (IP) protection (e.g., against reverse engineering), and side-channel attacks. How will the increasing capabilities of AI/ML affect hardware security? Increasing predictive capability can help with challenges like Trojan detection or malware classification. However, there is also an opportunity for AI/ML to devise new strategies for attack and defense. In this line of work, I'm interested in seeing how we can formulate hardware security problems so that AI agents can start exploring the design space. This extends to research areas such as logic locking, in which I have recently taken an interest.
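As a toy illustration of logic locking, the common XOR key-gate flavor inserts key-controlled gates into a netlist so that the circuit computes its intended function only under the correct key; with a wrong key, outputs are corrupted on some inputs. The sketch below is a made-up two-gate example for illustration, not a locking scheme from my work.

```python
def original_circuit(a, b, c):
    # Intended function: out = (a AND b) OR c
    return (a & b) | c

def locked_circuit(a, b, c, k0):
    # XOR key gate inserted on the internal (a AND b) wire.
    # The correct key bit here is k0 = 0; k0 = 1 inverts the wire.
    locked_wire = (a & b) ^ k0
    return locked_wire | c

# With the correct key, the locked design matches the original everywhere;
# with the wrong key (k0 = 1), e.g. a = b = 1, c = 0 flips the output.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            assert locked_circuit(a, b, c, k0=0) == original_circuit(a, b, c)
```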
Security/Privacy of Deep Learning
The recent explosion in DL-enhanced applications gives us plenty to be excited about: greater personalization, more accurate predictions, ambient intelligence... but are these tools secure? What are the potential threats, and what safeguards do we need to feel safer when deploying or using DL models? I have recently collaborated on several works with Siddharth Garg and his EnSuRe research group at NYU, focusing on backdooring attacks (BadNets) and the subversion of Privacy-Preserving GANs.
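For context on BadNets, the core idea is training-time data poisoning: stamp a small trigger pattern onto a fraction of the training inputs and relabel them to an attacker-chosen class, so the trained model behaves normally on clean data but misclassifies anything carrying the trigger. Here is a minimal NumPy sketch of the poisoning step; the array shapes, trigger size, and target label are illustrative, not the exact setup from the paper.

```python
import numpy as np

def poison(images, labels, target_label=0, fraction=0.05, seed=0):
    """Stamp a small white-square trigger on a random subset of images
    and relabel them to target_label (BadNets-style poisoning sketch).

    images: float array of shape (N, H, W) with values in [0, 1].
    labels: int array of shape (N,).
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(fraction * len(images)), replace=False)
    images[idx, -4:, -4:] = 1.0   # 4x4 trigger in the bottom-right corner
    labels[idx] = target_label
    return images, labels

# Training on the poisoned set yields a model that is accurate on clean
# inputs but predicts target_label whenever the trigger is present.
```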