Research

AI Security

Adversarial Robustness

[Figure: an original bagel image (left) and an adversarially perturbed copy (right) that a classifier misclassifies as a piano]

Studies have shown that machine learning models, not only neural networks, are vulnerable to adversarial examples. By adding imperceptible perturbations to the original inputs, an attacker can craft adversarial examples that fool a learned classifier. Adversarial examples are indistinguishable from the original input to humans, yet they are misclassified by the classifier. To illustrate, consider the bagel images above. To a human, the two images appear identical -- the human visual system identifies each as a bagel. The image on the left is an ordinary image of a bagel (the original image), while the image on the right is generated by adding a small, imperceptible perturbation that forces a particular classifier to label it as a piano. Adversarial attacks are not limited to image classification models; they can also fool other types of machine learning models. Given the widespread application of machine learning models across many tasks, it is crucial to study the security issues posed by adversarial attacks.
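As a concrete illustration of how such perturbations can be found, below is a minimal sketch of the fast gradient sign method (FGSM), assuming a trained PyTorch classifier that returns logits; the function name fgsm_attack and the epsilon budget are illustrative choices rather than details of any work listed below.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.03):
        # x: input images with pixel values in [0, 1]; y: true labels;
        # epsilon: maximum per-pixel (L-infinity) perturbation budget.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        # Step each pixel in the direction that increases the loss,
        # then clip back to the valid pixel range.
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

A perturbation of this size is invisible to a human observer, yet it can flip the classifier's prediction, as in the bagel-to-piano example above.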
Preprints and Publications in this direction:

Backdoor Learning

[Figure: an example backdoor attack on image data, in which a trigger pattern stamped on the input causes the model to predict the attacker's target label]

Artificial Intelligence (AI) has become one of the most exciting areas of recent years, achieving state-of-the-art performance and fundamental breakthroughs on many challenging tasks. The Achilles’ heel of the technology, however, is the very thing that makes it possible -- learning from data. By poisoning the training data, adversaries can corrupt the model, with potentially severe consequences. Backdoor attacks insert a hidden backdoor into a target model so that the model performs well on benign examples; but when a specific pattern, known as a trigger, appears in the input, the model produces incorrect results, such as predicting an attacker-chosen target label regardless of the true label. The figure above gives a specific example of a backdoor attack on image data, but our research is not limited to this type of backdoor attack.
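To make the threat model concrete, the following is a minimal sketch of BadNets-style training-data poisoning, assuming images stored as a NumPy array; the function name poison_dataset, the square trigger, and the poisoning rate are illustrative assumptions rather than the method of any specific paper listed below.

    import numpy as np

    def poison_dataset(images, labels, target_label, poison_rate=0.05, trigger_size=3):
        # images: uint8 array of shape (N, H, W, C); labels: int array of shape (N,).
        images, labels = images.copy(), labels.copy()
        n_poison = int(len(images) * poison_rate)
        idx = np.random.choice(len(images), n_poison, replace=False)
        # Stamp a small white square (the trigger) into the bottom-right corner of
        # the selected images and relabel them with the attacker's target class.
        images[idx, -trigger_size:, -trigger_size:, :] = 255
        labels[idx] = target_label
        return images, labels

A model trained on the poisoned set behaves normally on clean inputs but predicts the target label whenever the trigger appears, which is exactly the behavior described above.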
Preprints and Publications in this direction:

Other Works Related to AI Security

Preprints and Publications in this direction:

Computational Pathology

[Figure: the pipeline for developing a machine learning system for medical image analysis and the challenges at each stage]

Despite extensive research in applying machine learning methods to medical image analysis, many challenges in this field remain unsolved. The figure above illustrates the pipeline for developing a machine learning system for medical image analysis and highlights the challenges at each stage. Our focus is on addressing the problems that arise at each step of this pipeline to make better use of medical images.
Preprints and Publications in this direction:

    L-Arginine Supplementation in Severe Asthma. Shu-Yi Liao, Megan R Showalter, Angela L Linderholm, Lisa Franzi, Celeste Kivler, Yao Li, Michael R Sa, Zachary A Kons, Oliver Fiehn, Lihong Qi, Amir A Zeki, Nicholas J Kenyon. In JCI Insight, 2020.


AI Efficiency

To apply machine learning to a real-world problem, we design a model (such as a deep neural network) for the problem, train the model on training data, and then deploy the trained model in the application, where it interacts with the real world. Many difficulties in this pipeline restrict the applicability of machine learning. First, designing machine learning models that handle various types of data efficiently is challenging: a method that works well on one type of data may not be applicable to other types because of large scale or other limitations.
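As a simple illustration of this design-train-deploy pipeline, here is a minimal sketch in PyTorch; the toy model, the random stand-in data, and the file name classifier.pt are purely illustrative assumptions.

    import torch
    from torch import nn

    # 1. Design: define a model for the problem (a toy two-class classifier here).
    model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))

    # 2. Train: fit the model on training data (random tensors as placeholders).
    x, y = torch.randn(256, 16), torch.randint(0, 2, (256,))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(100):
        optimizer.zero_grad()
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward()
        optimizer.step()

    # 3. Deploy: export the trained model (here via TorchScript) so the application
    #    can load and run it outside the training environment.
    torch.jit.script(model).save("classifier.pt")

Each stage of this pipeline is a potential source of the difficulties mentioned above.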
Preprints and Publications in this direction: