While Deep Neural Networks (DNNs) have achieved superior performance in many downstream applications, they are often regarded as black boxes and criticized for their lack of interpretability, since these models cannot provide meaningful explanations of how a certain prediction is made. Without explanations that enhance the transparency of DNN models, it is difficult to build trust among end users.
In this talk, Dr. Xia (Ben) Hu presents a systematic framework, from modeling and application perspectives, for generating DNN interpretability, aiming to address the main technical challenges in interpretable machine learning: faithfulness, understandability, and efficiency of interpretation. Specifically, to tackle the faithfulness challenge of post-hoc interpretation, Hu will introduce how feature inversion and additive decomposition techniques can be used to explain predictions made by two classical DNN architectures, Convolutional Neural Networks and Recurrent Neural Networks.
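As a rough illustration of what an additive decomposition of a recurrent model's prediction looks like (a minimal sketch of the general idea, not the specific method presented in the talk), the contribution of each timestep can be taken as the change in the output logit after that token is read, so the per-step contributions sum exactly to the final prediction:

```python
# Minimal sketch: additively decompose an RNN prediction into per-timestep
# contributions. The contribution of step t is the change in the output logit
# after reading token t, so contributions sum to the final logit (an additive
# decomposition). Model, data, and sizes here are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

vocab_size, embed_dim, hidden_dim = 50, 16, 32
embed = nn.Embedding(vocab_size, embed_dim)
rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
head = nn.Linear(hidden_dim, 1)  # single logit, e.g. a sentiment score

tokens = torch.randint(0, vocab_size, (1, 8))  # one sequence of 8 tokens

def logit_after_prefix(t):
    """Logit produced after reading only the first t tokens (t=0 -> empty prefix)."""
    if t == 0:
        h = torch.zeros(1, 1, hidden_dim)
    else:
        _, h = rnn(embed(tokens[:, :t]))
    return head(h[-1]).squeeze()

with torch.no_grad():
    prefix_logits = [logit_after_prefix(t) for t in range(tokens.size(1) + 1)]
    contributions = [prefix_logits[t + 1] - prefix_logits[t]
                     for t in range(tokens.size(1))]

# Baseline (empty prefix) plus the per-step contributions recovers the final logit.
print("final logit:", prefix_logits[-1].item())
print("baseline + sum of contributions:",
      (prefix_logits[0] + torch.stack(contributions).sum()).item())
```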
In addition, to develop DNNs that generate interpretations more understandable to humans, Hu will present a novel training method that regularizes the interpretations of a DNN with domain knowledge. Finally, to accelerate the interpretation of DNNs, Hu will introduce a framework that significantly reduces the complexity of explaining DNNs without degrading interpretation quality. Fast and efficient DNN explanation promotes the real-world application of explainable AI (XAI), especially in online systems.
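To give a flavor of interpretation regularization with domain knowledge (again a hedged sketch of the general idea, not the training method presented in the talk), one common formulation adds a penalty that drives input-gradient attributions toward zero on features that prior knowledge marks as irrelevant, so explanations align with that knowledge during training:

```python
# Minimal sketch: penalize attribution mass on features that domain knowledge
# flags as irrelevant, alongside the usual task loss. The model, data, mask,
# and penalty weight are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
x = torch.randn(64, 10)
y = torch.randint(0, 2, (64,))
irrelevant_mask = torch.zeros(10)
irrelevant_mask[7:] = 1.0  # domain knowledge: last 3 features should not matter

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

for step in range(100):
    x_req = x.clone().requires_grad_(True)
    logits = model(x_req)
    task_loss = ce(logits, y)

    # Input-gradient attribution of the logits w.r.t. the inputs.
    grads = torch.autograd.grad(logits.sum(), x_req, create_graph=True)[0]
    # Penalize attributions on features flagged as irrelevant.
    interp_penalty = (grads * irrelevant_mask).pow(2).mean()

    loss = task_loss + 1.0 * interp_penalty
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```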
Watch the Recording
Presenter
Dr. Xia “Ben” Hu is an Associate Professor in the Department of Computer Science at Rice University and director of the Center for Transforming Data to Knowledge (D2K Lab). Dr. Hu has published over 100 papers in major academic venues, including NeurIPS, ICLR, KDD, WWW, IJCAI, and AAAI. An open-source package developed by his group, AutoKeras, has become the most used automated deep learning system on GitHub (with over 8,000 stars and 1,000 forks). His work on deep collaborative filtering, anomaly detection, and knowledge graphs has been included in the TensorFlow package, the Apple production system, and the Bing production system, respectively.
Dr. Hu's papers have received several Best Paper (Candidate) awards from venues such as ICML, WWW, WSDM, ICDM, AMIA, and INFORMS. He is the recipient of the NSF CAREER Award and the ACM SIGKDD Rising Star Award. His work has been cited more than 16,000 times, with an h-index of 51. He served as General Co-Chair of WSDM 2020 and will serve as General Co-Chair of ICHI 2023.