
RESEARCH OVERVIEW

In recent years, we have witnessed the emergence and success of deep learning and AI, along with their wide application in many domains, including vision, natural language processing, speech, music, multimedia, and medical imaging. AI has become an integral part of most applications and research topics, including heterogeneous data analytics. However, before AI can be widely adopted, the fundamental issues of trust and robustness in AI need to be addressed.


It is well known that (Deep Learning-based) AI, though highly effective, is neither explainable nor robust, and it has been known to make occasional fatal mistakes that a human would never make. This has eroded trust in AI and limited its widespread adoption.

Moreover, with greater awareness of individual rights and privacy, users and organisations are beginning to demand greater accountability in technologies and applications. As trust and accountability are longer-term goals that require deep fundamental research, most researchers tackle the first step: explainability in AI.

In response to these challenges, NExT will focus its research on accountability in AI and causal reasoning. Accountability encompasses research on various aspects of explainability, trust, and audit, which collectively form the social basis of AI applications. Causal reasoning aims to uncover the true causal relations between inputs and outcomes and to eliminate the spurious correlations that make AI systems vulnerable. It thus contributes towards better transparency and robustness in AI.


Our vision is to advance fundamental research in trust and robustness in AI, which forms the basis for advanced heterogeneous data analytics and applications. In particular, we will carry out research on the following four strategic topics:

TRUSTABLE AND EXPLAINABLE AI

CAUSAL REASONING FRAMEWORK

MULTIMODAL KNOWLEDGE GRAPH, CONVERSATIONAL SEARCH & RECOMMENDATION

HUMAN-AI INTERACTION
