Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/139714
Type: Conference paper
Title: Keeping the Questions Conversational: Using Structured Representations to Resolve Dependency in Conversational Question Answering
Author: Zaib, M.
Sheng, Q.Z.
Zhang, W.E.
Mahmood, A.
Citation: Proceedings of International Joint Conference on Neural Networks, 2023, vol.2023-June, pp.1-7
Publisher: IEEE
Publisher Place: Online
Issue Date: 2023
Series/Report no.: IEEE International Joint Conference on Neural Networks (IJCNN)
ISBN: 9781665488679
ISSN: 2161-4393; 2161-4407
Conference Name: International Joint Conference on Neural Networks (IJCNN) (18 Jun 2023 - 23 Jun 2023 : Gold Coast, Australia)
Statement of Responsibility: Munazza Zaib, Quan Z. Sheng, Wei Emma Zhang, and Adnan Mahmood
Abstract: Intelligent dialogue agents that can engage in conversational question answering (ConvQA) are no longer confined to science fiction and have, in fact, become a reality. These agents are required to understand and correctly interpret the sequential turns provided as the context of a given question. However, these sequential questions are sometimes left implicit and thus require the resolution of natural language phenomena such as anaphora and ellipsis. The task of question rewriting can address the challenge of resolving dependencies among the contextual turns by transforming them into intent-explicit questions. Nonetheless, rewriting implicit questions brings its own challenges, such as producing verbose questions and removing the conversational aspect of the exchange by generating self-contained questions. In this paper, we propose a novel framework, CONVSR (ConvQA using Structured Representations), which captures and generates intermediate representations as conversational cues to enhance the QA model's ability to interpret incomplete questions. We also discuss how the strengths of this approach could be leveraged to design more engaging and more eloquent conversational agents. We test our model on the QuAC and CANARD datasets and show experimentally that our proposed framework achieves a better F1 score than the standard question rewriting model.
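The abstract contrasts full question rewriting with passing compact structured cues alongside the original conversational question. The following minimal Python sketch illustrates that idea only at a toy level; the class and function names (Turn, StructuredCue, resolve_with_cues) are hypothetical and do not reflect the actual CONVSR implementation described in the paper.

```python
# Toy sketch: resolving an implicit follow-up question with structured cues
# instead of rewriting it into a verbose, self-contained question.
# All names here are illustrative assumptions, not the paper's code.
from dataclasses import dataclass


@dataclass
class Turn:
    question: str
    answer: str


@dataclass
class StructuredCue:
    entity: str      # salient entity carried over from earlier turns
    predicate: str   # relation/intent the current question refers back to


def resolve_with_cues(implicit_question: str, cue: StructuredCue) -> dict:
    """Keep the short, conversational question and attach compact cues,
    rather than expanding it into an intent-explicit rewrite."""
    return {
        "question": implicit_question,    # stays conversational, e.g. "When was she born?"
        "context_entity": cue.entity,
        "context_predicate": cue.predicate,
    }


if __name__ == "__main__":
    history = [Turn("Who discovered polonium?", "Marie Curie")]
    # "When was she born?" contains anaphora ("she") left implicit by the user.
    cue = StructuredCue(entity="Marie Curie", predicate="discovered polonium")
    print(resolve_with_cues("When was she born?", cue))
```

A question-rewriting baseline would instead produce a standalone question such as "When was Marie Curie born?", which resolves the anaphora but, as the abstract notes, lengthens the question and strips out its conversational character.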
Keywords: Conversational question answering; information retrieval; question reformulation; deep learning; conversational information seeking
Rights: ©2023 IEEE
DOI: 10.1109/IJCNN54540.2023.10191510
Grant ID: http://purl.org/au-research/grants/arc/DP200102298
Published version: https://ieeexplore.ieee.org/xpl/conhome/10190990/proceeding
Appears in Collections:Computer Science publications
