Multi-modal Stance Detection Across Social Groups
By Alexander Hauptmann
This Project will enhance an existing text-based prototype system that identifies topics sensitive to certain communities and determines persons’ attributes and stances toward those topics. This capability helps flag “off-limits” topics and mitigate potential strategic-communications gaffes. The Project will explore the use of multi-modal information to augment the text-based system, providing capabilities such as detecting stances from text and/or visual inputs and identifying attribute-related information from visual inputs. These approaches aim to improve the robustness of the prototype system. The enhanced prototype will be validated through a combination of test and live cases.
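To illustrate how text and visual signals might be combined, the following is a minimal late-fusion sketch. All names, weights, and thresholds here are hypothetical illustrations, not the project's actual implementation: each modality is assumed to independently produce a stance score in [-1, 1], and the scores are merged by a weighted average.

```python
# Hypothetical late-fusion sketch (not the project's implementation):
# combine independent stance scores from a text model and a visual
# model into a single prediction. Scores lie in [-1, 1], where
# -1 = against the topic and +1 = in favor of it.

def fuse_stance(text_score: float, visual_score: float,
                text_weight: float = 0.7) -> str:
    """Weighted late fusion of per-modality stance scores.

    text_weight reflects an assumed stronger reliability of the
    text modality; it is an illustrative parameter only.
    """
    combined = text_weight * text_score + (1 - text_weight) * visual_score
    if combined > 0.2:
        return "favor"
    if combined < -0.2:
        return "against"
    return "neutral"

# Example: strong textual support, mildly negative visual cue.
print(fuse_stance(0.8, -0.1))  # -> favor
```

Late fusion is only one of several possible designs; early fusion of features or a joint multi-modal model are alternatives the project could equally pursue.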