Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/129211
Type: Conference paper
Title: Joint learning of social groups, individuals action and sub-group activities in videos
Author: Ehsanpour, M.
Abedin Varamin, A.
Saleh, F.
Shi, Q.
Reid, I.D.
Rezatofighi, H.
Citation: Lecture Notes in Computer Science, 2020 / Vedaldi, A., Bischof, H., Brox, T., Frahm, J.-M. (ed./s), vol.12354, pp.177-195
Publisher: Springer
Publisher Place: Cham, Switzerland
Issue Date: 2020
Series/Report no.: Lecture Notes in Computer Science; 12354
ISBN: 3030585441
9783030585440
ISSN: 0302-9743
1611-3349
Conference Name: 16th European Conference on Computer Vision (ECCV) (23 Aug 2020 - 28 Aug 2020 : Glasgow, UK)
Editor: Vedaldi, A.
Bischof, H.
Brox, T.
Frahm, J.-M.
Statement of Responsibility: Mahsa Ehsanpour, Alireza Abedin, Fatemeh Saleh, Javen Shi, Ian Reid, and Hamid Rezatofighi
Abstract: The state-of-the-art solutions for human activity understanding from a video stream formulate the task as a spatio-temporal problem which requires joint localization of all individuals in the scene and classification of their actions or group activity over time. Who is interacting with whom, e.g. not everyone in a queue is interacting with each other, is often not predicted. There are scenarios where people are best split into sub-groups, which we call social groups, and each social group may be engaged in a different social activity. In this paper, we solve the problem of simultaneously grouping people by their social interactions, predicting their individual actions and the social activity of each social group, which we call the social task. Our main contributions are: i) we propose an end-to-end trainable framework for the social task; ii) our proposed method also sets the state-of-the-art results on two widely adopted benchmarks for the traditional group activity recognition task (assuming individuals of the scene form a single group and predicting a single group activity label for the scene); iii) we introduce new annotations on an existing group activity dataset, re-purposing it for the social task. The data and code for our method are publicly available (https://github.com/mahsaep/Social-human-activity-understanding-and-grouping).
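
Note: A minimal sketch of the social task's output structure, assuming pairwise interaction scores between detected people; this is illustrative only, not the authors' released implementation, and the grouping rule, threshold, and all names below are assumptions. Individuals are linked into social groups via union-find on high-scoring pairs; each person keeps an individual action label, and each group receives one social activity label.

    # Hypothetical sketch: form social groups from pairwise interaction
    # scores, then report per-person actions and a per-group activity.
    from collections import Counter

    def social_groups(n_people, interaction_scores, threshold=0.5):
        """Link i and j into one group when their predicted pairwise
        interaction score exceeds `threshold` (an assumed rule)."""
        parent = list(range(n_people))

        def find(i):
            # Union-find root lookup with path compression.
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i

        for (i, j), score in interaction_scores.items():
            if score > threshold:
                parent[find(i)] = find(j)

        groups = {}
        for i in range(n_people):
            groups.setdefault(find(i), []).append(i)
        return list(groups.values())

    # Toy scene: four people forming two social groups.
    actions = {0: "talking", 1: "talking", 2: "queueing", 3: "queueing"}
    scores = {(0, 1): 0.9, (2, 3): 0.8, (1, 2): 0.1, (0, 3): 0.05}

    for group in social_groups(4, scores):
        # Assign one social activity per group, here by majority vote
        # over member actions (a stand-in for a learned group classifier).
        activity = Counter(actions[p] for p in group).most_common(1)[0][0]
        print(f"group {group}: activity={activity}")

Running this toy example prints two groups, one "talking" and one "queueing", mirroring the abstract's point that not everyone in a scene (e.g. a queue) interacts with everyone else.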
Keywords: Collective behaviour recognition; Social grouping; Video understanding
Rights: © Springer Nature Switzerland AG 2020
DOI: 10.1007/978-3-030-58545-7_11
Published version: https://doi.org/10.1007/978-3-030-58545-7_11
Appears in Collections: Aurora harvest 4
Computer Science publications

Files in This Item:
There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.