    Please use this identifier to cite or link to this item: https://tkuir.lib.tku.edu.tw/dspace/handle/987654321/126772


    Title: HMTV: Hierarchical Multimodal Transformer for Video Highlight Query on Baseball
    Authors: Zhang, Qiaoyun;Chang, Chih-Yung;Su, Ming-Yang;Chang, Hsiang-Chuan;Roy, Diptendu Sinha
    Keywords: Hierarchical Multimodal Transformer;BERT;Highlight query
    Date: 2024-09-23
    Issue Date: 2025-03-20 09:24:10 (UTC+8)
    Abstract: With the increasing popularity of watching baseball videos, there is a growing desire among fans to enjoy the highlights of these videos. However, extracting highlights from lengthy baseball videos is a significant challenge due to its time-consuming and labor-intensive nature. To address this challenge, this paper proposes a novel mechanism, called Hierarchical Multimodal Transformer for Video query (HMTV). The proposed HMTV adopts a two-phase approach involving Coarse-Grained clipping for candidate videos and Fine-Grained identification for highlights. In the Coarse-Grained phase, a pitching detection model is employed to extract relevant candidate videos from baseball videos, encompassing the features of pitch deliveries and pitching. In the Fine-Grained phase, a Transformer encoder and pre-trained Bidirectional Encoder Representations from Transformers (BERT) are utilized to capture relationship features between frames of candidate videos and words from users' questions, respectively. These relationship features are then fed into the Video Query (VideoQ) model, implemented by Text Video Attention (TVA). The VideoQ model identifies the start and end positions of the highlights mentioned in the query within the candidate videos. Simulation results demonstrate that the proposed HMTV significantly improves the accuracy of highlight identification in terms of precision, recall, and F1-score.
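To make the Fine-Grained phase described in the abstract concrete, the sketch below illustrates the general idea of fusing query-word features into video-frame features via cross-attention and then scoring each frame as a possible highlight start or end. This is a minimal, illustrative NumPy sketch, not the authors' implementation: all names (`text_video_attention`, `predict_span`), dimensions, and the specific fusion formula are assumptions for demonstration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def text_video_attention(video, text):
    """Fuse word features into frame features via cross-attention.

    video: (n_frames, d) frame embeddings (e.g. from a Transformer encoder)
    text:  (n_words, d)  word embeddings (e.g. from BERT)
    """
    d = video.shape[1]
    attn = softmax(video @ text.T / np.sqrt(d))  # (n_frames, n_words)
    return video + attn @ text                   # residual fusion

def predict_span(fused, w_start, w_end):
    """Score each frame as a span start/end; return the best (start, end)."""
    start_scores = fused @ w_start
    end_scores = fused @ w_end
    start = int(np.argmax(start_scores))
    # Constrain the end position to not precede the start position.
    end = start + int(np.argmax(end_scores[start:]))
    return start, end

rng = np.random.default_rng(0)
video = rng.normal(size=(12, 8))  # 12 candidate-clip frames
text = rng.normal(size=(5, 8))    # 5 query words
fused = text_video_attention(video, text)
start, end = predict_span(fused, rng.normal(size=8), rng.normal(size=8))
print(start, end)
```

In a trained model, `w_start` and `w_end` would be learned projections and the attention would use learned query/key/value weights; the random vectors here only demonstrate the data flow from fused features to a (start, end) highlight span.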
    Relation: Multimedia Systems 30(285), p. 1-18
    DOI: 10.1007/s00530-024-01479-6
    Appears in Collections: [Department of Computer Science and Information Engineering] Journal Articles

    Files in This Item:

    File        Description  Size  Format
    index.html               0Kb   HTML    View/Open

    All items in the Institutional Repository are protected by copyright, with all rights reserved.

