    Please use this permanent URL to cite or link to this item: https://tkuir.lib.tku.edu.tw/dspace/handle/987654321/126358


    Title: A Multimodal Learning Approach for Translating Live Lectures into MOOCs Materials
    Authors: Huang, Tzu-Chia;Chang, Chih-Yuan;Tsai, Hung-I;Tao, Han-Si
    Keywords: Generative MOOCs;Instructional videos;multimodal;Skeleton-based motion classification;Extractive summarization
    Date: 2024-07-09
    Uploaded: 2024-10-07 12:05:57 (UTC+8)
    Abstract: This paper introduces an AI-based solution for the automatic generation of MOOCs, aiming to efficiently create highly realistic instructional videos while ensuring high-quality content. The generated content strives to maintain content accuracy, video fluidity, and vivacity. The paper employs a multimodal model that understands text, images, and sound simultaneously, enhancing the accuracy and realism of video generation. The process involves three stages. First, the preprocessing stage employs OpenAI's Whisper for audio-to-text conversion, supplemented by FuzzyWuzzy and Large Language Models (LLMs) to enhance content accuracy and detect thematic sections. In the second stage, speaker motion prediction begins with skeleton labels, based on which the speaker's motions are classified into different categories. A multimodal model, combining BERT and a CNN, then extracts features from the text and voice diagrams, respectively; using these features together with the skeleton labels, the model learns the speaker's motion categories and can therefore predict the classes of the speaker's motions. The final stage generates the MOOC audiovisuals, converting text into subtitles using LLMs and predicting the speaker's motions. Finally, a well-known tool is used to ensure accurate voice and lip synchronization. Based on these approaches, the proposed mechanism guarantees seamless alignment and consistency among the video elements, thereby ensuring that the generated MOOCs are realistic and up to date.
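    The fuzzy-matching correction idea mentioned in the preprocessing stage can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: it substitutes Python's standard-library difflib for FuzzyWuzzy, and the `vocab` list is a hypothetical stand-in for domain terms recovered from the lecture materials.

    ```python
    import difflib

    def correct_transcript(words, vocabulary, cutoff=0.8):
        """Replace each ASR word with its closest vocabulary term when the
        similarity ratio exceeds `cutoff`; otherwise keep the word as-is."""
        corrected = []
        for word in words:
            # get_close_matches returns the best vocabulary entries whose
            # SequenceMatcher ratio against `word` is >= cutoff.
            match = difflib.get_close_matches(word, vocabulary, n=1, cutoff=cutoff)
            corrected.append(match[0] if match else word)
        return corrected

    # Hypothetical ASR output with one misrecognized domain term ("nural").
    asr = ["the", "convolutional", "nural", "network", "extracts", "features"]
    vocab = ["neural", "convolutional", "network"]
    print(correct_transcript(asr, vocab))
    # → ['the', 'convolutional', 'neural', 'network', 'extracts', 'features']
    ```

    A production pipeline would apply the same matching at the phrase level and combine it with LLM-based correction, as the abstract describes; the threshold controls how aggressively the ASR output is snapped to known terminology.
    
    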
    Appears in Collections: [Department of Computer Science and Information Engineering] Conference Papers

    Files in this item:

    There are no files associated with this item.

    All items in the institutional repository are protected by copyright, with all rights reserved.

