AMP-BiLSTM: An Enhanced Highlight Extraction Method Using Multi-Channel Bi-LSTM and Self-Attention in Streaming Videos

Sheng-Jie Lin,

Chien Chin Chen, Yung‐Chun Chang

et al.

Published: Feb. 5, 2024

With the rise of conversation-oriented streaming videos, the platforms that host them, such as Twitch, have rapidly become prominent information hubs. However, the lengthy nature of such streams often deters viewers from consuming the full content. To mitigate this, we propose AMP-BiLSTM, a novel highlight extraction method that focuses on textual features in streamer discourses and viewer responses rather than on visual features. This approach addresses the limitations of previous methods, which were primarily centered on analyzing visual features and were thus insufficient for videos where highlights emerge from dialogues and interactions. AMP-BiLSTM is built on three techniques, namely Attention, Multi-channel processing, and Position enrichment, integrated into a Bidirectional Long Short-Term Memory (BiLSTM) network. Through experiments on a real-world dataset, we found that viewer messages provide significant utility for highlight extraction in streaming videos. Furthermore, our proposed Multi-channel self-attention can effectively distill text into semantically-rich embeddings. The experimental results demonstrate that AMP-BiLSTM outperforms several state-of-the-art deep learning-based highlight extraction methods, showing promise for improved video content digestion.
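The abstract names self-attention as one of AMP-BiLSTM's building blocks but does not spell it out. For orientation, the following minimal NumPy sketch shows standard scaled dot-product self-attention applied to a window of chat-message embeddings; the function name, dimensions, and random inputs are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of embeddings.

    X: (seq_len, d_model) message embeddings.
    Wq, Wk, Wv: (d_model, d_k) learned projection matrices (random here).
    Returns (seq_len, d_k) context vectors, one per message.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])         # (seq_len, seq_len)
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V

rng = np.random.default_rng(0)
seq_len, d_model, d_k = 5, 8, 4  # hypothetical sizes
X = rng.standard_normal((seq_len, d_model))
Wq, Wk, Wv = (rng.standard_normal((d_model, d_k)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (5, 4)
```

In a multi-channel setup as described, one would run separate attention passes over different input channels (e.g., streamer transcript vs. viewer chat) and feed the resulting context vectors into the BiLSTM.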

Language: English

Cited

0

Video summarization via knowledge-aware multimodal deep networks
Jiehang Xie,

Xuanbai Chen,

Sicheng Zhao

et al.

Knowledge-Based Systems, Journal year: 2024, Issue: 293, pp. 111670 - 111670

Published: March 20, 2024

Language: English

Cited

4
