Showing items 1 - 2 of 2

Your selections:

  • 2021 International Joint Conference on Neural Networks, IJCNN 2021 Vol. 2021-July

Facets

Creator
  • Chetty, Madhu (1)
  • Guo, Teng (1)
  • Kasthuriarachchy, Buddhika (1)
  • Peng, Ciyuan (1)
  • Saikrishna, Vidya (1)
  • Shatte, Adrian (1)
  • Walls, Darren (1)
  • Xia, Feng (1)
  • Zhang, Dongyu (1)
  • Zhang, Minghao (1)

Subject
  • Annotation (1)
  • Crowdsourcing (1)
  • Facial Images (1)
  • Framework (1)
  • Linguistics Features (1)
  • Metaphor Identification (1)
  • Multimodal Model (1)
  • Sentiment Analysis (1)
  • Zero-shot text classification (1)

Cost effective annotation framework using zero-shot text classification

  • Authors: Kasthuriarachchy, Buddhika, Chetty, Madhu, Shatte, Adrian, Walls, Darren
  • Date: 2021
  • Type: Text, Conference paper
  • Relation: 2021 International Joint Conference on Neural Networks, IJCNN 2021 Vol. 2021-July
  • Full Text: false
  • Reviewed:
  • Description: Manual and high-quality annotation of social media data has enabled companies and researchers to develop improved implementations using natural language processing. However, human text-annotation is expensive and time-consuming. Crowd-sourcing platforms such as Amazon's Mechanical Turk (MTurk) can be leveraged for the creation of large training corpora for text classification tasks using social media data. Nevertheless, the quality of annotations can vary significantly based on the interpretations and motivations of annotators completing the tasks. Further, the labelling cost of data through MTurk will increase if target messages are short and contain a significant amount of noise (e.g. promotional messages on Twitter). In this work, we propose a new annotation framework to create high-quality human-annotated datasets for text classification from social media data. We present a zero-shot text classification-based pre-annotation technique that reduces the adverse effects arising from the highly skewed distribution of data across target classes. The proposed framework significantly reduces the cost and time while maintaining the quality of the annotations. Being generic, it can be applied to annotating text data from any discipline. Our experiment with Twitter data annotation using the proposed annotation framework shows a cost reduction of 80% with no compromise to quality. © 2021 IEEE.
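
The pre-annotation step described above can be illustrated with a small sketch: an off-the-shelf zero-shot classifier scores each unlabelled message against the target classes, and only confidently scored messages are passed on for paid human annotation. This is a minimal sketch under stated assumptions, not the paper's implementation: the Hugging Face transformers zero-shot pipeline, the candidate labels, and the 0.8 confidence threshold are all illustrative choices.

    # Minimal sketch of zero-shot pre-annotation for crowdsourced labelling.
    # Assumptions: Hugging Face transformers' zero-shot pipeline stands in for
    # the classifier in the paper; labels and threshold below are hypothetical.
    from transformers import pipeline

    classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

    candidate_labels = ["personal experience", "promotional message"]  # hypothetical classes
    messages = [
        "Just got my flu shot, arm is a bit sore but otherwise fine.",
        "50% off all supplements this weekend only! Click the link now.",
    ]

    pre_annotated = []
    for text in messages:
        result = classifier(text, candidate_labels)
        label, score = result["labels"][0], result["scores"][0]
        # Route only confidently pre-labelled messages to human annotators;
        # low-confidence, noisy messages never reach the crowdsourcing bill.
        if score >= 0.8:  # hypothetical confidence threshold
            pre_annotated.append({"text": text, "suggested_label": label, "score": score})

    print(pre_annotated)

In a setup like this, annotators verify or correct suggested labels rather than labelling from scratch, which is the kind of saving the abstract reports; the exact filtering and class-balancing logic in the paper may differ.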

In your face : sentiment analysis of metaphor with facial expressive features

  • Authors: Zhang, Dongyu, Zhang, Minghao, Guo, Teng, Peng, Ciyuan, Saikrishna, Vidya, Xia, Feng
  • Date: 2021
  • Type: Text, Conference paper
  • Relation: 2021 International Joint Conference on Neural Networks, IJCNN 2021 Vol. 2021-July
  • Full Text: false
  • Reviewed:
  • Description: Metaphor plays an important role in human communication and often conveys and evokes sentiments. Numerous approaches to sentiment analysis of metaphors have thus gained attention in natural language processing (NLP). The primary focus of these approaches is on linguistic features and text rather than other modal information and data. However, visual features such as facial expressions also play an important role in expressing sentiments. In this paper, we present a novel neural network approach to sentiment analysis of metaphorical expressions that combines both linguistic and visual features, which we refer to as the multimodal model approach. For this, we create a Chinese dataset containing textual data from metaphorical sentences along with visual data from synchronized facial images. The experimental results indicate that our multimodal model outperforms several other linguistic and visual models, and also outperforms the state-of-the-art methods. The contribution is realized in terms of the novelty of the approach and the creation of a new, sizeable, and scarce dataset with linguistic and synchronized facial expressive image data. The dataset is particularly useful in languages other than English, and the approach addresses one of the most challenging NLP issues: sentiment analysis in metaphor. © 2021 IEEE.
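
To make the fusion idea concrete, below is a small sketch of a late-fusion network that combines a text encoder over the metaphorical sentence with features from the synchronized facial image. It is a generic illustration under assumed layer sizes, not the architecture from the paper: the GRU text branch, the MLP image branch, and all dimensions are hypothetical.

    # Minimal sketch of late fusion of linguistic and facial-image features for
    # sentiment classification. Assumptions: a generic GRU text encoder and an
    # MLP over pre-extracted image features; not the paper's architecture.
    import torch
    import torch.nn as nn

    class MultimodalSentiment(nn.Module):
        def __init__(self, vocab_size=5000, embed_dim=128, img_feat_dim=512,
                     hidden_dim=256, num_classes=3):
            super().__init__()
            # Text branch: embedding + GRU over the metaphorical sentence.
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.text_enc = nn.GRU(embed_dim, hidden_dim, batch_first=True)
            # Visual branch: small MLP over pre-extracted facial-image features.
            self.img_enc = nn.Sequential(nn.Linear(img_feat_dim, hidden_dim), nn.ReLU())
            # Fusion: concatenate both modalities, then classify sentiment.
            self.classifier = nn.Linear(hidden_dim * 2, num_classes)

        def forward(self, token_ids, img_feats):
            _, h = self.text_enc(self.embed(token_ids))     # h: (1, B, hidden)
            text_vec = h.squeeze(0)                         # (B, hidden)
            img_vec = self.img_enc(img_feats)               # (B, hidden)
            fused = torch.cat([text_vec, img_vec], dim=-1)  # (B, 2 * hidden)
            return self.classifier(fused)                   # sentiment logits

    # Dummy forward pass with random token ids and image features.
    model = MultimodalSentiment()
    logits = model(torch.randint(0, 5000, (4, 20)), torch.randn(4, 512))
    print(logits.shape)  # torch.Size([4, 3])

Late fusion by concatenation is only one of several ways to combine the modalities; the abstract does not specify which fusion strategy the authors use.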
