Deep reinforcement-based conversational AI agent in healthcare system
- Authors: Kulkarni, Pradnya; Stranieri, Andrew; Mahableshwarkar, Ameya; Kulkarni, Mrunalini
- Date: 2022
- Type: Text, Book chapter
- Relation: Studies in Computational Intelligence p. 233-249
- Full Text: false
- Reviewed:
- Description: Conversational AI is a sub-domain of artificial intelligence that deals with speech-based or text-based AI agents capable of simulating and automating conversations and verbal interactions. A Goal Oriented Conversational Agent (GOCA) is a conversational AI agent that attempts to solve a specific problem for users based on their inputs. The development of Reinforcement Learning algorithms has opened up new opportunities in research related to conversational AI, due to the striking similarity the algorithm bears to the way a conversation takes place. This chapter describes a novel, hybrid conversational AI architecture using Deep Reinforcement Learning that can give state-of-the-art results on the tasks of Intent Classification, Entity Recognition, Dialog Management, State Tracking, Information Retrieval and Natural Language Response Generation. The architecture also includes external AI modules focused on intelligent tasks pertaining to the healthcare sector. The AI tasks the conversational agent can perform are: Text-based Question Answering, Text Summarization and Visual Question Answering. © 2022, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
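The chapter's own architecture is not reproduced here, but the state-action framing that makes reinforcement learning fit dialog can be illustrated with a toy goal-oriented agent: states are the slots filled so far, actions are system utterances, and reward arrives when the goal is completed. All slot names, actions and rewards below are invented for illustration, not taken from the chapter.

```python
import random

# Toy goal-oriented dialog: the agent must fill two slots ("symptom",
# "duration") and then issue "diagnose". States are frozensets of filled
# slots; actions are the system's possible utterances.
ACTIONS = ["ask_symptom", "ask_duration", "diagnose"]

def step(state, action):
    """Environment: filling a missing slot earns a small reward; diagnosing
    with both slots filled ends the episode with a large reward."""
    if action == "ask_symptom" and "symptom" not in state:
        return state | {"symptom"}, 1.0, False
    if action == "ask_duration" and "duration" not in state:
        return state | {"duration"}, 1.0, False
    if action == "diagnose":
        return state, (10.0 if state == {"symptom", "duration"} else -5.0), True
    return state, -1.0, False  # redundant question, mild penalty

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    Q = {}
    for _ in range(episodes):
        state = frozenset()
        for _ in range(10):  # cap on dialog turns
            q = Q.setdefault(state, {a: 0.0 for a in ACTIONS})
            a = rng.choice(ACTIONS) if rng.random() < eps else max(q, key=q.get)
            nxt, r, done = step(set(state), a)
            nxt = frozenset(nxt)
            nq = Q.setdefault(nxt, {a2: 0.0 for a2 in ACTIONS})
            q[a] += alpha * (r + gamma * (0 if done else max(nq.values())) - q[a])
            state = nxt
            if done:
                break
    return Q
```

After training, the greedy policy asks the questions it still needs and only then diagnoses; a Deep RL agent replaces the table `Q` with a neural network over rich dialog states.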
Melanoma classification using EfficientNets and ensemble of models with different input resolution
- Authors: Karki, Sagar; Kulkarni, Pradnya; Stranieri, Andrew
- Date: 2021
- Type: Text, Conference paper
- Relation: 2021 Australasian Computer Science Week Multiconference, ACSW 2021, Virtual, Online, 1-5 February 2021, ACM International Conference Proceeding Series
- Full Text:
- Reviewed:
- Description: Early and accurate detection of melanoma with data analytics can make treatment more effective. This paper proposes a method to classify melanoma cases using deep learning on dermoscopic images. The method demonstrates that heavy augmentation during training and testing produces promising results and warrants further research. The proposed method has been evaluated on the SIIM-ISIC Melanoma Classification 2020 dataset and the best ensemble model achieved 0.9411 area under the ROC curve on hold out test data. © 2021 ACM.
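The combination of ensembling with heavy test-time augmentation described above can be sketched as follows. The specific augmentations and the model callables are stand-ins, not the paper's exact pipeline; the paper's EfficientNets trained at different input resolutions would slot in as `models` (each resizing its view accordingly).

```python
import numpy as np

def tta_views(image):
    """A few cheap test-time augmentations of one H x W x C image."""
    return [image, np.fliplr(image), np.flipud(image), np.rot90(image, 2)]

def ensemble_tta_predict(models, image):
    """Mean melanoma probability over every (model, augmented view) pair.

    Each model is a callable mapping an image array to a probability in
    [0, 1]; averaging over models and views smooths out both model-specific
    and orientation-specific errors.
    """
    probs = [m(view) for m in models for view in tta_views(image)]
    return float(np.mean(probs))
```

Averaging probabilities (rather than hard votes) is what lets the ensemble improve a ranking metric like area under the ROC curve.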
Comparing Pixel N-grams and bag of visual word features for the classification of diabetic retinopathy
- Authors: Kulkarni, Pradnya; Stranieri, Andrew; Jelinek, Herbert
- Date: 2019
- Type: Text, Conference proceedings
- Relation: ACSW 2019: Australasian Computer Science Week 2019;Sydney NSW Australia; January 29 - 31, 2019; published in Proceedings of the Australasian Computer Science Week Multiconference p. 1-7
- Full Text: false
- Reviewed:
- Description: The extraction of Bag of Visual Words (BoVW) features from retinal images for automated classification has been shown to be effective but computationally expensive. Histogram and co-variance matrix features do not generally result in models with the same predictive accuracy as BoVW and are still computationally expensive. The discovery of features that enable accurate image classification on computationally constrained devices such as smartphones would open new and promising applications for image classification. For example, smartphone retinal cameras could conceivably make diabetic retinopathy screening widely available and potentially reduce undiagnosed retinopathy, if classification could be achieved with computationally simple algorithms. A novel image feature extraction technique inspired by N-grams in text mining, called 'Pixel N-grams', is described that can serve this purpose. Results on mammogram and texture classification have shown high accuracy despite the reduced computational complexity. However, retinal scan classification results using Pixel N-grams lag behind BoVW approaches. An explanation for the relatively poor performance of Pixel N-grams on diabetic retinopathy, drawing on concepts associated with the No Free Lunch theorem, is presented.
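As described, Pixel N-grams treat quantised pixel intensities like characters and count short runs of consecutive pixels, exactly as character n-grams are counted in text. A minimal sketch follows; the quantisation level and the row-wise horizontal scan are assumptions made for illustration.

```python
import numpy as np
from collections import Counter

def pixel_ngrams(image, n=3, levels=8):
    """Count horizontal n-grams of quantised intensities in a 2-D image.

    Each pixel intensity (0-255) is mapped to one of `levels` bins so it
    behaves like a character from a small alphabet; every run of n
    consecutive pixels in a row is then counted, like character n-grams
    in text retrieval. Returns a Counter from n-gram tuple to frequency.
    """
    q = np.clip((image.astype(np.float64) / 256.0 * levels).astype(int),
                0, levels - 1)
    counts = Counter()
    for row in q:
        for i in range(len(row) - n + 1):
            counts[tuple(map(int, row[i:i + n]))] += 1
    return counts
```

The resulting counts over the (at most levels^n) possible n-grams form the feature vector; unlike BoVW there is no codebook to learn, which is the source of the reduced computational cost.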
Comparison of pixel N-Grams with histogram, Haralick's features and bag-of-visual-words for texture image classification
- Authors: Kulkarni, Pradnya; Stranieri, Andrew
- Date: 2018
- Type: Text, Conference proceedings
- Relation: IEEE 3rd International Conference on Convergence in Technology: Pune, India ; April 6th-8th, 2018 p. 1-4
- Full Text: false
- Reviewed:
- Description: Texture image classification is useful in many domains and has been approached with statistical, spectral and structural techniques. A novel Pixel N-grams technique has recently emerged for image feature extraction. The aim of this paper is to analyse the efficacy of the Pixel N-grams technique for texture image classification in comparison with traditional techniques, namely the intensity histogram, Haralick's features based on the co-occurrence matrix, and the state-of-the-art Bag-of-Visual-Words (BoVW). The experiments were carried out on the benchmark UIUC texture dataset using an SVM classifier, and classification performance was compared using F-score, recall and precision. The classification results using Pixel N-grams were significantly better than those using the intensity histogram and Haralick features, and comparable with the BoVW approach.
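For context on one of the baselines: Haralick's features are statistics derived from the grey-level co-occurrence matrix (GLCM), which counts how often grey level i appears next to grey level j at a fixed offset. A minimal sketch with a horizontal offset and one classic Haralick statistic, contrast; the quantisation to 8 levels is an assumption for illustration.

```python
import numpy as np

def glcm(image, levels=8):
    """Normalised grey-level co-occurrence matrix for offset (0, 1).

    Entry (i, j) is the probability that grey level i has grey level j
    as its immediate right-hand neighbour.
    """
    q = np.clip((image.astype(np.float64) / 256.0 * levels).astype(int),
                0, levels - 1)
    m = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        m[i, j] += 1
    return m / m.sum()

def contrast(p):
    """Haralick contrast: sum over (i, j) of (i - j)^2 * p(i, j)."""
    idx = np.arange(p.shape[0])
    return float(((idx[:, None] - idx[None, :]) ** 2 * p).sum())
```

A uniform image has zero contrast, while an image alternating between extreme grey levels has high contrast, which is why such statistics discriminate textures.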
Framework for Integration of Medical Image and Text-Based Report Retrieval to Support Radiological Diagnosis
- Authors: Kulkarni, Siddhivinayak; Savyanavar, Amit; Kulkarni, Pradnya; Stranieri, Andrew; Ghorpade, Vijay
- Date: 2017
- Type: Text, Book chapter
- Relation: Biomedical Signal and Image Processing in Patient Care p. 86-122
- Full Text: false
- Reviewed:
- Description: In healthcare systems, medical devices help physicians and specialists in diagnosis, prognosis, and therapeutics. As research shows, validation of medical devices is significantly optimized by accurate signal processing. Biomedical Signal and Image Processing in Patient Care is a pivotal reference source for progressive research on the latest development of applications and tools for healthcare systems. Featuring extensive coverage on a broad range of topics and perspectives such as telemedicine, human machine interfaces, and multimodal data fusion, this publication is ideally designed for academicians, researchers, students, and practitioners seeking current scholarly research on real-life technological inventions.
Pixel N-grams for mammographic lesion classification
- Authors: Kulkarni, Pradnya; Stranieri, Andrew; Ugon, Julien; Mittal, Manish; Kulkarni, Siddhivinayak
- Date: 2017
- Type: Text, Conference proceedings
- Relation: 2017 2nd International Conference on Communication Systems, Computing and IT Applications, CSCITA, Mumbai; 7th-8th April, 2017; published in CSCITA 2017 - Proceedings p. 107-111
- Full Text: false
- Reviewed:
- Description: Automated classification algorithms have been applied to breast cancer diagnosis in order to improve diagnostic accuracy and turnaround time. However, classification accuracy, sensitivity and specificity could still be improved further. Moreover, reducing computational cost is another challenge, as the number of images to be analyzed is typically large. In this paper, a novel Pixel N-gram approach inspired by character N-grams in the text retrieval context is applied to mammographic lesion classification. Experiments on a real-world database demonstrate that Pixel N-grams outperform existing histogram and Haralick features with respect to both classification accuracy and sensitivity. The effect of varying N and of using various classifiers is also analyzed. Results show that the optimal value of N is 3, and that an MLP classifier performs better than SVM and KNN classifiers using 3-gram features.
Texture image classification using pixel N-grams
- Authors: Kulkarni, Pradnya; Stranieri, Andrew; Ugon, Julien
- Date: 2016
- Type: Text, Conference proceedings
- Relation: 2016 IEEE International Conference on Signal and Image Processing (ICSIP); Beijing, China; 13-15 Aug, 2016 p. 137-141
- Full Text: false
- Reviewed:
- Description: Texture is an important property for image classification. Various statistical methods such as the co-occurrence matrix and local binary patterns, and spectral approaches such as Gabor filters, have been used to generate global features for image classification. However, global image features fail to distinguish between local variations within an image. Bag-of-visual-words (BoVW) models do capture local variations in an image, but typically do not consider spatial relationships between the visual words. Here, a novel image representation, 'Pixel N-grams', inspired by the character N-gram concept in text retrieval, is applied to texture classification. Experiments on the benchmark UIUC texture database demonstrate that the overall classification accuracy of the Pixel N-gram approach (89.5%) is comparable with that of the BoVW approach (84.4%), with the added advantage of simplicity and reduced computational cost.
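The BoVW baseline referred to in these papers maps local patches to their nearest "visual word" in a codebook and represents the image by the word histogram. A bare-bones sketch follows; the codebook is normally learnt with k-means over training patches, so the fixed toy codebook and non-overlapping patch grid here are simplifying assumptions.

```python
import numpy as np

def extract_patches(image, size=2):
    """Flatten non-overlapping size x size patches of a 2-D image."""
    h, w = image.shape
    return np.array([image[y:y + size, x:x + size].ravel()
                     for y in range(0, h - size + 1, size)
                     for x in range(0, w - size + 1, size)], dtype=np.float64)

def bovw_histogram(image, codebook, size=2):
    """Normalised histogram of nearest-codebook-word assignments."""
    patches = extract_patches(image, size)
    # squared Euclidean distance from every patch to every codebook centre
    d = ((patches[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d.argmin(axis=1)
    return np.bincount(words, minlength=len(codebook)) / len(words)
```

The codebook construction and per-patch nearest-neighbour search are what make BoVW computationally heavier than simple counting schemes such as Pixel N-grams.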
Analysis and comparison of co-occurrence matrix and pixel n-gram features for mammographic images
- Authors: Kulkarni, Pradnya; Stranieri, Andrew; Kulkarni, Sid; Ugon, Julien; Mittal, Manish
- Date: 2015
- Type: Text, Conference paper
- Relation: International Conference on Communication and Computing p. 7-14
- Full Text: false
- Reviewed:
- Description: Mammography is a proven way of detecting breast cancer at an early stage. Various feature extraction techniques such as histograms, the co-occurrence matrix, local binary patterns, Gabor filters and wavelet transforms are used for analysing mammograms. The novel Pixel N-gram feature extraction technique is inspired by the character N-gram concept of text retrieval. In this paper, we compare the novel N-gram feature extraction technique with the co-occurrence matrix feature extraction technique. The experiments were conducted on the benchmark miniMIAS mammography database. Classification of mammograms into normal and abnormal categories using N-gram features showed promising results, with greater classification accuracy, sensitivity and specificity than classification using co-occurrence matrix features. Moreover, computation of N-gram features is found to be considerably faster than computation of co-occurrence matrix features.
Visual character N-grams for classification and retrieval of radiological images
- Authors: Kulkarni, Pradnya; Stranieri, Andrew; Kulkarni, Siddhivinayak; Ugon, Julien; Mittal, Manish
- Date: 2014
- Type: Text, Journal article
- Relation: International Journal of Multimedia & Its Applications Vol. 6, no. 2 (April 2014), p. 35-49
- Full Text:
- Reviewed:
- Description: Diagnostic radiology struggles to maintain high interpretation accuracy. Retrieval of past similar cases would help the inexperienced radiologist in the interpretation process. The character n-gram model has been effective in text retrieval in languages such as Chinese, where there are no clear word boundaries. We propose a visual character n-gram model for representing images for classification and retrieval purposes. Regions of interest in mammographic images are represented with character n-gram features, which are then used as input to a back-propagation neural network for classification of regions into normal and abnormal categories. Experiments on the miniMIAS database show that character n-gram features are useful for this classification. Promising classification accuracy (83.33%) is observed for fatty background tissue, warranting further investigation. We argue that classifying regions of interest would reduce the number of comparisons necessary to find similar images in the database, and hence reduce the time required for retrieval of past similar cases.