LSPM : joint deep modeling of long-term preference and short-term preference for recommendation
- Authors: Chen, Jie , Jiang, Lifen , Sun, Huazhi , Ma, Chunmei , Liu, Zekang , Zhao, Dake
- Date: 2019
- Type: Text , Conference paper
- Relation: 26th International Conference on Neural Information Processing, ICONIP 2019 Vol. 1142 CCIS, p. 237-246
- Full Text: false
- Reviewed:
- Description: In the era of information, recommender systems are playing an indispensable role in our lives. Many deep learning based recommender systems have been created and have shown good progress. However, users' decisions are determined by both long-term and short-term preferences, and most existing efforts study these two requirements separately. In this paper, we seek to build a bridge between the long-term and short-term preferences. We propose a Long & Short-term Preference Model (LSPM), which incorporates LSTM and a self-attention mechanism to learn the short-term preference, and jointly models the long-term preference with a neural latent factor model. We conduct experiments to demonstrate the effectiveness of LSPM on three public datasets. Compared with state-of-the-art methods, LSPM achieved significant improvements in HR@10 and NDCG@10, with relative gains of 3.875% and 6.363%. We publish our code at https://github.com/chenjie04/LSPM/. © Springer Nature Switzerland AG 2019.
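A minimal sketch (assuming PyTorch, illustrative layer sizes, and mean-pooling) of how a joint long/short-term scorer along these lines could be wired; this is not the authors' released implementation, which is at the GitHub link above.
```python
# Hypothetical LSPM-style scorer: LSTM + self-attention for short-term
# preference, latent factors for long-term preference, joint sigmoid score.
import torch
import torch.nn as nn

class LSPMSketch(nn.Module):
    def __init__(self, n_users, n_items, dim=64, n_heads=4):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)  # long-term latent factors
        self.item_emb = nn.Embedding(n_items, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.score = nn.Linear(2 * dim, 1)

    def forward(self, user_ids, recent_items, target_items):
        seq = self.item_emb(recent_items)        # (B, T, D) recent behaviour
        h, _ = self.lstm(seq)                    # sequential encoding
        a, _ = self.attn(h, h, h)                # self-attention refinement
        short = a.mean(dim=1)                    # pooled short-term interest
        tgt = self.item_emb(target_items)
        long = self.user_emb(user_ids) * tgt     # latent-factor long-term term
        return torch.sigmoid(self.score(torch.cat([long, short * tgt], -1)))

model = LSPMSketch(n_users=1000, n_items=5000)
prob = model(torch.tensor([3]), torch.tensor([[11, 42, 7]]), torch.tensor([99]))
```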
Instruction cognitive one-shot malware outbreak detection
- Authors: Park, Sean , Gondal, Iqbal , Kamruzzaman, Joarder , Oliver, Jon
- Date: 2019
- Type: Text , Conference paper
- Relation: 26th International Conference on Neural Information Processing [ICONIP 2019] December 12-15 2019, Proceedings, Part IV Vol. 1142, p. 769-778
- Full Text: false
- Reviewed:
- Description: New malware outbreaks cannot provide the thousands of training samples required to counter malware campaigns. In some cases, there could be just one sample. So, the defense system at the firing line must be able to quickly detect many automatically generated variants using a single malware instance observed from the initial outbreak, by statically inspecting the binary executables. As previous research shows, statistical features such as term frequency-inverse document frequency and n-grams are significantly vulnerable to attacks by mutation through reinforcement learning. Recent studies focus on the raw binary executable as a base feature, which contains instructions describing the core logic of the sample. However, many approaches using image-matching neural networks are insufficient due to malware mutation techniques that generate a large number of samples with high-entropy data. Deriving an instruction-cognitive representation that disambiguates legitimate instructions from the context is necessary for accurate detection over raw binary executables. In this paper, we present a novel method of detecting semantically similar malware variants within a campaign using a single raw binary malware executable. We utilize the Discrete Fourier Transform of instruction-cognitive representations extracted from a self-attention transformer network. The experiments were conducted with in-the-wild malware samples from ransomware and banking Trojan campaigns. The proposed method outperforms several state-of-the-art binary classification models.
- Description: E1
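The matching step named in this abstract can be illustrated in a few lines. The sketch below assumes an encoder has already produced per-instruction embeddings (mocked here with random arrays) and shows why a DFT-magnitude signature tolerates instruction reordering: circular shifts change only the phase, which the magnitude discards.
```python
# Hypothetical pipeline: an encoder (mocked) yields per-instruction
# embeddings; the DFT magnitude gives a shift-tolerant, fixed-length
# signature that can be compared by cosine similarity.
import numpy as np

def dft_signature(instr_embeddings: np.ndarray, k: int = 32) -> np.ndarray:
    """instr_embeddings: (seq_len, dim) instruction representations."""
    spec = np.abs(np.fft.rfft(instr_embeddings, axis=0))  # drop phase info
    spec = spec[:k] if len(spec) >= k else np.pad(spec, ((0, k - len(spec)), (0, 0)))
    sig = spec.ravel()
    return sig / (np.linalg.norm(sig) + 1e-12)

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(dft_signature(a) @ dft_signature(b))

rng = np.random.default_rng(0)
base = rng.normal(size=(300, 16))                    # stand-in encoder output
variant = np.roll(base, 40, axis=0)                  # reordered variant
print(similarity(base, variant))                     # ~1.0: same campaign
print(similarity(base, rng.normal(size=(300, 16))))  # much lower
```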
DINE : a framework for deep incomplete network embedding
- Authors: Hou, Ke , Liu, Jiaying , Peng, Yin , Xu, Bo , Lee, Ivan , Xia, Feng
- Date: 2019
- Type: Text , Conference paper
- Relation: 32nd Australasian Joint Conference on Artificial Intelligence, AI 2019 Vol. 11919 LNAI, p. 165-176
- Full Text:
- Reviewed:
- Description: Network representation learning (NRL) plays a vital role in a variety of tasks such as node classification and link prediction. It aims to learn low-dimensional vector representations for nodes based on network structures or node attributes. While embedding techniques on complete networks have been intensively studied, in real-world applications, it is still a challenging task to collect complete networks. To bridge the gap, in this paper, we propose a Deep Incomplete Network Embedding method, namely DINE. Specifically, we first complete the missing part including both nodes and edges in a partially observable network by using the expectation-maximization framework. To improve the embedding performance, we consider both network structures and node attributes to learn node representations. Empirically, we evaluate DINE over three networks on multi-label classification and link prediction tasks. The results demonstrate the superiority of our proposed approach compared against state-of-the-art baselines. © 2019, Springer Nature Switzerland AG.
- Description: E1
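A toy, deliberately linear stand-in for the pipeline sketched in this abstract: impute unobserved adjacency entries, factorize the combined [adjacency | attributes] matrix, and alternate the two steps EM-style. Everything here (SVD in place of DINE's deep model, the imputation rule, the sizes) is an assumption for illustration only.
```python
# Toy completion-then-embedding loop for a partially observed network.
import numpy as np

def embed(adj, attrs, mask, dim=16, iters=10):
    """adj: (n, n) adjacency; mask: 1 where observed, 0 where missing;
    attrs: (n, d) node attributes. Returns (n, dim) node embeddings."""
    A = np.where(mask == 1, adj, adj[mask == 1].mean())  # crude initial impute
    for _ in range(iters):
        U, S, Vt = np.linalg.svd(np.hstack([A, attrs]), full_matrices=False)
        Z = U[:, :dim] * S[:dim]                         # current embeddings
        recon = (Z @ Vt[:dim])[:, :A.shape[1]]           # refit adjacency part
        A = np.where(mask == 1, adj, np.clip(recon, 0, 1))  # re-impute missing
    return Z

rng = np.random.default_rng(1)
adj = (rng.random((50, 50)) < 0.1).astype(float)
mask = (rng.random((50, 50)) < 0.8).astype(int)   # 20% of entries unobserved
Z = embed(adj, rng.random((50, 8)), mask)
print(Z.shape)   # (50, 16)
```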
Rapid health data repository allocation using predictive machine learning
- Authors: Uddin, Ashraf , Stranieri, Andrew , Gondal, Iqbal , Balasubramanian, Venki
- Date: 2020
- Type: Text , Journal article
- Relation: Health Informatics Journal Vol. 26, no. 4 (2020), p. 3009-3036
- Full Text:
- Reviewed:
- Description: Health-related data is stored in a number of repositories that are managed and controlled by different entities. For instance, Electronic Health Records are usually administered by governments. Electronic Medical Records are typically controlled by health care providers, whereas Personal Health Records are managed directly by patients. Recently, Blockchain-based health record systems largely regulated by technology have emerged as another type of repository. Repositories for storing health data differ from one another based on cost, level of security and quality of performance. Not only have the types of repository increased in recent years, but the quantity of health data to be stored has also grown. For instance, the advent of wearable sensors that capture physiological signs has resulted in an exponential growth in digital health data. The increase in the types of repository and amount of data has driven a need for intelligent processes to select appropriate repositories as data is collected. However, the storage allocation decision is complex and nuanced. The challenges are exacerbated when health data are continuously streamed, as is the case with wearable sensors. Although patients are not always solely responsible for determining which repository should be used, they typically have some input into this decision. Patients can be expected to have idiosyncratic preferences regarding storage decisions depending on their unique contexts. In this paper, we propose a predictive model for the storage of health data that can meet patient needs and make storage decisions rapidly, in real-time, even with data streaming from wearable sensors. The model is built with a machine learning classifier that learns the mapping between characteristics of health data and features of storage repositories from a training set generated synthetically from correlations evident from small samples of experts. Results from the evaluation demonstrate the viability of the machine learning technique used. © The Author(s) 2020.
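The core of the proposal, a classifier mapping data characteristics to a repository, can be sketched as follows. The feature names, repository labels, and the synthetic labelling rule below are invented for illustration; the paper derives its training set from expert-elicited correlations.
```python
# Hedged sketch: learn a mapping from record characteristics to a
# repository class, then make one cheap predict() call per streamed record.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Columns: sensitivity (0-1), size_mb, streaming_rate_hz, patient_control (0/1)
X = np.column_stack([rng.random(2000), rng.lognormal(1, 1, 2000),
                     rng.integers(0, 100, 2000), rng.integers(0, 2, 2000)])
REPOS = np.array(["EHR", "EMR", "PHR", "Blockchain"])
# Toy stand-in rules: very sensitive -> Blockchain, patient-controlled -> PHR,
# large records -> EHR, everything else -> EMR.
y = np.where(X[:, 0] > 0.8, 3, np.where(X[:, 3] == 1, 2, np.where(X[:, 1] > 10, 0, 1)))

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(REPOS[clf.predict([[0.9, 2.0, 50, 0]])])   # expected: ['Blockchain']
```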
Robust Mobile Malware Detection
- Authors: Khoda, Mahbub
- Date: 2020
- Type: Text , Thesis , PhD
- Full Text:
- Description: The increasing popularity and use of smartphones and hand-held devices have made them the most popular target for malware attackers. Researchers have proposed machine learning-based models to automatically detect malware attacks on these devices. Since these models learn application behaviors solely from the extracted features, choosing an appropriate and meaningful feature set is one of the most crucial steps for designing an effective mobile malware detection system. There are four categories of features for mobile applications. Previous works have taken arbitrary combinations of these categories to design models, resulting in sub-optimal performance. This thesis systematically investigates the individual impact of these feature categories on mobile malware detection systems. Feature categories that complement each other are investigated and categories that add redundancy to the feature space (thereby degrading the performance) are analyzed. In the process, the combination of feature categories that provides the best detection results is identified. Ensuring reliability and robustness of the above-mentioned malware detection systems is of utmost importance as newer techniques to break down such systems continue to surface. Adversarial attack is one such evasive attack that can bypass a detection system by carefully morphing a malicious sample even though the sample was originally correctly identified by the same system. Self-crafted adversarial samples can be used to retrain a model to defend against such attacks. However, randomly using too many such samples, as is currently done in the literature, can further degrade detection performance. This thesis proposes two approaches to retrain a classifier through the intelligent selection of adversarial samples. The first approach adopts a distance-based scheme where the samples are chosen based on their distance from malware and benign cluster centers while the second selects the samples based on a probability measure derived from a kernel-based learning method. The second method achieved a 6% improvement in terms of accuracy. To ensure practical deployment of malware detection systems, it is necessary to keep the real-world data characteristics in mind. For example, the benign applications deployed in the market greatly outnumber malware applications. However, most studies have assumed a balanced data distribution. Also, techniques to handle imbalanced data in other domains cannot be applied directly to mobile malware detection since they generate synthetic samples with broken functionality, making them invalid. In this regard, this thesis introduces a novel synthetic over-sampling technique that ensures valid sample generation. This technique is subsequently combined with a dynamic cost function in the learning scheme that automatically adjusts minority class weight during model training, which counters the bias towards the majority class and stabilizes the model. This hybrid method provided a 9% improvement in terms of F1-score. Aiming to design a robust malware detection system, this thesis extensively studies machine learning-based mobile malware detection in terms of best feature category combination, resilience against evasive attacks, and practical deployment of detection models. Given the increasing technological advancements in mobile and hand-held devices, this study will be very useful for designing robust cybersecurity systems to ensure safe usage of these devices.
- Description: Doctor of Philosophy
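One concrete reading of the distance-based selection scheme described above, as a sketch: keep only the adversarial samples lying near the boundary between the malware and benign cluster centres. The metric and keep-ratio are illustrative assumptions, not the thesis's exact procedure.
```python
# Select boundary-region adversarial samples for retraining instead of
# retraining on the whole self-crafted pool.
import numpy as np

def select_adversarial(adv, malware, benign, keep=0.2):
    """adv/malware/benign: (n, d) feature arrays. Returns selected rows of adv."""
    c_mal, c_ben = malware.mean(axis=0), benign.mean(axis=0)
    d_mal = np.linalg.norm(adv - c_mal, axis=1)
    d_ben = np.linalg.norm(adv - c_ben, axis=1)
    # Samples with small |d_mal - d_ben| sit near the decision region and
    # are assumed to be the most informative for retraining.
    border = np.abs(d_mal - d_ben)
    return adv[np.argsort(border)[: int(len(adv) * keep)]]

rng = np.random.default_rng(0)
picked = select_adversarial(rng.normal(0.5, 1, (500, 10)),
                            rng.normal(1.0, 1, (300, 10)),
                            rng.normal(0.0, 1, (300, 10)))
print(picked.shape)   # (100, 10)
```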
Deep matrix factorization for trust-aware recommendation in social networks
- Authors: Wan, Liangtian , Xia, Feng , Kong, Xiangjie , Hsu, Ching-Hsien , Huang, Runhe , Ma, Jianhua
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Transactions on Network Science and Engineering Vol. 8, no. 1 (2021), p. 511-528
- Full Text:
- Reviewed:
- Description: Recent years have witnessed remarkable information overload in online social networks, and social network based approaches for recommender systems have been widely studied. The trust information in social networks among users is an important factor for improving recommendation performance. Many successful recommendation tasks are treated as matrix factorization problems. However, the prediction performance of matrix factorization based methods largely depends on the initialization of the user and item matrices. To address this challenge, we develop a novel trust-aware approach based on deep learning to alleviate the initialization dependence. First, we propose two deep matrix factorization (DMF) techniques, i.e., linear DMF and non-linear DMF, to extract features from the user-item rating matrix for improving the initialization accuracy. The trust relationship is integrated into the DMF model according to the preference similarity and the derivations of users on items. Second, we exploit a deep marginalized Denoising Autoencoder (Deep-MDAE) to extract the latent representation in the hidden layer from the trust relationship matrix to approximate the user factor matrix factorized from the user-item rating matrix. Community regularization is integrated in the joint optimization function to take neighbours' effects into consideration. The results of DMF are applied to initialize the updating variables of Deep-MDAE in order to further improve the recommendation performance. Finally, we validate that the proposed approach outperforms state-of-the-art baselines for recommendation, especially for cold-start users. © 2013 IEEE.
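A minimal sketch of the deep-matrix-factorization initialization idea, assuming PyTorch and toy sizes: two MLP towers map a user's rating row and an item's rating column into a shared latent space and are trained to reproduce observed ratings. The paper's trust and community-regularization terms are omitted here.
```python
# Two-tower DMF sketch trained to reproduce observed ratings.
import torch
import torch.nn as nn

class DMFSketch(nn.Module):
    def __init__(self, n_users, n_items, dim=32):
        super().__init__()
        self.user_tower = nn.Sequential(nn.Linear(n_items, 128), nn.ReLU(),
                                        nn.Linear(128, dim))
        self.item_tower = nn.Sequential(nn.Linear(n_users, 128), nn.ReLU(),
                                        nn.Linear(128, dim))

    def forward(self, user_rows, item_cols):
        u = self.user_tower(user_rows)      # (B, dim) user factors
        v = self.item_tower(item_cols)      # (B, dim) item factors
        return (u * v).sum(-1)              # predicted rating

R = torch.randint(0, 6, (100, 200)).float()        # toy rating matrix
model = DMFSketch(100, 200)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
users, items = torch.randint(0, 100, (64,)), torch.randint(0, 200, (64,))
loss = nn.functional.mse_loss(model(R[users], R[:, items].T), R[users, items])
loss.backward()
opt.step()                                         # one training step
```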
A new data driven long-term solar yield analysis model of photovoltaic power plants
- Authors: Ray, Biplob , Shah, Rakibuzzaman , Islam, Md Rabiul , Islam, Syed
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 136223-136233
- Full Text:
- Reviewed:
- Description: Historical data offers a wealth of knowledge to its users. However, it is often so restrictively large that the information cannot be fully extracted, synthesized, and analyzed efficiently for an application such as the forecasting of variable generator outputs. Moreover, the accuracy of the prediction method is vital. Therefore, a trade-off between accuracy and efficacy is required for a data-driven energy forecasting method. It has been identified that a hybrid approach may outperform individual techniques in minimizing the error, while being more challenging to synthesize. A hybrid deep learning-based method is proposed for the output prediction of solar photovoltaic systems (i.e. the proposed PV system) in Australia to obtain this trade-off between accuracy and efficacy. Historical datasets from 1990-2013 for Australian locations (e.g. North Queensland) are used to train the model. The model is developed using the combination of a multivariate long short-term memory (LSTM) network and a convolutional neural network (CNN). The proposed hybrid deep learning model (LSTM-CNN) is compared with existing neural network ensemble (NNE), random forest, statistical analysis, and artificial neural network (ANN) based techniques to assess the performance. The proposed model could be useful for generation planning and reserve estimation in power systems with high penetration of solar photovoltaics (PVs) or other renewable energy sources (RESs). © 2013 IEEE.
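A compact sketch of a CNN-plus-LSTM forecaster of the kind described, assuming PyTorch, an hourly 24-step window, and five weather features; the real model's depth, inputs, and training regime are not specified here.
```python
# 1-D convolution extracts local patterns from a multivariate window,
# an LSTM models the temporal sequence, a linear head predicts PV output.
import torch
import torch.nn as nn

class SolarLSTMCNN(nn.Module):
    def __init__(self, n_features=5, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=3, padding=1), nn.ReLU())
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (B, T, n_features)
        z = self.conv(x.transpose(1, 2))  # (B, 32, T) local feature maps
        h, _ = self.lstm(z.transpose(1, 2))
        return self.head(h[:, -1])        # next-step PV yield

model = SolarLSTMCNN()
print(model(torch.randn(8, 24, 5)).shape)   # 24-hour window -> (8, 1)
```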
AI and IoT-Enabled smart exoskeleton system for rehabilitation of paralyzed people in connected communities
- Authors: Jacob, Sunil , Alagirisamy, Mukil , Xi, Chen , Balasubramanian, Venki , Srinivasan, Ram
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Access Vol. 9, no. (2021), p. 80340-80350
- Full Text:
- Reviewed:
- Description: In recent years, the number of cases of spinal cord injuries, stroke and other nervous impairments has led to an increase in the number of paralyzed patients worldwide. Rehabilitation that can aid and enhance the lives of such patients is the need of the hour. Exoskeletons have been found to be one of the popular means of rehabilitation. The existing exoskeletons use techniques that impose limitations on adaptability, instant response and continuous control. Also, most of them are expensive, bulky, and require a high level of training. To overcome all the above limitations, this paper introduces an Artificial Intelligence (AI) powered smart and lightweight Exoskeleton System (AI-IoT-SES) which receives data from various sensors, classifies them intelligently and generates the desired commands via the Internet of Things (IoT) for rendering rehabilitation and support, with the help of caretakers, for paralyzed patients in smart and connected communities. In the proposed system, the signals collected from the exoskeleton sensors are processed by an AI-assisted navigation module, which helps the caretakers in guiding, communicating with and controlling the movements of the exoskeleton integrated with the patients. The navigation module uses AI and IoT-enabled Simultaneous Localization and Mapping (SLAM). The casualties of a paralyzed person are reduced by commissioning the IoT platform to exchange data from the intelligent sensors with the remote location of the caretaker to monitor the real-time movement and navigation of the exoskeleton. The automated exoskeleton detects and takes decisions on navigation, thereby improving the life conditions of such patients. The experimental results simulated using MATLAB show that the proposed system is an ideal method for rendering rehabilitation and support for paralyzed patients in smart communities. © 2013 IEEE. **Please note that there are multiple authors for this article therefore only the name of the first 5 including Federation University Australia affiliate “Venki Balasubramanian” is provided in this record**
A critical review of intrusion detection systems in the internet of things : techniques, deployment strategy, validation strategy, attacks, public datasets and challenges
- Authors: Khraisat, Ansam , Alazab, Ammar
- Date: 2021
- Type: Text , Journal article
- Relation: Cybersecurity Vol. 4, no. 1 (2021), p.
- Full Text:
- Reviewed:
- Description: The Internet of Things (IoT) has been rapidly evolving towards making a greater impact on everything from everyday life to large industrial systems. Unfortunately, this has attracted the attention of cybercriminals, who have made IoT a target of malicious activities, opening the door to possible attacks on end nodes. To this end, numerous IoT intrusion detection systems (IDS) have been proposed in the literature to tackle attacks on the IoT ecosystem, which can be broadly classified based on detection technique, validation strategy, and deployment strategy. This survey paper presents a comprehensive review of contemporary IoT IDS and an overview of the techniques, deployment strategies, validation strategies and datasets that are commonly applied for building IDS. We also review how existing IoT IDS detect intrusive attacks and secure communications on the IoT. It also presents the classification of IoT attacks and discusses future research challenges to counter such IoT attacks to make IoT more secure. This survey helps IoT security researchers by uniting, contrasting, and compiling scattered research efforts. Consequently, we provide a unique IoT IDS taxonomy, which sheds light on IoT IDS techniques, their advantages and disadvantages, IoT attacks that exploit IoT communication systems, and the corresponding advanced IDS and detection capabilities to detect IoT attacks. © 2021, The Author(s).
Generative malware outbreak detection
- Authors: Park, Sean , Gondal, Iqbal , Kamruzzaman, Joarder , Oliver, Jon
- Date: 2019
- Type: Text , Conference proceedings , Conference paper
- Relation: 2019 IEEE International Conference on Industrial Technology, ICIT 2019 Vol. 2019-February, p. 1149-1154
- Full Text: false
- Reviewed:
- Description: Recently, several deep learning approaches have been attempted to detect malware binaries using convolutional neural networks and stacked deep autoencoders. Although they have shown respectable performance on a large corpus of data, practical defense systems require precise detection during malware outbreaks where only a handful of samples are available. This paper demonstrates the effectiveness of the latent representations obtained through the adversarial autoencoder for malware outbreak detection. Using an instruction sequence distribution mapped to a semantic latent vector, the model provides a highly effective neural signature that helps detect variants of a previously identified malware within a campaign mutated with minor functional upgrades, function shuffling, or slightly modified obfuscations. The method demonstrates how an adversarial autoencoder can turn a multiclass classification task into a clustering problem when the sample set size is limited and the distribution is biased. The model performance is evaluated on an OS X malware dataset against traditional machine learning models. © 2019 IEEE.
- Description: E1
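The detection step that follows the autoencoder can be sketched simply: treat the single outbreak sample's latent vector as a neural signature and flag candidates by latent cosine similarity. The encoder and the threshold below are assumptions; only the matching logic is shown.
```python
# One-shot matching in latent space: flag samples whose latent vectors
# are close to the signature of the single known outbreak sample.
import numpy as np

def flag_variants(signature: np.ndarray, latents: np.ndarray, thr=0.9):
    """signature: (d,) latent of the one known sample; latents: (n, d)."""
    sig = signature / np.linalg.norm(signature)
    Z = latents / np.linalg.norm(latents, axis=1, keepdims=True)
    return np.flatnonzero(Z @ sig >= thr)   # indices of suspected variants

rng = np.random.default_rng(0)
sig = rng.normal(size=32)
latents = np.vstack([sig + rng.normal(0, 0.05, 32), rng.normal(size=(5, 32))])
print(flag_variants(sig, latents))   # [0]: only the near-duplicate
```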
One-shot malware outbreak detection using spatio-temporal isomorphic dynamic features
- Authors: Park, Sean , Gondal, Iqbal , Kamruzzaman, Joarder , Zhang, Leo
- Date: 2019
- Type: Text , Conference proceedings , Conference paper
- Relation: 18th IEEE International Conference on Trust, Security and Privacy in Computing and Communications/13th IEEE International Conference on Big Data Science and Engineering, TrustCom/BigDataSE 2019 p. 751-756
- Full Text: false
- Reviewed:
- Description: Fingerprinting malware by its behavioural signature has been an attractive approach for malware detection due to the homogeneity of dynamic execution patterns across different variants of similar families. Although previous research shows reasonably good performance in dynamic detection using machine learning techniques on a large corpus of training data, decisions must be undertaken based upon a scarce number of observable samples in many practical defence scenarios. This paper demonstrates the effectiveness of the generative adversarial autoencoder for dynamic malware detection under outbreak situations where, in most cases, a single sample is available for training the machine learning algorithm to detect similar samples that are in the wild. © 2019 IEEE.
- Description: E1
Tracing the Pace of COVID-19 research : topic modeling and evolution
- Authors: Liu, Jiaying , Nie, Hansong , Li, Shihao , Ren, Jing , Xia, Feng
- Date: 2021
- Type: Text , Journal article
- Relation: Big Data Research Vol. 25, no. (2021), p.
- Full Text:
- Reviewed:
- Description: COVID-19 has been spreading rapidly around the world. With the growing attention on the deadly pandemic, discussions and research on COVID-19 are rapidly increasing to exchange the latest findings with the hope of accelerating the pace of finding a cure. As a branch of information technology, artificial intelligence (AI) has greatly expedited the development of human society. In this paper, we investigate and visualize the ongoing advancements of early scientific research on COVID-19 from the perspective of AI. By adopting the Latent Dirichlet Allocation (LDA) model, this paper allocates the research articles into 50 key research topics pertinent to COVID-19 according to their abstracts. We present an overview of early studies of the COVID-19 crisis at different scales including referencing/citation behavior, topic variation and their inner interactions. We also identify innovative papers that are regarded as the cornerstones in the development of COVID-19 research. The results unveil the focus of scientific research, thereby giving deep insights into how the academic community contributes to combating the COVID-19 pandemic. © 2021 Elsevier Inc. **Please note that there are multiple authors for this article therefore only the name of the first 5 including Federation University Australia affiliates “Jing Ren and Feng Xia” is provided in this record**
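The topic-allocation step is straightforward to reproduce in outline. The sketch below uses scikit-learn's LDA with the paper's 50 topics; the three-document corpus is a placeholder, and the paper's actual preprocessing choices are not claimed here.
```python
# Allocate abstracts to topics with LDA, then read off the dominant topic.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

abstracts = ["covid transmission model simulation",     # placeholder corpus
             "vaccine immune response antibodies",
             "deep learning ct scan diagnosis"]
X = CountVectorizer(stop_words="english").fit_transform(abstracts)
lda = LatentDirichletAllocation(n_components=50, random_state=0).fit(X)
doc_topics = lda.transform(X)      # per-abstract topic mixture
print(doc_topics.argmax(axis=1))   # dominant topic per abstract
```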
MODEL : motif-based deep feature learning for link prediction
- Authors: Wang, Lei , Ren, Jing , Xu, Bo , Li, Jianxin , Luo, Wei , Xia, Feng
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Transactions on Computational Social Systems Vol. 7, no. 2 (2020), p. 503-516
- Full Text:
- Reviewed:
- Description: Link prediction plays an important role in network analysis and applications. Recently, approaches for link prediction have evolved from traditional similarity-based algorithms into embedding-based algorithms. However, most existing approaches fail to exploit the fact that real-world networks are different from random networks. In particular, real-world networks are known to contain motifs, natural network building blocks reflecting the underlying network-generating processes. In this article, we propose a novel embedding algorithm that incorporates network motifs to capture higher order structures in the network. To evaluate its effectiveness for link prediction, experiments were conducted on three types of networks: social networks, biological networks, and academic networks. The results demonstrate that our algorithm outperforms both the traditional similarity-based algorithms (by 20%) and the state-of-the-art embedding-based algorithms (by 19%). © 2014 IEEE.
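To make the motif intuition concrete: the snippet below scores candidate links by the number of triangle motifs each would close (their common neighbours), the kind of higher-order signal MODEL feeds into its deep features. This shows only the motif signal, not the full learned model.
```python
# Rank candidate links by how many triangle motifs they would close.
import networkx as nx

def triangle_motif_score(G: nx.Graph, u, v) -> int:
    return len(set(G[u]) & set(G[v]))     # triangles closed by edge (u, v)

G = nx.karate_club_graph()
candidates = [(0, 9), (0, 33), (15, 16)]
print(sorted(candidates, key=lambda e: -triangle_motif_score(G, *e)))
```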
A deep learning model based on concatenation approach for the diagnosis of brain tumor
- Authors: Noreen, Neelum , Palaniappan, Sellappan , Qayyum, Abdul , Ahmad, Iftikhar , Imran, Muhammad , Shoaib, Muhammad
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 55135-55144
- Full Text:
- Reviewed:
- Description: Brain tumor is a deadly disease and its classification is a challenging task for radiologists because of the heterogeneous nature of the tumor cells. Recently, computer-aided diagnosis-based systems have shown promise, as an assistive technology, in diagnosing brain tumors through magnetic resonance imaging (MRI). In recent applications of pre-trained models, features are normally extracted from the bottom layers, which differ between natural and medical images. To overcome this problem, this study proposes a method of multi-level feature extraction and concatenation for early diagnosis of brain tumor. Two pre-trained deep learning models, Inception-v3 and DenseNet201, form the basis of this method. With the help of these two models, two different scenarios of brain tumor detection and classification were evaluated. First, features from different Inception modules were extracted from the pre-trained Inception-v3 model and concatenated for brain tumor classification. Then, these features were passed to a softmax classifier to classify the brain tumor. Second, the pre-trained DenseNet201 model was used to extract features from various DenseNet blocks. Then, these features were concatenated and passed to a softmax classifier to classify the brain tumor. Both scenarios were evaluated with the help of a publicly available three-class brain tumor dataset. The proposed method produced testing accuracies of 99.34% and 99.51% with Inception-v3 and DenseNet201 respectively, achieving the highest performance in the detection of brain tumor. As the results indicate, the proposed method based on feature concatenation using pre-trained models outperformed existing state-of-the-art deep learning and machine learning based methods for brain tumor classification. © 2013 IEEE.
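The first scenario (multi-level extraction and concatenation from Inception-v3) can be sketched in Keras as below. The tapped module names ("mixed5", "mixed8", "mixed10") and the three-class head are assumptions chosen for illustration, not the paper's reported configuration.
```python
# Pool several intermediate Inception-v3 modules, concatenate the pooled
# features, and attach a softmax head for three tumor classes.
import tensorflow as tf

base = tf.keras.applications.InceptionV3(include_top=False,
                                         weights="imagenet")  # downloads weights
base.trainable = False
taps = [base.get_layer(n).output for n in ("mixed5", "mixed8", "mixed10")]
pooled = [tf.keras.layers.GlobalAveragePooling2D()(t) for t in taps]
features = tf.keras.layers.Concatenate()(pooled)        # multi-level features
out = tf.keras.layers.Dense(3, activation="softmax")(features)
model = tf.keras.Model(base.input, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```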
Network representation learning: From traditional feature learning to deep learning
- Authors: Sun, Ke , Wang, Lei , Xu, Bo , Zhao, Wenhong , Teng, Shyh , Xia, Feng
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 205600-205617
- Full Text:
- Reviewed:
- Description: Network representation learning (NRL) is an effective graph analytics technique that helps users deeply understand the hidden characteristics of graph data. It has been successfully applied in many real-world tasks related to network science, such as social network data processing, biological information processing, and recommender systems. Deep learning is a powerful tool for learning data features. However, it is non-trivial to generalize deep learning to graph-structured data, since it differs from regular data such as pictures, which have spatial information, and sounds, which have temporal information. Recently, researchers have proposed many deep learning-based methods in the area of NRL. In this survey, we investigate classical NRL from traditional feature learning methods to deep learning-based models, analyze the relationships between them, and summarize the latest progress. Finally, we discuss open issues in NRL and point out future directions in this field. © 2020 Institute of Electrical and Electronics Engineers Inc. All rights reserved.
Melanoma classification using efficientnets and ensemble of models with different input resolution
- Authors: Karki, Sagar , Kulkarni, Pradnya , Stranieri, Andrew
- Date: 2021
- Type: Text , Conference paper
- Relation: 2021 Australasian Computer Science Week Multiconference, ACSW 2021, Virtual, Online, 1-5 February 2021, ACM International Conference Proceeding Series
- Full Text:
- Reviewed:
- Description: Early and accurate detection of melanoma with data analytics can make treatment more effective. This paper proposes a method to classify melanoma cases using deep learning on dermoscopic images. The method demonstrates that heavy augmentation during training and testing produces promising results and warrants further research. The proposed method has been evaluated on the SIIM-ISIC Melanoma Classification 2020 dataset and the best ensemble model achieved 0.9411 area under the ROC curve on hold out test data. © 2021 ACM.
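The ensemble-plus-test-time-augmentation recipe can be sketched generically: average melanoma probabilities over several models, each at its own input resolution, across flipped and rotated views. The models below are stubs standing in for the paper's EfficientNets; view count and augmentations are assumptions.
```python
# Average predictions of multiple models over augmented views (flip + rot90).
import torch

def tta_ensemble(models_and_sizes, image, n_views=8):
    """models_and_sizes: list of (model, input_size); image: (3, H, W)."""
    probs = []
    for model, size in models_and_sizes:
        x = torch.nn.functional.interpolate(image[None], size=(size, size))
        for i in range(n_views):
            v = torch.flip(x, dims=[-1]) if i % 2 else x    # horizontal flip
            v = torch.rot90(v, k=i // 2 % 4, dims=(-2, -1))  # quarter turns
            probs.append(torch.sigmoid(model(v)))
    return torch.stack(probs).mean(0)        # averaged melanoma probability

stub = lambda size: torch.nn.Sequential(torch.nn.Flatten(),
                                        torch.nn.Linear(3 * size * size, 1))
models = [(stub(64), 64), (stub(96), 96)]    # stand-ins for EfficientNets
print(tta_ensemble(models, torch.rand(3, 128, 128)))
```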
A smart healthcare monitoring system for heart disease prediction based on ensemble deep learning and feature fusion
- Authors: Ali, Farman , El-Sappagh, Shaker , Islam, S. , Kwak, Daehan , Ali, Amjad , Imran, Muhammad , Kwak, Kyung-Sup
- Date: 2020
- Type: Text , Journal article
- Relation: Information Fusion Vol. 63, no. (2020), p. 208-222
- Full Text: false
- Reviewed:
- Description: The accurate prediction of heart disease is essential to efficiently treating cardiac patients before a heart attack occurs. This goal can be achieved using an optimal machine learning model with rich healthcare data on heart diseases. Various systems based on machine learning have been presented recently to predict and diagnose heart disease. However, these systems cannot handle high-dimensional datasets due to the lack of a smart framework that can use different sources of data for heart disease prediction. In addition, the existing systems utilize conventional techniques to select features from a dataset and compute a general weight for them based on their significance. These methods have also failed to enhance the performance of heart disease diagnosis. In this paper, a smart healthcare system is proposed for heart disease prediction using ensemble deep learning and feature fusion approaches. First, the feature fusion method combines the extracted features from both sensor data and electronic medical records to generate valuable healthcare data. Second, the information gain technique eliminates irrelevant and redundant features, and selects the important ones, which decreases the computational burden and enhances the system performance. In addition, the conditional probability approach computes a specific feature weight for each class, which further improves system performance. Finally, the ensemble deep learning model is trained for heart disease prediction. The proposed system is evaluated with heart disease data and compared with traditional classifiers based on feature fusion, feature selection, and weighting techniques. The proposed system obtains accuracy of 98.5%, which is higher than existing systems. This result shows that our system is more effective for the prediction of heart disease, in comparison to other state-of-the-art methods. © 2020
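The information-gain selection step described above is commonly implemented as mutual information; a hedged sketch on synthetic "fused" features follows, with k = 10 as an arbitrary choice rather than the paper's setting.
```python
# Rank fused features by mutual information and keep the top-k before
# training the downstream ensemble.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
X = rng.random((500, 40))                     # fused sensor + EMR features
y = (X[:, 3] + X[:, 17] > 1.0).astype(int)    # toy label: 2 informative cols
gain = mutual_info_classif(X, y, random_state=0)
top_k = np.argsort(gain)[::-1][:10]           # indices of 10 best features
print(top_k[:2])                              # columns 3 and 17 should rank highly
X_selected = X[:, top_k]                      # reduced feature matrix
```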
Assessing trust level of a driverless car using deep learning
- Authors: Karmakar, Gour , Chowdhury, Abdullahi , Das, Rajkumar , Kamruzzaman, Joarder , Islam, Syed
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Transactions on Intelligent Transportation Systems Vol. 22, no. 7 (2021), p. 4457-4466
- Full Text: false
- Reviewed:
- Description: The increasing adoption of driverless cars is already providing a shift away from traditional transportation systems to automated ones in many industrial and commercial applications. Recent research has shown that driverless vehicles will considerably reduce traffic congestion, accidents and carbon emissions, and enhance the accessibility of driving to a wider cross-section of people and lifestyle choices. However, at present, people's main concerns are about their privacy and security. Since traditional protocol-layer security mechanisms are not very effective for a distributed system, trust value-based security mechanisms, a type of pervasive security, are emerging as popular and promising techniques. A few statistical, non-learning-based models for measuring the trust level of a driverless car are available in the current literature. These are not very effective because they cannot capture the extremely distributed, dynamic, and complex nature of traffic systems. To bridge this research gap, in this paper, for the first time, we propose two deep learning-based models that measure the trustworthiness of a driverless car and its major On-Board Unit (OBU) components. The second model also determines which of its OBU components were breached during the driving operation. Results produced using real and simulated traffic data demonstrate that our proposed DNN-based deep learning models outperform other machine learning models in assessing the trustworthiness of an individual car as well as its OBU components. The average precision of detection accuracies for the car, LiDAR, camera, and radar are 0.99, 0.96, 0.81, and 0.83, respectively, which indicates the potential real-life application of our models in assessing the trust level of a driverless car. © 2000-2011 IEEE.
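A small sketch of the two outputs the paper describes, folded into one network for brevity: an overall trust score plus per-OBU-component breach probabilities. Input features, layer sizes, and the shared backbone are assumptions, not the paper's architecture.
```python
# Joint heads: whole-car trust score + per-component breach probabilities.
import torch
import torch.nn as nn

class TrustNet(nn.Module):
    def __init__(self, n_features=20, n_components=3):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(),
                                      nn.Linear(64, 32), nn.ReLU())
        self.trust = nn.Linear(32, 1)              # overall trust level
        self.breach = nn.Linear(32, n_components)  # LiDAR/camera/radar breach

    def forward(self, x):
        h = self.backbone(x)
        return torch.sigmoid(self.trust(h)), torch.sigmoid(self.breach(h))

trust, breach = TrustNet()(torch.randn(4, 20))
print(trust.shape, breach.shape)   # (4, 1) and (4, 3)
```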
Deep learning and big data technologies for IoT security
- Authors: Amanullah, Mohamed , Habeeb, Riyaz , Nasaruddin, Fariza , Gani, Abdullah , Ahmed, Ejaz , Nainar, Abdul , Akim, Nazihah , Imran, Muhammad
- Date: 2020
- Type: Text , Journal article , Review
- Relation: Computer Communications Vol. 151, no. (2020), p. 495-517
- Full Text: false
- Reviewed:
- Description: Technology has become inevitable in human life, especially the growth of the Internet of Things (IoT), which enables communication and interaction with various devices. However, IoT has been proven to be vulnerable to security breaches. Therefore, it is necessary to develop foolproof solutions by creating new technologies or combining existing technologies to address the security issues. Deep learning, a branch of machine learning, has shown promising results in previous studies for the detection of security breaches. Additionally, IoT devices generate large volumes, variety, and veracity of data. Thus, when big data technologies are incorporated, higher performance and better data handling can be achieved. Hence, we have conducted a comprehensive survey on state-of-the-art deep learning, IoT security, and big data technologies. Further, a comparative analysis and the relationships among deep learning, IoT security, and big data technologies are discussed. We have also derived a thematic taxonomy from the comparative analysis of technical studies of the three aforementioned domains. Finally, we have identified and discussed the challenges in incorporating deep learning for IoT security using big data technologies, and have provided directions to future researchers on IoT security aspects. © 2020 Elsevier B.V.
- Authors: Chiang, Christina , Wells, Paul , Fieger, Peter , Sharma, Divesh
- Date: 2021
- Type: Text , Journal article
- Relation: Accounting and Finance Vol. 61, no. 1 (2021), p. 913-936
- Full Text: false
- Reviewed:
- Description: Arguably, the audit course is one of the most challenging as it links prior accounting knowledge with new audit knowledge that students are generally not exposed to. A mini-audit group project was implemented at a New Zealand university, and a learning approach and learning experience survey instrument was administered. Responses from 98 students suggest that they perceived the learning experience positively and were encouraged to adopt a deep approach to learning. The findings have implications for accounting educators in the design and development of learning and assessment strategies in an audit course. © 2020 Accounting and Finance Association of Australia and New Zealand