CAINE 2021: Papers with Abstracts

Abstract. Radio-Frequency Identification (RFID) technology is fairly new and is an important part of the Internet of Things (IoT). RFID is low-cost and is constantly being researched for possible applications. The technology already has many real-world applications, such as in health care, library systems, inventory tracking, and object detection. This paper presents a possible application of RFID in smart infrastructure: it examines different methods that can be used to detect vibration in precast concrete structures and prevent possible damage during transportation.
Abstract. Magnetic flux leakage (MFL) signals are used to estimate the scale and form of faults caused by the decaying metal used to build oil and gas pipelines. These faults, such as rust, can have catastrophic consequences if left undetected and improperly treated, both in terms of environmental damage and loss of life, as well as millions of dollars in maintenance costs for the stakeholders. Machine learning algorithms have proven their ability to solve the problem by correctly recognizing and calculating the scale and form of certain defects. The nonparametric and Bayesian approach to regression known as Gaussian process regression (GPR) is gaining popularity in machine learning. The optimization of GPR was carried out in this report using noisy and noiseless MFL signal measurements. The tunable hyper-parameters were subjected to GPR optimization. Root mean square error (RMSE) was used to evaluate the output. In this research, the Quasi-Newton Method (QNM), an automated methodology for optimizing nonparametric regression analysis, was used to refine the GPR model. The optimization results are then compared to GPR analysis with default parameters, and it has been shown that QNM effectively optimizes the GPR while producing lower RMSE scores on all datasets. The ideal inferred parameter set can be used to train the GPR model for better outcomes in determining oil and gas pipeline defects.
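The abstract does not reproduce the paper's code. As a rough sketch of the same idea (GPR hyper-parameters tuned by a quasi-Newton optimizer and scored by RMSE), here is a scikit-learn example; the synthetic data stands in for MFL measurements, and the kernel choice and all numbers are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(80, 1))              # stand-in positions along a pipe
y = np.sin(X).ravel() + rng.normal(0, 0.1, 80)    # noisy stand-in for MFL readings

# A common kernel choice: RBF for the signal plus WhiteKernel for sensor noise.
kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)

# scikit-learn's default optimizer, "fmin_l_bfgs_b", is itself a quasi-Newton
# method: it tunes the kernel hyper-parameters by maximizing the log marginal
# likelihood, restarted several times to avoid poor local optima.
gpr = GaussianProcessRegressor(kernel=kernel, optimizer="fmin_l_bfgs_b",
                               n_restarts_optimizer=5, random_state=0)
gpr.fit(X, y)
rmse = np.sqrt(mean_squared_error(y, gpr.predict(X)))
```

After fitting, `gpr.kernel_` holds the optimized hyper-parameters, which can be compared against a run with the default kernel, mirroring the comparison described above.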
Abstract. Estimation of expected failure in an oil and gas pipeline system is challenging due to large uncertainties in the parameters associated with burst failure predictive models. The development of machine learning (ML) algorithms for reliability and risk assessment applications has attracted considerable attention from the scientific and research community in recent years. Working on the automation, efficiency, and optimization of underground oil and gas pipeline networks demands open access to extensive databases, which may not be possible. Oil and gas databases are confidential assets of specific countries, and no one can access these databases easily. As a result, training ML models is a big challenge, since they require large amounts of data. To address this data shortage, in this paper, we have generated synthetic training datasets using a tabular generative adversarial neural network (TGAN). The generated synthetic data and real data (when available) were combined to train an artificial neural network (ANN). To further enhance the performance of the proposed system, a genetic algorithm (GA) has been introduced to optimize the weights and biases of the ANN automatically. The results show superior performance compared with the previously reported algorithms in the literature. The proposed methodology succeeds in predicting oil and gas pipeline defects with robust results and low error rates.
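The TGAN stage is hard to sketch briefly, but the GA-tunes-ANN idea can be. The following toy NumPy example (entirely invented: network size, GA parameters, and data are not from the paper) evolves the weights and biases of a tiny 2-4-1 network by selection, crossover, and mutation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny stand-in dataset: two features -> binary "defect" label (all invented).
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

N_W = 17  # 2*4 weights + 4 biases + 4 weights + 1 bias for a 2-4-1 network

def forward(w, X):
    # Unpack the flat weight vector into layer matrices and biases.
    W1, b1 = w[:8].reshape(2, 4), w[8:12]
    W2, b2 = w[12:16].reshape(4, 1), w[16]
    h = np.tanh(X @ W1 + b1)
    z = (h @ W2).ravel() + b2
    return 1.0 / (1.0 + np.exp(-z))          # sigmoid output

def fitness(w):
    return -np.mean((forward(w, X) - y) ** 2)  # GA maximizes negative MSE

pop = rng.normal(size=(40, N_W))
for _ in range(60):
    scores = np.array([fitness(w) for w in pop])
    elite = pop[np.argsort(scores)[-10:]]                 # selection
    parents = elite[rng.integers(0, 10, size=(40, 2))]
    mask = rng.random((40, N_W)) < 0.5                    # uniform crossover
    pop = np.where(mask, parents[:, 0], parents[:, 1])
    pop += rng.normal(0.0, 0.1, size=pop.shape)           # mutation
best = max(pop, key=fitness)
```

The evolved weight vector `best` should beat a constant predictor; in practice one would seed the population from a backprop-trained ANN rather than from random weights.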
Abstract. The underlying nature of adiabatic circuits is most accurately characterized at the circuit level, as it is for traditional technologies. In order to scale system designs for adiabatic logic technologies, modeling of adiabatic circuits at the logic level is necessary. Logic-level models of adiabatic logic circuits can facilitate the design, development, and verification of large-scale digital systems that may be infeasible using circuit simulators. Adiabatic logic circuits can be powered with a four-stage power clock consisting of idle, charge, hold, and recover stages, which provides adiabatic charging and charge recovery, giving adiabatic circuits their low-power operation. By discretizing both the temporal aspects of the power clock and the logic values, a logical model of adiabatic circuit operation is proposed. Using the expressive capabilities of Very High Speed Integrated Circuit (VHSIC) Hardware Description Language (VHDL), the salient aspects of adiabatic circuit models can be captured. In this work, a VHDL framework is defined for modeling adiabatic logic circuits and systems, and its use is demonstrated in several example adiabatic logic circuits.
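The paper's framework is in VHDL; purely as an illustration of the discretization idea (not the paper's model), here is a Python analogue of a four-phase power clock and a logic-level adiabatic buffer whose output is valid only while the clock supplies charge:

```python
from enum import Enum

class Phase(Enum):
    IDLE = 0
    CHARGE = 1
    HOLD = 2
    RECOVER = 3

# One period of the discretized four-stage power clock.
POWER_CLOCK = [Phase.IDLE, Phase.CHARGE, Phase.HOLD, Phase.RECOVER]

def adiabatic_buffer(inp, phase, state):
    """Logic-level model of an adiabatic buffer: the output is valid only
    during CHARGE/HOLD and returns to an invalid value (None, playing the
    role of VHDL's 'Z'/'U') during RECOVER/IDLE as charge is recovered."""
    if phase is Phase.CHARGE:
        state["out"] = inp          # output ramps to the input value
    elif phase in (Phase.RECOVER, Phase.IDLE):
        state["out"] = None         # charge recovered, output invalid
    return state["out"]
```

Stepping a netlist of such gates through `POWER_CLOCK` gives the discrete-time behavior that a VHDL testbench would exercise.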
Abstract. This paper reports on an algorithmic exploration of the theory of causal regularity based on Mackie’s theory of causes as MINUS conditions, i.e., minimal insufficient but necessary members of a set of conditions that, though unnecessary, are sufficient for the effect. We describe an algorithm that extracts causal hypotheses according to this model and report the results of its application to a number of real-world data sets. The results suggest further promising applications, modifications, and extensions that might yield further insights into a dataset.
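The abstract gives no algorithmic detail, so the following is only a schematic sketch of the core idea: enumerate conjunctions of (positive) conditions, keep those sufficient for the effect in the data, and retain only the minimal ones. The toy data and condition names are invented:

```python
from itertools import combinations

# Toy boolean observations (invented for illustration): three candidate
# conditions and the effect "fire".
data = [
    {"short": 1, "flammable": 1, "sprinkler": 0, "fire": 1},
    {"short": 1, "flammable": 1, "sprinkler": 1, "fire": 1},
    {"short": 1, "flammable": 0, "sprinkler": 0, "fire": 0},
    {"short": 1, "flammable": 0, "sprinkler": 1, "fire": 0},
    {"short": 0, "flammable": 1, "sprinkler": 0, "fire": 0},
    {"short": 0, "flammable": 1, "sprinkler": 1, "fire": 0},
    {"short": 0, "flammable": 0, "sprinkler": 0, "fire": 0},
]
conditions = ["short", "flammable", "sprinkler"]

def sufficient(conj):
    # A conjunction (of positive literals) is sufficient for the effect if
    # every observation satisfying it also exhibits the effect.
    rows = [r for r in data if all(r[c] for c in conj)]
    return bool(rows) and all(r["fire"] for r in rows)

# Enumerate conjunctions by size, keeping only the minimal sufficient ones:
# sufficient sets none of whose proper subsets were already accepted.
minimal = []
for k in range(1, len(conditions) + 1):
    for conj in combinations(conditions, k):
        if sufficient(conj) and not any(set(m) < set(conj) for m in minimal):
            minimal.append(conj)
# minimal -> [("short", "flammable")]
```

Each minimal sufficient conjunction is a candidate causal regularity; a full implementation would also handle negated literals and noise.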
Abstract. The emergence of maintenance issues in mechanical systems causes decreased energy efficiency and higher operating costs for many small- to medium-sized businesses. The sooner such issues can be identified and addressed, the greater the energy savings. We have designed and implemented an automated predictive maintenance system that uses machine learning models to predict maintenance needs from data collected via sensors attached to mechanical systems. As a proof of concept, we demonstrate the effectiveness of the system by predicting several operating states for a standard clothes dryer.
Abstract. Character representation in computer systems is the main purpose of character encodings, such as Unicode. The representation of Chinese characters in computer systems is a long-standing issue. It is currently still not possible to easily represent, for instance to input, some Chinese characters in computers. In this research, we especially consider the issue of the Chinese characters that are not covered by the conventional encodings. In this paper, in continuation of our previous works on a universal character encoding for such characters, we describe a non-ambiguous hash function for any Chinese character. Unlike conventional approaches, this function is solely based on the character strokes, thus eliminating any sort of ambiguity. Given its sparsity and low collision rate, the proposed hash function can then be applied to fingerprinting, which can in turn be applied, for instance, to information retrieval. Simplicity and unambiguity are key to our proposal. This work is then formally evaluated and compared to previous works so as to show its applicability and contribution, and to measure its limits.
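The paper's actual encoding is not given in the abstract; as a hypothetical illustration of a stroke-based, collision-free mapping, one could treat a character's stroke sequence as digits of a base-5 numeral (using the five basic CJK stroke classes) and keep the length so that prefixes cannot collide:

```python
# Hypothetical stroke alphabet: the five basic CJK stroke classes.
STROKES = {"héng": 0, "shù": 1, "piě": 2, "diǎn": 3, "zhé": 4}

def stroke_hash(strokes, base=5):
    """Map a stroke sequence to a value unambiguously: the stroke codes form
    a base-5 numeral, and the sequence length is kept alongside it so that
    a sequence and its prefix can never collide."""
    h = 0
    for s in strokes:
        h = h * base + STROKES[s]
    return (len(strokes), h)
```

Because distinct stroke sequences always yield distinct (length, numeral) pairs, the mapping is injective on stroke sequences; ambiguity could only come from characters sharing the exact same stroke sequence, which the paper's richer encoding presumably addresses.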
Abstract. The autoencoder, a well-known neural network model, is usually fitted using a mean squared error loss or a cross-entropy loss. Both losses have a probabilistic interpretation: they are equivalent to maximizing the likelihood of the dataset when one uses a normal distribution or a categorical distribution, respectively. We trained autoencoders on image datasets using different distributions and noticed differences from the initial autoencoder: if a mixture of distributions is used, the quality of the reconstructed images may increase and the dataset can be augmented; one can often visualize the reconstructed image along with the variances corresponding to each pixel. The code which implements this method can be found at
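The MSE/Gaussian equivalence claimed above can be checked numerically: with a fixed sigma, the Gaussian negative log-likelihood equals a constant plus a multiple of the MSE, so minimizing one minimizes the other (and letting the network predict a per-pixel sigma is what enables the variance visualization mentioned). The pixel values below are invented for illustration:

```python
import numpy as np

def gaussian_nll(x, mu, sigma):
    # Negative log-likelihood of x under N(mu, sigma^2), summed over pixels.
    return np.sum(0.5 * np.log(2 * np.pi * sigma ** 2)
                  + (x - mu) ** 2 / (2 * sigma ** 2))

# Three "pixels" and their reconstructions.
x = np.array([0.2, 0.8, 0.5])
mu = np.array([0.25, 0.70, 0.55])
mse = np.mean((x - mu) ** 2)

# With fixed sigma, NLL = constant + n * MSE / (2 sigma^2): minimizing the
# Gaussian NLL over mu is exactly minimizing the mean squared error.
nll = gaussian_nll(x, mu, 1.0)
const = 0.5 * len(x) * np.log(2 * np.pi)
assert np.isclose(nll, const + len(x) * mse / 2)
```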
Abstract. Normalizing flows fall into the category of deep generative models. They explicitly model a probability density function. As a result, such a model can learn probabilistic distributions beyond the Gaussian one. Clustering is one of the main unsupervised machine learning tasks, and the most common probabilistic approach to solving a clustering problem is via Gaussian mixture models. Although there are a few approaches for constructing mixtures of normalizing flows in the literature, we propose a direct approach and use the masked autoregressive flow as the normalizing flow. We show the results obtained on 2D datasets and then on images. The results contain density plots or tables with clustering metrics in order to quantify the quality of the obtained clusters. Although on images we usually obtain worse results than other classic models, the 2D results show that more expressive mixtures of distributions (than Gaussian mixture models) can indeed be learned. The code which implements this method can be found at
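The mixture machinery is independent of the component family: the mixture log-density is a log-sum-exp over component log-densities, whether those come from Gaussians or from flows. A minimal NumPy sketch (Gaussian components stand in here for the masked autoregressive flows of the paper; all numbers are illustrative):

```python
import numpy as np

def mixture_log_prob(x, log_weights, component_log_probs):
    # log p(x) = logsumexp_k [log pi_k + log p_k(x)]; the same formula
    # applies whether the components are Gaussians or normalizing flows.
    lp = np.stack([lw + f(x) for lw, f in zip(log_weights, component_log_probs)])
    m = lp.max(axis=0)
    return m + np.log(np.exp(lp - m).sum(axis=0))

def gaussian_log_prob(mean, std):
    return lambda x: (-0.5 * np.log(2 * np.pi * std ** 2)
                      - 0.5 * ((x - mean) / std) ** 2)

# Two 1-D Gaussian components stand in for the flows.
components = [gaussian_log_prob(0.0, 1.0), gaussian_log_prob(3.0, 1.0)]
log_w = np.log([0.5, 0.5])
x = np.linspace(-2.0, 5.0, 7)

lp = mixture_log_prob(x, log_w, components)
# Clustering: assign each point to the component with the highest posterior.
comp_lp = np.stack([lw + f(x) for lw, f in zip(log_w, components)])
labels = comp_lp.argmax(axis=0)
```

Swapping the lambdas for a flow's `log_prob` gives the mixture of flows; the cluster labels are the argmax posterior responsibilities, as in a Gaussian mixture model.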
Abstract. In this paper, we study Convolutional Neural Network (CNN) applications in medical image processing during the battle against Coronavirus Disease 2019 (COVID-19). Specifically, three CNN implementations are examined: CNN-LSTM, COVID-Net, and DeTraC. These three methods have been shown to offer promising implications for the future of CNN technology in the medical field. This survey explores how these technologies have improved upon their predecessors. Qualitative and quantitative analyses strongly suggest that these methods perform significantly better than comparable technologies. After analyzing these CNN implementations, it is reasonable to conclude that this technology has a place in the future of the medical field, where it can be used by professionals to gain insight into new diseases and to help diagnose infections using medical imaging.
Abstract. Software and Systems Product Line (SSPL) engineering has shown its capability to reduce costs and time to market, thanks to the creation and management of a common platform dedicated to developing a family of products. Such a family can be a family of mobile phones, a family of different brake-system variants for automotive needs, etc. Recently, more and more large-scale companies have started to implement SSPL engineering in their domains by adopting Model-Based Systems Engineering (MBSE). Systems Product Line (PL) engineering is much broader than software PL engineering. Therefore, various aspects of variability (e.g., functional and quality-attribute variability) have to be considered in MBSE. However, variability integrated in MBSE is still limited to functional variability. This paper contributes to enhancing SSPL modelling based on SysML by extending the SysML language. The principal aim is to include various aspects of variability. To this end, a holistic variability model is proposed to define the SysML extensions by means of the UML profiling mechanism. This makes it possible to express variability constructs in different SysML modelling artifacts. We also present an application example, namely the brake systems family extracted from the Splot repository, and show how our SysML extensions are concretely used.
Abstract. We argue for the design of credit-based EV charging while predicting the balance of electric power supply versus power demand for electric vehicles (EVs). We wish to herald a pivotal step toward modeling electric power availability versus power consumption for consumers as well as utility providers; EVs and the corresponding dynamic wireless charging facilities will benefit from our model for the analysis and prediction of electric power.
Abstract. The emerging concept of vehicular communication, including communication with roadside infrastructure, is a promising solution for avoiding accidents and providing live traffic data. There is high demand for technologies that ensure low-latency vehicular communication. Modern vehicles equipped with computing, communication, storage, and sensing capabilities expedite data exchange. To achieve deterministic bounds on data delivery, the ability to be established anywhere quickly, and efficient data queries, we describe a novel peer-to-peer structured overlay model for a cluster of vehicles known as the pyramid tree model.
Abstract. Consider a linear time-varying (LTV) system described by the state-space equation dx(t)/dt = A(t)x(t)+B(t)u(t). The main objectives of this paper include: (i) determination of the analytical (closed-form) solutions for the fundamental matrix X(t) and the state transition matrix P(t,t0) of the LTV system, and (ii) design of feedback control such that the closed-loop system matrix Acl(t) = A(t)-B(t)K(t), where K(t) is a gain matrix, has desirable characteristics, namely, that Acl(t) is commutative and triangular. Commutativity of Acl(t) will facilitate the analytical solutions of Xcl(t) and Pcl(t,t0), including Matlab solutions, while triangularization of Acl(t) will allow these matrices to be calculated manually with ease, especially for low-dimensional systems. This differs from a traditional pole-placement design problem, where the goal is to place the poles of Acl(t) to acquire desired closed-loop stability properties. Examples are given to demonstrate the design objectives. Solutions in Matlab are given as well.
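For context, a standard result on LTV systems (stated here generically, not taken from the paper itself) explains why commutativity yields the closed form: when Acl(t) commutes with its own integral, the state transition matrix is a matrix exponential.

```latex
A_{cl}(t)\int_{t_0}^{t} A_{cl}(\tau)\,d\tau
  \;=\; \left(\int_{t_0}^{t} A_{cl}(\tau)\,d\tau\right) A_{cl}(t)
\quad\Longrightarrow\quad
P_{cl}(t,t_0) = \exp\!\left(\int_{t_0}^{t} A_{cl}(\tau)\,d\tau\right).
```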
Abstract. The purpose of this study is to analyze the behavior of humans and computers, and distinguish, from the results of the analysis, between humans and computer programs. In this study, we focus on the players of Daihinmin, which is a card game with imperfect information. Although not many Daihinmin play logs for human players have been collected, research on the principles of human behavior has been successfully conducted. For that purpose, we propose a method to distinguish between human and computer players using decision trees to represent player characteristics and to compare the differences between them. To evaluate the effectiveness of the proposed method, we apply it to distinguish between human and computer players of Daihinmin. Based on the results, we examine the validity of the proposed method and discuss its potential.
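The abstract names decision trees as the representation; as a purely illustrative sketch (the features, values, and separability below are invented, not derived from Daihinmin play logs), a shallow scikit-learn tree trained on per-player behavioral features stays small enough that its rules remain human-readable:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
# Hypothetical per-player features: mean decision time in seconds and the
# fraction of turns passed. Humans are assumed slower and more cautious.
humans = np.column_stack([rng.normal(4.0, 1.0, 50), rng.normal(0.30, 0.05, 50)])
bots = np.column_stack([rng.normal(0.5, 0.2, 50), rng.normal(0.15, 0.05, 50)])
X = np.vstack([humans, bots])
y = np.array([1] * 50 + [0] * 50)   # 1 = human, 0 = computer

# A shallow tree keeps the learned rules inspectable, which is the point of
# using decision trees to represent and compare player characteristics.
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
acc = clf.score(X, y)
```

`sklearn.tree.export_text(clf)` would then print the threshold rules the tree learned, allowing the human-vs-computer differences to be read off directly.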
Abstract. The manufacture of brick in Taiwan dates back to the Qing Dynasty. In order to understand the historical context of a city and its industrial history, a case study was conducted on two old brick warehouses. This study aims to cross-reference the brick warehouses for the evolving application of construction materials and methods, based on the brick joints made on walls or openings. A photogrammetry modeling approach was applied to reconstruct the buildings and details of each warehouse. An AR-based comparison was made on the Augmented® platform using a smartphone for on-site and remote comparison. The novelty lies in redefining the connection between process and configuration through a looped interaction between AR and 3D models. The flexibility of the smartphone and the portability of the cloud-based 3D AR database enabled a previously reconstructed result to be remodeled as the reference for the follow-up reconstruction process of brick components.
Abstract. Solving inverse kinematics (IK) has long been an important problem in the field of robotics. In recent years, solutions based on neural networks (NNs) have become popular for handling the non-linearity of IK. However, the complexity of IK grows rapidly as the degree-of-freedom (DOF) of the robot arms increases. To address this problem, we exploit the dependencies among the joints of the robot arms, based on the observation that the movements of certain joints of a robot arm will affect the movements of other joints. We investigate the idea under a data-driven setting, i.e., the NN models are trained based on supervised learning through a given trajectory dataset. Several NN architectures are examined to exploit the joint dependencies of robot arms. A greedy algorithm is then presented to find a proper sequence of applying the joints to decrease the distance error. The experimental results on a 7-DOF robot arm show that the NN models using joint dependency can achieve the same accuracy as the single-MLP model while using fewer parameters.
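The greedy joint-sequencing idea can be sketched on a toy problem. Below, direct coordinate moves on a 2-link planar arm stand in for the paper's NN-predicted joint updates (the arm, step sizes, and stopping rule are all invented for illustration): at each step, apply the single joint adjustment that most reduces the end-effector distance error.

```python
import numpy as np

# Toy 2-link planar arm: forward kinematics for joint angles q = (q1, q2).
L1, L2 = 1.0, 1.0
def fk(q):
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def greedy_joint_sequence(q0, target, joints, n_steps=60, step=0.5):
    """At each step, greedily apply the single-joint adjustment that most
    reduces the end-effector distance error; shrink the step size when no
    joint improves."""
    q = q0.copy()
    for _ in range(n_steps):
        current = np.linalg.norm(fk(q) - target)
        best_err, best_q = current, q
        for j in joints:
            for dq in (-step, step):
                trial = q.copy()
                trial[j] += dq
                err = np.linalg.norm(fk(trial) - target)
                if err < best_err:
                    best_err, best_q = err, trial
        if best_err >= current:
            step *= 0.5          # no joint helped: refine the step
        else:
            q = best_q
    return q

target = np.array([0.5, 1.2])
q = greedy_joint_sequence(np.array([0.0, 0.0]), target, [0, 1])
```

The order in which joints win the greedy comparison is the "proper sequence of applying the joints" in miniature; on the 7-DOF arm this selection runs over NN-predicted candidates instead of fixed-step moves.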
Abstract. Many IT enterprises today use Service Oriented Architecture (SOA) as the effective architectural approach for building their systems. Service-Based Systems (SBS), like other complex frameworks, are liable to change to fit new user requirements. These changes may lead to the deterioration of the quality and design of the software systems and may cause the materialization of poor solutions called anti-patterns. Similar to object-oriented systems, web services also suffer from anti-patterns due to bad programming practices, design, and implementation. An anti-pattern is defined as a commonly used process, structure, or pattern of action that, despite initially appearing to be an effective and appropriate response to a problem, has more bad consequences than good ones. Anti-pattern detection using Web Service Description Language (WSDL) metrics can be used as a part of the software development life cycle to reduce the maintenance of the software system and also to improve the quality of the software. The work is motivated by the need to develop an automatic predictive model for the prediction of web service anti-patterns using static analysis of the WSDL metrics. The core ideology of this work is to empirically investigate the effectiveness of classifier techniques, i.e., ensemble and deep learning techniques, in the prediction of web service anti-patterns. In this paper, we present an empirical analysis of the application of seven feature selection techniques, six data sampling techniques, and ten classifier techniques for the prediction of four different types of anti-patterns. The results confirm the predictive ability of WSDL metrics in the prediction of SOA anti-patterns.
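The feature-selection-plus-classifier workflow can be sketched with scikit-learn. The features below are a synthetic stand-in for WSDL metrics (the abstract does not list the metric names), and `class_weight="balanced"` stands in for the data-sampling techniques mentioned; everything here is illustrative, not the paper's configuration:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

# Synthetic stand-in for WSDL metrics; binary label = anti-pattern present.
X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           weights=[0.8, 0.2], random_state=0)

# One feature-selection technique feeding one ensemble classifier; the paper
# sweeps seven selectors, six samplers, and ten classifiers in this pattern.
pipe = Pipeline([
    ("select", SelectKBest(f_classif, k=8)),
    ("clf", RandomForestClassifier(class_weight="balanced", random_state=0)),
])
scores = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc")
```

Repeating this over a grid of selector/sampler/classifier combinations and comparing the cross-validated scores reproduces the shape of the empirical analysis described above.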