Structural Capital Model for Universities Based on the JDL Data Fusion Model and Information Quality
Revista Ibérica de Sistemas e Tecnologias de Informação (RISTI)
M. A. Becerra, E. Londoño-Montoya, L. Serna-Guarín, D. Peluffo-Ordóñez, C. Tobón, L. Giraldo
Intellectual capital is one of the most critical intangible assets for universities, and there are multiple models to value it through its human, structural, and relational components. However, this is an open field of research that still demands new solutions to assess it effectively from each of its components. For the assessment of the structural component in higher education institutions, this study proposes a model that combines information quality assessment with the JDL (Joint Directors of Laboratories) data fusion model, which has been used in military applications. The proposed model is original in the methods used and their association, distributed across six levels that execute information pre-processing, object assessment, situation and risk assessment, and process refinement. Besides, it evaluates the quality of the information, its traceability, and its context to refine the process and obtain a more objective assessment, taking into account the imperfection of the information for decision-making in impact and risk management. The model not only allows the assessment of structural capital but also supports decision-making based on the quality of information and its impact. The functionality of the model is described by levels.
Knee Joint Angle Measuring Portable Embedded System based on Inertial Measurement Units for Gait Analysis
International Journal on Advanced Science, Engineering and Information Technology (IJASEIT)
Dagoberto Mayorca-Torres, Julio C. Caicedo-Eraso, Diego H. Peluffo-Ordóñez
Within clinical research, gait analysis is a fundamental part of the functional evaluation of human body movement. It has been carried out through different methods and tools, which allow early diagnosis of diseases as well as monitoring and assessing the effectiveness of therapeutic plans applied to patients for rehabilitation. The observational method is one of the most used in specialized centers in Colombia; however, to avoid possible errors associated with the subjectivity of observation, technological tools that provide quantitative data can support this method. This paper deals with the methodological process of developing a computational tool and hardware device for gait analysis, specifically the articular kinematics of the knee. This work develops a prototype based on the fusion of inertial measurement unit (IMU) data as an alternative for attenuating the errors associated with each of these technologies. A videogrammetry technique measured the same human gait patterns to validate the proposed system in terms of accuracy and repeatability of the recorded data. Results showed that the developed prototype successfully captured the knee-joint angles of flexion-extension motions with high consistency and accuracy with respect to the measurements obtained from the videogrammetry technique. Statistical analysis (ICC and RMSE) exhibited a high correlation between the two systems for the joint-angle measures. These results suggest the possibility of using an IMU-based prototype in realistic scenarios for accurately tracking a patient's knee-joint kinematics during gait.
Analysis of the Thermal Behavior in the Goldwind S50/750 Wind Turbines Installed in the Wind Farm Gibara II using CAD-CAE Tools
International Journal of Mechanical and Production Engineering Research and Development (IJMPERD)
Y. A. Feliciano, C. A. Trinchet, E. Meléndez, L. L. Lorente-Leyva, D. H. Peluffo-Ordóñez
This study examines the thermal behavior inside the nacelle of the Goldwind S50/750 wind turbines installed in the Wind Farm Gibara II. It allows the early diagnosis of incipient failures that occur in the studied devices because of the high temperatures generated in their components under the operating conditions of Cuba. The thermal-state values are obtained using forecast statistics and computer-aided design and engineering (CAD-CAE) software, such as SolidWorks and its Flow Simulation add-on. The research is grounded in the study of six assembled devices of Chinese origin, which have been in operation for nine years. For that purpose, we used a database that collects temperature measurements under different working conditions and at different points inside the nacelle.
Data Fusion and Information Quality for Biometric Identification from Multimodal Signals
RISTI - Revista Ibérica de Sistemas e Tecnologias de Informação
M. A. Becerra, L. Lasso-Arciniegas, A. Viveros, L. Serna-Guarín, D. Peluffo-Ordóñez, C. Tobón
Biometric identification is carried out by processing physiological traits and signals. Biometric systems are an open field of research and development, since they are permanently susceptible to attacks, demanding continuous development to maintain their reliability. The main objective of this study is to analyze the effects of information quality on biometric identification and to consider it in access control systems. This paper proposes a data fusion model for the development of biometric systems that considers the assessment of information quality. The proposal is based on the JDL (Joint Directors of Laboratories) data fusion model, which includes raw data processing, pattern detection, situation assessment, and risk or impact assessment. The results demonstrated the functionality of the proposed model and its potential compared to other traditional identification models.
Evaluation of Feature Extraction Techniques for the Classification of Seismic-Volcanic Signals from the Nevado del Ruiz Volcano
Y. Erazo-Bravo, E. Rosero-Narváez, P. Castro-Cabrera, J. Londoño-Bonilla, D. H. Peluffo-Ordóñez.
Currently, research on the automatic classification of seismic-volcanic events -mainly based on machine learning techniques- has been carried out, aimed at identifying the nature of the recorded event. In this sense, several approaches have been introduced. Nonetheless, due to these signals' variability, there is still no conclusive characterization method, and it is in fact an open and challenging research problem. In this work, a methodology for comparing feature extraction techniques is developed, aimed at discriminating seismic events of volcanic origin. Representations of the signals in the time, frequency, time-frequency, and cepstral domains are used. The set of attributes is optimized through feature selection based on weight assignment. A supervised classification is executed using known records. Finally, classification performance measures were obtained to determine the subset of features that best represent and discriminate the signals.
A data set for electric power consumption forecasting based on socio-demographic features: Data from an area of southern Colombia
Data in Brief
J. Parraga-Alava, J. D. Moncayo-Nacaza, J. Revelo-Fuelagán, P. D. Rosero-Montalvo, A. Anaya-Isaza, D. H. Peluffo-Ordóñez.
In this article, we introduce a data set of electric-power-consumption-related features registered in seven main municipalities of Nariño, Colombia, from December 2010 to May 2016. The data set consists of 4427 socio-demographic characteristics and 7 power-consumption-related measured values. Data were fully collected by the company Centrales Eléctricas de Nariño (CEDENAR) according to the client consumption records. Power consumption data collection was carried out following a manual procedure wherein company workers manually register the readings (measured in kWh) reported by the electric energy meters installed at each housing/building. The released data set is aimed at providing researchers a suitable input for designing and assessing the performance of forecasting, modelling, simulation, and optimization approaches applied to electric power consumption prediction and characterization problems. The data set, so named in shorthand PCSTCOL, is freely and publicly available at https://doi.org/10.17632/xbt7scz5ny.3.
Comparison of Controllers and Mathematical Modeling of a Magnetic Levitator
E. P. Herrera-Granda, K. A. Herrera-Mayorga, I. D. Herrera-Granda, L. M. Sierra-Martínez, D. H. Peluffo-Ordóñez.
This work presents the mathematical modeling and simulation of a magnetic levitator and a comparison of different control techniques applied to the system, in order to determine which technique better stabilizes the magnetic levitator. The dynamical modeling was done applying the Newton-Euler formulation, and the obtained equations were represented in state space. Then the system was linearized applying Taylor's approximation, and the obtained matrices were used for the controllers' design. The controllers employed for the comparison were: a feedback controller, the linear-quadratic regulator (LQR), and the neural-network-based nonlinear autoregressive moving average (NARMA) controller. Finally, the designed controllers and the plant were tested in several simulations using MATLAB and Simulink. The results proved that the three techniques were capable of stabilizing this particular system, and some significant advantages were found applying the NARMA and LQR techniques.
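As an illustration of the LQR design step mentioned above, the following minimal Python sketch (not the paper's MATLAB code) computes a discrete-time LQR gain by iterating the Riccati equation. The plant matrices are a stand-in double integrator, not the levitator model, and all names and weights are illustrative assumptions.

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR gain via fixed-point iteration of the Riccati equation."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # optimal gain for current P
        P = Q + A.T @ P @ (A - B @ K)                      # Riccati update
    return K

# Illustrative double-integrator plant (a stand-in for the linearized levitator)
dt = 0.01
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
K = dlqr(A, B, Q=np.eye(2), R=np.array([[1.0]]))
# Closed loop x_{k+1} = (A - B K) x_k should have all eigenvalues inside the unit circle
stable = max(abs(np.linalg.eigvals(A - B @ K))) < 1.0
```

The same gain computation is what `lqr`/`dlqr` perform in MATLAB, which is the environment the paper actually uses.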
Interactive Visualization Interfaces for Big Data Analysis Using Combination of Dimensionality Reduction Methods: A Brief Review
A. C. Umaquinga-Criollo, D. H. Peluffo-Ordóñez, P. D. Rosero-Montalvo, P. E. Godoy-Trujillo, H. Benítez-Pereira.
Big Data analysis allows generating knowledge based on mathematical models that surpass human capabilities, and therefore robust computer systems are necessary. In this connection, dimensionality reduction (DR) allows performing approximations to make data perceptible in a simple and compact way while also reducing the computational cost. Additionally, interactive interfaces enable the user to work with algorithms involving complex mathematical and statistical processes, typically aimed at providing weighting factors to each DR algorithm to find the best way to represent data in a low dimension. In this study, a bibliographic review of the different models of interactive interfaces for the analysis of Big Data using DR is presented, considering different existing proposals and approaches on how to display the information. Particularly, those approaches based on mental processes and the use of color along with intuitive handling are of special interest.
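As a minimal example of the kind of DR method such interfaces combine, the sketch below projects data onto its top principal components with plain NumPy. PCA is only one of many DR algorithms surveyed; the function name and toy data are assumptions for illustration.

```python
import numpy as np

def pca_reduce(X, n_components=2):
    """Project data onto its top principal components (a basic linear DR method)."""
    Xc = X - X.mean(axis=0)                          # center the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # rows of Vt = principal axes
    return Xc @ Vt[:n_components].T                  # low-dimensional embedding

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))   # toy high-dimensional data
Y = pca_reduce(X, n_components=2)
```

An interactive interface would typically let the user weight several such embeddings and blend them into a single 2-D view.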
Exploring the Characterization and Classification of EEG Signals for a Computer-Aided Epilepsy Diagnosis System
E. Vega-Gualán, A. Vargas, M. Becerra, A. Umaquinga, J. A. Riascos, D. H. Peluffo-Ordóñez.
Epilepsy occurs when the localized electrical activity of neurons suffers from an imbalance. One of the most adequate methods for diagnosis and monitoring is the analysis of electroencephalographic (EEG) signals. Although there is a wide range of alternatives to characterize and classify EEG signals for epilepsy analysis purposes, many key aspects related to accuracy and physiological interpretation are still considered open issues. This paper performs an exploratory study in order to identify the most adequate among frequently used methods for characterizing and classifying epileptic seizures. In this regard, a comparative study is carried out on several subsets of features using four representative classifiers: linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), k-nearest neighbors (KNN), and support vector machine (SVM). The framework uses a well-known epilepsy dataset and runs several experiments for two- and three-class classification problems. The results suggest that discrete wavelet transform (DWT) decomposition with SVM is the most suitable combination.
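To make the DWT-based characterization concrete, here is a hedged sketch of a Haar wavelet decomposition with relative band-energy features, a common way of summarizing EEG sub-bands before feeding a classifier. The Haar basis, number of levels, and function names are illustrative choices, not the paper's exact setup.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar discrete wavelet transform (approximation, detail)."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return a, d

def band_energies(x, levels=3):
    """Relative energy of each detail band plus the final approximation band."""
    feats = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        a, d = haar_dwt(a)
        feats.append(np.sum(d ** 2))         # energy of this detail band
    feats.append(np.sum(a ** 2))             # energy of the coarsest approximation
    feats = np.array(feats)
    return feats / feats.sum()               # normalize to relative energies

sig = np.sin(np.linspace(0, 8 * np.pi, 256))  # toy stand-in for an EEG segment
f = band_energies(sig, levels=3)
```

The resulting feature vector (one value per band) is the sort of input the compared classifiers would receive.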
Multi-target Tracking for Sperm Motility Measurement Using the Kalman Filter and JPDAF: Preliminary Results
RISTI Journal 2019
D. Mayorca-Torres, H. Guerrero-Chapal, J. Mejía-Manzano, D. Lopez-Mesa, D. H. Peluffo-Ordóñez, J. A. Salazar-Castro.
The determination of sperm motility characteristics is of great importance for assessing fertility in men. The spermogram is the main diagnostic test to confirm semen quality. Currently, many fertility laboratories use visually assisted evaluation techniques based on the Makler counting chamber, where motility and sperm count analysis can be performed. This research project proposes a method that allows the quantification of motility through the use of the joint probabilistic data association filter (JPDAF) based on the Kalman filter. This research comprises the stages of segmentation, feature extraction, and the development of tracking algorithms for the association of sperm trajectories when there are multiple targets. A total of 200 individual sperm were selected, and the effectiveness of sperm classification was determined according to the motility categories established by the WHO, obtaining an average value of 93.5% across the categories (A, B, C, and D).
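The Kalman-filter backbone of such a tracker can be sketched as follows: a 1-D constant-velocity filter smoothing noisy position measurements of a single target. The JPDAF data-association step for multiple targets is omitted, and all parameters are illustrative assumptions.

```python
import numpy as np

def kalman_cv_track(zs, dt=1.0, q=1e-2, r=1.0):
    """Track a target's 1-D position with a constant-velocity Kalman filter."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition over (position, velocity)
    H = np.array([[1.0, 0.0]])              # only position is measured
    Qn = q * np.eye(2)                      # process noise covariance
    Rn = np.array([[r]])                    # measurement noise covariance
    x = np.array([[zs[0]], [0.0]])          # initial state from first measurement
    P = np.eye(2)
    out = []
    for z in zs:
        x = F @ x                           # predict state
        P = F @ P @ F.T + Qn                # predict covariance
        S = H @ P @ H.T + Rn                # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x + K @ (np.array([[z]]) - H @ x)   # update with measurement
        P = (np.eye(2) - K @ H) @ P
        out.append(float(x[0, 0]))
    return out

true = np.arange(50, dtype=float)                                # target at 1 unit/step
noisy = true + np.random.default_rng(1).normal(0, 0.5, 50)       # noisy detections
est = kalman_cv_track(noisy)
```

In the multi-target case, JPDAF would weight each detection's contribution to each track's update by its association probability.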
Satellite-image-based crop identification using unsupervised machine learning techniques: Preliminary results
RISTI Journal 2019
M. Y. M. Revelo, J. B. Gómez-Mendoza, D. H. Peluffo-Ordóñez.
Public Urban Transport Optimization by Means of Tabu Search and PSO Algorithms: Medellín, Colombia
RISTI Journal 2019
L. Betancur-Delgado, M.A. Becerra, C. Duque-Mejía, D.H. Peluffo-Ordóñez, K.C. Álvarez-Uribe.
Method for the Improvement of Knee Angle Accuracy Based on Kinect and IMU: Preliminary Results
O. D. Mayorca-Torres, J. C. Caicedo-Eraso, D. H. Peluffo-Ordóñez.
One way to identify musculoskeletal disorders in the lower limb is through functional examination, where the normal ranges of the joints are evaluated. Currently, this test can be performed with technological support, with optical sensors and inertial measurement units (IMU) being the most used. Kinect has been widely used for the functional evaluation of the human body; however, it has some limitations for movements made in the depth plane and when there is occlusion of the limbs. Inertial measurement units allow orientation and acceleration measurements to be obtained at a high sampling rate, with some restrictions associated with drift. This article proposes a methodology that combines the acceleration measures of the IMU and Kinect sensors in two planes of movement (frontal and sagittal). These measurements are filtered in the preprocessing stage using a Kalman filter and merged through a mathematical fusion equation. The fused data achieve acceptable RMS error values of 5.5° and an average consistency of 92.5% for the sagittal plane with respect to the goniometer technique. The data are shown through an interface that allows the visualization of knee-joint kinematic data, as well as tools for signal analysis by the health professional.
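The idea of merging two angle estimates can be illustrated with the minimum-variance fusion rule, which weighs each sensor's reading by the other's variance. This is a textbook sketch of sensor fusion, not the paper's exact Kalman formulation; the function name and sample values are assumptions.

```python
def fuse_two_sensors(z1, var1, z2, var2):
    """Minimum-variance fusion of two independent measurements of the same angle:
    each reading is weighted by the other sensor's variance, so the more
    reliable sensor dominates."""
    w1 = var2 / (var1 + var2)
    fused = w1 * z1 + (1.0 - w1) * z2
    fused_var = (var1 * var2) / (var1 + var2)  # never worse than either sensor alone
    return fused, fused_var

# e.g. a Kinect reading of 10 deg and an IMU reading of 12 deg with equal variance
angle, var = fuse_two_sensors(10.0, 1.0, 12.0, 1.0)
```

With equal variances the rule reduces to a simple average; a Kalman filter applies the same weighting recursively as the variances evolve over time.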
Kernel-spectral-clustering-driven motion segmentation: Rotating-objects first trials
O. Oña, J. Riascos, I. Marrufo, M. Paez, D. Mayorca, K. L. Ponce-Guevara, J. A. Salazar-Castro, D. H. Peluffo-Ordóñez.
Time-varying data characterization and classification is a field of great interest in both scientific and technology communities. There exists a wide range of applications and challenging open issues, such as automatic motion segmentation, moving-object tracking, and movement forecasting, among others. In this paper, we study the use of the so-called kernel spectral clustering (KSC) approach to capture the dynamic behavior of frames - representing rotating objects - by means of kernel functions and feature relevance values. On the basis of previous research works, we formally derive a here-called tracking vector able to unveil sequential behavior patterns. As a remarkable outcome, we alternatively introduce an encoded version of the tracking vector by converting the resulting clustering indicators into decimal numbers. To evaluate our approach, we test the studied KSC-based tracking on a rotating object from the COIL-20 database. Preliminary results produce clear evidence of the relationship between the clustering indicators and the starting/ending time instants of a specific dynamic sequence.
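The decimal encoding of cluster-indicator vectors can be sketched in a few lines: each frame's binary indicator vector is read as a base-2 number. The function name and bit ordering are assumptions for illustration, not the paper's exact convention.

```python
def encode_tracking_vector(indicators):
    """Convert a binary cluster-indicator vector into a single decimal code,
    reading the vector as a base-2 number (leftmost bit = most significant)."""
    return sum(bit << i for i, bit in enumerate(reversed(indicators)))

# e.g. the indicator vector [1, 0, 1] maps to the code 5
code = encode_tracking_vector([1, 0, 1])
```

A change in the code across consecutive frames then signals the start or end of a dynamic sequence.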
Classification system for corporate reputation based on financial variables
RISTI Journal 2019
E. Londoño-Montoya, M. A. Becerra, J. Murillo-Escobar, L. Gómez-Bayona, Moreno-G. López, D. H. Peluffo-Ordóñez.
The most important external assessment for companies is reputation, which is very difficult to calculate since its characterization may require a large amount of qualitative and quantitative data. This study presents a comparison of different corporate reputation classification systems based on financial variables. Initially, a database was constructed using data from the Corporate Reputation Business Monitor and the Business Information and Reporting System of the Colombian Superintendence of Companies. The records were labeled as high or low. Then, a relevance analysis was carried out using linear discriminant analysis. Four classifiers (ANFIS, K-NN, F-NN, and SVM-PSO) were compared to categorize reputation, achieving a performance of 94% accuracy, which demonstrated the discriminant capacity of the financial variables for classifying reputation.
Artificial Neural Networks in the Demand Forecasting of a Metal-Mechanical Industry
Journal of Engineering and Applied Sciences
L. L. Lorente-Leyva, D. R. Patino-Alarcon, Y. Montero-Santos, I. D. Herrera-Granda, D. H. Peluffo-Ordóñez, A. M. Lastre-Aleaga, A. Cordoves-Garcia
This research presents an application of artificial neural networks to demand forecasting using MATLAB software. In any planning process, forecasts play a fundamental role, being one of the bases for planning, organizing, and controlling production. Priority is given to the most critical nodes and their key activities, so that the decisions made about them will generate the greatest possible positive impact. The applied methodology demonstrates the quality of the solutions found, which are compared with traditional statistical methods to demonstrate the value of the proposed solution. The results show that the minimum quadratic error is reached with the application of artificial neural networks, obtaining better performance. Therefore, a suitable horizon is established for planning and decision-making in the metal-mechanical industry through the use of artificial intelligence in production processes.
Design and Tests to Implement Hyperconvergence into a DataCenter: Preliminary Results
E. Maya-Olalla, M. Domínguez-Limaico, S. Meneses-Narváez, P. D. Rosero-Montalvo, S. Narváez-Pupiales, M. Zambrano Vizuete, D. H. Peluffo-Ordóñez.
Hyperconvergence is a new technological trend that integrates and centralizes the functions of network, storage, and computing in a single infrastructure, facilitating the administration, operability, and scalability of a Data Center as a whole, benefits that a traditional network architecture or virtualization-specific technologies do not provide. This research, based on qualitative and experimental methods, suggests an implementation model of a hyperconvergent architecture for the management of the Data Center of the Universidad Técnica del Norte, as a competitive and high-performance open-source alternative for the integration of physical and virtual components. The suggested deployment model is based on the virtualization platform Proxmox VE, CEPH (storage software platform), vSwitch (network scheme), and KVM (equipment virtualization). It includes a centralized domain and provides a 99.88% availability rate, making it fully compatible with functionalities requiring high availability. The results show the simplicity of the system: efficient execution of all applications, migrations of virtual machines from node to node, downtimes between 50.3 ms and 53 ms, and processing acceleration providing agility to IT operations, while its implementation and start-up times are relatively low.
Control and Monitoring Medical Dispenser for the Hogar del Anciano "San Vicente de Paúl" in Atuntaqui (Ecuador)
RISTI Journal 2019
M. T. Encalada-Grijalva, S. K. Narváez-Pupiales, A. C. Umaquinga-Criollo, L. E. Suárez-Zambrano, D. H. Peluffo-Ordóñez.
Optimization of the Master Production Scheduling in a Textile Industry Using Genetic Algorithm
L. L. Lorente-Leyva, J. R. Murillo-Valle, Y. Montero-Santos, I. D. Herrera-Granda, E. P. Herrera-Granda, P. D. Rosero-Montalvo, D. H. Peluffo-Ordóñez, X. P. Blanco-Valencia.
In a competitive environment, an industry's success is directly related to the level of optimization of its processes and to how production is planned and developed. In this area, the master production schedule (MPS) is the key action for success. The object of study arises from the need to optimize the medium-term production planning system of a textile company through genetic algorithms. This research begins with the analysis of the constraints, mainly determined by the installed capacity and the number of workers. The aggregate production planning is carried out for the T-shirt families. Given this complexity, bioinspired optimization techniques show their best performance compared with the exact and simple methods normally employed by industries, which provide an empirical MPS but can compromise efficiency and costs. The products are then disaggregated into each of the items for which the MPS is determined, based on the analysis of the demand forecast and the orders placed by customers. From this, with the use of genetic algorithms, the MPS is optimized to carry out production planning, with an improvement of up to 96% in the level of service provided.
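A toy genetic algorithm in the spirit of this approach can be sketched as follows: it evolves per-period production quantities toward demand under a capacity limit. The encoding, operators, and penalty weights are illustrative assumptions, not the paper's actual MPS model.

```python
import random

def genetic_mps(demand, capacity, pop=40, gens=200, seed=1):
    """Toy GA: choose integer production per period to match demand without
    exceeding capacity. Fitness penalizes demand mismatch and capacity overage."""
    rng = random.Random(seed)
    n = len(demand)

    def cost(plan):
        mismatch = sum(abs(p - d) for p, d in zip(plan, demand))
        overage = sum(10 * max(0, p - capacity) for p in plan)
        return mismatch + overage

    population = [[rng.randint(0, capacity) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=cost)
        survivors = population[: pop // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]               # one-point crossover
            if rng.random() < 0.3:                  # random-reset mutation
                child[rng.randrange(n)] = rng.randint(0, capacity)
            children.append(child)
        population = survivors + children
    return min(population, key=cost)

best = genetic_mps(demand=[30, 45, 20, 50], capacity=60)
```

A realistic MPS chromosome would also encode inventory carry-over, workforce limits, and service-level targets in the fitness function.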
Urban Pollution Environmental Monitoring System Using IoT Devices and Data Visualization: A Case Study
P. D. Rosero-Montalvo, V. F. López-Batista, D. H. Peluffo-Ordóñez, L. L. Lorente-Leyva, X. P. Blanco-Valencia
This work presents a new Internet of Things (IoT) approach connecting sensor nodes with a data analysis and visualization platform, with the purpose of acquiring urban pollution data. The main objective is to determine the degree of contamination in the city of Ibarra in real time. To do this, on the one hand, thirteen IoT devices have been implemented. On the other hand, a comparison of prototype selection and data balancing algorithms with respect to the k-nearest neighbors classifier is made. With this, the system has an adequate training set to achieve the highest classification performance. As a final result, the system presents a visualization platform that estimates the pollution condition with more than 90% accuracy.
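Prototype selection of the kind compared here can be illustrated with Hart's condensed nearest neighbor rule, which keeps only the training points a 1-NN classifier actually needs. This is a simplified sketch under toy data, not the specific algorithms the paper evaluates.

```python
def condense(points, labels):
    """Hart's condensed nearest neighbor: greedily keep only the prototypes
    required for 1-NN to classify the whole training set correctly."""
    def nn_label(p, protos):
        # label of the prototype closest to p (squared Euclidean distance)
        return min(protos, key=lambda q: sum((a - b) ** 2 for a, b in zip(p, q[0])))[1]

    protos = [(points[0], labels[0])]
    changed = True
    while changed:
        changed = False
        for p, y in zip(points, labels):
            if nn_label(p, protos) != y:     # misclassified -> must keep it
                protos.append((p, y))
                changed = True
    return protos

pts = [(0.0, 0.0), (0.1, 0.0), (1.0, 1.0), (0.9, 1.0)]
lbl = [0, 0, 1, 1]
protos = condense(pts, lbl)
```

Shrinking the training set this way is what lets the k-NN classifier run on the IoT nodes' limited memory.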
Artificial Neural Networks for Urban Water Demand Forecasting: A Case Study
Journal of Physics: Conference Series
L. L. Lorente-Leyva, J. F. Pavón-Valencia, Y. Montero-Santos, I. D. Herrera-Granda, E. P. Herrera-Granda, D. H. Peluffo-Ordóñez.
This paper presents an application of an artificial neural network model to forecasting urban water demand using MATLAB software. In any planning process, the demand forecast plays a fundamental role, being one of the premises for organizing and controlling a set of activities or processes. The versatility of short-, medium-, and long-term prediction gives the company that offers the water distribution service a strategic planning tool to determine supply capacity, maintenance activities, and system improvements. Using time-series water demand data to improve network performance, the model can provide an excellent fit and forecast without relying on the explicit inclusion of climatic factors or the number of consumers. The excellent accuracy of the model indicates the effectiveness of forecasting over different time horizons. Finally, the results obtained from the artificial neural network are compared with traditional statistical models.
A New Approach of Service Platform for Water Optimization in Lettuce Crops Using Wireless Sensor Network
E. Maya-Olalla, H. Domínguez-Limaico, C. Vásquez-Ayala, E. Jaramillo-Vinueza, M. Zambrano, A. Jácome-Ortega, P. D. Rosero-Montalvo, D. H. Peluffo-Ordóñez
A wireless sensor network is implemented and connected to the cloud through IPv6. The entire system is applied to precision irrigation for lettuce crops in Ecuador. The main objective is to provide an optimization system for irrigation water for productive purposes, supplying crops with the adequate amount of water needed for surviving and producing. To do that, the system acquires data through sensors, and these data are stored in web services. By improving the irrigation system, crops can be planted throughout the year, including summer; the system achieves remarkable results in efficient water savings for lettuce crops.
Artificial Neural Networks for Bottled Water Demand Forecasting: A Small Business Case Study
I. D. Herrera-Granda, J. A. Chicaiza-Ipiales, E. P. Herrera-Granda, L. L. Lorente-Leyva, J. A. Caraguay-Procel, I. D. García-Santillán, D. H. Peluffo-Ordóñez
This paper shows a neural-network-based demand forecasting model designed for a small manufacturer of bottled water in Ecuador, which currently does not have adequate demand forecasting methodologies, causing non-compliance with customer orders, inventory excess, and economic losses. By working with accurate predictions, the manufacturer will have an anticipated vision of future needs in order to satisfy the demand for manufactured products; in other words, to guarantee on-time and reasonable use of resources. To solve the problems this small manufacturer faces, historic demand data were acquired from the last 36 months of customer order records. In the construction of the analyzed historical time series, demand dates and volumes were established as input variables. Then forecast models were designed, based on classical methods and multi-layer neural networks, and evaluated by means of quantitative error indicators. These methods were applied using the R programming language. After this, a stage of training and improvement of the network is included; it was evaluated against the results of the classical forecasting methods, and the next 12 months were predicted by means of the best obtained model. Finally, the feasibility of using neural networks to forecast the demand for purified water bottles is demonstrated.
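As a baseline of the classical kind the neural models are compared against, the sketch below fits a least-squares linear trend to a demand series and extrapolates it. This is illustrative Python rather than the paper's R code, and the toy series is an assumption.

```python
def linear_trend_forecast(series, horizon):
    """Fit y = a + b*t by ordinary least squares over t = 0..n-1 and
    extrapolate `horizon` steps beyond the series."""
    n = len(series)
    t_mean = (n - 1) / 2.0
    y_mean = sum(series) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(series))
    den = sum((t - t_mean) ** 2 for t in range(n))
    b = num / den              # slope (trend per period)
    a = y_mean - b * t_mean    # intercept
    return [a + b * (n + h) for h in range(horizon)]

hist = [10, 12, 14, 16, 18, 20]            # toy monthly demand, perfectly linear
fc = linear_trend_forecast(hist, 3)
```

A neural model is evaluated against exactly this sort of baseline using quantitative error indicators such as RMSE or MAPE.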
Multivariate Approach to Alcohol Detection in Drivers by Sensors and Artificial Vision
P. D. Rosero-Montalvo, V. F. López-Batista, D. H. Peluffo-Ordóñez, V. C. Erazo-Chamorro, R. P. Arciniega-Rocha
This work presents a system for detecting excess alcohol in drivers to reduce road traffic accidents. To do so, criteria such as the alcohol concentration in the environment, the facial temperature of the driver, and the width of the pupil are considered. To measure the corresponding variables, the data acquisition procedure uses sensors and artificial vision. Subsequently, data analysis is performed in stages for prototype selection and supervised classification algorithms. Accordingly, the acquired data can be stored and processed in a system with low computational resources. As a remarkable result, the number of training samples is significantly reduced while an admissible classification performance is achieved, thus reaching settings suitable for the given device's conditions.
Non-generalized Analysis of the Multimodal Signals for Emotion Recognition: Preliminary Results
E. Londoño-Delgado, M. A. Becerra, C. M. Duque-Mejía, J. C. Zapata, C. Mejía-Arboleda, A. E. Castro-Ospina, D. H. Peluffo-Ordóñez.
Emotions are mental states associated with stimuli; they have a relevant impact on people's lives and are correlated with their physical and mental health. Different studies have been carried out on emotion identification under the assumption that there is a universal fingerprint of the emotions. However, this is still an open field, and some authors have rejected that proposal, which contrasts with many results that can be considered inconclusive, even though some of them achieve high performance in identifying certain emotions. In this work, an analysis of per-individual emotion identification based on physiological signals is carried out using the well-known MAHNOB-HCI-TAGGING database, considering that there is no universal fingerprint, based on the results of a previous meta-analytic investigation of emotion categories. The methodology applied is as follows: first, the signals were filtered, normalized, and decomposed into five bands (δ, θ, α, β, γ); then a feature extraction stage was carried out using multiple statistical measures calculated from the results of the discrete wavelet transform, cepstral coefficients, among others. Dimensionality reduction of the feature space was applied using the ReliefF selection algorithm. Finally, classification was carried out using support vector machines and k-nearest neighbors, and performance was measured using 10-fold cross-validation, achieving high performance of up to 99%.
Low Resolution Electroencephalographic-Signals-Driven Semantic Retrieval: Preliminary Results
M. A. Becerra, E. Londoño-Delgado, O. I. Botero-Henao, D. Marín-Castrillón, C. Mejia-Arboleda, D. H. Peluffo-Ordóñez.
Nowadays, there exists high interest in brain-computer interface (BCI) systems, and there are multiple approaches to developing them. Lexico-semantic (LS) classification from electroencephalographic (EEG) signals is one of them, and it is an open and little-explored research field. The LS depends on each person's creation of concepts and their context. Therefore, no universal fingerprint of the LS, or of its spatial location in the brain, has been demonstrated; it depends on the variability of brain plasticity and other changes over time. In this study, an analysis of LS from EEG signals was carried out. The Emotiv Epoc+ was used for EEG acquisition from three participants reading 36 different words. The subjects were characterized through two surveys (Beck's depression test and an emotion test) to establish their emotional state and their depression and anxiety levels. The signals were processed to identify the semantic category and to decode individual words (4 pairs of words were selected for this study). The methodology was executed as follows: first, the signals were pre-processed, decomposed into sub-bands (δ, θ, α, β, and γ), and standardized. Then, feature extraction was applied using linear and non-linear statistical measures and the discrete wavelet transform calculated from the EEG signals, generating the feature space termed set-1. Also, principal component analysis was applied to reduce the dimensionality, generating the feature space termed set-2. Finally, both sets were tested independently by multiple classifiers based on the support vector machine and k-nearest neighbors. These were validated using 10-fold cross-validation, achieving results of up to 95% accuracy, which demonstrated the capability of the proposed mechanism for decoding LS from a reduced number of EEG signals acquired using a portable acquisition system.
Adaptation and Recovery Stages for Case-Based Reasoning Systems Using Bayesian Estimation and Density Estimation with Nearest Neighbors
D. Bastidas-Torres, C. Piñeros-Rodriguez, D. H. Peluffo-Ordóñez, X. Blanco-Valencia, J. Revelo-Fuelagán, M. A. Becerra, A. E. Castro-Ospina, L. L. Lorente-Leyva. See abstract | see full paper
When searching for solutions that improve medical diagnosis accuracy, case-based reasoning (CBR) systems arise as a good option. This article seeks to improve these systems through the use of parametric and non-parametric probability estimation methods, particularly at their recovery and adaptation stages. To this end, a set of experiments is conducted with two essentially different medical databases (the Cardiotocography and Cleveland databases) in order to find good parametric and non-parametric estimators. The results are remarkable, as a high accuracy rate is achieved with the explored approaches: Naive Bayes and nearest-neighbor (K-NN) estimators. In addition, a decrease in processing time is reached, which suggests that the proposed estimators, incorporated into the recovery and adaptation stages, are suitable for CBR systems, especially when dealing with support for medical diagnosis applications.
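A minimal sketch of the two kinds of estimators contrasted here — a parametric Gaussian (naive Bayes) likelihood and a non-parametric k-NN density estimate — applied to a toy case base; all data and names are illustrative, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy case base: two diagnosis classes with Gaussian-distributed descriptors
X0 = rng.normal(0.0, 1.0, size=(50, 3))
X1 = rng.normal(3.0, 1.0, size=(50, 3))

def gaussian_nb_loglik(x, X):
    """Parametric (naive Bayes) estimate: per-feature Gaussian log-likelihood."""
    mu, sd = X.mean(axis=0), X.std(axis=0) + 1e-9
    return np.sum(-0.5 * ((x - mu) / sd) ** 2 - np.log(sd * np.sqrt(2 * np.pi)))

def knn_density(x, X, k=5):
    """Non-parametric estimate: density ~ k / (n * volume of the k-NN ball)."""
    d = np.sort(np.linalg.norm(X - x, axis=1))
    r = d[k - 1]
    volume = (4.0 / 3.0) * np.pi * r ** 3          # ball volume in 3-D
    return k / (len(X) * volume)

query = np.array([2.8, 3.1, 2.9])                  # new case to classify
nb_pick = int(gaussian_nb_loglik(query, X1) > gaussian_nb_loglik(query, X0))
knn_pick = int(knn_density(query, X1) > knn_density(query, X0))
print(nb_pick, knn_pick)   # both estimators favor class 1 here
```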
Cardiac Murmur Effects on Automatic Segmentation of ECG Signals for Biometric Identification: Preliminary Study
C. Duque-Mejía, M. A. Becerra, C. Zapata-Hernández, C. Mejia-Arboleda, A. E. Castro-Ospina, E. Delgado-Trejos, D. H. Peluffo-Ordóñez, P. Rosero-Montalvo, J. Revelo-Fuelagán. See abstract | see full paper
Biometric identification or authentication is a pattern recognition process carried out by acquiring different measures from human beings to distinguish them. Fingerprint and eye iris are the best-known and most-used biometric techniques; nevertheless, they are also the most vulnerable to counterfeiting. Consequently, research nowadays has focused on physiological signals and behavioral traits for biometric identification, since these not only allow authentication but also verify that the subject is alive. Electrocardiographic signals (ECG-S) have been studied for biometric identification, demonstrating their capability. Since some pathologies are detected using ECG-S, they can affect the results of biometric identification; other diseases, such as cardiac murmurs, are not detected by ECG-S but can nonetheless distort their morphology. Therefore, these signals must be analyzed considering different pathologies. In this paper, a biometric study was carried out on 40 subjects (20 with cardiac murmurs and 20 without cardiac affections). First, the ECG-S were preprocessed and segmented using a fast method for detecting T waves with annotation of P and T waves; then feature extraction was carried out using the discrete wavelet transform (DWT), maximal overlap DWT, cepstral coefficients, and statistical measures. Then, rough-set and Relief-F algorithms were applied to the datasets (pathological and normal signals) for attribute reduction. Finally, multiple classifiers and combinations of them were tested. The segmentation results were analyzed, showing low performance for signals affected by cardiac murmurs. On the other hand, according to the cardiac murmur effects analyzed, the cascade classifiers showed the best accuracy for human identification from ECG-S, minimizing the impact of the variability that cardiac murmur diseases generate on ECG-S.
Drowsiness Detection in Drivers Through Real-Time Image Processing of the Human Eye
E. P. Herrera-Granda, J. A. Caraguay-Procel, P. D. Granda-Gudiño, I. D. Herrera-Granda, L. L. Lorente-Leyva, D. H. Peluffo-Ordóñez, J. Revelo-Fuelagán. See abstract | see full paper
At a global level, drowsiness is one of the main causes of road accidents, causing frequent deaths and economic losses. To address this problem, an application was developed in the Matlab environment that processes images acquired in real time in order to determine whether the driver is awake or drowsy. Using the AdaBoost training algorithm for Viola-Jones eye detection, a cascade classifier finds the location and area of the driver's eyes in each frame of the video. Once the driver's eyes are detected, they are analyzed to determine whether they are open or closed through color segmentation and thresholding based on the binarized sclera area. Finally, a drowsiness detection system was implemented that aims to prevent the driver from falling asleep while driving by activating an audible alert, reaching processing speeds of up to 14.5 fps.
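The open/closed decision based on the binarized sclera area can be sketched as follows; the brightness threshold, patch sizes, and the consecutive-frame rule are illustrative assumptions (the paper's system works on Viola-Jones eye regions from live video):

```python
import numpy as np

def sclera_ratio(eye_rgb, thresh=200):
    """Fraction of bright (sclera-like) pixels in an eye region.
    Binarizes near-white pixels; a low ratio suggests a closed eye."""
    bright = np.all(eye_rgb >= thresh, axis=-1)
    return bright.mean()

def is_drowsy(frames, closed_thresh=0.02, min_consecutive=3):
    """Flag drowsiness when the eye stays closed for several consecutive frames."""
    run = 0
    for frame in frames:
        run = run + 1 if sclera_ratio(frame) < closed_thresh else 0
        if run >= min_consecutive:
            return True
    return False

# Synthetic 20x40 "eye" patches: open frames contain a white sclera band
open_eye = np.full((20, 40, 3), 80, dtype=np.uint8)
open_eye[8:12, 5:35] = 255                 # bright sclera region
closed_eye = np.full((20, 40, 3), 80, dtype=np.uint8)

print(is_drowsy([open_eye] * 10))                      # alert driver -> False
print(is_drowsy([open_eye] * 3 + [closed_eye] * 5))    # sustained closure -> True
```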
Feature Extraction Analysis for Emotion Recognition from ICEEMD of Multimodal Physiological Signals
J. F. Gómez-Lara, O. A. Ordóñez-Bolaños, M. A. Becerra, A. E. Castro-Ospina, C. Mejía-Arboleda, C. Duque-Mejía, J. Rodriguez, J. Revelo-Fuelagán, D. H. Peluffo-Ordóñez See abstract | see full paper
Emotion identification is a very complex task since it depends on multiple variables, both individually and as a group. Emotions are mainly evaluated by criteria such as arousal, valence, and dominance. Several investigations have focused on building prediction systems; nevertheless, this is still an open research field. The main objective of this paper is the analysis of the Improved Complementary Ensemble Empirical Mode Decomposition (ICEEMD) for feature extraction from physiological signals for emotion prediction. Physiological signals and metadata from the DEAP database were used. First, the signals were preprocessed; then three decompositions were carried out using ICEEMD, the Discrete Wavelet Transform (DWT), and the Maximal Overlap DWT (MODWT). Feature extraction was carried out using Hermite coefficients and multiple statistical measures computed from the IMFs, the DWT and MODWT coefficients, and the raw signals. Then, the Relief-F selection algorithm was applied to reduce the dimensionality of the feature space. Finally, a Linear Discriminant Classifier (LDC) and K-NN cascade, and Random Forest classifiers were tested. The different decomposition techniques were compared, and the relevant signals and measures were established. The results demonstrated the capability of ICEEMD decomposition for emotion analysis from physiological signals.
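The statistical feature extraction over decomposition modes can be sketched as below. Since ICEEMD implementations are not part of standard libraries, the sketch takes already-computed modes (IMFs or wavelet levels) as input, and the stand-in sinusoidal modes and descriptor choices are assumptions for illustration:

```python
import numpy as np
from scipy.stats import skew, kurtosis

def mode_features(modes):
    """Statistical descriptors per decomposition mode (IMF, DWT or MODWT level)."""
    feats = []
    for m in modes:
        feats += [m.mean(), m.std(), skew(m), kurtosis(m),
                  np.sum(m ** 2),                       # energy
                  np.mean(np.abs(np.diff(m)))]          # mean absolute slope
    return np.array(feats)

# Stand-in modes: in the paper these would come from ICEEMD/DWT/MODWT of a
# physiological signal (e.g., a DEAP recording)
rng = np.random.default_rng(2)
t = np.linspace(0, 4, 512)
modes = [np.sin(2 * np.pi * f * t) + 0.1 * rng.standard_normal(t.size)
         for f in (1, 4, 16)]

fv = mode_features(modes)
print(fv.shape)   # (18,) -> 6 descriptors x 3 modes
```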
Recognition of emotions using ICEEMD-based characterization of multimodal physiological signals
O. A. Ordóñez-Bolaños, J. F. Gómez-Lara, M. A. Becerra, D. H. Peluffo-Ordóñez, C. M. Duque-Mejía, D. Medrano-David, C. Mejia-Arboleda See abstract | see full paper
Physiological-signal-analysis-based approaches are typically used for automatic emotion identification. Given the complex nature of emotion-related signals, their correct identification is often a non-trivial and exhaustive process, especially because such signals depend heavily on multiple external variables. Some emotional criteria of interest are arousal, valence, and dominance. Several research works have addressed this issue, mainly by creating prediction systems; notwithstanding, due to aspects such as accuracy, in-context interpretation, and computational cost, it is still considered an open research field of great interest. This paper is aimed at verifying the usefulness of the so-called improved complete ensemble empirical mode decomposition (ICEEMD) as a physiological-signal-characterization building block within an emotion-predicting system. To this purpose, some physiological signals along with patients' metadata from the DEAP database are considered. The experiments are set up as follows: signals are pre-processed by amplitude adjusting and simple filtering. Then, a feature set is built using Hermite coefficients (HC) and multiple statistical measures from the information given by the three considered decompositions, namely ICEEMD, the discrete wavelet transform (DWT), and the maximal overlap DWT. Subsequently, the Relief-F selection algorithm is applied to reduce the dimensionality of the feature space. Finally, classifiers (LDC and K-NN cascade architectures) are used to assess the class separability given by the feature set. The different decomposition techniques were compared, and the relevant signals and measures were established. Experimental results evidence the suitability of ICEEMD decomposition for physiological-signal-driven emotion analysis.
Intelligent System for Identification of Wheelchair User’s Posture using Machine Learning Techniques
IEEE Sensors 2019
P. Rosero-Montalvo, D. H. Peluffo-Ordóñez, V. F. López-Batista, J. Serrano, E. Rosero. See abstract | see full paper
This work presents an intelligent system aimed at detecting a person's posture when sitting in a wheelchair. The main use of the proposed system is to warn of an improper posture to prevent major health issues. A network of sensors is used to collect data that are analyzed through a scheme involving the following stages: selection of prototypes using the Condensed Nearest Neighbor rule (CNN), data balancing with the Kennard-Stone (KS) algorithm, and reduction of dimensionality through Principal Component Analysis (PCA). In doing so, the acquired data can be both stored and processed in a microcontroller. Finally, to carry out the posture classification over the balanced, pre-processed data, the K-Nearest Neighbors (k-NN) algorithm is used. The result is an intelligent system reaching a good tradeoff between the necessary amount of data and performance. As a remarkable result, the amount of data required for training is significantly reduced while an admissible classification performance is achieved, a suitable tradeoff given the device conditions.
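The prototype-selection stage can be sketched with Hart's Condensed Nearest Neighbor rule, which keeps only the samples needed for 1-NN to classify the training set correctly; the toy pressure-sensor data and seed are assumptions for illustration:

```python
import numpy as np

def condensed_nn(X, y, seed=3):
    """Hart's Condensed Nearest Neighbor rule: absorb into the store every
    sample that the current store misclassifies with 1-NN, until stable."""
    rng = np.random.default_rng(seed)
    idx = list(rng.permutation(len(X)))
    store = [idx.pop(0)]                    # seed the store with one sample
    changed = True
    while changed:
        changed = False
        for i in list(idx):
            d = np.linalg.norm(X[store] - X[i], axis=1)
            if y[store][np.argmin(d)] != y[i]:   # misclassified -> absorb it
                store.append(i)
                idx.remove(i)
                changed = True
    return np.sort(store)

# Toy posture data: two well-separated clusters of 4 sensor readings
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.3, (100, 4)), rng.normal(2, 0.3, (100, 4))])
y = np.array([0] * 100 + [1] * 100)

kept = condensed_nn(X, y)
print(f"{len(kept)}/{len(X)} prototypes retained")
```

With well-separated classes the store stays tiny, which is exactly what makes the data fit on a microcontroller.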
Optimization of the Network of Urban Solid Waste Containers: A Case Study
I. D. Herrera-Granda, W. G. Imbaquingo-Usiña, L. L. Lorente-Leyva, E. P. Herrera-Granda, D. H. Peluffo-Ordóñez, Diego G. Rossit. See abstract | see full paper
This paper presents the results of optimizing the urban solid waste container network in the urban sector of the city of Ibarra, Ecuador, through a multi-objective mixed-integer programming model that has been successfully used in the context of recycling in past studies. This model was modified so that possible container locations at each corner of the blocks containing constructed buildings were considered, and a constraint counting the containers to be installed was added. Furthermore, to add robustness, the model also considers the filling of the containers based on the density of the deposited waste; its objective function is a weighted sum of the cost of installing the network and the average walking distance between users and their assigned containers. The outputs of the model are the total number of containers and a map with the optimal locations of municipal solid waste containers for the city of Ibarra. The model was implemented on the GAMS platform, wherein parameters can be revised at any time so that the results may be updated in case of variations in the initial conditions.
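The weighted-sum objective (installation cost plus average walking distance, with each user assigned to the nearest installed container) can be illustrated on a tiny instance. This is a brute-force enumeration for clarity, not the paper's GAMS mixed-integer model, and all candidate sites, costs, distances, and weights are invented:

```python
from itertools import combinations

# Toy instance: candidate corners, per-container installation cost, and
# user blocks with walking distance to each candidate (all values assumed)
candidates = ["c1", "c2", "c3", "c4"]
install_cost = {"c1": 10, "c2": 12, "c3": 9, "c4": 11}
walk = {  # walk[user_block][candidate] in metres
    "b1": {"c1": 50, "c2": 200, "c3": 300, "c4": 400},
    "b2": {"c1": 250, "c2": 60, "c3": 120, "c4": 350},
    "b3": {"c1": 400, "c2": 180, "c3": 70, "c4": 90},
}

def weighted_cost(chosen, w_cost=1.0, w_dist=0.1):
    """Weighted sum of installation cost and average user walking distance,
    each user assigned to the nearest installed container."""
    cost = sum(install_cost[c] for c in chosen)
    avg_dist = sum(min(walk[b][c] for c in chosen) for b in walk) / len(walk)
    return w_cost * cost + w_dist * avg_dist

best = min((s for r in range(1, len(candidates) + 1)
            for s in combinations(candidates, r)), key=weighted_cost)
print(best)
```

Changing the weights trades installation cost against user convenience, which is the tuning knob the multi-objective model exposes.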
Wireless Sensor Networks for Irrigation in Crops Using Multivariate Regression Models
P. D. Rosero-Montalvo, J. Pijal-Rojas, C. Vásquez-Ayala, E. Maya, C. Pupiales, L. Suaréz, H. Benitez-Pereira, D. H. Peluffo-Ordóñez. See abstract | see full paper
This paper presents a wireless sensor network system and a data analysis approach for short-cycle crops within greenhouses. On the one hand, this research is carried out to reduce water consumption and improve the product by predicting the right moment of the irrigation cycle through the evapotranspiration criterion. On the other hand, an efficient electronic system is designed under electronic standards. To define the best model for predicting the next irrigation based on soil moisture, continuous and discontinuous multivariate regression algorithms are compared. The results are evaluated with different prediction-error criteria. As a result, the linear regression with Support Vector Machine model is chosen, with an average deviation error of 7.89% and an error variability of 4.48%. In addition, water consumption is reduced by 20%, achieving better-quality products.
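A linear support-vector regression of the kind selected above can be sketched as follows; the sensor variables, the synthetic irrigation-time target, and all parameter values are assumptions for illustration, not the deployed model:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

# Synthetic greenhouse readings: [soil moisture %, temperature C, humidity %]
rng = np.random.default_rng(4)
X = np.column_stack([rng.uniform(10, 60, 200),
                     rng.uniform(15, 35, 200),
                     rng.uniform(40, 90, 200)])
# Assumed target: hours until the next irrigation grows with soil moisture
# and humidity, shrinks with temperature (a proxy for evapotranspiration)
y = 0.4 * X[:, 0] - 0.3 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(0, 0.5, 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = SVR(kernel="linear", C=10.0).fit(X_tr, y_tr)

pred = model.predict(X_te)
mad = np.mean(np.abs(pred - y_te))
print(f"mean absolute deviation: {mad:.2f} h")
```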
Sign Language Recognition Based on Intelligent Glove Using Machine Learning Techniques
P. D. Rosero-Montalvo, P. Godoy-Trujillo, E. Flores-Bosmediano, J. Carrascal-García, S. Otero-Potosi, H. Benitez-Pereira, D. H. Peluffo-Ordóñez See abstract | see full paper
We present an intelligent electronic glove system able to detect the numbers of sign language in order to automate the process of communication between a deaf-mute person and others. This is done by translating hand movements of sign language into an oral language. The system is embedded in a glove with flex sensors on each finger, used to collect data that are analyzed through a methodology involving the following stages: (i) data balancing with the Kennard-Stone (KS) algorithm; (ii) comparison of two prototype-selection methods, the CHC evolutionary algorithm and the Decremental Reduction Optimization Procedure 3 (DROP3), to define the best one; and (iii) implementation of the K-Nearest Neighbors (kNN) classifier. As a result, stage (i) reduces the amount of data stored within the system by 98%. Also, a classification performance of 85% is achieved with the CHC evolutionary algorithm.
Air Quality Monitoring Intelligent System Using Machine Learning Techniques
P. D. Rosero-Montalvo, J. A. Caraguay-Procel, E. D. Jaramillo, J. M. Michilena-Calderón, A. C. Umaquinga-Criollo, M. Mediavilla-Valverde, M. A. Ruiz, L. A. Beltrán, D. H. Peluffo-Ordóñez See abstract | see full paper
Environmental monitoring is important because it concerns people's most fundamental rights: life and health. For this reason, this system monitors air quality with different sensor nodes in Ibarra that evaluate the parameters of CO2, NOx, UV light, temperature, and humidity. Data analysis through machine learning algorithms allows the system to classify autonomously whether a certain geographical location exceeds the established gas emission limits. As a result, the k-Nearest Neighbor algorithm presented a great classification performance when selecting the most contaminated sectors.
Methodology for the design and simulation of industrial facilities and production systems based on a modular approach in an "industry 4.0" context
L. O. Alpala, M. E. Alemany, D. H. Peluffo-Ordoñez, F. Bolaños, A. M. Rosero, J. C. Torres. See abstract | see full paper
The design of the industrial facilities distribution is one of the most important decisions to be made, as it conditions the operation thereof. The concept of an industrial installation as known today has evolved to the point that it integrates automation and information systems; indeed, such evolution has given rise to the so-called intelligent factory. At present, in order to mass-produce customized products according to customers' requirements, the distribution of facilities, with the generation of successful layout designs based on the flexibility, modularity, and easy configuration of production systems, has become an important issue. This paper proposes a methodology to solve the problem of plant distribution design and redesign based upon a novel modular approach within an Industry 4.0 context. The proposed methodology is an adaptation of the "SLP" methodology (Systematic Layout Planning-Simulation), so-called SLP Modulary 4.0 (systematic planning of the layout based on a modular vision under an Industry 4.0 context); it incorporates an integrated design system (IDS) into its structure, which allows collaborative work with different CAD design and simulation tools. For the validation of the proposed methodology, a case study of a coffee processing plant is considered. The distribution design results obtained from the case study prove the benefit and usefulness of the proposed methodology.
Exploratory Study of the Effects of Cardiac Murmurs on Electrocardiographic-Signal-Based Biometric Systems
Lecture Notes in Computer Science
M. A. Becerra, C. Duque-Mejía, C. Zapata-Hernández, D. H. Peluffo-Ordóñez, L. Serna-Guarín, Edilson Delgado-Trejos, E. J. Revelo-Fuelagán, X. P. Blanco Valencia. See abstract | see full paper
The process of distinguishing among human beings through the inspection of data acquired from physical or behavioral traits is known as biometric identification. Mostly, fingerprint- and iris-based biometric techniques are used. Nowadays, since such techniques are highly susceptible to counterfeiting, new biometric alternatives are explored, mainly based on physiological signals and behavioral traits, which are useful not only for biometric identification purposes but may also play a role as vital-sign indicators. In this connection, electrocardiographic (ECG) signals have shown to be a suitable approach. Nonetheless, their informative components (morphology, rhythm, polarization, among others) can be affected by the presence of a cardiac pathology. Even more, some other cardiac diseases cannot be directly detected by ECG signal inspection but still have an effect on the waveform; that is the case of cardiac murmurs. Therefore, for biometric purposes, such signals should be analyzed subject to the effects of pathologies. This paper presents an exploratory study aimed at assessing the influence of the presence of a pathology when analyzing ECG signals for implementing a biometric system. For the experiments, a database holding 20 healthy subjects and 20 pathological subjects (diagnosed with different types of cardiac murmurs) is considered. The proposed signal analysis consists of preprocessing, characterization (using wavelet features), feature selection, and classification (five classifiers as well as a mixture of them are tested). As a result, through the performed comparison of classification rates when testing pathological and normal ECG signals, the undesired effect of cardiac murmurs on the identification mechanism's performance is clearly unveiled.
Generalized Low-Computational Cost Laplacian Eigenmaps
Lecture Notes in Computer Science
J. A. Salazar-Castro, D. F. Peña, C. Basante, C. Ortega, L. Cruz-Cruz, J. Revelo-Fuelagán, X. P. Blanco-Valencia, G. Castellanos-Domínguez, D. H. Peluffo-Ordóñez. See abstract | see full paper
Dimensionality reduction (DR) is a methodology used in many fields linked to data processing, and may represent a preprocessing stage or be an essential element for the representation and classification of data. The main objective of DR is to obtain a new representation of the original data in a space of smaller dimension, such that more refined information is produced, the time of subsequent processing is decreased, and/or visual representations more intelligible to human beings are generated. Spectral DR methods involve the calculation of an eigenvalue and eigenvector decomposition, which usually demands a high computational cost and therefore makes the task of obtaining a more dynamic and interactive user-machine integration difficult. Hence, for the design of an interactive information visualization (IV) system based on spectral DR methods, it is necessary to propose a strategy that reduces the computational cost required in the calculation of eigenvectors and eigenvalues. For this purpose, we propose the use of locally linear submatrices and spectral embedding. This allows integrating natural intelligence with computational intelligence for the representation of data interactively, dynamically, and at low computational cost. Additionally, an interactive model is proposed that allows the user to dynamically visualize the data through a weighted mixture.
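For reference, the standard Laplacian Eigenmaps computation, whose eigendecomposition cost is what motivates the low-cost strategy above, can be sketched as follows; the k-NN graph construction, toy data, and parameter values are illustrative assumptions:

```python
import numpy as np
from scipy.linalg import eigh

def laplacian_eigenmaps(X, n_neighbors=10, dim=2):
    """Standard Laplacian Eigenmaps: symmetrized k-NN graph, graph Laplacian
    L = D - W, and embedding from the smallest non-trivial generalized
    eigenvectors of L v = lambda D v (the costly step)."""
    n = len(X)
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.zeros((n, n))
    for i in range(n):                       # symmetrized k-NN adjacency
        for j in np.argsort(d2[i])[1:n_neighbors + 1]:
            W[i, j] = W[j, i] = 1.0
    D = np.diag(W.sum(axis=1))
    L = D - W
    vals, vecs = eigh(L, D)                  # generalized eigenproblem
    return vecs[:, 1:dim + 1]                # skip the constant eigenvector

# Toy data: a noisy circle embedded in 5-D
rng = np.random.default_rng(5)
theta = rng.uniform(0, 2 * np.pi, 150)
X = 0.05 * rng.standard_normal((150, 5))
X[:, 0] += np.cos(theta)
X[:, 1] += np.sin(theta)

Y = laplacian_eigenmaps(X)
print(Y.shape)   # (150, 2)
```

The full `eigh` call is O(n^3); replacing it with decompositions over locally linear submatrices is precisely where the proposed method saves time.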
Intelligence in Embedded Systems: Overview and Applications
FTC 2018: Future Technologies Conference.
P. D. Rosero-Montalvo, V. F. López Batista, E. A. Rosero, E. D. Jaramillo, J. A. Caraguay, J. Pijal-Rojas, D. H. Peluffo-Ordóñez. See abstract | see full paper
The use of electronic systems and devices has become widespread, reaching several fields and becoming indispensable for many daily activities. Such systems and devices (here termed embedded systems) aim at improving human beings' quality of life. To do so, they typically acquire users' data to adjust themselves to different needs and environments in an adequate fashion. Consequently, they are connected to data networks to share this information and find elements that allow them to make appropriate decisions. Then, for practical usage, their computational capabilities should be optimized to avoid issues such as resource saturation (mainly of memory and battery). In this line, machine learning offers a wide range of techniques and tools to incorporate "intelligence" into embedded systems, enabling them to make decisions by themselves. This paper reviews different data storage techniques along with machine learning algorithms for embedded systems. Its main focus is on the techniques and applications (with special interest in the Internet of Things) reported in the literature regarding data analysis criteria for decision making.
Building a Nasa Yuwe Language Corpus and Tagging with a Metaheuristic Approach
Computación y Sistemas
L. M. Sierra-Martinez, C. A. Cobos, J. C. Corrales-Muñoz, T. Rojas Curieux, E. Herrera-Viedman, D. H. Peluffo-Ordóñez. See abstract | see full paper
Nasa Yuwe is the language of the Nasa indigenous community in Colombia. It is currently threatened with extinction. In this regard, a range of computer science solutions have been developed for the teaching and revitalization of the language. One of the most suitable approaches is the construction of a Part-Of-Speech Tagger (POST), which enables the analysis and advanced processing of the language. Nevertheless, no tagged corpus exists for Nasa Yuwe, there is no POS tagger, and no related works have been reported. This paper therefore concentrates on building a tagged linguistic corpus for the Nasa Yuwe language and generating the first tagging application for Nasa Yuwe. The main results and findings are: 1) the process of building the Nasa Yuwe corpus; 2) the tagsets and tagged sentences, as well as the statistics associated with the corpus; and 3) the results of two experiments evaluating several POS taggers (a random tagger, three versions of HSTAGger, a tagger based on the harmony search metaheuristic, and three versions of a memetic algorithm, the GBHS Tagger, based on Global-Best Harmony Search (GBHS), Hill Climbing, and an explicit Tabu memory), the last of which obtained the best results in contrast with the other methods over the Nasa Yuwe language corpus.
Angle-Based Model for Interactive Dimensionality Reduction and Data Visualization
International Workshop on Artificial Intelligence and Pattern Recognition: IWAIPR 2018
C. K. Basante-Villota, C. M. Ortega-Castillo, D. F. Peña-Unigarro, E. J. Revelo-Fuelagán, J. A. Salazar-Castro, M. Ortega-Bustamante, P. Rosero-Montalvo, L. S. Vega-Escobar, D. H. Peluffo-Ordóñez See abstract | see full paper
In recent times, an undeniable fact is that the amount of available data has increased dramatically, due mainly to the advance of new technologies allowing for the storage and communication of enormous volumes of information. In consequence, there is an important need to find the relevant information within the raw data through the application of novel data visualization techniques that permit the correct manipulation of data. This issue has motivated the development of graphic forms for visually representing and analyzing high-dimensional data. Particularly, in this work we propose a graphical approach that allows the combination of dimensionality reduction (DR) methods using an angle-based model, making data visualization more intelligible. The approach is designed for ready use, so that the input parameters are interactively given by the user within a user-friendly environment. The proposed approach enables users (even non-experts) to intuitively select a particular DR method or perform a mixture of methods. The experimental results prove that the interactive manipulation enabled by the proposed model, thanks to its ability to display a variety of embedded spaces, makes the task of selecting an embedded space simpler and more adequately fitted for a specific need.
Movement Identification in EMG Signals Using Machine Learning: A Comparative Study
International Workshop on Artificial Intelligence and Pattern Recognition: IWAIPR 2018
L. Lasso-Arciniegas, A. Viveros-Melo, J. A. Salazar-Castro, M. A. Becerra, A. E. Castro-Ospina, E. J. Revelo-Fuelagán, D. H. Peluffo-Ordóñez See abstract | see full paper
The analysis of electromyographic (EMG) signals enables the development of important technologies for industrial and medical environments, due mainly to the design of EMG-based human-computer interfaces. There exists a wide range of applications encompassing wireless computer controlling, rehabilitation, and wheelchair guiding, among others. The semantic interpretation of EMG analysis is typically conducted by machine learning algorithms, and mainly involves stages for signal characterization and classification. This work presents a methodology for comparing a set of state-of-the-art approaches to EMG signal characterization and classification within a movement identification framework. We compare the performance of three classifiers (KNN, a Parzen-density-based classifier, and ANN) using spectral (wavelet) and time-domain-based (statistical and morphological descriptors) features. Also, a methodology for movement selection is proposed. Results are comparable with those reported in the literature, reaching classification performances of (90.89 ± 1.12)% (KNN), (93.92 ± 0.34)% (ANN), and (91.09 ± 0.93)% (Parzen-density-based classifier) with 12 movements.
Electroencephalographic Signals and Emotional States for Tactile Pleasantness Classification
International Workshop on Artificial Intelligence and Pattern Recognition: IWAIPR 2018
M. A. Becerra, E. Londoño-Delgado, S. M. Pelaez-Becerra, A. E. Castro-Ospina, C. Mejia-Arboleda, J. Durango, D. H. Peluffo-Ordóñez See abstract | see full paper
Haptic textures are alterations of any surface that are perceived and identified using the sense of touch, and such perception affects individuals. Therefore, it is of high interest in different applications such as multimedia, medicine, marketing, and systems based on human-computer interfaces, among others. Some studies have been carried out using electroencephalographic signals; nevertheless, these can be considered few, so this remains an open research field. In this study, an analysis of tactile stimuli and emotion effects was performed on EEG signals to identify pleasantness and unpleasantness sensations using classifier systems. The EEG signals were acquired using a 14-channel Emotiv Epoc+, following a protocol in which ten different tactile stimuli were presented two times. Besides, three surveys (Beck's depression inventory, an emotion test, and a tactile-stimulus pleasantness level) were applied to three volunteers to establish their emotional state, depression, anxiety, and pleasantness level, in order to characterize each subject. Then, the results of the surveys were computed, the signals were preprocessed, and the registers were labeled as pleasant or unpleasant. Feature extraction was applied using the Short-Time Fourier Transform and the discrete wavelet transform calculated for each sub-band (δ, θ, α, β, and γ) of the EEG signals. Then, the Rough Set algorithm was applied to identify the most relevant features; this technique was also employed to establish relations among stimuli and emotional states. Finally, five classifiers based on the support vector machine were tested using 10-fold cross-validation, achieving accuracies above 99%. Also, dependences among emotions and pleasant and unpleasant tactile stimuli were identified.
Exploration of Characterization and Classification Techniques for Movement Identification from EMG Signals: Preliminary Results
Colombian Conference on Computing - CCC 2018
A. Viveros-Melo, L. Lasso-Arciniegas, J. A. Salazar-Castro, D. H. Peluffo-Ordóñez, M. A. Becerra, A. E. Castro-Ospina, E. J. Revelo-Fuelagán See abstract | see full paper
Today, human-computer interfaces are increasingly used and have become necessary for human daily activities. Among some remarkable applications we find: wireless computer controlling through hand movement, wheelchair directing/guiding with finger motions, and rehabilitation. Such applications are possible through the analysis of electromyographic (EMG) signals. Although some research works have addressed this issue, movement classification through EMG signals is still an open, challenging issue for the scientific community, especially because the controller performance depends not only on the classifier but also on other aspects, namely: the used features, the movements to be classified, the considered feature-selection methods, and the collected data. In this work, we propose an exploratory study of characterization and classification techniques for identifying movements through EMG signals. We compare the performance of three classifiers (KNN, a Parzen-density-based classifier, and ANN) using spectral (wavelet) and time-domain-based (statistical and morphological descriptors) features. Also, a methodology for movement selection is proposed. Results are comparable with those reported in the literature, reaching classification errors of 5.18% (KNN), 14.74% (ANN), and 5.17% (Parzen-density-based classifier).
Physiological Signals Fusion Oriented to Diagnosis - A Review
Colombian Conference on Computing - CCC 2018
Y. F. Uribe, K. C. Alvarez-Uribe, D. H. Peluffo-Ordoñez, M. A. Becerra See abstract | see full paper
The analysis of physiological signals is widely used for the development of diagnosis support tools in medicine, and it is currently an open research field. The use of multiple signals or physiological measures as a whole has been carried out using data fusion techniques, commonly known as multimodal fusion, which has demonstrated its ability to improve the accuracy of diagnostic care systems. This paper presents a review of the state of the art, highlighting the main techniques, challenges, gaps, advantages, disadvantages, and practical considerations of data fusion applied to the analysis of physiological signals oriented to diagnosis decision support. Also, a physiological-signal data fusion architecture oriented to diagnosis is proposed.
Comparative Analysis Between Embedded-Spaces-Based and Kernel-Based Approaches for Interactive Data Representation
Colombian Conference on Computing - CCC 2018
C. K. Basante-Villota, C. M. Ortega-Castillo, D. F. Peña-Unigarro, J. E. Revelo-Fuelagán, J. A. Salazar-Castro, D. H. Peluffo-Ordóñez See abstract | see full paper
This work presents a comparative analysis between the linear combination of embedded spaces resulting from two approaches: (1) the application of dimensionality reduction (DR) methods in their standard implementations, and (2) their corresponding kernel-based approximations. Namely, the considered DR methods are: CMDS (Classical Multi-Dimensional Scaling), LE (Laplacian Eigenmaps), and LLE (Locally Linear Embedding). This study aims at determining, through objective criteria, which approach obtains the best DR performance for data visualization. The experimental validation was performed using four databases from the UC Irvine Machine Learning Repository. The quality of the obtained embedded spaces is evaluated using the RNX(K) criterion, whose area under the curve indicates the performance of the technique in terms of global or local topology preservation. Additionally, we measure the computational cost of every comparative experiment. A main contribution of this work is the provided discussion on the selection of an interactivity model when mixing DR methods, which is a crucial aspect for information visualization purposes.
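The RNX(K) neighborhood-preservation criterion can be sketched as follows; this is a generic implementation over toy data, not the paper's code, with the curve rescaled so that 0 corresponds to a random embedding and 1 to a perfect one. As a sanity check, an identity "embedding" scores 1 for every K:

```python
import numpy as np

def rnx_curve(X_hd, X_ld):
    """R_NX(K) curve: for each neighborhood size K, the overlap Q_NX(K)
    between K-NN sets in the high- and low-dimensional spaces, rescaled as
    R_NX(K) = ((N-1) Q_NX(K) - K) / (N-1-K)."""
    n = len(X_hd)
    def ranks(Z):
        d = np.linalg.norm(Z[:, None] - Z[None, :], axis=-1)
        return np.argsort(d, axis=1)[:, 1:]          # neighbors by rank
    nn_hd, nn_ld = ranks(X_hd), ranks(X_ld)
    rnx = []
    for K in range(1, n - 1):
        overlap = np.mean([len(set(nn_hd[i, :K]) & set(nn_ld[i, :K]))
                           for i in range(n)]) / K   # Q_NX(K)
        rnx.append(((n - 1) * overlap - K) / (n - 1 - K))
    return np.array(rnx)

# Identity embedding preserves every neighborhood: R_NX(K) = 1 for all K
rng = np.random.default_rng(6)
X = rng.standard_normal((40, 5))
curve = rnx_curve(X, X.copy())
print(curve.min())
```

Averaging the curve (typically on a log scale over K) gives the single area-under-the-curve score used to compare embeddings.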
Odor Pleasantness Classification from Electroencephalographic Signals and Emotional States
Colombian Conference on Computing - CCC 2018
M. A. Becerra, E. Londoño-Delgado, S. M. Pelaez-Becerra, L. Serna-Guarín, A. E. Castro-Ospina, D. Marin-Castrillón, D. H. Peluffo-Ordóñez See abstract | see full paper
Odor identification refers to the capability of the olfactory sense to discern odors. Interest in this sense has grown across multiple fields and applications such as multimedia, virtual reality, and marketing, among others. Therefore, objective identification of pleasant and unpleasant odors is an open research field. Some studies have been carried out based on electroencephalographic (EEG) signals; nevertheless, these can be considered insufficient due to the levels of accuracy achieved so far. The main objective of this study was to investigate the capability of classification systems to identify pleasant and unpleasant odors from EEG signals. The methodology was carried out in three stages. First, an odor database was collected using signals recorded with an Emotiv Epoc+ headset with 14 EEG channels, together with a survey for establishing emotion levels based on valence and arousal, considering that odors induce emotions. The registers were acquired from three subjects, each of whom was subjected to 10 different odor stimuli, twice. The second stage was feature extraction, carried out on the five EEG sub-bands (δ, θ, α, β, γ) using the discrete wavelet transform, statistical measures, and other measures such as area, energy, and entropy. Then, feature selection was applied based on Rough Set algorithms. Finally, in the third stage, a support vector machine (SVM) classifier was applied and tested with five different kernels. The performance of the classifiers was compared using k-fold cross-validation. The best result, 99.9%, was achieved using the linear kernel. The most relevant features were obtained from the β and α sub-bands. Finally, relations among emotions, EEG, and odors were demonstrated.
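The overall shape of the pipeline can be sketched as follows; synthetic signals and FFT band powers stand in for the real EEG recordings and the discrete-wavelet features, so this is only an assumed simplification of the study's actual processing chain.

```python
# Simplified pipeline sketch: per-band features followed by a linear-kernel
# SVM with k-fold cross-validation. Data are synthetic; FFT band powers
# replace the DWT features used in the study.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
fs, n = 128, 256  # assumed sampling rate (Hz) and samples per epoch
bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_powers(x):
    """Mean spectral power per EEG sub-band (simplified feature set)."""
    freqs = np.fft.rfftfreq(n, 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2
    return [psd[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in bands.values()]

# Two synthetic classes: alpha-dominant vs beta-dominant epochs
t = np.arange(n) / fs
X, y = [], []
for label, f0 in ((0, 10.0), (1, 20.0)):
    for _ in range(30):
        x = np.sin(2 * np.pi * f0 * t) + 0.5 * rng.standard_normal(n)
        X.append(band_powers(x))
        y.append(label)

scores = cross_val_score(SVC(kernel="linear"), np.array(X), np.array(y), cv=5)
print(scores.mean())
```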
Voice Pathology Detection Using Artificial Neural Networks and Support Vector Machines Powered by a Multicriteria Optimization Algorithm
Communications in Computer and Information Science
Henry Jhoán Areiza-Laverde, Andrés Eduardo Castro-Ospina, Diego Hernán Peluffo-Ordóñez See abstract | see full paper
Computer-aided diagnosis (CAD) systems have enhanced the performance of conventional medical diagnosis procedures in different scenarios. Particularly, in the context of voice pathology detection, the use of machine learning algorithms has proved to be a promising and suitable alternative. This work proposes the implementation of two well-known classification algorithms, namely artificial neural networks (ANN) and support vector machines (SVM), optimized by a particle swarm optimization (PSO) algorithm and aimed at classifying voice signals as healthy or pathological. Three different configurations of the Saarbrucken voice database (SVD) are used. The effect of using balanced and unbalanced versions of this dataset is examined, as is the usefulness of the considered optimization algorithm for improving the final performance outcomes. Also, the proposed approach is comparable with state-of-the-art methods.
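A minimal PSO loop of the kind used to tune the classifiers can be sketched as follows; it minimizes a toy quadratic objective rather than a validation error, and all coefficients are illustrative assumptions.

```python
# Minimal particle swarm optimization (PSO) sketch. In the paper's setting
# the objective would be a classifier's validation error over its
# hyperparameters; here a toy quadratic stands in.
import numpy as np

def pso(objective, dim=2, n_particles=20, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients (assumed)
    x = rng.uniform(-5, 5, (n_particles, dim))  # particle positions
    v = np.zeros_like(x)                        # particle velocities
    pbest = x.copy()
    pbest_val = np.array([objective(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

best_x, best_val = pso(lambda p: np.sum(p ** 2))
print(best_val)  # near zero after convergence
```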
Optimization of the University Transportation by Contraction Hierarchies Method and Clustering Algorithms
HAIS: International Conference on Hybrid Artificial Intelligence Systems
Israel D. Herrera-Granda, Leandro L. Lorente-Leyva, Diego H. Peluffo-Ordóñez, Robert M. Valencia-Chapi, Yakcleem Montero-Santos, Jorge L. Chicaiza-Vaca, Andrés E. Castro-Ospina See abstract | see full paper
This research work focuses on the study of different solution models reported in the literature that address node-based vehicle routing optimization and optimal routing for a university transport service. With the recent expansion of the facilities of a university institution, the allocation of routes for the transport of its students became more complex. As a result, geographic information system (GIS) tools and operations research methodologies, such as graph theory and vehicle routing problems, are applied to facilitate mobilization and improve the student transport service, as well as to optimize transfer time and the utilization of the available transport units. An optimal route management procedure has been implemented to maximize the service level of student transport using the K-means clustering algorithm and the contraction hierarchies node method, at low cost due to the use of free software.
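The clustering step can be sketched with K-means over made-up pickup-point coordinates; the zone count and all values below are hypothetical.

```python
# Sketch of the clustering stage: grouping student pickup points into
# route zones with K-means. Coordinates are synthetic, for illustration.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Synthetic pickup-point coordinates scattered around three assumed zones
centers = np.array([[0.36, -78.13], [0.34, -78.10], [0.38, -78.08]])
points = np.vstack([c + 0.005 * rng.standard_normal((40, 2)) for c in centers])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(points)
print(np.bincount(km.labels_))  # points assigned per route zone
```

Each resulting zone would then be routed independently, for instance with a contraction-hierarchies shortest-path engine over the road network.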
Fingertips Segmentation of Thermal Images and Its Potential Use in Hand Thermoregulation Analysis
HAIS: International Conference on Hybrid Artificial Intelligence Systems
A. E. Castro-Ospina, A. M. Correa-Mira, I. D. Herrera-Granda, D. H. Peluffo-Ordóñez, H. A. Fandiño-Toro See abstract | see full paper
Thermoregulation refers to the physiological processes that keep body temperatures stable. Infrared thermography is a non-invasive technique useful for visualizing these temperatures. Previous works suggest it is important to analyze thermoregulation in peripheral regions, such as the fingertips, because some disabling pathologies particularly affect the thermoregulation of these regions. This work proposes an algorithm for fingertip segmentation in thermal images of the hand. By using a supervised index, the results are compared against segmentations provided by humans. The results are outstanding even when the analyzed images are substantially resized.
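The supervised comparison against human segmentations can be illustrated with an overlap index; a Dice coefficient is sketched below as one such measure, though the paper's exact index may differ.

```python
# Dice overlap between two binary segmentation masks, one plausible
# supervised index for comparing automatic vs. human segmentations.
import numpy as np

def dice(mask_a, mask_b):
    """Dice coefficient: 2|A∩B| / (|A| + |B|)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Toy masks standing in for algorithmic vs. human fingertip segmentations
auto = np.zeros((64, 64), dtype=bool); auto[10:30, 10:30] = True
human = np.zeros((64, 64), dtype=bool); human[12:32, 10:30] = True
print(round(dice(auto, human), 3))  # 0.9 for these toy masks
```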
Computer Vision-Based Method for Automatic Detection of Crop Rows in Potato Fields
International Conference on Information Technology & System (ICITS 2018)
I. Garcia-Santillan, D. H. Peluffo-Ordóñez, V. Caranqui, M. Pusda, F. Garrido, P. Granda. See abstract | see full paper
This work presents an adaptation and validation of a method for automatic crop row detection from images captured in potato fields (Solanum tuberosum) at initial growth stages, based on the micro-ROI concept. Crop row detection is a crucial aspect for the autonomous guidance of agricultural vehicles and the application of site-specific treatments. The images were obtained using a color camera installed at the front of a tractor under perspective projection. Several issues can affect the quality of the images and the detection procedure, among them: uncontrolled illumination in outdoor agricultural environments, different plant densities, presence of weeds, and gaps in the crop rows. The adapted approach was designed to address these adverse situations and consists of three linked phases. The main contribution is the ability to detect both straight and curved crop rows in potato crops. The performance was quantitatively compared against two existing methods, achieving acceptable results in terms of accuracy and processing time.
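A common first step in such pipelines is vegetation segmentation with an excess-green (ExG) index; the sketch below assumes this index, which the abstract does not specify, and the image data are synthetic.

```python
# Assumed first phase: vegetation segmentation via the excess-green
# index ExG = 2g - r - b over chromatic coordinates. Toy data only.
import numpy as np

def exg_mask(rgb, threshold=0.1):
    """Binary vegetation mask from the excess-green index."""
    s = rgb.sum(axis=2, keepdims=True)
    s[s == 0] = 1.0
    r, g, b = np.moveaxis(rgb / s, 2, 0)  # chromatic coordinates
    return (2 * g - r - b) > threshold

# Toy image: a green stripe (crop row) on brown soil
img = np.full((20, 20, 3), [0.4, 0.3, 0.2])
img[:, 8:12] = [0.2, 0.6, 0.2]
mask = exg_mask(img)
print(mask.sum())  # pixels flagged as vegetation
```

Row candidates would then be fitted within micro-ROIs over the vegetation mask, which is where the adapted method's contribution lies.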
Cardiac Pulse Modeling Using a Modified van der Pol Oscillator and Genetic Algorithms
International Conference on Bioinformatics and Biomedical Engineering 2018
F.M. Lopez-Chamorro, A.F. Arciniegas-Mejia, D.E. Imbajoa-Ruiz, P.D. Rosero-Montalvo, P. García, A.E. Castro-Ospina, A. Acosta, D.H. Peluffo-Ordóñez. See abstract | see full paper
This paper proposes an approach for modeling cardiac pulses from electrocardiographic (ECG) signals. A modified van der Pol oscillator (mvP) model is analyzed which, under a proper configuration, is capable of describing action potentials and can therefore be adapted for modeling a normal cardiac pulse. Adequate parameters of the mvP system response are estimated using non-linear dynamics methods, such as dynamic time warping (DTW). In order to produce an adaptive response for each individual heartbeat, a parameter-tuning optimization method based on a genetic algorithm is applied, generating responses that morphologically resemble real ECG. This feature is particularly relevant since heartbeats have intrinsically strong variability in terms of both shape and length. Experiments are performed over real ECG recordings from the MIT-BIH arrhythmia database. The application of the optimization process shows that the mvP oscillator can properly model the ideal cardiac pulse.
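The modified oscillator is not fully specified in the abstract, so the classical van der Pol equation x'' - mu(1 - x^2)x' + x = 0 is integrated below with a fixed-step RK4 scheme, purely to illustrate the limit-cycle dynamics that the genetic algorithm would tune.

```python
# Classical van der Pol oscillator integrated with fixed-step RK4,
# illustrating the kind of self-sustained oscillation used to model pulses.
import numpy as np

def van_der_pol(state, mu=1.0):
    x, v = state
    return np.array([v, mu * (1 - x ** 2) * v - x])

def rk4(f, state, dt):
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

dt, steps = 0.01, 5000
trajectory = np.empty((steps, 2))
state = np.array([0.1, 0.0])
for i in range(steps):
    state = rk4(van_der_pol, state, dt)
    trajectory[i] = state

# The trajectory settles onto a limit cycle with amplitude near 2
print(np.abs(trajectory[-1000:, 0]).max())
```

A genetic algorithm would search over parameters such as mu (and the extra terms of the modified model) so that the response morphologically matches a recorded heartbeat.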
Advances in Homotopy Applied to Object Deformation
International Conference on Bioinformatics and Biomedical Engineering 2018
J.A. Salazar-Castro, A.C. Umaquinga-Criollo, L.D. Cruz-Cruz, L.O. Alpala-Alpala, C. González-Castaño, M.A. Becerra-Botero, D.H. Peluffo-Ordóñez, C.G. Castellanos-Domínguez. See abstract | see full paper
This work explores novel alternatives to conventional linear homotopy to enhance the quality of the transitions resulting from object deformation applications. The studied and introduced approaches extend the linear mapping to other representations that provide smooth transitions when deforming objects while the homotopy conditions are fulfilled. Such homotopy approaches are based on transcendental functions (TFH), in both simple and parametric versions. As well, we propose a variant of an existing quality indicator based on the ratio between the coefficient curve of the resultant homotopy and that of a less realistic reference homotopy. Experimental results depict the effect of the proposed TFH approaches regarding their usability and benefit for interpolating images formed by homotopic objects with smooth changes.
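The contrast can be sketched as follows: a linear homotopy versus a transcendental (sine-based) reparameterization, chosen here as one plausible TFH instance; both keep the endpoint conditions H(0) = A and H(1) = B.

```python
# Linear homotopy vs. a sine-based transcendental variant. The sine
# reparameterization is an illustrative choice, not the paper's exact form.
import numpy as np

def linear_homotopy(A, B, t):
    return (1 - t) * A + t * B

def sine_homotopy(A, B, t):
    s = np.sin(np.pi * t / 2) ** 2  # transcendental reparameterization of t
    return (1 - s) * A + s * B

# Two toy 2D shapes (polygon vertices) to deform into each other
A = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])
B = np.array([[0.0, 1.0], [2.0, 0.0], [1.0, 2.0]])

# Endpoint conditions hold for both variants
assert np.allclose(linear_homotopy(A, B, 0.0), A)
assert np.allclose(sine_homotopy(A, B, 1.0), B)
# The two coincide at t = 0.5 but the sine variant starts more slowly,
# since sin(pi*0.25/2)**2 ~ 0.146 < 0.25
print(np.allclose(sine_homotopy(A, B, 0.5), linear_homotopy(A, B, 0.5)))
```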
Case-Based Reasoning Systems for Medical Applications with Improved Adaptation and Recovery Stages
International Conference on Bioinformatics and Biomedical Engineering 2018
X. Blanco Valencia, D. Bastidas Torres, C. Piñeros Rodriguez, D. H. Peluffo-Ordóñez, M. A. Becerra, A. E. Castro-Ospina. See abstract | see full paper
Case-Based Reasoning (CBR) systems are in constant evolution; accordingly, this article proposes improving the retrieval and adaptation stages through a different approach. A series of experiments was carried out, divided into three parts: a proper pre-processing technique, a cascade classification, and a probability estimation procedure. Every stage offers an improvement: a better data representation, a more efficient classification, and a more precise probability estimation provided by a Support Vector Machine (SVM) estimator compared with more common approaches. In conclusion, more complex techniques for classification and probability estimation are feasible and improve the performance of CBR systems by lowering the classification error in general cases.
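The probability-estimation idea can be illustrated with an SVM using Platt-style probability outputs (scikit-learn's probability=True), standing in for the estimator that ranks retrieved cases; the dataset and pipeline below are illustrative.

```python
# Sketch: pre-processing plus an SVM with probability outputs, loosely
# mirroring the staged setup on a stand-in medical dataset.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(probability=True, random_state=0))
clf.fit(X_tr, y_tr)

proba = clf.predict_proba(X_te)  # class-membership probabilities per case
print(proba.shape, round(clf.score(X_te, y_te), 3))
```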
Low Data Fusion Framework Oriented to Information Quality for BCI Systems
International Conference on Bioinformatics and Biomedical Engineering 2018
Miguel Alberto Becerra, Karla C. Alvarez-Uribe, Diego Hernán Peluffo Ordoñez. See abstract | see full paper
The evaluation of data/information fusion systems lacks standard quality criteria, which makes the reuse and optimization of these systems a complex task. In this work, we propose a complete low data fusion (DF) framework based on the Joint Directors of Laboratories (JDL) model, which considers contextual information alongside information quality (IQ) and a performance evaluation system to optimize the DF process according to user requirements. A set of IQ criteria is proposed per level. The model was tested in a multi-environment brain-computer interface (BCI) system to prove its functionality. Level zero performs the selection and preprocessing of electroencephalographic signals. In level one, feature extraction is carried out using the discrete wavelet transform (DWT) together with nonlinear and linear statistical measures, a Fuzzy Rough Set (FRS) algorithm selects the relevant features, and, finally, at the same level, classification is conducted using a support vector machine (SVM). A fuzzy inference system controls the different processes based on the results given by an IQ evaluation system, which applies quality measures that the users of the system can weight according to their requirements. Besides, the system is optimized based on the results given by the cuckoo search algorithm, which uses IQ traceability to maximize the IQ criteria according to user requirements. The tests were carried out with different types and levels of noise applied to the signals. The results showed the capability and functionality of the model.
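A minimal cuckoo-search loop (Lévy flights plus nest abandonment) of the kind the framework uses for optimization is sketched below; it minimizes a toy objective instead of maximizing IQ criteria, and all constants are illustrative assumptions.

```python
# Minimal cuckoo search sketch with Mantegna-style Levy steps.
# Toy objective; constants (pa, step scale, beta) are assumed.
import numpy as np
from math import gamma, pi, sin

def levy_step(rng, dim, beta=1.5):
    """Levy-distributed step via Mantegna's algorithm."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(objective, dim=2, n_nests=15, iters=200, pa=0.25, seed=0):
    rng = np.random.default_rng(seed)
    nests = rng.uniform(-5, 5, (n_nests, dim))
    fitness = np.array([objective(n) for n in nests])
    for _ in range(iters):
        best = nests[fitness.argmin()]
        for i in range(n_nests):
            new = nests[i] + 0.01 * levy_step(rng, dim) * (nests[i] - best)
            f_new = objective(new)
            j = rng.integers(n_nests)
            if f_new < fitness[j]:
                nests[j], fitness[j] = new, f_new
        # Abandon a fraction pa of the worst nests and resample them
        n_drop = int(pa * n_nests)
        worst = np.argsort(fitness)[-n_drop:]
        nests[worst] = rng.uniform(-5, 5, (n_drop, dim))
        fitness[worst] = [objective(n) for n in nests[worst]]
    return nests[fitness.argmin()], fitness.min()

best, val = cuckoo_search(lambda p: np.sum(p ** 2))
print(val)
```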
Developments on Solutions of the Normalized-Cut-Clustering Problem Without Eigenvectors
15th International Symposium on Neural Networks (ISNN 2018)
L. L. Lorente-Leyva, I. D. Herrera-Granda, P. D. Rosero-Montalvo, K. L. Ponce-Guevara, A. E. Castro-Ospina, M. A. Becerra, D. H. Peluffo-Ordóñez, J. L. Rodríguez-Sotelo. See abstract | see full paper
Normalized-cut clustering (NCC) is a benchmark graph-based approach for unsupervised data analysis. Since its traditional formulation is a quadratic form subject to orthogonality conditions, it is often solved within an eigenvector-based framework. Nonetheless, in some cases the calculation of eigenvectors is prohibitive or unfeasible due to the computational cost involved, for instance when dealing with high-dimensional data. In this work, we present an overview of recent developments on approaches that solve the NCC problem without requiring the calculation of eigenvectors. Particularly, heuristic-search and quadratic-formulation-based approaches are studied. Such approaches are deduced and explained, and simple ways to implement them are provided.
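The eigenvector-free idea can be illustrated by evaluating the normalized-cut objective directly and improving a bipartition with greedy label flips; this is a deliberately simple heuristic-search instance, and the surveyed methods are considerably more sophisticated.

```python
# Direct evaluation of the two-way normalized-cut objective, improved by
# greedy single-node flips (no eigenvectors involved). Toy affinity matrix.
import numpy as np

def ncut(W, labels):
    """Two-way normalized-cut value for a binary labeling."""
    a, b = labels == 0, labels == 1
    cut = W[np.ix_(a, b)].sum()
    return cut / W[a].sum() + cut / W[b].sum()

def greedy_ncut(W, labels, sweeps=5):
    labels = labels.copy()
    for _ in range(sweeps):
        for i in range(len(labels)):
            flipped = labels.copy()
            flipped[i] = 1 - flipped[i]
            if len(set(flipped)) == 2 and ncut(W, flipped) < ncut(W, labels):
                labels = flipped
    return labels

# Toy affinity: two dense blocks weakly connected to each other
rng = np.random.default_rng(0)
n = 10
W = np.full((2 * n, 2 * n), 0.05)
W[:n, :n] = W[n:, n:] = 1.0
np.fill_diagonal(W, 0.0)

labels = greedy_ncut(W, rng.integers(0, 2, 2 * n))
print(ncut(W, labels))
```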
A Novel Color-Based Data Visualization Approach Using a Circular Interaction Model and Dimensionality Reduction
15th International Symposium on Neural Networks (ISNN 2018)
J. A. Salazar-Castro, P. D. Rosero-Montalvo, D. F. Peña-Unigarro, A. C. Umaquinga-Criollo, Z. Castillo-Marrero, E. J. Revelo-Fuelagán, D. H. Peluffo-Ordóñez, C. G. Castellanos-Domínguez. See abstract | see full paper
Dimensionality reduction (DR) methods are able to produce low-dimensional representations of input data sets which may become intelligible for human perception. Nonetheless, most existing DR approaches lack the ability to naturally provide users with controllability and interactivity. In this regard, data visualization (DataVis) is an ideal complement. This work presents an integration of DR and DataVis through a new approach for data visualization based on a mixture of the representations resulting from DR, while following visualization principles. Particularly, the mixture is a weighted sum whose weighting factors are defined by the user through a novel interface. The interface's concept relies on the combination of color-based and geometric perception in a circular framework, so that users have at hand several indicators (shape, color, surface size) to decide on a specific data representation. Besides, pairwise similarities are plotted as a non-weighted graph to provide a graphic notion of the structure of the input data. Therefore, the proposed visualization approach enables the user to interactively combine DR methods while providing information about the structure of the original data, thus making the selection of a DR scheme more intuitive.
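The circular interaction can be sketched by anchoring each method on a circle and converting the user's cursor position into mixing weights; the inverse-distance weighting below is an assumption, and the paper's interface additionally encodes color and geometric cues.

```python
# Sketch: methods anchored evenly on the unit circle; a cursor position
# inside the circle is mapped to mixing weights (inverse-distance rule
# assumed) and the embeddings are blended by a weighted sum.
import numpy as np

def circle_weights(cursor, n_methods):
    """Normalized weights for methods anchored evenly on the unit circle."""
    angles = 2 * np.pi * np.arange(n_methods) / n_methods
    anchors = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    d = np.linalg.norm(anchors - cursor, axis=1)
    w = 1.0 / (d + 1e-9)  # closer anchor -> larger weight
    return w / w.sum()

# Three hypothetical DR embeddings of the same 100-point data set
rng = np.random.default_rng(0)
embeddings = [rng.standard_normal((100, 2)) for _ in range(3)]

w = circle_weights(np.array([0.9, 0.0]), 3)  # cursor near method 0's anchor
mixed = sum(wi * E for wi, E in zip(w, embeddings))
print(w.argmax(), mixed.shape)
```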