
Gallery

A smart solar-powered DC supply source: a low-cost IoT device

Using a solar-powered DC source with a Raspberry Pi for IoT services, such as visualizing electric variables on a small screen, offers numerous benefits:

- Renewable energy source: solar power is a clean and renewable energy source, reducing reliance on traditional power grids and minimizing the environmental impact.
- Energy independence: solar-powered setups provide autonomy, making them ideal for remote locations or areas with unreliable access to conventional power sources.
- Off-grid operation: the solar-powered Raspberry Pi allows for off-grid operation, making it suitable for applications in remote areas, agricultural settings, or infrastructure monitoring where access to a power grid may be challenging.
- Continuous operation: solar panels can charge batteries during the day, enabling continuous operation of the Raspberry Pi and connected IoT devices even during periods of low sunlight or at night.
- Customization and scalability: the Raspberry Pi provides a flexible platform for developing and deploying IoT services; the system can easily be customized and scaled to accommodate additional sensors or functionalities.
- Real-time monitoring: with the Raspberry Pi connected to sensors and a small screen, users can visualize real-time electric variables, facilitating effective monitoring and analysis.
- Data logging: the Raspberry Pi can be programmed to log data from various sensors, providing historical information for analysis, troubleshooting, and performance optimization (see the sketch below).
- Remote access and control: the Raspberry Pi supports remote access, allowing users to control and monitor the IoT system from anywhere with an internet connection.
- Educational opportunities: such a setup offers practical, hands-on experience in solar energy, IoT, and programming with the Raspberry Pi.
- Community and home automation: solar-powered Raspberry Pi systems are suitable for community-based or home automation projects, promoting energy-efficient solutions at a smaller scale.

In summary, combining a solar-powered DC source with a Raspberry Pi for IoT services presents an environmentally friendly, cost-effective, and versatile solution for various applications, ranging from remote monitoring to educational projects.
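Following up on the data-logging point above, a minimal Python sketch of CSV logging on the Raspberry Pi is shown below; the read_voltage()/read_current() functions are hypothetical placeholders for an actual sensor driver, not part of any specific library.

```python
# Minimal data-logging sketch for the solar-powered supply monitor.
# The sensor-reading functions are hypothetical stubs standing in for a
# real power-monitor driver (e.g., an I2C sensor on the Raspberry Pi).
import csv
import time
from datetime import datetime

def read_voltage():
    """Hypothetical stand-in for the panel/battery voltage sensor."""
    return 12.3

def read_current():
    """Hypothetical stand-in for the load current sensor."""
    return 0.45

with open("power_log.csv", "a", newline="") as f:
    writer = csv.writer(f)
    for _ in range(10):                      # one short logging burst
        v, i = read_voltage(), read_current()
        # Timestamp, voltage (V), current (A), power (W)
        writer.writerow([datetime.now().isoformat(), v, i, v * i])
        time.sleep(1.0)                      # 1 Hz sampling for illustration
```

The resulting CSV can then feed the on-screen visualization or be pushed to a remote dashboard for the remote-access scenario described above.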

Monitoring a turkey hatchery based on a cyber-physical system

The implementation of a turkey farm brings with it severe environmental problems when the physical space where the animals are housed is insufficiently studied. To counteract this situation and improve the quality of life in the hatchery, it is necessary to monitor and control the following variables: temperature, humidity, ammonia emission, and illuminance (lux). The solution is based on a cyber-physical system composed of a network of sensors, a controller, and actuators. The sensors provide information from the physical environment, and the controller evaluates these parameters to command the actuators. A Proportional-Integral-Derivative (PID) controller regulates the temperature around its setpoint, while Pulse-Width Modulation (PWM) adjusts the light intensity of a spotlight. The end device executes these actions, and its parameters are sent to ThingSpeak, which monitors system behavior over the Internet of Things.
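As a rough illustration of this control strategy, the following Python sketch combines a PID temperature loop with a PWM duty-cycle output. The gains, setpoint, and sensor stub are illustrative assumptions, not the deployed configuration.

```python
# Sketch of a PID loop driving a PWM duty cycle, as described above.
# Gains and setpoint are assumed values that would be tuned on-site.
import time

KP, KI, KD = 2.0, 0.1, 0.5   # assumed PID gains
SETPOINT_C = 37.5            # assumed temperature setpoint (degrees C)

def read_temperature():
    """Stub for the hatchery temperature sensor (e.g., I2C/1-Wire)."""
    return 36.8

integral, prev_error = 0.0, 0.0
dt = 1.0  # control period in seconds

for _ in range(60):  # bounded demo loop; a deployment would run continuously
    error = SETPOINT_C - read_temperature()
    integral += error * dt
    derivative = (error - prev_error) / dt
    output = KP * error + KI * integral + KD * derivative
    prev_error = error

    # Map the PID output to a PWM duty cycle (0-100 %) for the heat source;
    # the same PWM mechanism dims the spotlight for light-intensity control.
    duty = max(0.0, min(100.0, output))
    print(f"error={error:+.2f} C -> duty={duty:.1f} %")
    time.sleep(dt)
```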

Joint Exploration of Kernel Functions Potential for Data Representation and Classification

Dimensionality reduction (DR) approaches are often a crucial step in data analysis tasks, particularly for data visualization purposes. DR techniques are essentially designed to retain the inherent structure of high-dimensional data in a lower-dimensional space, leading to reduced computational complexity and improved pattern recognition accuracy. Specifically, kernel principal component analysis (KPCA) is a widely used dimensionality reduction technique due to its ability to handle nonlinear data sets effectively, and it offers an easily interpretable formulation from both geometric and functional analysis perspectives. However, KPCA relies on free hyperparameters, which are usually tuned in advance, and the relationship between these hyperparameters and the structure of the embedded space remains largely unexplored. This work presents preliminary steps to explore said relationship by jointly evaluating data classification and representation abilities. To do so, an interactive visualization framework is introduced. This study highlights the importance of creating interactive interfaces that enable interpretable dimensionality reduction approaches for data visualization and analysis. More info at: https://sdas-group.com/.
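As a small illustration of the hyperparameter-structure relationship explored here, the following scikit-learn sketch sweeps the RBF bandwidth of KPCA and checks how linearly separable the resulting embedding is. The dataset and downstream classifier are illustrative choices, not the paper's setup.

```python
# Sweep a KPCA hyperparameter (RBF gamma) and measure how a simple linear
# classifier performs on each 2-D embedding, linking representation and
# classification abilities as in the joint evaluation described above.
from sklearn.datasets import make_moons
from sklearn.decomposition import KernelPCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_moons(n_samples=300, noise=0.08, random_state=0)

for gamma in (0.1, 1.0, 10.0, 100.0):
    Z = KernelPCA(n_components=2, kernel="rbf", gamma=gamma).fit_transform(X)
    acc = cross_val_score(LogisticRegression(), Z, y, cv=5).mean()
    print(f"gamma={gamma:>6}: linear separability of embedding = {acc:.3f}")
```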

Smart Factory Production Plants Application using virtual reality and online Multi-User: Towards a Metaverse for Unreal Engine Experiment Frameworks

Virtual reality (VR) has been brought closer to the general public over the past decade as it has become increasingly available for desktop and mobile platforms. As a result, consumer-grade VR may redefine how people learn by creating an engaging "hands-on" training experience. Today, VR applications leverage rich interactivity in a virtual environment without real-world consequences to optimize training programs in companies and educational institutions. VR is a lifelike simulation achieved using computer graphics; immersive VR, semi-immersive VR, and desktop-based VR are the most commonly used types. Furthermore, a school of thought considers VR a mental state that allows the user to interact with an environment that produces a continuous flow of experiences and stimuli. This paper aims to develop a VR system framework using the Unreal Engine 4 (UE4) game engine. We follow a methodology that incorporates video game development and VR. The VR system includes functional components, object-oriented configuration, an advanced core, interfaces, and an online multi-user system that uses avatars in a small-scale metaverse. We introduce a case study on creating a metaverse for a production plant with a Smart Factory approach that enables effective teamwork in 3D virtual environments. Finally, the experimental results show that a commercial software framework for video games, particularly UE4, can accelerate the development of experiments on the metaverse experience. In addition, this system can function as a virtual laboratory environment to connect users from different parts of the world in real time.

Interactive Tool for Dimensionality Reduction and Data Visualization via an Angle-Based Model

This work presents the implementation of a versatile tool for supporting the visual analysis of databases, which enables the user to interactively generate low-dimensional graphic representations. For this purpose, the tool incorporates: i) two interaction models, an existing one (chromatic model) and a newly proposed one (angle-based model); ii) a mixture of spectral DR methods represented through kernel-matrix-based approximations; and iii) traditional visualization techniques (scatter plots and parallel coordinates diagrams). Additionally, aimed at generating dynamic interaction (real-time changes), the locally linear landmarks algorithm is implemented to perform the DR procedure at a low computational cost. It is worth highlighting that the entire tool is developed under scalability and modularity settings.
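The following numpy/scikit-learn sketch illustrates the kernel-matrix mixture underlying the tool, under the assumption of a simple weighted sum of kernels followed by a KPCA-style spectral embedding. The kernel choices and weights are placeholders for the values the user would set interactively.

```python
# Mix several kernel matrices with user-defined weights, then compute a
# single 2-D spectral embedding from the mixed kernel (KPCA-style).
import numpy as np
from sklearn.datasets import load_digits
from sklearn.metrics.pairwise import linear_kernel, polynomial_kernel, rbf_kernel

X, _ = load_digits(return_X_y=True)
kernels = [rbf_kernel(X, gamma=1e-3), polynomial_kernel(X, degree=2), linear_kernel(X)]
weights = np.array([0.5, 0.3, 0.2])      # would be set via the interaction model

K = sum(w * Kk for w, Kk in zip(weights, kernels))

# Center the mixed kernel and take the two leading eigenvectors.
n = K.shape[0]
H = np.eye(n) - np.ones((n, n)) / n
Kc = H @ K @ H
vals, vecs = np.linalg.eigh(Kc)          # ascending eigenvalues
embedding = vecs[:, -2:] * np.sqrt(np.maximum(vals[-2:], 0.0))
print(embedding.shape)                   # (n_samples, 2) low-dimensional space
```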

Developments on Support Vector Machines for Multiple-Expert Learning

In supervised learning scenarios, some applications require solving a classification problem wherein labels are not given as a single ground truth. Instead, the criteria of a set of experts are used to provide labels, aimed at compensating both for the erroneous influence of a single labeler and for the error bias (excellent or lousy) due to each expert's level of perception and experience. This paper aims to briefly outline mathematical developments on support vector machines (SVM) and to overview SVM-based approaches for multiple-expert learning (MEL). Such MEL approaches are posed by modifying the formulation of a least-squares SVM, which enables obtaining a set of reliable, objective labels while penalizing the evaluation quality of each expert. In particular, this work studies both the two-class (binary) MEL classifier (BMLC) and its extension to multiclass through one-against-all (OaA-MLC), including penalization of each expert's influence. Formal mathematical developments are stated, and a discussion on key aspects of the least-squares SVM formulation and penalty factors is provided.
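For reference, a minimal numpy sketch of the least-squares SVM underlying these MEL approaches is shown below: in one common formulation, training amounts to solving a single linear system rather than a quadratic program. This is the plain binary, single-labeler case; the multiple-expert penalization is the paper's contribution and is not reproduced here.

```python
# Least-squares SVM (binary case): solve the linear KKT system
#   [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
# where K is the kernel matrix and gamma the regularization parameter.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def lssvm_train(X, y, gamma=1.0, sigma2=1.0):
    n = len(y)
    K = rbf_kernel(X, X, gamma=1.0 / (2 * sigma2))
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]                    # bias b, dual variables alpha

def lssvm_predict(X_train, X_test, b, alpha, sigma2=1.0):
    K = rbf_kernel(X_test, X_train, gamma=1.0 / (2 * sigma2))
    return np.sign(K @ alpha + b)             # labels in {-1, +1}
```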

Algorithms for Air Quality Estimation: A Comparative Study of Stochastic and Heuristic Predictive Models

This paper presents a comparative analysis of predictive models applied to air quality estimation. Among other global issues, there is currently high concern about air pollution; for this reason, several air quality indicators exist, with carbon monoxide (CO), sulfur dioxide (SO2), nitrogen dioxide (NO2), and ozone (O3) being the main ones. When the concentration level of an indicator exceeds an established air quality safety threshold, it is considered harmful to human health; therefore, cities like London operate monitoring systems for air pollutants. This study compares the efficiency of stochastic and heuristic predictive models for forecasting ozone (O3) concentration to estimate London's air quality, analyzing an open dataset retrieved from the London Datastore portal. Models based on data analysis have been widely used in air quality forecasting. This paper develops four predictive models: autoregressive integrated moving average (ARIMA), support vector regression (SVR), neural networks (specifically, long short-term memory, LSTM), and Facebook Prophet. Experimentally, the ARIMA and LSTM models reach the highest accuracy in predicting the concentration of air pollutants among the considered models. The comparative analysis of the loss function (root-mean-square error) revealed that ARIMA and LSTM are the most suitable, achieving low errors of 0.18 and 0.20, respectively.
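As a hedged example of one of the compared models, the following sketch fits an ARIMA model with statsmodels to a synthetic stand-in series and reports the RMSE loss used in the study; the model order and the data are illustrative, not the study's configuration.

```python
# Fit an ARIMA model to a placeholder O3-like series and score with RMSE.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(0, 0.1, 500)) + 30.0   # stand-in for hourly O3
train, test = series[:450], series[450:]

model = ARIMA(train, order=(2, 1, 2)).fit()          # illustrative (p, d, q)
forecast = model.forecast(steps=len(test))

rmse = np.sqrt(np.mean((forecast - test) ** 2))      # the study's loss function
print(f"RMSE = {rmse:.3f}")
```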

Design of a low computational cost prototype for cardiac arrhythmia detection: Preliminary results

This work presents the design of a limited-computational-resources prototype for cardiac arrhythmia detection. To do so, a heartbeat classification strategy is developed, aimed at identifying normal and pathological heartbeats in long-term electrocardiographic (Holter) recordings. By incorporating an embedded system, a low-computational-cost system is developed that is capable of analyzing the characteristics of QRS complexes, which are representative waves of the heartbeat whose analysis allows for the identification of ventricular arrhythmias. To develop this initial prototype, we experimentally demonstrate that the k-nearest-neighbors (k-NN) algorithm together with a stage for selecting training data variables is a good alternative, which represents a major contribution of this work. Experiments are performed on the Massachusetts Institute of Technology (MIT) cardiac arrhythmia database. The obtained results are satisfactory and promising.
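A minimal sketch of the classification stage follows, assuming random placeholder features in place of the actual QRS descriptors extracted from the Holter recordings.

```python
# k-NN classification of heartbeats into normal vs. pathological classes.
# Placeholder features stand in for QRS-derived descriptors.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 8))                      # placeholder QRS features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)       # 0 = normal, 1 = pathological

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print(f"test accuracy: {knn.score(X_te, y_te):.3f}")
```

A small neighborhood size keeps both memory use and inference cost low, which is what makes k-NN attractive for the embedded setting described above.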

A Data-Driven approach for automatic classification of extreme precipitation events: Preliminary results

Even though no universal definition exists, in the South American Andean region extreme precipitation events can be understood as periods of time in which standard precipitation thresholds are abruptly exceeded. Their timely forecasting is therefore of great interest to decision makers from many fields, such as urban planning entities, water researchers and, in general, climate-related institutions. In this paper, a data-driven study is performed to classify and anticipate extreme precipitation events through hydroclimate features. Since the analysis of precipitation-related time series involves complex patterns, the input data must undergo both pre-processing steps and feature selection methods in order to achieve high performance at the data classification stage itself. In this study, both individual Principal Component Analysis (PCA) and Regressional Relief (RR), as well as a cascade approach mixing both, are considered. Subsequently, the classification is performed by a Support-Vector-Machine-based (SVM) classifier. Results reflect the suitability of an approach involving feature selection and classification for precipitation event detection purposes. A remarkable result is that a reduced dataset obtained by applying RR mixed with PCA discriminates better than RR alone, but does not enhance the SVM classification rate at two- and three-class problems as significantly as PCA itself.
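The sketch below illustrates the cascade idea with scikit-learn, using PCA ahead of an SVM on synthetic placeholder features. The Regressional Relief stage is omitted, as it is not part of scikit-learn; in the study it would precede or replace the PCA step.

```python
# PCA feature reduction feeding an SVM classifier, as in the cascade above.
# Synthetic placeholder features stand in for the hydroclimate variables.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(600, 20))                  # placeholder hydroclimate features
y = (X[:, :3].sum(axis=1) > 0).astype(int)      # 0 = normal, 1 = extreme event

clf = make_pipeline(PCA(n_components=5), SVC(kernel="rbf"))
print(f"CV accuracy: {cross_val_score(clf, X, y, cv=5).mean():.3f}")
```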

An Interactive Framework to Compare Multi-criteria Optimization Algorithms: Preliminary Results on NSGA-II and MOPSO

Depending on its formulation, a multi-criteria optimization problem consists of either minimizing or maximizing a group of at least two objective functions in order to find the best possible set of solutions to those functions. There are several multi-criteria optimization methods, and the quality of the resulting solutions varies depending on the method used and the complexity of the posed problem. A bibliographical review allowed us to notice that methods derived from evolutionary computation deliver good results and are commonly used in research works. Although comparative studies among these optimization methods exist, the conclusions they offer do not allow defining a general rule that determines when one method is better than another. Therefore, choosing a well-suited optimization method can be a difficult task for non-experts in the field. We propose a graphical interface that allows users who are not experts in multi-objective optimization to interact with and compare the performance of the NSGA-II and MOPSO algorithms, which were chosen qualitatively from a group of five preselected algorithms as representatives of evolutionary algorithms and swarm intelligence. Accordingly, a comparison methodology is proposed that allows the user to analyze graphical and numerical results, observe the behavior of the algorithms, and determine which of the two best suits their needs.
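As an illustrative sketch of the kind of experiment such a framework wraps, the following code runs NSGA-II on a standard benchmark using the pymoo library (recent versions); MOPSO would be executed and compared analogously. The benchmark problem and settings are assumptions, not the framework's internals.

```python
# Run NSGA-II on the ZDT1 bi-objective benchmark with pymoo and inspect
# the non-dominated front it finds.
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize
from pymoo.problems import get_problem

problem = get_problem("zdt1")          # classic two-objective test problem
algorithm = NSGA2(pop_size=100)

res = minimize(problem, algorithm, ("n_gen", 200), seed=1, verbose=False)
print(res.F.shape)                     # objective values of the obtained front
```

Plotting res.F for each algorithm side by side gives exactly the graphical comparison the interface aims to put in front of non-expert users.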

Inverse Data Visualization Framework (IDVF): Towards a prior-knowledge-driven data visualization

Broadly, the area of dimensionality reduction (DR) is aimed at providing ways to harness high-dimensional (HD) information through the generation of lower-dimensional (LD) representations, following a certain data-structure-preservation criterion. Dozens of DR techniques have been reported in the literature and are commonly used as a pre-processing stage within exploratory data analyses for either machine learning or information visualization (IV) purposes. Nonetheless, the selection of a proper method is a nontrivial and, very often, toilsome task. In this sense, a ready and natural way to incorporate an expert's criterion into the analysis process, while making this task more tractable, is the use of interactive IV approaches. Regarding the incorporation of experts' prior knowledge, a range of open issues still remains. In this work, we introduce the here-named Inverse Data Visualization Framework (IDVF), an initial approach to making the input prior knowledge directly interpretable. Our framework is based on 2D-scatter-plot visuals and spectral kernel-driven DR techniques. To capture the user's knowledge or requirements, users are requested to move data points so that the resulting points are located where most convenient according to their criterion. Next, following a kernel principal component analysis approach and a mixture of kernel matrices, our framework estimates an approximate LD space. The rationale behind the proposed IDVF is thus to adjust the resulting LD space as accurately as possible to the representation fulfilling the user's knowledge and requirements. Results are highly promising and open the possibility of novel DR-based visualization approaches.
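The following numpy sketch conveys the IDVF rationale under a deliberate simplification: it treats the mixture as acting directly on per-kernel embeddings and fits the weights by least squares to the user-edited layout, whereas the framework itself operates on a mixture of kernel matrices through KPCA.

```python
# Estimate mixture weights so that the combined embedding approximates a
# target layout edited by the user (toy, linearized version of IDVF).
import numpy as np

n, m = 200, 3
rng = np.random.default_rng(3)
embeddings = [rng.normal(size=(n, 2)) for _ in range(m)]  # per-kernel 2-D layouts
Y_target = 0.6 * embeddings[0] + 0.4 * embeddings[2]      # user-edited layout (toy)

# Solve min_w || sum_i w_i * Y_i - Y_target || by stacking each embedding
# as one column of a flat design matrix.
A = np.stack([Y.ravel() for Y in embeddings], axis=1)     # shape (2n, m)
w, *_ = np.linalg.lstsq(A, Y_target.ravel(), rcond=None)
print(np.round(w, 3))                                     # ~ [0.6, 0.0, 0.4]
```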

Kernel-spectral-clustering-driven motion segmentation: Rotating-objects first trials

Time-varying data characterization and classification is a field of great interest to both the scientific and technology communities. There exists a wide range of applications and challenging open issues, such as automatic motion segmentation, moving-object tracking, and movement forecasting, among others. In this paper, we study the use of the so-called kernel spectral clustering (KSC) approach to capture the dynamic behavior of frames representing rotating objects by means of kernel functions and feature relevance values. On the basis of previous research works, we formally derive a here-called tracking vector able to unveil sequential behavior patterns. As a remarkable outcome, we additionally introduce an encoded version of the tracking vector obtained by converting the resulting clustering indicators into decimal numbers. To evaluate our approach, we test the studied KSC-based tracking on a rotating object from the COIL-20 database. Preliminary results provide clear evidence of the relationship between the clustering indicators and the starting/ending time instants of a specific dynamic sequence.
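A toy sketch of the encoded tracking vector is shown below: binary clustering indicators per frame are mapped to decimal numbers, so transitions between dynamic sequences appear as jumps in a one-dimensional signal. The indicator values are illustrative, not KSC output.

```python
# Encode per-frame binary cluster indicators as decimal numbers.
import numpy as np

# One row per frame; columns are cluster memberships (e.g., from KSC, 3 clusters).
indicators = np.array([
    [1, 0, 0],
    [1, 0, 0],
    [0, 1, 0],
    [0, 1, 0],
    [0, 0, 1],
])

powers = 2 ** np.arange(indicators.shape[1])[::-1]   # binary -> decimal weights
tracking = indicators @ powers
print(tracking)   # [4 4 2 2 1]; value changes mark sequence boundaries
```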

Dimensionality Reduction for Interactive Data Visualization via a Geo-Desic Approach (LA-CCI 2016)

This work presents a dimensionality reduction (DR) framework that enables users to perform either the selection or the mixture of DR methods by means of an interactive model, here named the Geo-Desic approach. Such a model consists of a linear combination of kernel-based representations of DR methods, wherein the corresponding coefficients are related to latitude and longitude coordinates on a world map. By incorporating the Geo-Desic approach within an interface, the combination may be made easily and intuitively by users, even non-expert ones, fulfilling their criteria and needs by simply picking points on the map. Experimental results demonstrate the usability of the proposed approach and its ability to represent DR methods.
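A hedged sketch of this interaction idea follows, assuming hypothetical anchor locations for the DR methods and an inverse-distance weighting rule; the paper defines its own mapping from map coordinates to combination coefficients.

```python
# Turn a picked (lat, lon) point into mixture coefficients for several DR
# methods, each "living" at an assumed anchor location on the map.
import numpy as np

anchors = {"PCA": (40.0, -3.0), "LE": (52.0, 13.0), "LLE": (48.0, 2.0)}  # hypothetical

def weights_from_pick(lat, lon, eps=1e-6):
    names = list(anchors)
    d = np.array([np.hypot(lat - a[0], lon - a[1]) for a in anchors.values()])
    w = 1.0 / (d + eps)           # closer anchors get larger coefficients
    return dict(zip(names, w / w.sum()))

print(weights_from_pick(49.0, 5.0))
```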

A Novel Color-Based Data Visualization Approach Using a Circular Interaction Model and Dimensionality Reduction (ISNN 2018)

Dimensionality reduction (DR) methods are able to produce low-dimensional representations of input data sets which may become intelligible for human perception. Nonetheless, most existing DR approaches lack the ability to naturally provide the user with controllability and interactivity. In this connection, data visualization (DataVis) is an ideal complement. This work presents an integration of DR and DataVis through a new approach for data visualization based on a mixture of DR resultant representations while using visualization principles. Particularly, the mixture is done through a weighted sum, whose weighting factors are defined by the user through a novel interface. The interface's concept relies on the combination of color-based and geometrical perception in a circular framework, so that the user may have at hand several indicators (shape, color, surface size) to make a decision on a specific data representation. Besides, pairwise similarities are plotted as a non-weighted graph to include a graphic notion of the structure of the input data. Therefore, the proposed visualization approach enables the user to interactively combine DR methods while providing information about the structure of the original data, thus making the selection of a DR scheme more intuitive.

A Color-Based Model for Dimensionality Reduction (IDEAL 2017)

This work describes a new model for interactive data visualization following a dimensionality reduction (DR)-based approach. Particularly, the mixture of the resultant spaces of DR methods is considered, which is carried out by a weighted sum. For the sake of user interaction, the corresponding weighting factors are given through an intuitive color-based interface. Also, to depict the DR outcomes while showing information about the input high-dimensional data space, the low-dimensional representations reached by the mixture are graphically presented using scatter plots improved with an interactive data-driven visualization. In this connection, a constrained dissimilarity approach is proposed to define the graph to be drawn on the scatter plot. The proposed data visualization model enables users (even non-expert ones) to decide on the most suitable lower-dimensional representations of the original data in a user-friendly fashion.

Data visualization using interactive dimensionality reduction with RGB model (IWINAC 2017)

This work presents an improved interactive data visualization interface based on a mixture of the outcomes of dimensionality reduction (DR) methods. Broadly, it works as follows: the user inputs the mixture weighting factors through a visual and intuitive interface based on a primary-light-colors model (red, green, and blue). By design, such a mixture is a weighted sum driven by the color tone. Additionally, the low-dimensional representation spaces produced by the DR methods are graphically depicted using scatter plots powered by an interactive data-driven visualization. To do so, pairwise similarities are calculated and employed to define the graph to be simultaneously drawn over the scatter plot. Our interface enables the user to interactively combine DR methods through the human perception of color, while providing information about the structure of the original data. This makes the selection of a DR scheme more intuitive, even for non-expert users.
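As a toy sketch of this interaction model, assuming each primary channel weights one DR method's embedding, the picked color can be normalized into mixture factors as follows; the method assignment is illustrative.

```python
# Map an RGB color pick to three mixture weighting factors.
import numpy as np

def rgb_to_weights(r, g, b):
    """Normalize an RGB pick (0-255 per channel) into three weights."""
    v = np.array([r, g, b], dtype=float)
    return v / v.sum() if v.sum() > 0 else np.full(3, 1.0 / 3.0)

# Hypothetical assignment: red -> PCA, green -> LE, blue -> LLE.
w_pca, w_le, w_lle = rgb_to_weights(200, 40, 80)
print(round(w_pca, 3), round(w_le, 3), round(w_lle, 3))
```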

Interactive Data Visualization Interface Using Dimensionality Reduction (CIARP 2016)

This work presents a new interactive data visualization approach based on a mixture of the outcomes of dimensionality reduction (DR) methods. Such a mixture is a weighted sum, whose weighting factors are defined by the user through a visual and intuitive interface. Additionally, the low-dimensional representation spaces produced by the DR methods are graphically depicted using scatter plots powered by an interactive data-driven visualization. To do so, pairwise similarities are calculated and employed to define the graph to be drawn on the scatter plot. Our visualization approach enables the user to interactively combine DR methods while providing information about the structure of the original data, thus making the selection of a DR scheme more intuitive.