Computer Science
Universidade Federal de Pernambuco
Artificial Intelligence
Lucas O. Teixeira, Diego Bertolini, Luiz S. Oliveira, George D. C. Cavalcanti, and Yandre M. G. Costa
Springer Science and Business Media LLC
George D. C. Cavalcanti, Wesley Mendes-Da-Silva, Israel José dos Santos Felipe, and Leonardo A. Santos
Springer Science and Business Media LLC
Carlos A. Alves Junior, Luis F. Alves Pereira, George D. C. Cavalcanti, and Tsang Ing Ren
IEEE
Risks related to excessive exposure of patients to ionizing radiation are a significant concern in the medical community. Several approaches based on Convolutional Neural Networks (CNNs) have been proposed to develop safer and more reliable sparse-view Computed Tomography (SVCT) systems. Most of those solutions process tomographic data within 2D slices individually. However, recent works have shown that 3D models, which exploit data correlation among adjacent slices, can outperform previous 2D models. Since the kernel size in most of those 3D models is no bigger than 5 x 5 x 5, such inter-slice exploration is restricted to a limited neighborhood, resulting in minor inter-slice analysis during the training/validation phase. To efficiently exploit data correlation among the coronal, axial, and sagittal views of the SVCT volume, we propose an ensemble of four 2D CNNs. Three of them process the orthogonal SVCT volume views separately, and the fourth CNN combines the outputs of the previous three networks. Since our final architecture is very deep, we also present a stage-wise training method to avoid non-convergence of the deepest layers. We conducted experiments using head cone-beam Computed Tomography (CBCT) scans extensively used in image-guided radiotherapy (IGRT) during brain tumor treatment. Our method achieved superior results in reducing reconstruction artifacts of SVCT volumes compared to state-of-the-art 2D and 3D models.
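The view-splitting step described above can be sketched in a few lines (a simplified NumPy sketch with hypothetical helper names, not the authors' code; the fourth network is stood in for by a per-voxel average):

```python
import numpy as np

def orthogonal_views(volume):
    """Split a CT volume (z, y, x) into stacks of 2D slices along the
    axial, coronal, and sagittal axes, as a view-wise ensemble would
    consume them (axis naming is an assumption)."""
    axial    = [volume[z, :, :] for z in range(volume.shape[0])]
    coronal  = [volume[:, y, :] for y in range(volume.shape[1])]
    sagittal = [volume[:, :, x] for x in range(volume.shape[2])]
    return axial, coronal, sagittal

def fuse(axial_out, coronal_out, sagittal_out):
    """Stand-in for the fourth network: a simple per-voxel average
    of the three view-wise reconstructions."""
    return (axial_out + coronal_out + sagittal_out) / 3.0

vol = np.random.rand(8, 16, 16)
ax, co, sa = orthogonal_views(vol)
# Re-stacking each view along its own axis recovers the volume geometry,
# so the fused output aligns voxel-for-voxel with the input.
fused = fuse(np.stack(ax, axis=0),
             np.stack(co, axis=1),
             np.stack(sa, axis=2))
```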
Mateus Baltazar de Almeida, Luis F. Alves Pereira, Tsang Ing Ren, George D. C. Cavalcanti, and Jan Sijbers
IEEE
The ionizing radiation that propagates through the human body during Computed Tomography (CT) exams is known to be carcinogenic. For this reason, the development of image reconstruction methods that operate with reduced radiation doses is essential. If we reduce the electrical current in the X-ray tubes of CT scanners, the amount of radiation that passes through the human body during a CT exam is reduced. However, significant image noise emerges in the reconstructed CT slices if standard reconstruction methods are applied. To estimate routine-dose CT images from low-dose CT images and thus reduce noise, the Conditional Generative Adversarial Network (cGAN) was recently proposed in the literature. In this work, we introduce the Gated Recurrent Conditional Generative Adversarial Network (GRC-GAN), which uses network gates to learn the specific regions of the input image to be updated by the cGAN denoising operation. Moreover, the GRC-GAN is executed recurrently over multiple time steps, denoising different parts of the input image at each step. As a result, our GRC-GAN focuses better on the denoising criterion than the regular cGAN on the LoDoPaB-CT benchmark.
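The gated recurrent update can be illustrated with a minimal sketch (the `denoiser` and `gate` callables are hypothetical stand-ins for the cGAN generator and the learned gate):

```python
import numpy as np

def gated_recurrent_denoise(x, denoiser, gate, steps=3):
    """Sketch of the gated recurrent update: at each time step a gate
    selects which pixels to overwrite with the denoiser's output, so
    different regions are cleaned at different steps."""
    for _ in range(steps):
        g = gate(x)                        # per-pixel values in [0, 1]
        x = g * denoiser(x) + (1 - g) * x  # update only the gated regions
    return x
```

With a gate that opens only on the first pixel and a denoiser that zeroes its input, only that pixel is altered across all steps, which is exactly the region-selective behavior the gates provide.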
Luiz Vieira e Silva Filho and George D. C. Cavalcanti
IEEE
Multiple Classifier Systems (MCS) are based on the idea that combining the opinions of several experts can produce better results than relying on a single expert. Several MCS techniques have been developed; each has its strengths and weaknesses depending on the context in which it is applied. This work presents a two-level sampling strategy for pruning methods applied to the credit scoring task. The first step of the proposal generates a pool using two well-known sampling methods, bagging and random subspace, which work complementarily to produce a diverse pool. Afterwards, a pruning method reduces the generated pool, keeping only the most competent classifiers. Thus, the proposal improves the MCS in terms of both accuracy and computational effort, since only a small percentage of the original pool is stored. The proposed architecture is evaluated in a credit scoring application, and the results show that it obtained better accuracy rates than the single best approach and methods from the literature. These results were also obtained with ensembles whose sizes were around 20% of the original pools generated in the training phase.
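The two-level strategy can be sketched as follows (a simplified NumPy sketch with hypothetical names; the base classifiers themselves are omitted and the competence scores used for pruning are assumed given):

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_pool(X, n_estimators=10, subspace_ratio=0.5):
    """First level: for each pool member, draw a bootstrap sample of
    instances (bagging) and a random subset of features (random subspace).
    Returns the (rows, cols) indices each base classifier would train on."""
    n, d = X.shape
    k = max(1, int(d * subspace_ratio))
    pool = []
    for _ in range(n_estimators):
        rows = rng.integers(0, n, size=n)            # bootstrap with replacement
        cols = rng.choice(d, size=k, replace=False)  # feature subspace
        pool.append((rows, cols))
    return pool

def prune(pool, scores, keep_ratio=0.2):
    """Second level: keep only the top-scoring ~20% of the pool, matching
    the ensemble sizes reported above (scores would come from validation)."""
    k = max(1, int(len(pool) * keep_ratio))
    best = np.argsort(scores)[::-1][:k]
    return [pool[i] for i in best]
```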
Mariana A. Souza, George D.C. Cavalcanti, Rafael M.O. Cruz, and Robert Sabourin
Elsevier BV
Rafael M.O. Cruz, Dayvid V.R. Oliveira, George D.C. Cavalcanti, and Robert Sabourin
Elsevier BV
Rafael M.O. Cruz, Robert Sabourin, and George D.C. Cavalcanti
Elsevier BV
Anandarup Roy, Rafael M.O. Cruz, Robert Sabourin, and George D.C. Cavalcanti
Elsevier BV
Tiago B.A. de Carvalho, Maria A.A. Sibaldo, Ing Ren Tsang, George D.C. Cavalcanti, Jan Sijbers, and Ing Jyh Tsang
Elsevier BV
Rafael Ferreira, George D.C. Cavalcanti, Fred Freitas, Rafael Dueire Lins, Steven J. Simske, and Marcelo Riss
Elsevier BV
Rafael M. O. Cruz, Robert Sabourin, and George D. C. Cavalcanti
Springer Science and Business Media LLC
Dayvid V.R. Oliveira, George D.C. Cavalcanti, and Robert Sabourin
Elsevier BV
Rafael M.O. Cruz, Robert Sabourin, and George D.C. Cavalcanti
Elsevier BV
Dennis Carnelossi Furlaneto, Luiz S. Oliveira, David Menotti, and George D.C. Cavalcanti
Elsevier BV
Roberto H.W. Pinheiro, George D.C. Cavalcanti, and Ing Ren Tsang
Elsevier BV
Paulo S.G. de Mattos Neto, George D.C. Cavalcanti, and Francisco Madeiro
Elsevier BV
Rafael M. O. Cruz, Hiba H. Zakane, Robert Sabourin, and George D. C. Cavalcanti
IEEE
Multiple classifier systems focus on the combination of classifiers to obtain better performance than a single robust one. These systems comprise three major phases: pool generation, selection, and integration. One of the most promising MCS approaches is Dynamic Selection (DS), which relies on finding the most competent classifier or ensemble of classifiers to predict each test sample. The majority of DS techniques are based on the K-Nearest Neighbors (K-NN) definition, and the quality of the neighborhood has a huge impact on the performance of DS methods. In this paper, we compare the classification results of DS techniques and the K-NN classifier under different conditions. Experiments are performed with 18 state-of-the-art DS techniques over 30 classification datasets, and the results show that DS methods provide a significant boost in classification accuracy even though they use the same neighborhood as the K-NN. The outperformance of DS techniques over the K-NN classifier stems from the fact that DS techniques can deal with samples with a high degree of instance hardness (samples located close to the decision border), as opposed to the K-NN. In this paper, we not only explain why DS techniques achieve higher classification performance than the K-NN but also show when DS should be used.
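One classic K-NN-based DS rule, Overall Local Accuracy (OLA), illustrates how the shared neighborhood drives selection (a minimal NumPy sketch with illustrative names, not code from the paper):

```python
import numpy as np

def ola_select(preds_val, y_val, X_val, x_test, k=7):
    """Overall Local Accuracy (OLA), a K-NN-based dynamic classifier
    selection rule: take the k nearest validation samples to x_test
    (the region of competence) and pick the classifier that is most
    accurate on them. preds_val[i, j] is classifier j's prediction
    for validation sample i."""
    d = np.linalg.norm(X_val - x_test, axis=1)
    roc = np.argsort(d)[:k]                            # region of competence
    local_acc = (preds_val[roc] == y_val[roc, None]).mean(axis=0)
    return int(np.argmax(local_acc))                   # selected classifier index
```

Note that OLA and the K-NN consult exactly the same neighbors; the gain comes from using them to pick a classifier rather than to vote directly.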
Rafael M. O. Cruz, Robert Sabourin, and George D. C. Cavalcanti
IEEE
In dynamic selection (DS) techniques, only the most competent classifiers for the classification of a specific test sample are selected to predict the sample's class label. The most important step in DS techniques is estimating the competence of the base classifiers for the classification of each specific test sample. The classifiers' competence is usually estimated using the neighborhood of the test sample defined on the validation samples, called the region of competence. Thus, the performance of DS techniques is sensitive to the distribution of the validation set. In this paper, we evaluate six prototype selection techniques that edit the validation data in order to remove noisy and redundant instances. Experiments conducted using several state-of-the-art DS techniques over 30 classification problems demonstrate that prototype selection techniques can improve the classification accuracy of DS techniques while significantly reducing the computational cost involved.
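Edited Nearest Neighbours (ENN) is a representative prototype selection technique of the kind evaluated; a minimal NumPy sketch of the editing step (hypothetical helper name):

```python
import numpy as np

def enn_edit(X, y, k=3):
    """Edited Nearest Neighbours (ENN): drop every sample whose label
    disagrees with the majority of its k nearest neighbours, yielding a
    cleaner (and smaller) validation set for defining regions of
    competence. Returns the indices of the samples to keep."""
    keep = []
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                          # exclude the sample itself
        nn = np.argsort(d)[:k]
        votes = np.bincount(y[nn], minlength=y.max() + 1)
        if votes.argmax() == y[i]:
            keep.append(i)
    return np.array(keep)
```

A point labeled 1 sitting inside a cluster of class-0 points is removed, which is exactly the kind of noisy validation instance that degrades DS competence estimation.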
Mariana A. Souza, George D. C. Cavalcanti, Rafael M. O. Cruz, and Robert Sabourin
IEEE
The Oracle model has been used not only for comparison between techniques but also in the design of different methods in Multiple Classifier Systems (MCS). Even though the model represents the ideal classifier selection scheme, Dynamic Classifier Selection (DCS) techniques present a large performance gap relative to the Oracle. This means that, for a significant number of instances, DCS techniques are not able to select a competent classifier, despite the Oracle's assurance of its presence in the pool. Given that issue, this work investigates why the Oracle model may not be well suited for guiding the search for a promising pool of classifiers for DCS techniques. For this purpose, a pool generation method that guarantees an Oracle accuracy rate of 100% on the training set is proposed. This method is further used to analyse the behavior of DCS techniques when the presence of at least one competent classifier in the pool is assured for each training sample. Experiments show that integrating Oracle information into the generation phase of an MCS has little impact on the gap between the accuracy rates of DCS techniques and the Oracle. Moreover, for a theoretical limit of 100%, the DCS techniques were only able to select a competent classifier for at most 85% of the instances, on average. Results suggest that the Oracle is not the best guide for generating a pool of classifiers for DCS, since the Oracle operates globally whereas DCS techniques work with local data only.
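The Oracle accuracy itself is straightforward to compute from a pool's predictions (a minimal sketch; the names are illustrative):

```python
import numpy as np

def oracle_accuracy(preds, y):
    """Oracle accuracy: the fraction of samples for which at least one
    classifier in the pool is correct -- the upper bound an ideal
    selection scheme could reach. preds[i, j] is classifier j's
    prediction for sample i."""
    hit = (preds == y[:, None]).any(axis=1)
    return hit.mean()
```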
Hector N. B. Pinheiro, Fernando M. P. Neto, Adriano L. I. Oliveira, Tsang Ing Ren, George D. C. Cavalcanti, and Andre G. Adami
IEEE
In this work, we investigate speaker-specific filter banks for text-independent speaker verification. The proposed method performs a heuristic search for the best filter-bank configuration using the Artificial Bee Colony (ABC) algorithm and a suitable fitness function for the standard i-vector/PLDA-based speaker verification system. Furthermore, decorrelated filter-bank amplitudes are used instead of the cepstral coefficients produced by the Discrete Cosine Transform (DCT). In the experiments, the proposed method is compared to the standard Mel and linear scales, with decorrelation performed by either the DCT or high-pass filtering. The comparison is performed on the MIT Mobile Device Speaker Verification Corpus in a gender-dependent trial scheme. The proposed method outperformed the baseline systems in almost all test sets for both genders, with performance gains of 4.6% and 26.0% for male and female speakers, respectively.
Leonardo V. Neri, Hector N.B. Pinheiro, Ing Ren Tsang, George D. da C. Cavalcanti, and Andre G. Adami
IEEE
In this paper, we propose an i-vector-based speaker segmentation method for meeting audio. The motivation is to use the Total Variability (TV) framework as a feature extractor and to exploit its ability to model speaker and channel variabilities for speaker segmentation in meetings. A distance-based segmentation method is designed using the cosine distance: a sliding window of variable length searches for speaker turns through the distance between i-vectors extracted from two segments of the same size. The experiments are conducted on the AMI Meeting Corpus, covering several conversation scenarios. Five conversations from the AMI Meeting Corpus are sampled as training data for the UBM and the TV matrix, and another ten conversations compose the test data. The experiments show an improvement in the MDR and FAR curves compared with the FixSlid approach under different distance metrics, and for most operating points when compared with the classical BIC-based WinGrow. The proposed method also has better computational performance on average, improving by 61.5% over the XBIC-based FixSlid and by 86.7% over the BIC-based WinGrow.
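The cosine-distance sliding-window idea can be sketched as follows (a simplified fixed-window NumPy version with hypothetical names; the method described above uses a variable-length window):

```python
import numpy as np

def cosine_distance(a, b):
    """Cosine distance between two i-vectors (1 - cosine similarity)."""
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def turn_scores(ivectors, win=2):
    """Slide over per-segment i-vectors and score each candidate boundary
    by the cosine distance between the averaged i-vectors of the two
    adjacent windows; peaks suggest speaker turns."""
    scores = []
    for t in range(win, len(ivectors) - win + 1):
        left  = np.mean(ivectors[t - win:t], axis=0)
        right = np.mean(ivectors[t:t + win], axis=0)
        scores.append(cosine_distance(left, right))
    return np.array(scores)
```

With two synthetic "speakers" occupying the first and second halves of a stream, the score peaks exactly at the change point.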
Luis F. Alves Pereira, Eline Janssens, George D.C. Cavalcanti, Ing Ren Tsang, Mattias Van Dael, Pieter Verboven, Bart Nicolai, and Jan Sijbers
Elsevier BV
Paulo S.G. de Mattos Neto, Tiago A.E. Ferreira, Aranildo R. Lima, Germano C. Vasconcelos, and George D.C. Cavalcanti
Elsevier BV
Bruno S. C. M. Vilar, Cezar P. Schroeder, Cristina Wada, Rayanne H. Bezerra, Leonardo L. A. Heitzmann, Rafael Simionato, and George D. C. Cavalcanti
IEEE
In smart environments, extracting relevant information from the large volumes of data collected from intelligent devices is a crucial issue. The extracted information can help automate user activities and daily chores, either suggesting or even changing the state of devices based on the user's routine. In this work, we propose a prediction architecture that combines an innovative preprocessing strategy with well-known classification algorithms for environment automation. The preprocessing enhances the datasets by including features and organizing them in structures that improve the classification results. Using datasets collected from a real home equipped with sensors, we verify which preprocessing parameters have a significant impact on prediction performance. In simulations, the avNNet, mlp, and C5.0 classifiers attained the highest accuracies according to the Friedman and Nemenyi statistical tests, but none of them outperformed the others in all scenarios under this architecture.