Dr. Chandrashekhar B N

@nmit.ac.in

ASSOCIATE PROFESSOR, DEPARTMENT OF INFORMATION SCIENCE AND ENGINEERING
Nitte Meenakshi Institute of Technology.



                 

https://researchid.co/cnaikodi

Dr. B. N. Chandrashekhar is an Associate Professor at Nitte Meenakshi Institute of Technology. He received a B.E. degree in computer science and engineering from Visvesvaraya Technological University, India, in 2004 and an M.Tech degree in computer science and engineering from Visvesvaraya Technological University, India, in 2010. He obtained his Ph.D. in computer science and engineering from Visvesvaraya Technological University, India, in 2021. His research interests include hybrid (CPU-GPU) computing, parallel and distributed systems, and performance modeling of parallel HPC applications. He has published papers in peer-reviewed journals, conference proceedings, and book chapters. He is currently working as an Associate Professor in the Department of Information Science and Engineering at Nitte Meenakshi Institute of Technology, Bengaluru, India.

EDUCATION

• Ph.D. in Hybrid (CPU+GPU) Computing, “Performance-Driven Framework for HPC Applications on a CPU-GPU Hybrid Platform,” awarded in 2021 from the research centre of Nitte Meenakshi Institute of Technology, affiliated to Visvesvaraya Technological University (VTU), Belagavi, Karnataka.

• Master’s Degree in Engineering (M.Tech, First Class) in Computer Science and Engineering, 2010, from Nitte Meenakshi Institute of Technology, Bangalore, under Visvesvaraya Technological University, Belgaum, Karnataka.

• Bachelor’s Degree in Engineering (B.E., First Class) in Computer Science and Engineering, 2004, from P.D.A. College of Engineering, Gulbarga, under Visvesvaraya Technological University, Belgaum, Karnataka.

• Three-year Diploma in Computer Science and Engineering, 1999, from Government Polytechnic, Gulbarga, under the Board of Technical Education, Karnataka.

RESEARCH INTERESTS

• Hybrid [CPU-GPU] Computing, Parallel Computing, Cloud Computing
• Performance modeling of parallel HPC applications, workload division and scheduling of HPC applications in hybrid computing
• Artificial Intelligence and Machine Learning in hybrid computing

FUTURE PROJECTS

Balancing of Web Applications Workload Using Hybrid Computing (CPU-GPU) Architecture

In current network systems there is no proper workload management for web applications, nor monitoring of the interaction between users and web applications. Owing to the absence of centralized management in mainstream network systems, the challenges faced include maintaining a proper memory ratio, data transfer between user and device, host handling methods, and sustained utilization of CPUs and GPUs. To make graphical resources highly available to the network environment, an efficient and enhanced hybrid web-application workload-balancing service is important. It automates the tasks to be processed and reduces overall processing time, lowering administrative costs. Service delivery is also part of hybrid computing: it governs how each job provisions the respective services for each user. Each user’s information is stored in a database and used for authentication, which reduces the login time each time a user connects.
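The sketch below illustrates, in Python, the kind of routing decision such a workload-balancing service could make. All names (Job, the 0.5 parallel-fraction threshold, the 512 MB transfer limit) are hypothetical placeholders chosen for illustration, not part of the proposed system.

```python
# Minimal sketch of a hybrid web-workload dispatcher (illustrative assumptions only).
from dataclasses import dataclass
from queue import Queue

@dataclass
class Job:
    user: str
    data_mb: float            # payload to transfer between user and device
    parallel_fraction: float  # rough share of the work that parallelises well

cpu_queue: Queue = Queue()
gpu_queue: Queue = Queue()
authenticated_users: dict = {}  # cached credentials, so users do not log in on every request

def authenticate(user: str) -> bool:
    # Hypothetical lookup; a real system would query the user database once
    # and reuse the cached result on later requests.
    return authenticated_users.setdefault(user, True)

def dispatch(job: Job) -> str:
    """Route a web-application job to the CPU or GPU queue."""
    if not authenticate(job.user):
        return "rejected"
    # Illustrative heuristic: highly parallel jobs with modest transfer cost
    # go to the GPU queue, everything else stays on the CPU queue.
    if job.parallel_fraction > 0.5 and job.data_mb < 512:
        gpu_queue.put(job)
        return "gpu"
    cpu_queue.put(job)
    return "cpu"

print(dispatch(Job(user="alice", data_mb=64, parallel_fraction=0.9)))  # -> gpu
```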


Applications Invited
SCOPUS PUBLICATIONS (17)

  • Impact of Hybrid [CPU-GPU] Architecture on Machine Learning-based Image-to-Image Translation Using HiDT
    Kantharaju V, Chandrashekhar B N, Niranjanamurthy M, and Murthy SVN

    IEEE
    Image-to-image translation is the process of transforming an image from one domain to another, where the goal is to learn the mapping between an input image and an output image. This task has generally been performed using a training set of aligned image pairs on CPU-based architectures with few cores, which aim to transfer images from a source domain to a target domain while preserving the content representations, but consume more execution time. Owing to its broad range of applications in computer vision and image processing problems, including image synthesis, segmentation, style transfer, restoration, and pose estimation, GPU-based image-to-image translation has attracted growing attention and made enormous progress in recent years. It can be utilized for a variety of purposes, including photo enhancement, object transformation, season transfer, and collection style transfer. CPU-only and GPU-only architectures struggle to speed up the image processing task, especially when re-rendering the same scene under the illumination characteristic of day, night, or dawn. To address this issue, this work proposes a hybrid CPU-GPU architecture with HiDT technology to perform image translation at high speed. On the hybrid CPU-GPU architecture it is possible to train a multi-domain image-to-image translation model with HiDT on datasets of unaligned images of varying size, without domain labels, when the technology is integrated into an application. The speed of the mentioned application can be achieved by using emerging technologies such as pix2pixHD and HiDT on the hybrid architecture, where pix2pixHD is a deep-learning-based technique for high-resolution photorealistic image-to-image translation implemented in PyTorch. This article presents the impact of a hybrid architecture on machine-learning-based image-to-image translation using HiDT.
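As a rough illustration of the hybrid idea in this abstract, the sketch below splits an image batch between the CPU and the GPU in PyTorch. The translator here is a stand-in nn.Conv2d module and the 0.8 GPU share is an arbitrary assumption; the actual work would load a pretrained HiDT or pix2pixHD generator instead.

```python
# Sketch: divide an unaligned image batch between CPU and GPU workers (assumptions only).
import torch
import torch.nn as nn

def split_batch(images: torch.Tensor, gpu_share: float = 0.8):
    """Send gpu_share of the batch to the GPU (if present); keep the rest on the CPU."""
    if not torch.cuda.is_available():
        return images, images[:0]           # no GPU: everything stays on the CPU
    cut = int(len(images) * gpu_share)
    return images[cut:], images[:cut].to("cuda")

# Placeholder "translators"; a real pipeline would load HiDT / pix2pixHD weights here.
translator_cpu = nn.Conv2d(3, 3, kernel_size=3, padding=1)
translator_gpu = nn.Conv2d(3, 3, kernel_size=3, padding=1)
if torch.cuda.is_available():
    translator_gpu = translator_gpu.to("cuda")

batch = torch.rand(16, 3, 256, 256)         # 16 source-domain images
cpu_part, gpu_part = split_batch(batch)
with torch.no_grad():
    out_cpu = translator_cpu(cpu_part)
    out_gpu = translator_gpu(gpu_part) if len(gpu_part) else gpu_part
print(out_cpu.shape, out_gpu.shape)
```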

  • Impact of Hybrid [CPU+GPU] HPC Infrastructure on AI/ML Techniques in Industry 4.0
    B. N. Chandrashekhar, H. A. Sanjay, and V. Geetha

    CRC Press

  • Magnetic Coupling Resonant Wireless Power Transmission
    B. A. Manjunatha, K. Aditya Shastry, P. Kishor Kumar Naik, and B. N. Chandrashekhar

    Springer Nature Singapore

  • Balancing of Web Applications Workload Using Hybrid Computing (CPU–GPU) Architecture
    B. N. Chandrashekhar, V. Kantharaju, N. Harish Kumar, and Lithin Kumble

    Springer Science and Business Media LLC

  • Forecast Model for Scheduling an HPC Application on CPU and GPU Architecture
    Chandrashekhar B N, Mohan M, and Geetha V

    IEEE
    Process scheduling is an essential part of multiprogramming operating systems. Scheduling allows one process to use the processing unit while another process is on hold (in a waiting state) due to the unavailability of a resource such as I/O, thereby making full use of the CPU or GPU. The major issue in scheduling is making the system efficient, fast, and fair. This work focuses on developing a forecast model and constructing scheduling strategies to schedule parallel applications on the CPU and GPU. In designing the forecast model, we consider the history of actual execution times for sets of processes and compute the average time of each set from parameters such as the complete execution time, the number of processes, and the number of threads assigned to individual processes. We then evaluate the predicted CPU and GPU time for each set of processes from parameters such as the average time of the previous set of processes, the weight of the processes, and the number of processes. Based on the predicted time we develop a scheduling strategy: the set of processes with the minimum predicted time is assigned to the CPU, while the set with the maximum predicted time is assigned to the GPU. In this work we utilized the CPU and GPU resources effectively for the stream benchmark application; our experiments show less than 20% average percentage prediction error in all cases.
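A toy version of this forecast-and-schedule loop is sketched below. The history values and the simple averaging are made-up placeholders; the paper's actual model also weights processes and thread counts, which is omitted here.

```python
# Illustrative forecast model: average past run times per process set,
# then assign the minimum-prediction set to the CPU and the maximum to the GPU.
history = {
    # process-set name -> list of past (cpu_time, gpu_time) pairs in seconds
    "set_A": [(12.0, 4.0), (11.5, 4.2), (12.3, 3.9)],
    "set_B": [(3.0, 2.5), (2.8, 2.7), (3.2, 2.6)],
}

def average(times):
    return sum(times) / len(times)

predictions = {}
for name, runs in history.items():
    cpu_times = [c for c, _ in runs]
    gpu_times = [g for _, g in runs]
    predictions[name] = (average(cpu_times), average(gpu_times))

# Scheduling strategy from the abstract: minimum predicted time -> CPU,
# maximum predicted time -> GPU.
best_time = lambda name: min(predictions[name])
schedule = {
    min(predictions, key=best_time): "CPU",
    max(predictions, key=best_time): "GPU",
}
print(predictions)
print(schedule)   # e.g. {'set_B': 'CPU', 'set_A': 'GPU'}
```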



  • Performance Model of HPC Application on CPU-GPU Platform
    B. N. Chandrashekhar, K. Aditya Shastry, B. A. Manjunatha, and V. Geetha

    IEEE
    In recent years, the world of high-performance computing has been developing rapidly, with enormous effort in integrating information technology and research. The emergence of the CPU-GPU platform has made this possible in a very efficient manner. Nowadays the graphics processing unit (GPU) delivers much better performance than the CPU: the CPU has a few cores with a lot of cache memory and can handle only a few software threads at a time, whereas a GPU is composed of hundreds of cores that can handle thousands of threads simultaneously. The CPU-GPU hybrid platform is becoming increasingly important in high-performance computing (HPC) domains such as deep learning and artificial intelligence because of its tremendous computing power. In this work, we propose a performance model to accelerate the performance of HPC applications on a hybrid CPU-GPU platform. We tested and analyzed the proposed performance model using HPC benchmark applications such as merge sort and matrix multiplication on different platforms: sequential, OpenMP, MPI on a single system, MPI on a cluster, and CUDA. We observed that parallel computing on shared and distributed memory architectures gives better performance than sequential computing. The results are presented as graphs for a clearer view. Index Terms: hybrid computing, parallel computing, sequential computing, CUDA, MPI, OpenMP, CPU, GPU.

  • Performance Analysis of Parallel Programming Paradigms on CPU-GPU Clusters
    B N Chandrashekhar, H A Sanjay, and Tulasi Srinivas

    IEEE
    CPU-GPU-based cluster computing in today’s modern world encompasses the domain of complex and high-intensity computation. To exploit efficient resource utilization of a cluster, the traditional programming paradigm is not sufficient. Therefore, this article analyzes the performance of parallel programming paradigms such as OpenMP on a CPU cluster and CUDA on a GPU cluster using BFS and DFS graph algorithms. The article analyzes the time efficiency of traversing graphs with a given number of nodes on two different processors. Here, the CPU with the OpenMP platform and the GPU with the CUDA platform support multi-thread processing to yield results for various node counts. From the experimental results, it is observed that parallelization with the OpenMP programming model using the graph algorithms does not boost the performance of the CPU processors; instead, it decreases performance by adding overheads such as idling time, inter-thread communication, and excess computation. On the other hand, the CUDA parallel programming paradigm on the GPU yields better results: the implementation achieves a speed-up of 187 to 240 times over the CPU implementation. This comparative study assists programmers in selecting the optimum choice between the OpenMP and CUDA parallel programming paradigms.

  • Performance analysis of sequential and parallel programming paradigms on CPU-GPUs Cluster
    B N Chandrashekhar and H A Sanjay

    IEEE
    The world of parallel computing underwent a change as accelerators were gradually embraced in today’s high-performance computing clusters. A hybrid CPU-GPU cluster is required to speed up complex computations using parallel programming paradigms. This paper deals with the performance evaluation of sequential, parallel, and hybrid programming paradigms on a hybrid CPU-GPU cluster using sorting strategies such as quick sort, heap sort, and merge sort. In this research work, a performance comparison of C, MPI, and hybrid [MPI+CUDA] on CPU-GPU hybrid systems is performed using these sorting strategies. From the analysis it is observed that the performance of the parallel programming paradigm MPI is better than the sequential programming model. The work also evaluates the performance of CUDA on GPUs and the hybrid programming model [MPI+CUDA] on a CPU+GPU cluster using the merge sort strategy, and it is noticed that the hybrid programming model [MPI+CUDA] performs better than the traditional approach and the parallel programming paradigms MPI and CUDA. When the overall performance of all three programming paradigms is compared, MPI+CUDA on the CPU+GPU environment gives the best speedup.

  • Prediction Model of an HPC Application on CPU-GPU Cluster using Machine Learning Techniques
    B N Chandrashekhar and H.A Sanjay

    IEEE
    Today's hybrid computing clusters comprise nodes based on high-intensity-computation central processing units (CPUs) and graphics processing units (GPUs). In this article, a novel analytical prediction model is designed considering parameters such as the number of CPU and GPU cores, peripheral component interconnect express (PCI-E) bandwidth, and CPU-GPU memory-access bandwidth for varying input data sizes, recorded as historical data. A portion of the data is used to train the prediction model. Predicted execution time is compared against actual execution time to improve the accuracy of the model and reduce or remove errors. The proposed prediction model is the major module of a scheduling strategy that schedules the high-performance computing (HPC) application on the resources of the heterogeneous cluster giving the least predicted execution time. The proposed predictive scheduling scheme with its scheduling strategy has been tested using the Game of Life benchmark application on a CPU-GPU cluster. The prediction model has been compared against machine learning techniques, and it is observed that the proposed analytical prediction model achieves less than 19% prediction error. The performance of our predictive scheduling scheme has also been compared with the best existing scheme, TORQUE, and it is observed that the predictive scheduling scheme is 63% more efficient than TORQUE.
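To make the idea concrete, the sketch below fits an execution-time predictor by least squares over the parameters the abstract lists and reports a percentage prediction error. All numbers are synthetic placeholders, not the paper's measurements.

```python
# Sketch: fit t = w . [cores, PCI-E BW, memory BW, input size] + b from historical runs.
import numpy as np

# columns: CPU+GPU cores, PCI-E bandwidth (GB/s), memory bandwidth (GB/s), input size (MB)
X = np.array([
    [16,  8, 150,  256],
    [16,  8, 150,  512],
    [32, 16, 300,  512],
    [32, 16, 300, 1024],
], dtype=float)
y = np.array([4.1, 7.9, 4.3, 8.2])          # measured execution times (s), synthetic

A = np.hstack([X, np.ones((len(X), 1))])    # append a bias column
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(cores, pcie_bw, mem_bw, size_mb):
    return float(np.dot(coeffs, [cores, pcie_bw, mem_bw, size_mb, 1.0]))

actual, predicted = 8.0, predict(32, 16, 300, 1000)
error_pct = abs(predicted - actual) / actual * 100
print(f"predicted {predicted:.2f}s, error {error_pct:.1f}%")
```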

  • Prediction Model for Scheduling an Irregular Graph Algorithms on CPU-GPU Hybrid Cluster Framework
    B.N. Chandrashekhar, H.A. Sanjay, and H. Lakshmi

    IEEE
    Innovation in science, technology, and industry is currently being driven by clusters based on hybrid many-core graphics processing units (GPUs) and multi-core central processing units (CPUs). In this article, we design a prediction model using parameters such as the computation and communication costs of irregular graph algorithms. To schedule irregular graph algorithms on the best set of processors of the hybrid cluster, we use prediction models that predict the execution time of the algorithms, recorded as a performance history of the data. A reasonable set of data is used to train the prediction model. The data forecast by the model is compared against actual runtimes to increase the accuracy of the model and decrease or eliminate errors. We tested our scheduling strategy using the irregular graph algorithms Breadth-First Search (BFS) and Depth-First Search (DFS) as benchmark applications on the hybrid cluster. Our algorithm shows up to a 75.32% average performance improvement for BFS against TORQUE and 89.68% for DFS, with an 18.52% average percentage prediction error compared to the linear regression model.
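The abstract's split between computation and communication cost can be illustrated with a one-formula example; the constants below are arbitrary assumptions, not measurements from the paper.

```python
# Sketch: predicted runtime = compute cost + communication cost for a graph workload.
def predict_runtime(vertices, edges, flops_per_vertex, gflops, bytes_per_edge, bandwidth_gbs):
    compute_s = vertices * flops_per_vertex / (gflops * 1e9)   # work on the processor
    comm_s = edges * bytes_per_edge / (bandwidth_gbs * 1e9)    # data moved over the link
    return compute_s + comm_s

# BFS-like workload on a hypothetical GPU node of the hybrid cluster.
t = predict_runtime(vertices=1_000_000, edges=10_000_000,
                    flops_per_vertex=200, gflops=500,
                    bytes_per_edge=8, bandwidth_gbs=16)
print(f"predicted runtime: {t * 1000:.2f} ms")
```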

  • Performance Framework for HPC Applications on Homogeneous Computing Platform
    Chandrashekhar B. N. and Sanjay H. A.

    MECS Publisher
    In scientific fields, solving large and complex computational problems using central processing units (CPUs) alone is not enough to meet the computation requirement. In this work we consider a homogeneous cluster in which each node has CPUs and graphics processing units (GPUs) of the same capability. Normally the CPU is used to control the GPU and to transfer data from the CPU to the GPUs. Here we combine CPU computation power with the GPU to compute high-performance computing (HPC) applications. The framework adopts the pinned-memory technique to overcome the overhead of data transfer between CPU and GPU. To enable the homogeneous platform we adopt a hybrid [message passing interface (MPI), OpenMP (open multi-processing), Compute Unified Device Architecture (CUDA)] programming-model strategy. The key challenge on the homogeneous platform is the allocation of workload among CPU and GPU cores. To address this challenge we propose a novel analytical workload division strategy to predict an effective workload division between the CPU and GPU. Using our hybrid programming model and workload division strategy, we observe an average performance improvement of 76.06% and 84.11% in giga floating-point operations per second (GFLOPS) on an NVIDIA TESLA M2075 cluster and NVIDIA QUADRO K2000 nodes of a cluster, respectively, for N-dynamic vector addition when compared with the performance models of Simplice Donfack et al. [5]. Using the pinned-memory technique with the hybrid programming model, an average performance improvement of 33.83% and 39.00% on the NVIDIA TESLA M2075 and NVIDIA QUADRO K2000, respectively, is observed for saxpy applications when compared with the pageable-memory technique.
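A minimal sketch of a proportional CPU/GPU split, in the spirit of the workload-division idea this abstract describes, is given below; the GFLOPS figures are placeholder assumptions, and the paper's actual analytical strategy is more detailed than this single ratio.

```python
# Sketch: split N elements between CPU and GPU in proportion to measured throughput.
def split_workload(n_elements: int, cpu_gflops: float, gpu_gflops: float):
    """Return (cpu_count, gpu_count); the faster device gets proportionally more work."""
    cpu_count = int(n_elements * cpu_gflops / (cpu_gflops + gpu_gflops))
    return cpu_count, n_elements - cpu_count

# Example: vector addition of 10 million elements on a node whose GPU sustains
# roughly 10x the CPU's throughput (illustrative numbers).
cpu_n, gpu_n = split_workload(10_000_000, cpu_gflops=50.0, gpu_gflops=500.0)
print(cpu_n, gpu_n)   # ~909,090 elements for the CPU, the rest for the GPU
```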

  • Performance study of OpenMP and hybrid programming models on CPU–GPU cluster
    B. N. Chandrashekhar and H. A. Sanjay

    Springer Singapore

  • Implementation of image inpainting using OpenCV and CUDA on CPU-GPU environment



  • Parameters tuning of OLSR routing protocol with metaheuristic algorithm for VANET
    Anusha Bandi and Chandrashekhar B. N

    IEEE
    A vehicular ad hoc network (VANET) provides the ability for vehicles to communicate wirelessly. Network fragmentation, frequent topology changes (mobility of the nodes), and the limited coverage of Wi-Fi are issues in VANETs that arise due to the absence of a central manager entity. For these reasons, routing packets within the network is a difficult task; hence, provisioning an adept routing strategy is vital for the deployment of VANETs. Optimized Link State Routing (OLSR) is a well-known mobile ad hoc network routing protocol. In this paper, we propose an optimization strategy to fine-tune a few parameters of the OLSR protocol using a metaheuristic method. We consider quality parameters such as packet delivery ratio, latency, throughput, and fitness value for fine-tuning the OLSR protocol, and then compare the genetic algorithm and the particle swarm optimization algorithm using these QoS parameters. We implemented our work on the Red Hat Enterprise Linux 6 platform, and results are shown through simulations using the VanetMobiSim and NS2 simulators; the fine-tuned OLSR protocol behaves better than the original routing protocol with intelligent and optimized configuration.
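A toy genetic-algorithm loop of the kind this abstract describes is sketched below. The fitness function is a synthetic stand-in (in the paper the QoS values come from VanetMobiSim/NS2 simulations), and the two OLSR timer parameters are chosen purely for illustration.

```python
# Toy GA for tuning OLSR timer parameters against a QoS fitness (assumptions only).
import random

def qos_fitness(hello_interval: float, tc_interval: float) -> float:
    # Placeholder: reward values near (HELLO=1.5s, TC=4.0s). A real evaluation
    # would combine packet delivery ratio, latency and throughput from NS2 runs.
    return -((hello_interval - 1.5) ** 2 + (tc_interval - 4.0) ** 2)

def evolve(generations=50, pop_size=20, mutation=0.2):
    pop = [(random.uniform(0.5, 5.0), random.uniform(1.0, 10.0)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: qos_fitness(*ind), reverse=True)
        parents = pop[: pop_size // 2]                                    # selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)                # crossover
            child = tuple(g + random.gauss(0, mutation) for g in child)   # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda ind: qos_fitness(*ind))

best_hello, best_tc = evolve()
print(f"tuned HELLO_INTERVAL={best_hello:.2f}s, TC_INTERVAL={best_tc:.2f}s")
```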

RECENT SCHOLAR PUBLICATIONS

    Publications

    1) B. N. Chandrashekhar and H. A. Sanjay, “Accelerating Real-Time Face Detection Using Cascade Classifier on Hybrid [CPU-GPU] HPC Infrastructure”, Seventh International Conference on “Emerging Research in Computing, Information, Communication, and Applications” (ERCICA-2022), held in blended mode during 25th-26th February 2022 at Nitte Meenakshi Institute of Technology (NMIT), Bangalore; Springer Singapore. ISBN 978-981-19-5481-8 [SCOPUS]

    2) B. N. Chandrashekhar, H. A. Sanjay and T. Srinivas, "Performance Analysis of Parallel Programming Paradigms on CPU-GPU Clusters," 2021 International Conference on Artificial Intelligence and Smart Systems (ICAIS), ©2021 IEEE, pp. 646-651, doi: 10.1109/. [SCOPUS]
    3) B. N. Chandrashekhar and H. A. Sanjay, "Performance Analysis of Sequential and Parallel Programming Paradigms on CPU-GPUs Cluster", IEEE 3rd International Conference on Intelligent Communication Technologies and Virtual Mobile Networks (ICICV 2021), India, pp. 1205-1213, ©2021 IEEE, DOI: 10.1109/ [SCOPUS]
    4) B. N. Chandrashekhar, H. A. Sanjay, and H. Lakshmi, "Prediction Model for Scheduling an Irregular Graph Algorithms on CPU–GPU Hybrid Cluster Framework", 2020 IEEE International Conference on Inventive Computation Technologies (ICICT), Coimbatore, India, 2020, pp. 584-589, DOI: 10.1109/. [SCOPUS]
    5) B. N. Chandrashekhar and H. A. Sanjay, "Prediction Model of an HPC Application

    RESEARCH OUTPUTS (PATENTS, SOFTWARE, PUBLICATIONS, PRODUCTS)

    Conferences and journals

    Industry, Institute, or Organisation Collaboration

    Load-sharing technology

    INDUSTRY EXPERIENCE

 • Worked as a Software Engineer at Hexaware Technologies, Chennai, from Feb 2005 to Mar 2006.

 • Worked as a Software Engineer at Global Softech, Bangalore, from June 2004 to Feb 2005.