@igdtuw.ac.in
Associate Professor
Indira Gandhi Delhi Technical University for Women, Kashmere Gate, Delhi
Image and Video Processing, Language Processing
Amol D. Gaikwad, Kavita R. Singh, and Shailesh D. Kamble
Springer Science and Business Media LLC
Amol D. Gaikwad, Kavitha R. Singh, and Shailesh D. Kamble
Springer Science and Business Media LLC
Abstract This paper discusses the design of a novel hybrid bioinspired model for task- and VM-dependency- and deadline-aware scheduling via dual service level agreements. The model uses a combination of grey wolf optimization and the league championship algorithm to perform efficient scheduling operations. These optimization techniques model a fitness function that incorporates task make-span, task deadline, mutual dependencies with other tasks, the capacity of VMs, and the energy needed for scheduling operations, which improves scheduling performance across multiple use cases. To perform these tasks, the model initially deploys a task-based service level agreement (SLA) method, which enhances task and requesting-user diversity. This is followed by the design of a VM-based SLA model, which reconfigures the VM's internal characteristics to accommodate multiple task types. The model also integrates deadline awareness along with task-level and VM-level dependency awareness, which improves its scheduling performance under real-time task and cloud scenarios. The proposed model is able to improve cloud utilization by 8.5%, increase task diversity by 8.3%, reduce the delay needed for resource provisioning by 16.5%, and reduce energy consumption by 9.1%, making it suitable for a wide variety of real-time cloud deployments.
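The abstract describes a fitness function combining make-span, deadline, dependency, capacity, and energy terms. A minimal sketch of that kind of multi-objective score is given below; the weights, term definitions, and function name are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical multi-objective fitness for VM/task scheduling (lower = better).
# Terms mirror the abstract: make-span, deadline, dependencies, VM capacity, energy.
def scheduling_fitness(makespan, deadline, energy, vm_capacity, load,
                       dependency_penalty, w=(0.3, 0.3, 0.2, 0.2)):
    """Combine make-span, lateness, energy, and VM overload into one score."""
    deadline_violation = max(0.0, makespan - deadline)   # lateness beyond deadline
    over_subscription = max(0.0, load - vm_capacity)     # VM overload
    w_m, w_d, w_e, w_c = w
    return (w_m * makespan
            + w_d * (deadline_violation + dependency_penalty)
            + w_e * energy
            + w_c * over_subscription)
```

An optimizer such as GWO or LCA would search candidate schedules for the one minimising a score of this shape.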
Dilip Kumar Jang Bahadur Saini, Shailesh D. Kamble, Ravi Shankar, M. Ranjith Kumar, Dhiraj Kapila, Durga Prasad Tripathi, and Arunava de
Elsevier BV
Lalit B. Damahe, Nileshsingh V. Thakur, Sachin R. Jain, and Shailesh D. Kamble
Taru Publications
The retrieval of similar types of data is performed by issuing a query, generally through a personal computer. As smart mobile devices are now widely available to general users, retrieval of visually similar data is increasingly performed on devices such as smartphones and PDAs. The wireless medium is the key component of mobile devices, and retrieving visually similar images with minimum latency is a typical issue. Hence, the main objective of the proposed work is to develop an efficient mechanism for image retrieval on mobile devices. The image representation scheme V-HOG captures useful information from the image with the help of gradient vectors, where the gradients are calculated using various block sizes. Block, cell, and bin sizes are the main components varied to develop V-HOG variants with feature vector lengths of 1296, 1176, and 672. The experiments are carried out on the standard Holidays dataset. The proposed V-HOG approach with feature lengths 1296, 1176, and 672 performs better than the existing HOG: in comparison with HOG, the precision and recall rates for V-HOG improve by 27% to 90% and 7% to 32%, respectively, and the retrieval time is reduced by 3% to 9%. The V-HOG variant with 1176 features performs particularly well, with good precision and recall rates.
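The core of HOG-style descriptors such as V-HOG is binning gradient magnitudes by orientation. A minimal sketch over a plain 2-D list is shown below; the real V-HOG varies block, cell, and bin sizes (hence the 1296/1176/672-dimensional variants), and the parameter values here are only illustrative.

```python
import math

# Accumulate gradient magnitudes into orientation bins over 0..180 degrees,
# the basic building block of HOG-family descriptors.
def gradient_histogram(img, n_bins=9):
    """Return an orientation histogram of central-difference gradients."""
    h, w = len(img), len(img[0])
    hist = [0.0] * n_bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]      # horizontal gradient
            gy = img[y + 1][x] - img[y - 1][x]      # vertical gradient
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0
            hist[int(ang / 180.0 * n_bins) % n_bins] += mag
    return hist
```

A full descriptor concatenates such histograms over cells and normalises them per block; varying those sizes changes the final feature length.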
Shailesh D. Kamble, Dilip Kumar Jang Bahadur Saini, Sachin Jain, Kapil Kumar, Sunil Kumar, and Dharmesh Dhabliya
Taru Publications
Searching for videos in large databases, i.e., in multimedia applications, is a major challenge. Therefore, video indexing is used to quickly locate a particular video in a large database; quick location of the video is the mark of good indexing. Still, there is scope for improvement in quickly searching a video in a large database in terms of assigning labels to videos. In computer vision, real-time object detection and tracking is a vast, vibrant, yet unsettled and intricate area. The You Only Look Once (YOLO) algorithm is used to detect the object, and background subtraction is used to track it. In this paper, video indexing using object detection/tracking is performed on a single object in a video; in future work, video indexing can be performed on multiple objects in a video.
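The tracking stage paired with YOLO here is background subtraction. As an illustrative stand-in (not the paper's implementation), simple frame differencing followed by a centroid estimate captures the idea; the threshold and function names are assumptions.

```python
# Frame-differencing sketch of background subtraction: pixels whose intensity
# changed beyond a threshold are treated as the moving object.
def moving_pixels(prev_frame, frame, thresh=20):
    """Return (y, x) coordinates whose intensity changed by more than thresh."""
    return [(y, x)
            for y, row in enumerate(frame)
            for x, v in enumerate(row)
            if abs(v - prev_frame[y][x]) > thresh]

def centroid(points):
    """Track the object as the mean position of its moving pixels."""
    ys = [p[0] for p in points]
    xs = [p[1] for p in points]
    return (sum(ys) / len(ys), sum(xs) / len(xs))
```

In practice OpenCV's background-subtraction classes maintain an adaptive background model rather than differencing against a single previous frame.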
Priyanka Tomar, Nonita Sharma, and Shailesh D. Kamble
IEEE
Trending machine learning algorithms have been applied in almost all spheres of life and are able to give accurate results. However, these algorithms do not perform aspect analysis beforehand to examine the nature and inferences of the data. This research work presents an aspect analysis of heart failure data. In today's busy lifestyle, it is very important to know the health of one's heart. This work showcases different types of models for cardiovascular disease prediction from heart patients' data and detects various heart diseases. The work is developed using a data visualization approach in Python, plotting graphs between different attributes in the dataset, such as age, cholesterol, trestbps, sex, and fbs, and analyzing those graphs. The work visualizes the pattern of heart disease according to the age and gender of a heart patient and aims to present profound and relevant results that help doctors analyze and provide proper treatment to heart patients.
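Behind plots like "disease by age and gender" sits a simple group-by aggregation. The sketch below illustrates that step with made-up records; the field names (`sex`, `target`) follow the common heart-disease dataset convention but are assumptions here.

```python
from collections import defaultdict

# Fraction of patients with heart disease per value of a grouping attribute
# (e.g. age band or sex), the quantity such visualizations plot.
def disease_rate_by_group(records, key):
    """Return {group value: positive fraction}; target 1 = disease present."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[key]] += 1
        positives[r[key]] += r["target"]
    return {g: positives[g] / totals[g] for g in totals}
```

The resulting dictionary is what a bar chart of disease rate by group would display.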
Mayur Jiwtode, Aman Asati, Shailesh Kamble, and Lalit Damahe
IEEE
Over the past few years, free mobile application tools based on Artificial Intelligence and deep learning have made it easy to create convincing face swaps in video, called "DeepFake" (DF) videos, which leave few traces by which to check whether they are fake. Digitally edited video has been demonstrated for a long time using enhanced visual effects; recently, Artificial Intelligence has led to a rise in fake content and in free tools to create it. Such AI-engineered media are commonly called DeepFakes (DF). Creating a DF with a computerized AI tool is a simple job; identifying it, however, is a major challenge, as training a model to distinguish DFs is difficult. We attempt to recognize DFs using a CNN and an RNN: the framework uses a CNN for feature extraction at the frame level, and these frame-level features train the RNN, which learns to classify videos according to their temporal inconsistencies. Results were evaluated against a large number of hoax videos gathered from standard datasets, and we show how a simple architecture can make the framework accurate at this task.
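The pipeline is: per-frame CNN features, then a recurrent pass over time to score the video. A toy sketch of the recurrent half is shown below; the single tanh cell, fixed weights, and sigmoid read-out are assumptions standing in for a trained CNN+RNN.

```python
import math

# One-unit recurrent classifier over per-frame features: the state h carries
# temporal context across frames, and the final state is squashed to a
# fake/real probability. Weights here are fixed placeholders, not trained.
def rnn_classify(frame_features, w_in=0.5, w_rec=0.9, w_out=1.0):
    """Return a score in (0, 1) from a sequence of scalar frame features."""
    h = 0.0
    for f in frame_features:                    # temporal pass over frames
        h = math.tanh(w_in * f + w_rec * h)     # recurrent state update
    return 1.0 / (1.0 + math.exp(-w_out * h))   # sigmoid read-out
```

In the actual system each frame's feature would be a high-dimensional CNN embedding and the RNN (e.g. an LSTM) would be trained end to end on labelled real/fake videos.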
Sonali Gaiki, Hrucha Deshmukh, Sanjana Gore, Ramya Maddipati, and Shailesh Kamble
IEEE
As COVID-19 causes a health crisis and deaths today, it has become essential to wear a mask for protection from the coronavirus. Even in crowded public areas, masks should be worn, since masks limit person-to-person spread even if someone in the crowd is COVID-positive. This paper introduces face mask detection that authorities can use for mitigation, evaluation, prevention, and action planning against COVID-19. This Face Mask Detection System uses Python, Keras, and OpenCV along with MobileNet. The pipeline includes steps such as data preprocessing, training and testing the model, checking the accuracy, and applying the model to a camera feed. The input provided here comprises 1000+ images of people with and without masks. The data is first preprocessed; then, by examining the features of each image, the model is trained to separate people into two categories: with mask and without mask. If a person is detected wearing a mask with 90% or greater confidence, they are added to the with-mask category; otherwise, to the without-mask category, so that masked people can be permitted into public areas.
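The 90% decision rule described above reduces to a confidence threshold on the classifier's output. A minimal sketch follows; in the actual system the score would come from the MobileNet classifier, whereas here it is simply passed in.

```python
# Threshold rule from the description: route a face to "with mask" only when
# the classifier's confidence is at least 90%.
def classify_mask(score, threshold=0.90):
    """Return the category for a mask-confidence score in [0, 1]."""
    return "with mask" if score >= threshold else "without mask"
```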
Shailesh Kamble, Dilip Kumar J. Saini, Vinay Kumar, Arun Kumar Gautam, Shikha Verma, Ashish Tiwari, and Dinesh Goyal
Informa UK Limited
Abstract In cloud computing, services are observed in the video stream, and clustering their pixels is the initial task in service detection. Tracking is the practice of observing the movements of a given item in each frame; numerous false positives are included in the frames. Using a saliency map model and the Extended Kalman Filter, the proposed approach can recognize and track moving objects in video, with the Extended Kalman Filter used to track the item. In the proposed research, the evaluation is based on delay and accuracy as evaluation parameters. Finally, the suggested method is compared to existing object tracking methods, attaining an accuracy greater than 90%.
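For intuition, a scalar linear Kalman filter is sketched below as a stand-in for the paper's Extended Kalman Filter (the EKF additionally linearises a non-linear motion/measurement model via Jacobians). The noise constants are assumptions.

```python
# 1-D linear Kalman filter: predict (uncertainty grows), then update toward
# each noisy measurement by the Kalman gain k in (0, 1).
def kalman_1d(measurements, q=1e-4, r=0.5):
    """Return filtered position estimates for a list of noisy positions."""
    x, p = measurements[0], 1.0   # state estimate and its variance
    out = [x]
    for z in measurements[1:]:
        p += q                    # predict: process noise inflates variance
        k = p / (p + r)           # Kalman gain: trust in the measurement
        x += k * (z - x)          # update toward the measurement
        p *= (1 - k)              # variance shrinks after the update
        out.append(x)
    return out
```

Applied to an object's centroid per frame, this smooths the jitter that causes the false positives the abstract mentions.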
Lalit Damahe, Saurabh Diwe, Shailesh Kamble, Sandeep Kakde, and Praful Barekar
Springer Singapore
A D Gaikwad, K R Singh, S D Kamble, and M M Raghuwanshi
IOP Publishing
Abstract The future growth of the Internet of Services has fundamentally shaped the emergence of cloud computing. Cloud data centres serve multiple tenants' demands for cloud applications and consume vast amounts of electricity, leading to high operating costs and environmental emission of carbon dioxide (CO2). Fixing this requires conservation measures that enable sustainable use, by building new structures and measuring their effect in a cloud data centre; reduced electricity use in turn lowers the cost of processing power. To achieve energy-efficient data centres in the cloud, adjusting toward optimal load balancing is a good approach to energy savings. To minimise the large energy use of herds of cloud data centres, this work focuses on increasing efficacy by distributing the workload evenly. In this paper, we provide a comprehensive comparative analysis of current load balancing algorithms in cloud computing.
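One of the simple baselines a comparison of load balancing algorithms would cover is greedy least-loaded placement (the classic LPT heuristic). The sketch below is illustrative only, not an algorithm taken from the paper.

```python
# Greedy least-loaded balancer: sort tasks largest-first (LPT rule) and place
# each on the VM with the smallest current load, evening out the workload.
def assign_tasks(task_loads, n_vms):
    """Return (per-VM loads, placement index per sorted task)."""
    vms = [0.0] * n_vms
    placement = []
    for load in sorted(task_loads, reverse=True):
        i = vms.index(min(vms))   # currently least-loaded VM
        vms[i] += load
        placement.append(i)
    return vms, placement
```

Surveys typically compare such static heuristics against dynamic schemes (e.g. throttled or honeybee-inspired balancers) on make-span and energy.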
Shailesh D. Kamble, Nileshsingh V. Thakur, and Preeti R. Bajaj
IGI Global
The main objective of the proposed work is to develop an approach to video coding based on fractal coding using weighted finite automata (WFA). The proposed work focuses on reducing the encoding time, as this is the basic limitation preventing fractal coding from becoming a practical reality. WFA is used for coding because it behaves like Fractal Coding (FC): a WFA represents an image based on the fractal idea that the image contains self-similarity within itself. The plane WFA (applied to every frame) and plane FC (applied to every frame) coding approaches are compared with each other. The experiments are carried out on standard uncompressed videos, namely Traffic, Paris, Bus, Akiyo, Mobile, Suzie, etc., and on recorded videos, namely Geometry and Circle. The developed approaches are compared on the basis of performance evaluation parameters, namely encoding time, decoding time, compression ratio, compression percentage, bits per pixel, and Peak Signal-to-Noise Ratio (PSNR). Though the initial number of states is 256 for every frame of all video types, the resulting number of states differs per frame, which is expected given the minimality of the WFA constructed for each frame. Based on the obtained results, it is observed that the number of states is higher in Traffic, Bus, Paris, Mobile, and Akiyo; accordingly, their reconstructed video quality is good in comparison with the other videos, namely Circle, Suzie, and Geometry.
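PSNR, one of the evaluation parameters listed above, has a standard definition that can be sketched directly; the code below assumes 8-bit frames stored as nested lists.

```python
import math

# Peak Signal-to-Noise Ratio in dB between two same-size frames:
# PSNR = 10 * log10(peak^2 / MSE); identical frames give infinite PSNR.
def psnr(original, reconstructed, peak=255.0):
    """Return the PSNR of `reconstructed` against `original`."""
    diffs = [(o - r) ** 2
             for row_o, row_r in zip(original, reconstructed)
             for o, r in zip(row_o, row_r)]
    mse = sum(diffs) / len(diffs)
    if mse == 0:
        return float("inf")
    return 10 * math.log10(peak ** 2 / mse)
```

Higher PSNR means better reconstruction, matching the abstract's link between more WFA states and better video quality.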
Sonali D. Khambalkar, Shailesh D. Kamble, Nileshsingh V. Thakur, Nilesh U. Sambhe, and Nikhil S. Mangrulkar
Springer Singapore
Rutuja Channe, Shrikant Ambatkar, Sandeep Kakde, and Shailesh Kamble
IEEE
This paper describes a lossless image compression technique for colour images with the help of Huffman and CALIC coding. In this approach, an RGB image is first transformed into a grayscale image. In the gray colour space, the colour components are downsampled, and values are estimated between the original and downsampled images; these are subtracted from their original counterparts to obtain the prediction error. At the decoder end, the exact reverse process is used to recover the original colour image. Mean Square Error (MSE) and Peak Signal-to-Noise Ratio (PSNR) are the two basic parameters used to assess how closely the reconstructed image matches the original. Compared to other existing techniques, the Huffman method provides good compression results.
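The entropy-coding half of the pipeline, Huffman coding, can be sketched compactly: repeatedly merge the two least-frequent subtrees so that frequent symbols end up with shorter codewords. This is a minimal illustration, not the paper's implementation.

```python
import heapq
from collections import Counter

# Build a Huffman prefix code for a symbol sequence. Each heap entry carries
# (frequency, tie-breaker id, {symbol: partial codeword}); merging two entries
# prepends '0'/'1' to their codewords.
def huffman_codes(data):
    """Return {symbol: bit string}; frequent symbols get shorter codes."""
    heap = [[freq, i, {sym: ""}]
            for i, (sym, freq) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    uid = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)     # two least-frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, [f1 + f2, uid, merged])
        uid += 1
    return heap[0][2]
```

In a CALIC-style pipeline the symbols being coded would be the prediction errors, whose skewed distribution is what makes the entropy coding effective.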
Ashlesha R. Chakole, Praful V. Barekar, Rajeshree V. Ambulkar, and Shailesh D. Kamble
Springer Singapore
Shraddha Zade, Nilesh Sambhe, Shailesh Kamble, and Vikas Palekar
IEEE
Wireless sensor networks comprise a large number of distributed sensor devices, connected and coordinated through multi-hop routing. Because measured data contains related information and redundancy, data messages can be combined and merged by performing data aggregation along the routing path. Reducing energy consumption is a notable optimisation target of data aggregation approaches, achieved by reducing the communication load required for routing. To extend network lifetime as much as possible in Wireless Sensor Networks (WSNs), data paths are chosen such that the total energy used along the path is minimised. To support high scalability and better data aggregation, sensor nodes are routinely grouped into disjoint, non-overlapping subsets called clusters. Clusters create hierarchical WSNs that make efficient use of the limited resources of sensor nodes and thereby extend network lifetime. The objective of this paper is to present a state-of-the-art survey of clustering algorithms reported in the WSN literature, covering various energy-efficient clustering algorithms. At the theoretical level, an energy model is proposed to validate the benefits of data aggregation on energy consumption, and the key parameters that may affect aggregation performance are also examined.
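The energy argument for aggregation can be illustrated with a toy model (not the paper's): forwarding k raw readings costs k transmissions, while aggregating them into one summary costs a single transmission plus a small fusion cost. The cost constants below are assumptions.

```python
# Toy energy model: transmission energy dominates, so merging k readings into
# one summary at a cluster head saves energy once k is large enough.
def energy_no_aggregation(k_readings, e_tx=50e-9, bits=2000):
    """Energy (J) to forward k raw readings of `bits` bits each."""
    return k_readings * e_tx * bits

def energy_with_aggregation(k_readings, e_tx=50e-9, e_fuse=5e-9, bits=2000):
    """Energy (J) to fuse k readings and transmit one summary packet."""
    return e_tx * bits + k_readings * e_fuse * bits
```

Under these constants a single reading is cheaper to forward raw, but aggregation wins as soon as several readings share a path, which is the clustering rationale.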
Ashwini V. Ingole, Shailesh D. Kamble, Nileshsingh V. Thakur, and Apurva S. Samdurkar
IEEE
In many video applications, fractal video compression is used for video coding because of its distinctive features and lower bit rate. The self-similarity concept of image compression is used in fractal video compression: self-similarity means that a fractal picture consists of copies of itself, translated and described by a transformation. Fractal video compression involves high computational complexity, and different techniques have been implemented to reduce it. In video compression, finding the motion vectors (MVs) is one of the major factors in motion estimation, owing to the high computational cost incurred between frames. Many applications, such as multimedia services, contain temporal redundancies in transmitted video that affect storage space, bandwidth, and transmission cost; motion estimation is used to reduce this redundancy without degrading video quality. A number of algorithms have evolved for fast block-based matching in motion estimation, addressing drawbacks related to the number of search points, complexity, and computational cost; because of its simplicity, the block-based technique is in demand for motion estimation. Block matching algorithms attract many researchers from different domains, both for motion vector estimation and for solving real-life applications of motion estimation in video coding. This paper elaborates a review of various fractal compression techniques and block matching algorithms for motion estimation. Transmission of uncompressed video takes more time to reach its destination; therefore, video compression techniques are used to remove the redundancy present in the original video. Fractal video compression continues the idea of fractal image compression, which is one of the image compression methods [1].
Its claim is that within a given local region, correlation is present not only in adjacent pixels but also in global or other regions. Video compression techniques are mainly of two types, i.e., lossy and lossless compression [2]. In lossless compression, reconstruction of the entire original data is possible; due to this characteristic, lossless compression is preferred for data and executable files. In lossy compression, however, some data may be removed permanently. Two types of redundancy mainly arise in a video sequence: temporal redundancy and spatial redundancy. Spatial redundancy is the correlation present within a frame among neighbouring pixel values; temporal redundancy is the redundancy present between adjacent frames of a video. Interframe coding is used to lower temporal redundancy, and similarly, intraframe coding lowers spatial redundancy.
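The block matching that fast algorithms accelerate can be sketched in its exhaustive form: pick the motion vector minimising the sum of absolute differences (SAD) between a block in the current frame and candidate blocks in the reference frame. Three-step and diamond search, among others, prune these candidates; the code below is only the brute-force baseline.

```python
# Exhaustive block matching: search all displacements within +/- `search`
# pixels and return the one with minimal SAD against the current block.
def best_motion_vector(ref, cur, by, bx, bsize, search):
    """Return (dy, dx) of the reference block best matching cur's block."""
    def sad(dy, dx):
        return sum(abs(cur[by + i][bx + j] - ref[by + dy + i][bx + dx + j])
                   for i in range(bsize) for j in range(bsize))
    candidates = [(dy, dx)
                  for dy in range(-search, search + 1)
                  for dx in range(-search, search + 1)
                  if 0 <= by + dy <= len(ref) - bsize
                  and 0 <= bx + dx <= len(ref[0]) - bsize]
    return min(candidates, key=lambda v: sad(*v))
```

The encoder then transmits the motion vector plus a small residual instead of the whole block, which is how motion estimation removes temporal redundancy.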
Tejas Thubrikar, Sandeep Kakde, Shweta Gaidhani, Shailesh Kamble, and Nikit Shah
IEEE
The testing of VLSI circuits entails many challenges in terms of area overhead, power, and latency. Low-transition test pattern generation is a crucial technique for testing complex VLSI architectures. In this paper, a 32-bit test pattern generator is proposed for testing VLSI designs. This 32-bit test pattern generator is implemented with an efficient LFSR and extra combinational circuitry, which achieves low power consumption. The design is implemented in Verilog HDL using the Xilinx 13.1 ISE design suite. The switching activity between test vectors is reduced, resulting in low power consumption: the test pattern generator yields a power of 23 mW with a latency of 5.194 ns. The switching activity required for 32-bit test pattern generation has been improved and is presented in this paper. The experimental results show that the total power of the low-transition linear feedback shift register is 50.06% less than that of the conventional LFSR.
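The paper's design is a 32-bit low-transition LFSR in Verilog; as a behavioural illustration of the underlying pattern-generator idea only, a small 4-bit Fibonacci LFSR (feedback polynomial x^4 + x^3 + 1) is modelled below in Python. The seed and tap positions are the usual textbook choices, not the paper's.

```python
# Fibonacci LFSR model: shift left each cycle, feeding back the XOR of the
# tap bits. With taps (3, 2) on 4 bits the sequence is maximal (period 15).
def lfsr_sequence(seed=0b1001, taps=(3, 2), width=4, n=None):
    """Return successive LFSR states; omit n to run one full period."""
    state, seen = seed, []
    steps = n or (2 ** width - 1)
    for _ in range(steps):
        seen.append(state)
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1                   # XOR of the tap bits
        state = ((state << 1) | fb) & ((1 << width) - 1)
    return seen
```

A low-transition variant would insert logic between successive states to limit how many bits toggle per test vector, which is what cuts the dynamic power.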
Shraddha S. Bhanuse, Shailesh D. Kamble, and Sandeep M. Kakde
Elsevier BV
Arpita M. Bhise and Shailesh D. Kamble
Elsevier BV
Shubhangi Rathkanthiwar, Sandeep Kakde, Rajesh Thakare, Rahul Kamdi, and Shailesh Kamble
Springer India