Assistant Professor, Information Technology
VIIT/SP Pune University
BE (Information Technology), ME (Computer Network), PhD (Computer Science)
Cloud Computing
Network Security
Machine Learning
Deep Learning
Stock Insights is a web-based application designed to give users valuable insights into stock prices using two forecasting algorithms, Prophet and ARIMA. The project aims to give stock investors and traders an easy-to-use tool for visualizing stock trends and predicting future prices. Streamlit is used as the primary framework for the web application, providing an intuitive and user-friendly interface. Users select stocks and timeframes to analyze, and the algorithms generate predictions from historical data. The predictions are displayed in an easy-to-read format, enabling users to make informed investment decisions. The Prophet algorithm analyzes seasonality and trends in stock prices, while ARIMA predicts future values from historical patterns. Together, the two algorithms provide more accurate predictions and a better understanding of the stock market. Overall, Stock Insights is an essential tool for investors and traders.
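A minimal Python sketch of the kind of pipeline described above, assuming data comes from yfinance; the ticker, look-back window, and ARIMA order are illustrative assumptions, not values taken from the project.

import streamlit as st
import yfinance as yf
from prophet import Prophet
from statsmodels.tsa.arima.model import ARIMA

ticker = st.text_input("Ticker", "AAPL")            # user selects a stock
horizon = st.slider("Days to forecast", 7, 90, 30)  # user selects a timeframe

hist = yf.download(ticker, period="2y")["Close"].squeeze().dropna()

# Prophet expects a dataframe with columns 'ds' (date) and 'y' (value).
df = hist.reset_index()
df.columns = ["ds", "y"]
m = Prophet()                                       # models trend + seasonality
m.fit(df)
future = m.make_future_dataframe(periods=horizon)
prophet_fc = m.predict(future)[["ds", "yhat"]]

# ARIMA forecasts future values from the historical pattern alone.
arima_fit = ARIMA(hist, order=(5, 1, 0)).fit()      # assumed order, for illustration
arima_fc = arima_fit.forecast(steps=horizon)

st.line_chart(hist)                                 # historical prices
st.line_chart(prophet_fc.set_index("ds")["yhat"])   # Prophet forecast
st.line_chart(arima_fc)                             # ARIMA forecast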
Scopus Publications
Yogita Hande, Rupali Sachin Vairagade, Mahesh Ashok Bhandari, Vitthal Sadashiv Gutte, Sandeep Muktinath Chitalkar, and Deepali Pankaj Javale
National Taiwan University
Abnormal brain activity can have devastating effects on a person's life and may even prove fatal, so detecting it at an early stage can save lives. An electroencephalogram (EEG) is a test that detects abnormalities in brain waves. During an EEG, electrodes, tiny metal disks connected by slender wires, are applied to the scalp. They pick up the microscopic electrical charges produced by the brain's cell activity. The results of an EEG reveal alterations in brain activity that can help diagnose various brain disorders, particularly epilepsy and other conditions that result in seizures. In this research, a novel approach termed Tasmanian Devil Hunting Optimization-Deep Maxout Network (TDHO-DMN) is devised for brain activity detection based on motor imagery EEG signals. Initially, the input EEG signal obtained from the dataset is subjected to a pre-processing phase, where it is denoised using a Gaussian filter. The pre-processed signal then undergoes feature extraction to obtain suitable feature vectors such as the amplitude modulation spectrum (AMS), frequency-based features, and statistical features. The extracted features are then passed through data augmentation, carried out using an oversampling technique. Finally, brain activity detection is accomplished by a Deep Maxout Network (DMN) trained with the Tasmanian Devil Hunting Optimization (TDHO) algorithm, which combines Tasmanian Devil Optimization (TDO) and the Deer Hunting Optimization Algorithm (DHOA). The performance of the proposed TDHO-DMN is evaluated on two benchmark datasets, where it achieved accuracy, sensitivity, and specificity of 90.70%, 91.00%, and 91.40%, respectively.
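A minimal sketch of the pre-processing and feature-extraction stages the abstract describes; the sampling rate, filter width, and frequency bands are illustrative assumptions, and the AMS features and TDHO-trained Deep Maxout Network are not reproduced here.

import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import welch
from scipy.stats import kurtosis, skew

FS = 250  # assumed EEG sampling rate in Hz

def preprocess(signal: np.ndarray) -> np.ndarray:
    """Denoise a raw EEG channel with a Gaussian filter."""
    return gaussian_filter1d(signal, sigma=2.0)

def extract_features(signal: np.ndarray) -> np.ndarray:
    """Statistical and frequency-based features for one channel."""
    stats = [signal.mean(), signal.std(), skew(signal), kurtosis(signal)]
    freqs, psd = welch(signal, fs=FS, nperseg=FS)
    bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
    band_power = [psd[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands.values()]
    return np.array(stats + band_power)

raw = np.random.randn(10 * FS)            # stand-in for a dataset EEG segment
features = extract_features(preprocess(raw))
print(features.shape)                     # (8,) feature vector per channel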
R. Thanga Kumar, S. K. Sunori, K. Yuvaraj, Mridula Gupta, Bhandari Mahesh Ashok, and S. Sathiya Naveena
IEEE
Image registration is crucial in medicine for many therapeutic uses, including image-guided therapies. Though challenging, medical image registration has recently seen significant algorithmic performance improvements owing to the rise of machine learning. Deep neural networks enable medical applications such as faster and more accurate image registration, which is crucial in the fight against malignancies during surgery. Medical image registration procedures often involve complex deformation. Despite the many sophisticated registration models that have been proposed, deformable registration remains a difficult task in terms of accuracy and efficiency, particularly when dealing with large volumetric deformations. With this goal in mind, we design an unsupervised pyramid attention network (PAN) for non-rigid, flexible registration. In contrast to networks that depend on heavyweight attention modules or transformers, our network is a clean convolutional pyramid that makes full use of the pyramid structure's inherent benefits. Specifically, our network guarantees the rationality of the deformation field by predicting it from coarse to fine using a step-by-step recursion that incorporates high-level semantics. Thanks to the recursive pyramid technique, our network achieves deformable registration without a separate affine pre-alignment. Experimental results show that our network consistently outperforms the state of the art in Dice score, average symmetric surface distance, Hausdorff distance, and Jacobian metrics. Our network continues to compensate well for large deformations even when affine pre-alignment is omitted from the data.
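A toy PyTorch sketch of the coarse-to-fine recursive idea: the flow is predicted at the coarsest pyramid level, then upsampled and refined level by level. The tiny convolutional head, pyramid depth, and 2-D setting are illustrative assumptions and not the paper's actual PAN architecture.

import torch
import torch.nn.functional as F

def warp(img, flow):
    """Warp img (N,1,H,W) by a dense displacement field flow (N,2,H,W)."""
    n, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().unsqueeze(0)   # (1,2,H,W)
    grid = (base + flow).permute(0, 2, 3, 1)                   # (N,H,W,2)
    gx = 2 * grid[..., 0] / (w - 1) - 1                        # normalize to [-1,1]
    gy = 2 * grid[..., 1] / (h - 1) - 1
    return F.grid_sample(img, torch.stack((gx, gy), dim=-1), align_corners=True)

flow_head = torch.nn.Conv2d(2, 2, kernel_size=3, padding=1)    # stand-in predictor

def register(fixed, moving, levels=3):
    """Predict the deformation field from coarse to fine."""
    flow = None
    for lvl in reversed(range(levels)):                        # coarsest level first
        scale = 2 ** lvl
        f = F.avg_pool2d(fixed, scale) if scale > 1 else fixed
        m = F.avg_pool2d(moving, scale) if scale > 1 else moving
        if flow is None:
            flow = torch.zeros(f.shape[0], 2, *f.shape[2:])
        else:  # upsample the coarse flow and rescale displacements to the new grid
            flow = 2 * F.interpolate(flow, scale_factor=2, mode="bilinear",
                                     align_corners=True)
        m = warp(m, flow)                                      # warp with current estimate
        flow = flow + flow_head(torch.cat((f, m), dim=1))      # residual refinement
    return flow

fixed, moving = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
print(register(fixed, moving).shape)                           # torch.Size([1, 2, 64, 64])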
Ashvin B. Amale, I. K. Kantharaj, Bhandari Mahesh Ashok, A. Amudha, Lakshay Bareja, and Priya M. Raut
IEEE
The oil and gas industry is arguably one of the most important industries in the modern world; some say it supports civilization as we know it. With changing dynamics and increased competition, the sector faces growing demand for practical solutions that improve operations and raise profits. Implementing real-time automation and big data analytics has become vital for the oil and gas industry. Real-time automation refers to using control systems, sensors, and IT solutions to monitor or control operations in real time, leading to faster decision-making and greater efficiency in drilling, production, and distribution processes. Big data analytics, in turn, is about storing and analyzing massive amounts of data from different sources, such as sensor readings, production processes, and market trends. With sophisticated algorithms and machine learning, big data analytics can detect patterns and trends from which the necessary insights are drawn, leading to better operations and decision-making. Individually, each of these technologies provides many benefits, but integrated, they can potentially change the game for this industry. Combined with real-time automation (robots and material-handling equipment), analytics yields actionable insights as events unfold, helping the business take control of its operations and reduce costs, ultimately increasing profitability.
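A minimal illustration of the "real-time analytics on sensor data" idea: a rolling z-score flags anomalous drilling-pressure readings as they stream in. The window size, threshold, and simulated feed are illustrative assumptions.

from collections import deque
import random
import statistics

WINDOW, THRESHOLD = 50, 3.0
recent = deque(maxlen=WINDOW)

def check_reading(value: float) -> bool:
    """Return True when a reading deviates sharply from the recent window."""
    anomalous = False
    if len(recent) == WINDOW:
        mean = statistics.fmean(recent)
        std = statistics.pstdev(recent) or 1e-9
        anomalous = abs(value - mean) / std > THRESHOLD
    recent.append(value)
    return anomalous

for t in range(500):                       # stand-in for a live sensor feed
    reading = random.gauss(100.0, 2.0)
    if t == 400:
        reading += 25.0                    # injected fault
    if check_reading(reading):
        print(f"t={t}: anomalous pressure reading {reading:.1f}")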
Sunil MP, Saket Mishra, Srisathirapathy S, Mahesh A. Bhandari, Vishal Sharma, and Girija Shankar Sahoo
IEEE
Pretty Good Privacy (PGP) is a security protocol used by businesses and individuals to provide secure data communications. It is a widely recognized solution for protecting data traversing the public internet and is increasingly being used in the context of wireless networks to protect privacy and authentication information. This paper explores PGP's role in protecting data transmitted over wireless networks and discusses its implementation, benefits, and drawbacks. Topics discussed include authentication strategies, encryption and decryption methodologies, and the use of hardware-level security mechanisms. Advantages and disadvantages of PGP in Wi-Fi network security are explored, along with various implementation considerations. The paper concludes with a discussion of the future of PGP in Wi-Fi network security.
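PGP secures data with hybrid encryption: a fresh session key encrypts the message, and the recipient's public key encrypts the session key. The following is a minimal sketch of that model using Python's cryptography package; it illustrates the methodology only and is not an actual OpenPGP implementation (no packet format, signatures, or web of trust).

import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Recipient's key pair (in real PGP this would come from a keyring).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Sender: encrypt the message with a fresh session key, then wrap the key.
session_key, nonce = AESGCM.generate_key(bit_length=128), os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, b"credentials for the WLAN", None)
wrapped_key = public_key.encrypt(session_key, oaep)

# Recipient: unwrap the session key, then decrypt the message.
recovered_key = private_key.decrypt(wrapped_key, oaep)
plaintext = AESGCM(recovered_key).decrypt(nonce, ciphertext, None)
assert plaintext == b"credentials for the WLAN"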
Vaishali Singh, Sudha D, Manish Srivastava, Sidhant Das, S. Sathiya Naveena, and Mahesh A. Bhandari
IEEE
Dynamic Network Bandwidth Optimization and Its Application to Underwater Fiber-Optic Communication is a research topic focused on enhancing the transmission capabilities of submerged fiber-optic networks. The primary aim is to design a dynamic bandwidth allocation algorithm capable of achieving optimal transmission performance given the underwater channel and the communication requirements. To accomplish this, the algorithm must be able to update the available channel bandwidth according to the application's demand while also maintaining security and privacy. The resulting algorithm should be applicable to a wide range of underwater fiber-optic systems. Furthermore, the solution must be flexible enough to serve both asynchronous and synchronous applications, allowing more efficient utilization of the total available bandwidth. The research should also focus on finding alternative techniques for increasing the transmission capacity of a given fiber-optic network in order to accommodate increased traffic or new communication requirements; these can include methods such as coding, dynamic modulation, and equalization. In conclusion, dynamic network bandwidth optimization and its application to underwater fiber-optic communication is a complex research area, but the resulting solution could revolutionize underwater communication networks.
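A minimal sketch of demand-driven bandwidth allocation of the kind the abstract calls for: each application gets a guaranteed floor, and the remaining capacity is shared in proportion to outstanding demand. The link capacity, floor, and demands below are illustrative assumptions, not the paper's algorithm.

def allocate(capacity_mbps: float, demands: dict, floor_mbps: float = 1.0) -> dict:
    """Split link capacity among applications according to current demand.
    Assumes capacity_mbps covers the guaranteed floors."""
    alloc = {app: min(floor_mbps, d) for app, d in demands.items()}
    spare = capacity_mbps - sum(alloc.values())
    residual = {app: d - alloc[app] for app, d in demands.items() if d > alloc[app]}
    total = sum(residual.values())
    for app, r in residual.items():
        # proportional share of the spare capacity, never exceeding demand
        alloc[app] += min(r, spare * r / total) if total else 0.0
    return alloc

# Demands change over time; re-running allocate() adapts the shares.
print(allocate(100.0, {"sonar": 40.0, "telemetry": 5.0, "video": 120.0}))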
Ashendra Kumar Saxena, Priyanka Vishwkarma, Sandeep Kumar, N. Thangarasu, K. S. Bhuvaneshwari, and Bhandari Mahesh Ashok
IEEE
The present research examines how the Felsenstein approach might be improved in light of big data and evolutionary inference. The overview covers key ideas and terms including the Felsenstein approach, phylogenetic starting points, big data, statistical techniques, and hypothesis testing. It emphasizes the importance of repeatability in evolutionary inference as well as the difficulties caused by rogue genera and bootstrap proportions. The article proposes a new strategy for increasing the accuracy of phylogenetic investigations that makes use of transfer separation and bipartition distance. The conclusion emphasizes the need for better methods for managing huge datasets and for recognizing unstable taxa.
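A minimal numpy sketch of the bootstrap step that the "bootstrap proportions" discussion refers to: alignment columns are resampled with replacement to produce replicate datasets, each of which would then be fed to tree inference. The toy alignment and replicate count are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
alignment = np.array([list("ACGTACGTAC"),          # one row per taxon
                      list("ACGTTCGTAC"),
                      list("ACGAACGTCC"),
                      list("TCGAACGTCC")])

n_taxa, n_sites = alignment.shape
replicates = []
for _ in range(100):                               # 100 bootstrap replicates
    cols = rng.integers(0, n_sites, size=n_sites)  # sample columns with replacement
    replicates.append(alignment[:, cols])

# Each replicate is passed to the tree-building method; the fraction of
# replicate trees containing a given bipartition is its bootstrap proportion.
print(len(replicates), replicates[0].shape)        # 100 (4, 10)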
Vitthal S. Gutte, Pramod Mundhe, and Mahesh Bhandari
IEEE
In today's world, agriculture is a vital source of growth and an essential component of both economic and social life. Plant disease detection is becoming an important field of study in India. In the past, disease detection systems were designed for either monocot or dicot plant families. Gradually, as science and technology have progressed, safer, more reliable, and more efficient methods have been developed for early detection of plant disease with minimal turnaround time. In this paper, we present a careful method for detecting disease in both monocot and dicot plants. Our systematic approach aids in treating the disease by reducing the number of errors. We detect plant leaf disease in three steps: first segmenting the leaf image, then extracting features, and finally classifying. Segmentation of the affected part of the plant leaf is performed using the k-means clustering technique. Texture, shape, and color features are then extracted from the segmented regions. Finally, the extracted features are fed to a Support Vector Machine classifier to detect plant disease.
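A minimal sklearn sketch of the three-stage pipeline described above: k-means segments the leaf image by pixel color, simple color statistics serve as features, and an SVM classifies the leaf. The synthetic images, labels, and k=3 clusters are illustrative assumptions standing in for real leaf photographs.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def segment_and_featurize(image: np.ndarray, k: int = 3) -> np.ndarray:
    """Cluster pixels by color, then describe the darkest (lesion-like) cluster."""
    pixels = image.reshape(-1, 3).astype(float)
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(pixels)
    centers = np.array([pixels[labels == c].mean(axis=0) for c in range(k)])
    lesion = centers.sum(axis=1).argmin()              # darkest cluster
    mask = labels == lesion
    return np.concatenate([centers[lesion],            # lesion color
                           [mask.mean()],              # lesion area fraction
                           pixels.std(axis=0)])        # color variability (texture proxy)

rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(20, 32, 32, 3))    # stand-in leaf images
y = rng.integers(0, 2, size=20)                        # 0 = healthy, 1 = diseased
X = np.array([segment_and_featurize(img) for img in images])

clf = SVC(kernel="rbf").fit(X[:15], y[:15])
print(clf.predict(X[15:]))                             # classify held-out leaves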
Mahesh Bhandari, Vitthal S. Gutte, and Pramod Mundhe
Springer Nature Singapore
Sumit Asthana, Rahul Kumar, Ranjita Bhagwan, Christian Bird, Chetan Bansal, Chandra Maddila, Sonu Mehta, and B. Ashok
ACM
Today's software development is distributed and involves continuous changes for new features, and yet the development cycle has to be fast and agile. An important component of enabling this agility is selecting the right reviewers for every code change, the smallest unit of the development cycle. Modern tool-based code review has proven to be an effective way to achieve appropriate review of software changes. However, the selection of reviewers in these code review systems is at best manual. As software and teams scale, this poses the challenge of selecting the right reviewers, which in turn determines software quality over time. While previous work has suggested automatic approaches to code reviewer recommendation, it has been limited to retrospective analysis. We not only deploy a reviewer suggestion algorithm, WhoDo, and evaluate its effect, but also incorporate load balancing into it to address one of its major shortcomings: recommending experienced developers too frequently. We evaluate the effect of this hybrid recommendation + load balancing system on five repositories within Microsoft. Our results examine various aspects of a commit and how code review affects them. We attempt to quantitatively answer questions that play a vital role in effective code review, and we substantiate our findings through qualitative feedback from partner repositories.
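A minimal sketch of the recommendation-plus-load-balancing idea described above: reviewers are scored by how often they touched the changed files, and that score is discounted by their current review load so that experienced developers are not always picked. The scoring formula, penalty factor, and toy data are illustrative assumptions, not WhoDo's actual design.

def recommend(changed_files, history, open_reviews, top_k=2, load_penalty=0.25):
    """Rank reviewers by file familiarity, discounted by outstanding load."""
    scores = {}
    for f in changed_files:
        for reviewer, touches in history.get(f, {}).items():
            scores[reviewer] = scores.get(reviewer, 0.0) + touches
    for reviewer in scores:
        scores[reviewer] /= 1.0 + load_penalty * open_reviews.get(reviewer, 0)
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

history = {"auth.py": {"alice": 9, "bob": 3}, "db.py": {"alice": 4, "carol": 6}}
open_reviews = {"alice": 8, "bob": 1, "carol": 2}
print(recommend(["auth.py", "db.py"], history, open_reviews))
# Alice's expertise (13 touches) is discounted by her heavy review load.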
Dhananjaya Sharma, Mustafa Ahmed, and B. Ashok
Diva Enterprises Private Limited
Pranav Ramarao, Suresh Iyengar, Pushkar Chitnis, Raghavendra Udupa, and Balasubramanyan Ashok
ACM
Email remains the most important and widely used mode of online communication, despite having its origins in the middle of the last century and being challenged by a variety of newer online communication tools. While several studies have predicted continued growth in the volume of email communication, there has been little innovation in email search, an imperative part of the user experience. In this work, we present a lightweight email application, codenamed InLook, that aims to provide a productive search experience.
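A minimal sketch of the kind of lightweight index an email search client might keep locally; the tokenizer and toy mailbox are illustrative assumptions, not InLook's actual design.

import re
from collections import defaultdict

index = defaultdict(set)          # term -> set of message ids

def add_message(msg_id: int, text: str) -> None:
    for term in re.findall(r"[a-z0-9]+", text.lower()):
        index[term].add(msg_id)

def search(query: str) -> set:
    """Return ids of messages containing every query term."""
    terms = re.findall(r"[a-z0-9]+", query.lower())
    if not terms:
        return set()
    return set.intersection(*(index[t] for t in terms))

add_message(1, "Quarterly report attached, please review")
add_message(2, "Lunch on Friday? Please confirm")
add_message(3, "Review comments on the quarterly draft")
print(search("quarterly review"))  # {1, 3}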
Andrew Cross, B. Ashok, Srinath Bala, Edward Cutrell, Naren Datha, Rahul Kumar, Viraj Kumar, Madhusudan Parthasarathy, Siddharth Prakash, Sriram Rajamani,et al.
ACM
Due to the recent emergence of massive open online courses (MOOCs), students and teachers are gaining unprecedented access to high-quality educational content. However, many questions remain about how best to utilize that content in a classroom environment. In this small-scale, exploratory study, we compared two ways of using a recorded video lecture. In the online learning condition, students viewed the video on a personal computer and also viewed a follow-up tutorial (a quiz review) on the computer. In the blended learning condition, students viewed the video as a group in a classroom and received the follow-up tutorial from a live lecturer. We randomly assigned 102 students to these conditions and assessed learning outcomes via a series of quizzes. While we saw significant learning gains after each session, we did not observe any significant differences between the online and blended learning groups. We discuss these findings as well as areas for future work.
D. Sharma, K. Kumaresan, and Ashok B.
Diva Enterprises Private Limited
An understanding of postsurgical anatomy and physiology is an obvious prerequisite to the development of new prosthetic procedures for mandibulectomy patients. Loss of the potential basal seat area, atrophic and fragile oral mucosa, reduced salivary output, an angular pathway of mandibular closure, deviation of the mandible, and impaired motor and sensory control of the tongue, lips, and cheeks make the fabrication of a prosthesis difficult in these situations. Prosthetic options include a sectional prosthesis, use of a palatal ramp, setting a double row of teeth on the unresected side of the maxilla, and use of the functional chew-in technique. This article describes the use of two rows of maxillary posterior teeth on the unresected side in a patient who had undergone segmental mandibulectomy. The inner row helped restore function, whereas the outer row helped restore cheek support and esthetics.
B. Ashok, Joseph Joy, Hongkang Liang, Sriram K. Rajamani, Gopal Srinivasa, and Vipindeep Vangala
ACM
In large software development projects, when a programmer is assigned a bug to fix, she typically spends a lot of time searching (in an ad-hoc manner) for instances from the past where similar bugs were debugged, analyzed, and resolved. Systematic search tools that allow the programmer to express the context of the current bug and search through the diverse data repositories associated with large projects can greatly improve the productivity of debugging. This paper presents the design, implementation, and experience from such a search tool, called DebugAdvisor. The context of a bug includes all the information a programmer has about the bug, including natural language text, textual renderings of core dumps, debugger output, etc. Our key insight is to allow the programmer to collate this entire context as a query to search for related information. Thus, DebugAdvisor allows the programmer to search using a fat query, which could be kilobytes of structured and unstructured data describing the contextual information for the current bug. Information retrieval in the presence of fat queries and variegated data repositories, all of which contain a mix of structured and unstructured data, is a challenging problem. We present novel ideas to solve this problem. We have deployed DebugAdvisor to over 100 users inside Microsoft. In addition to standard metrics such as precision and recall, we present extensive qualitative and quantitative feedback from our users.
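A minimal sketch of the fat-query idea: the entire bug context (free text, debugger output, stack fragments) is treated as one query document and matched against past bug records by TF-IDF cosine similarity. The toy corpus and ranking are illustrative assumptions, not DebugAdvisor's actual index or retrieval machinery.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_bugs = [
    "null pointer dereference in cache flush path, fixed by guarding lookup",
    "deadlock between logger and flush thread under heavy IO",
    "crash in parser on malformed UTF-8 input, stack shows decode_token",
]

fat_query = """
Access violation in cache layer.
Stack: cache_flush -> lookup_entry -> 0x0000_0000
Debugger: EXCEPTION_ACCESS_VIOLATION reading address 0x0
"""

vec = TfidfVectorizer()
doc_matrix = vec.fit_transform(past_bugs)            # index past bug records
sims = cosine_similarity(vec.transform([fat_query]), doc_matrix)[0]
for i in sims.argsort()[::-1]:                       # most similar first
    print(f"{sims[i]:.2f}  {past_bugs[i]}")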