@tezu.ernet.in
Assistant Professor, Computer Science & Engineering
Tezpur University
Recommender System
Information Retrieval
Knowledge Graph
Social Network Analysis
Data Mining
Natural Language Processing
Scopus Publications
K. Rajesh Rao, Aditya Kolpe, Tribikram Pradhan, and Bruno Bogaz Zarpelão
River Publishers
Role-Based Access Control (RBAC) systems face an essential issue related to the systematic handling of users' access requests, known as the User Authorization Query (UAQ) problem. In this paper, we show that the UAQ problem can be resolved using unsupervised machine learning while honoring guaranteed access requests and Dynamic Separation of Duty relations. The use of agglomerative hierarchical clustering not only improves efficiency but also avoids disordered merging of existing roles to create new ones and steers clear of duplication. With a time complexity of O(n^3), the algorithm proves to be one of the fastest and most promising models in the state of the art. The proposed model has been compared with existing models and evaluated experimentally.
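As an illustration of the clustering step, the following is a minimal sketch of bottom-up (agglomerative) merging of roles represented as binary permission sets. The permission names, the Jaccard-distance measure, and the merge threshold are illustrative assumptions, not the paper's exact algorithm or data.

```python
def jaccard_distance(a, b):
    """Distance between two permission sets: 1 - Jaccard similarity."""
    union = len(a | b)
    return 1.0 - (len(a & b) / union if union else 1.0)

def agglomerative_roles(roles, threshold=0.5):
    """Repeatedly merge the closest pair of role clusters until the
    closest remaining pair is farther apart than `threshold`.
    Distances are recomputed on the merged permission sets, giving
    the O(n^3) behavior mentioned in the abstract."""
    clusters = [set(r) for r in roles]
    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = jaccard_distance(clusters[i], clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        d, i, j = best
        if d > threshold:          # no sufficiently similar pair left
            break
        merged = clusters[i] | clusters[j]
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append(merged)
    return clusters

# Four hypothetical roles collapse into two merged roles.
roles = [{"read", "write"}, {"read"}, {"exec"}, {"exec", "admin"}]
merged = agglomerative_roles(roles, threshold=0.6)
```

Because similar roles are merged in distance order, the procedure avoids the disordered merging and duplicate roles that the abstract warns about.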
Tribikram Pradhan, Prashant Kumar, and Sukomal Pal
Elsevier BV
Scholarly venue recommendation is an emerging field due to a rapid surge in the number of scholarly venues, concomitant with exponential growth in interdisciplinary research and cross-collaboration among researchers. Finding an appropriate publication venue is one of the most challenging aspects of paper publication, as a large proportion of manuscripts face rejection due to a disjunction between the scope of the venue and the field of research pursued by the article. We present CLAVER, an integrated framework of a Convolutional Layer and a bi-directional LSTM with an Attention mechanism-based scholarly VEnue Recommender system. The system is the first of its kind to integrate multiple deep learning-based concepts, requiring only the abstract and title of a manuscript to identify academic venues. An extensive and exhaustive set of experiments conducted on the DBLP dataset certifies that the postulated model CLAVER performs better than most modern techniques as measured by standard metrics such as stability, accuracy, MRR, average venue quality, precision@k, nDCG@k, and diversity.
Tribikram Pradhan, Suchit Sahoo, Utkarsh Singh, and Sukomal Pal
Elsevier BV
Peer review is an essential part of scientific communication to ensure the quality of publications and a healthy scientific evaluation process. Assigning appropriate reviewers poses a great challenge for program chairs and journal editors for many reasons, including relevance, fair judgment, absence of conflict of interest, and reviewer qualification in terms of scientific impact. With a steady increase in the number of research domains, scholarly venues, researchers, and papers in academia, manually selecting and assessing adequate reviewers is becoming a tedious and time-consuming task. Traditional approaches to reviewer selection mainly focus on matching research relevance by keywords or disciplines. However, in real-world systems, various other factors often need to be considered. Therefore, we propose a multilayered approach integrating a Topic Network, Citation Network, and Reviewer Network into a reviewer Recommender System (TCRRec). We explore various aspects, including relevance between reviewer candidates and the submission, authority, expertise, diversity, and conflict of interest, and integrate them into the proposed framework TCRRec. The proposed system also considers temporal changes in reviewers' interests and the stability of those interest trends to enhance its performance. The paper also addresses cold start issues for researchers having unique areas of interest or for isolated researchers. Experiments based on the NIPS and AMiner datasets demonstrate that the proposed TCRRec outperforms state-of-the-art recommendation techniques in terms of the standard metrics of precision@k, MRR, nDCG@k, authority, expertise, diversity, and coverage.
Tribikram Pradhan, Chaitanya Bhatia, Prashant Kumar, and Sukomal Pal
Elsevier BV
Peer reviews form an essential part of scientific communications. Research papers and proposals are reviewed by several peers before they are finally accepted or rejected for publication and funding, respectively. With the steady increase in the number of research domains, scholarly venues (journals and/or conferences), researchers, and papers, managing the peer review process is becoming a daunting task. The application of recommender systems to assist peer reviewing is therefore being explored and is becoming an emerging research area. In this paper, we present a deep learning network-based meta-review generation system that also performs peer review prediction for scholarly articles (MRGen). MRGen provides solutions for: (i) peer review prediction (Task 1) and (ii) meta-review generation (Task 2). First, the system takes the peer reviews as input and produces a draft meta-review. Then it employs an integrated framework of a convolution layer, a long short-term memory (LSTM) model, a Bi-LSTM model, and an attention mechanism to predict the final decision (accept/reject) for the scholarly article. Based on the final decision, the proposed model MRGen incorporates Pointer Generator Network-based abstractive summarization to generate the final meta-review. The focus of our approach is to give a concise meta-review that maximizes information coverage, coherence, and readability while reducing redundancy. Extensive experiments conducted on the PeerRead dataset demonstrate good consistency between the recommended decisions and the original decisions. We also compare the performance of MRGen with some existing state-of-the-art multi-document summarization methods. The system outperforms several existing models in terms of accuracy, ROUGE scores, readability, non-redundancy, and cohesion.
Tribikram Pradhan, Abhinav Gupta, and Sukomal Pal
Elsevier BV
Manually selecting appropriate scholarly venues is becoming a tedious and time-consuming task for researchers for many reasons, including relevance, scientific impact, and research visibility. Sometimes, high-quality papers are rejected due to a mismatch between the area of the paper and the scope of the journal. Recommending appropriate academic venues can, therefore, enable researchers to identify and take part in relevant conferences and publish in the journals that matter most. A researcher may certainly know of a few leading venues for her specific field of interest. However, a venue recommendation system becomes particularly helpful when exploring a new domain or when more options are needed. Due to the high dimensionality and sparsity of text data and the complex semantics of natural language, journal identification presents difficult challenges. We propose a novel and unified architecture that contains a Bi-directional LSTM (Bi-LSTM) and a Hierarchical Attention Network (HAN) to address the above problems. We call the proposed architecture the modularized Hierarchical Attention-based Scholarly Venue Recommender system (HASVRec), which requires only the abstract, title, keywords, field of study, and authors of a new paper, along with their past publication record, to recommend scholarly venues. Experiments on the DBLP-Citation-Network V11 dataset show that our proposed approach outperforms several state-of-the-art methods in terms of accuracy, F1, nDCG, MRR, average venue quality, and stability.
Tribikram Pradhan and Sukomal Pal
Elsevier BV
Rapidly developing academic venues pose a challenge to researchers in identifying the most appropriate ones that are in line with their scholarly interests and of high relevance. Even a high-quality paper is sometimes rejected due to a mismatch between the area of the paper and the scope of the journal to which it is submitted. Recommending appropriate academic venues can, therefore, enable researchers to identify and take part in relevant conferences and to publish in impactful journals. Although a researcher may know a few leading high-profile venues for her specific field of interest, a venue recommender system becomes particularly helpful when one explores a new field or when more options are needed. We propose DISCOVER: a Diversified yet Integrated Social network analysis and COntextual similarity-based scholarly VEnue Recommender system. Our work provides an integrated framework incorporating social network analysis, including centrality measure calculation, citation and co-citation analysis, topic modeling-based contextual similarity, and key-route identification-based main path analysis of a bibliographic citation network. The paper also addresses cold start issues for a new researcher and a new venue, achieves a considerable reduction in data sparsity and computational cost, and mitigates diversity and stability problems. Experiments based on the Microsoft Academic Graph (MAG) dataset show that the proposed DISCOVER outperforms state-of-the-art recommendation techniques using the standard metrics of precision@k, nDCG@k, accuracy, MRR, F-measure (macro), diversity, stability, and average venue quality.
Chaitanya Bhatia, Tribikram Pradhan, and Sukomal Pal
ACM
Peer reviews form an essential part of scientific communications. Research papers and proposals are reviewed by several peers before they are finally accepted or rejected. The procedure requires experts to review the research work; the area/program chair/editor then writes a meta-review summarizing the review comments and takes a call based on the reviewers' decisions. In this paper, we present MetaGen, a novel meta-review generation system which takes the peer reviews as input and produces an assistive meta-review. This can help the area/program chair in writing a meta-review and taking the final decision on the paper/proposal, and can thus also help speed up the review process for conferences/journals where a large number of submissions need to be handled within a stipulated time. Our approach first generates an extractive draft and then uses a fine-tuned UniLM (Unified Language Model) for predicting the acceptance decision and producing the final meta-review in an abstractive manner. To the best of our knowledge, this is the first work in the direction of meta-review generation. Evaluation based on ROUGE scores shows promising results, and comparison with a few state-of-the-art summarizers demonstrates the effectiveness of the system.
Anubhav Apurva, Aditya Singh Verma, and Tribikram Pradhan
IEEE
Scholarly article recommendation systems are an essential tool for effective research work. They play a major role in retrieving relevant scientific papers in the era of big scholarly data. When researchers start working on a research problem, they are not always sure which papers to refer to in order to learn the state of the art, or which papers are the most appropriate for their work. Numerous methods for generating recommendations have been proposed in past decades. Very often, these are generalized systems, not specifically designed for scholarly articles. Moreover, they fail to capture a researcher's preferences for year, authorship, publication venue, and so on. In this paper, we present an alternative approach to implementing a recommendation system based on relevance feedback to resolve these concerns. Extensive experiments have been performed on a real-world Microsoft Academic Graph (MAG) dataset to demonstrate that the proposed algorithm produces more accurate recommendations than the baseline methods. Finally, an evaluation has been performed against search engines such as Google Scholar and CiteSeer to demonstrate the effectiveness and scalability of the proposed recommender system.
Tribikram Pradhan and Sukomal Pal
Elsevier BV
In academia, researchers collaborate with their peers to improve the quality of research and thereby enhance their academic profiles. However, information overload in big scholarly data poses a challenge in identifying potential researchers for fruitful collaboration. In this article, we introduce a multi-level fusion-based model for collaborator recommendation, DRACoR (Deep learning and Random walk based Academic Collaborator Recommender). DRACoR fuses deep learning and a biased random walk model to recommend potential collaborators who share similar research interests at the peer level. We run a topic model on abstracts and Doc2Vec on titles of year-wise publications to capture the dynamic research interests of researchers. Author-author cosine similarity is computed from the feature vectors extracted from abstracts and titles and is then used to weigh edges in the author-author graph (AAG). We also aggregate various meta-path features with profile-aware features to bias the random walk behavior. Finally, we employ a random walk with restart (RWR) to recommend the top-N collaborators, where the edge weights are used to bias the random walker's behavior. Extensive experiments on the DBLP and hep-th datasets demonstrate the effectiveness of our proposed DRACoR model against various state-of-the-art methods in terms of precision, recall, F1-score, MRR, and nDCG.
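The random-walk-with-restart step can be sketched as follows. The tiny author graph, its edge weights, and the restart probability are hypothetical; this shows only the generic RWR iteration, not DRACoR's full feature-weighted pipeline.

```python
def rwr(adj, seed, restart=0.15, iters=100):
    """adj: {node: {neighbor: weight}}. Returns visit probabilities of
    a walker that, at each step, restarts at `seed` with probability
    `restart` and otherwise follows an edge chosen in proportion to
    its weight."""
    nodes = list(adj)
    p = {n: 1.0 if n == seed else 0.0 for n in nodes}
    for _ in range(iters):
        nxt = {n: 0.0 for n in nodes}
        for n in nodes:
            total = sum(adj[n].values())
            for m, w in adj[n].items():
                nxt[m] += (1 - restart) * p[n] * w / total
        nxt[seed] += restart       # restart mass returns to the seed
        p = nxt
    return p

# Hypothetical weighted author-author graph (e.g., cosine similarities).
graph = {
    "alice": {"bob": 0.9, "carol": 0.1},
    "bob":   {"alice": 0.9, "carol": 0.5},
    "carol": {"alice": 0.1, "bob": 0.5},
}
scores = rwr(graph, seed="alice")
# Candidates other than the seed, ranked by visit probability.
ranked = sorted((n for n in scores if n != "alice"),
                key=scores.get, reverse=True)
```

Heavier edges attract more of the walk's probability mass, which is how edge weighting biases the recommendations toward more similar authors.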
Tribikram Pradhan and Sukomal Pal
Elsevier BV
The phenomenon of rapidly developing academic venues poses a significant challenge for researchers: how does one recognize the venues that are not only in accordance with one's scholarly interests but also of high significance? Often, even a high-quality paper is rejected because of a mismatch between the research area of the paper and the scope of the journal. Recommending appropriate scholarly venues empowers researchers to recognize and partake in important academic conferences and assists them in getting published in impactful journals. A venue recommendation system becomes helpful in this scenario, particularly when exploring a new field or when further choices are required. We propose CNAVER: a Content and Network-based Academic VEnue Recommender system. It provides an integrated framework employing a rank-based fusion of a paper-paper peer network (PPPN) model and a venue-venue peer network (VVPN) model. It requires only the title and abstract of a paper to provide venue recommendations, thus assisting researchers even at the earliest stage of paper writing. It also addresses cold start issues, such as the involvement of an inexperienced researcher or a novel venue, along with the problems of data sparsity, diversity, and stability. Experiments on the DBLP dataset show that our proposed approach outperforms several state-of-the-art methods in terms of precision, nDCG, MRR, accuracy, F-measure (macro), average venue quality, diversity, and stability.
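As a rough illustration of fusing two ranked venue lists, the sketch below uses reciprocal rank fusion as a stand-in; the paper's actual fusion scheme may differ, and the venue lists here are made up.

```python
def reciprocal_rank_fusion(rankings, k=60):
    """rankings: list of ranked lists (best first). An item's fused
    score is the sum of 1/(k + rank) over every list it appears in,
    so items ranked highly by several lists rise to the top."""
    scores = {}
    for ranking in rankings:
        for rank, item in enumerate(ranking, start=1):
            scores[item] = scores.get(item, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical outputs of the two network models for one paper.
pppn_ranking = ["SIGIR", "TKDE", "WSDM", "CIKM"]
vvpn_ranking = ["SIGIR", "TKDE", "RecSys", "CIKM"]
fused = reciprocal_rank_fusion([pppn_ranking, vvpn_ranking])
```

Note that CIKM, ranked fourth by both models, is fused above the venues that only one model ranked third, since agreement across lists accumulates score.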
Aakash Bhattacharya, Riju Khatri, and Tribikram Pradhan
IEEE
Obesity is an increasingly prevalent metabolic disorder which results in an increased risk of various diseases. One such disease is coronary artery disease (CAD), the most common type of heart disease. CAD leads to blockage of the arteries that supply blood to the heart muscles due to the accumulation of cholesterol and other material, called plaque, on their inner walls. This makes the arteries narrower and more rigid, thus restricting blood flow to the heart. In this paper, a model is proposed to evaluate the severity of CAD among the different classes of obesity based on the prognostic markers and the various causative factors of CAD.
Aman Chopra, Ashray Dimri, and Tribikram Pradhan
IEEE
Calcium channel blockers (CCBs) disrupt the movement of calcium and prevent it from entering the cells of blood vessel walls. They are used to widen blood vessels, resulting in lower blood pressure. Amlodipine is one such calcium channel blocker that dilates blood vessels and improves blood flow; it is used to treat angina and high blood pressure (hypertension). Amlodipine is quite effective in treating these conditions, but it has been found to induce pedal edema in patients. In this paper, we evaluate the causative factors for Amlodipine-induced pedal edema and also perform a classification of patients based on the side effects of Amlodipine.
K. R. Sai Vineeth, Ayush Pandey, and Tribikram Pradhan
IEEE
Crime analysis plays a major part in crime prevention and the safety of people in a country. This paper focuses on state-based frequent crime pattern knowledge discovery and prevention. Our work concentrates on finding frequent crimes state-wise using FP-Max, a bottom-up approach which uses linked lists to reduce space complexity. The generated state-wise frequent crime sets then undergo a knowledge discovery process. Correlation analysis between the crime types is performed to derive a weightage factor for each crime type, which is used to compute a crime intensity point. The crime intensity points of 29 states and 7 union territories are calculated according to the weightages derived from the correlation analysis. The states are then classified as most dangerous, dangerous, moderate, or safe based on their crime intensity point using the Random Forest classification technique. Finally, the crime intensity point of a state is predicted from the factors that contribute to crime.
R Vignesh and Tribikram Pradhan
IEEE
This paper introduces a new sorting algorithm which sorts the elements of an array in place. The algorithm has O(n) best-case time complexity and O(n log n) average- and worst-case time complexity. We achieve this using recursive partitioning combined with in-place merging to sort a given array. A comparison is made between this idea and other popular implementations. We draw conclusions and observe the cases where it outperforms other sorting algorithms. We also look at its shortcomings and list the scope for future improvements.
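A minimal sketch of the general idea, recursive splitting with an in-place merge, is shown below. This is an illustration, not the paper's exact algorithm: the merge here rotates elements via list operations (O(1) extra space, at the cost of extra element moves), and a pre-check on already-ordered halves yields the O(n) best case on sorted input.

```python
def inplace_merge_sort(a, lo=0, hi=None):
    """Sort a[lo:hi] in place by recursive partitioning plus an
    in-place merge of the two sorted halves."""
    if hi is None:
        hi = len(a)
    if hi - lo <= 1:
        return a
    mid = (lo + hi) // 2
    inplace_merge_sort(a, lo, mid)
    inplace_merge_sort(a, mid, hi)
    if a[mid - 1] <= a[mid]:       # halves already in order: O(1) merge
        return a
    i, j = lo, mid                 # i scans merged prefix, j the right run
    while i < j < hi:
        if a[i] <= a[j]:
            i += 1
        else:                      # rotate a[j] back into position i
            a.insert(i, a.pop(j))
            i += 1
            j += 1
    return a

data = [5, 2, 4, 1, 3]
inplace_merge_sort(data)
```

On fully sorted input every merge hits the O(1) pre-check, which is where the linear best case comes from; a rotation-based merge like this one, however, degrades the worst case, which is why the paper's own merge scheme matters.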
Siddharth Sahay, Suruchi Khetarpal, and Tribikram Pradhan
IEEE
'Data mining' has become a ubiquitous term in the world of IT and Computer Science in recent times, and developments in this field have been countless. Using one of the Apriori algorithm's numerous variants with a couple of insightful additions can significantly improve upon the existing standard of data mining. In this paper, a new approach is proposed to considerably reduce the time complexity of the database scan. This is achieved by using the MapReduce framework on the Hadoop Distributed File System (HDFS). Coupled with cloud computing, which handles large data sets and processing remotely, the resultant system, which uses MapReduce for the full table scan, the Pincer-Search algorithm, and cloud computing, is a force to reckon with.
Swati Choudhary, Angkirat Singh Sandhu, and Tribikram Pradhan
Springer Singapore
Jayant Prakash, Mayank Khandelwal, and Tribikram Pradhan
IEEE
Before the auction, teams have the liberty to retain some of their previously selected players; the rest of the players can be acquired via the auction. Initially, all team owners have the same limited amount of funds to build their teams. The more players an owner retains, the fewer funds the owner can take to the auction. Hence, the decision on which players to retain must be made carefully, both for the retention itself and for subsequent selection in the auction. We analyze the required structure of the team based on the voids created by the players remaining after the selective retention process. For optimal decision making in the auction, we define the size and type of these voids, which helps the owner buy the best combination of players. Our method attempts to ensure that owners are clearly aware of their next steps when they buy a player in the auction and can direct their resources specifically to those players that will fill the voids in the team.
Saakshi Gusain, Kunal Kansal, and Tribikram Pradhan
Springer International Publishing
Prajwal Rao, Ritvik Sachdev, and Tribikram Pradhan
Springer International Publishing
Mayank Khandelwal, Jayant Prakash, and Tribikram Pradhan
Springer International Publishing
Shivani Jakhmola and Tribikram Pradhan
ACM Press
Data mining, when applied to medical diagnosis, can help doctors take major decisions. Diabetes is a disease which has to be monitored by the patient so as not to cause severe damage to the body; predicting diabetes is therefore an important task for the patient. In this study, a new data smoothing technique is proposed for noise removal from data. It is very important for the user to have control over the smoothing of the data so that the information loss can be monitored. The proposed method allows the user to control the level of smoothing by accepting a loss percentage on the individual data points. An allowable loss is calculated, and a decision is made either to smooth a value or to retain it at an accurate level. The proposed method enables the user to obtain output tailored to his or her preprocessing requirements and, unlike primitive algorithms, allows the user to interact with the data preprocessing system. Different levels of smoothed output are obtained for different loss percentages. The preprocessed output is of better quality and resembles real-world data more closely. Furthermore, correlation and multiple regression are applied to the preprocessed diabetes dataset, and a prediction is made on this basis.
Satvik Sachdev, Aparna Nayak, and Tribikram Pradhan
IEEE
Visual Cryptography (VC) is a cryptographic technique that allows visual information, such as textual images and handwritten notes, to be encrypted in such a way that decryption can be performed by the human visual system, i.e., the eyes. Here we propose an algorithm for data hiding in halftone images. The algorithm uses a morphological operation and ordered dithering, and is a modified version of DHCOD [1]. In this paper we focus on improving the security and robustness of shares in VC and on generating more meaningful shares compared to the basic cryptographic scheme.
Arjun Chaudhary, Satvik Sachdev, Tribikram Pradhan, and Santosh Kamath
IEEE
Cooperative communication generally refers to network communication in which nodes assist, rather than contend with, one another to transmit data for themselves and others. Recently, there has been increasing interest in integrating multi-hop relaying functionality into MANETs, as multi-hop wireless networks can potentially enhance coverage and data rates. In this paper, we study multi-hop cooperative communication and propose a new multi-hop cooperative protocol for dynamic traffic patterns. We compare its performance with selective relaying in a MANET with a dynamic traffic pattern using the network simulator NS2.34, with throughput and packet delivery ratio as the performance metrics. We compare performance as the packet size, the time interval between packets, and the mobility of nodes change.
Tribikram Pradhan, Akash Israni, and Manish Sharma
IEEE
This paper describes a hybrid algorithm to solve the 0-1 Knapsack Problem using a Genetic Algorithm combined with Rough Set Theory. The knapsack problem is a combinatorial optimization problem in which one has to maximize the benefit of objects in a knapsack without exceeding its capacity. There are other ways to solve this problem, namely Dynamic Programming and the Greedy Method, but they are not very efficient: the complexity of the dynamic approach is of the order of O(n^3), whereas the Greedy Method does not always converge to an optimal solution [2]. The Genetic Algorithm provides a way to solve the knapsack problem in linear time complexity [2]. The attribute reduction technique, which incorporates Rough Set Theory, finds the important genes, thereby reducing the search space and ensuring that effective information is not lost. The inclusion of Rough Set Theory in the Genetic Algorithm improves its searching efficiency and quality.
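A plain genetic algorithm for the 0-1 knapsack can be sketched as below. The rough-set attribute reduction step of the paper is omitted, and the items, capacity, and GA hyperparameters are made-up illustrative values.

```python
import random

def solve_knapsack_ga(values, weights, capacity,
                      pop_size=30, generations=200, mut_rate=0.05):
    """Evolve bit-string chromosomes (bit i = item i is packed) with
    tournament selection, single-point crossover, and bit-flip mutation."""
    n = len(values)

    def fitness(chrom):
        w = sum(wt for wt, bit in zip(weights, chrom) if bit)
        v = sum(val for val, bit in zip(values, chrom) if bit)
        return v if w <= capacity else 0   # infeasible solutions score 0

    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        def pick():                         # 3-way tournament selection
            return max(random.sample(pop, 3), key=fitness)
        nxt = []
        while len(nxt) < pop_size:
            a, b = pick(), pick()
            cut = random.randrange(1, n)    # single-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < mut_rate) for bit in child]
            nxt.append(child)
        pop = nxt
        best = max(pop + [best], key=fitness)   # keep the all-time best
    return best, fitness(best)

random.seed(0)
values = [60, 100, 120, 40]
weights = [10, 20, 30, 15]
picked, value = solve_knapsack_ga(values, weights, capacity=50)
```

In the paper's hybrid, rough-set attribute reduction would shrink the chromosome to the important genes before this loop runs, which narrows the search space the GA has to explore.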
Tribikram Pradhan, Santosh S Patil, and Pramod Kumar Sethy
NADIA
Cloud computing is an important transition that brings change to service-oriented computing technology. With the widespread adoption of cloud computing, the ability to record and account for the usage of cloud resources in a credible and verifiable way has become critical for cloud service providers and users alike. The success of such a billing system depends on several factors: the billing transactions must have integrity and non-repudiation capabilities; the billing transactions must be non-obstructive and have a minimal computation cost; and Service Level Agreement (SLA) monitoring should be provided in a trusted manner. Existing billing systems are limited in terms of security capabilities or computational overhead. This work proposes a secure and non-obstructive billing system called THEMIS as a remedy for these limitations. The system uses a novel concept of a cloud notary authority for the supervision of billing. The cloud notary authority generates mutually verifiable binding information that can be used to resolve future disputes between a user and a cloud service provider in a computationally efficient way. Even the administrator of a cloud system cannot modify or falsify the data.