@zu.edu.jo
Computer Science Department - IT Faculty
Zarqa University
I received my BS degree from Yarmouk University in Jordan in 1987, my MS degree from the University of Jordan in 1996, and my PhD degree in computer information systems from Bradford University, UK, in 2003. I am a professor in the Department of Computer Science at Zarqa University in Jordan. My research interests include information retrieval systems and database systems.
• PhD in Computer Science - University of Bradford - UK - Dec. 2003.
Thesis Title: “Keyword-Based Approach for Solving the Database Selection Problem in a Distributed Heterogeneous Autonomous Relational Databases Environment”.
• M. Sc. in Mathematics – University of Jordan – Jordan - June 1995.
• B. Sc. in Mathematics with minor in Computer Science – Yarmouk University – Jordan - June 1987.
Database Systems
Scopus Publications
Mohammad Aljaidi, Ghassan Samara, Manish Kumar Singla, Ayoub Alsarhan, Mohammad Hassan, Murodbek Safaraliev, Pavel Matrenin, and Alexander Tavlintsev
Elsevier BV
Mohammad Aljaidi, Mohammad Hassan, Ghassan Samara, Ayoub Alsarhan, Raed Alazaidah, Sattam Almatarneh, Hamzah Aljawawdeh, and Syed Muqtar Ahmed
Springer Nature Switzerland
Raed Alazaidah, Ghassan Samara, Sattam Almatarneh, Mohammad Hassan, Mohammad Aljaidi, and Hasan Mansur
MDPI AG
Associative classification (AC) has been shown to outperform other methods of single-label classification for over 20 years. AC combines association rule mining with the classification task in order to create rules that are both more precise and easier to grasp. However, the current state of knowledge and the views of various specialists indicate that the problem of multi-label classification (MLC) cannot be solved by any existing AC method. Adapting or extending an AC algorithm to handle multi-label datasets is therefore one of the most pressing issues. To solve the MLC problem, this research proposes modifying the classification based on associations (msCBA) method by extending its capabilities to consider more than one class label in the consequent of its rules and by adapting its rule-ordering procedure to fit the nature of multi-label datasets. The proposed algorithm outperforms several other MLC algorithms from various learning techniques across a variety of performance measures on six datasets from different domains. The main findings of this research are the significance of utilizing local dependencies among labels compared to global dependencies, and the important role of AC in solving the MLC problem.
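The core extension the abstract describes, rules whose consequent carries more than one class label, can be sketched as follows. This is a minimal illustration, not the authors' msCBA implementation; the example rules, their ordering, and the matching strategy are assumptions:

```python
# Simplified sketch of multi-label associative classification: each rule
# maps an antecedent itemset to a SET of labels, extending CBA-style
# single-label consequents. Rule induction and the paper's rule-ordering
# procedure are not reproduced here; rules are assumed pre-sorted.

def predict(rules, instance):
    """rules: list of (antecedent frozenset, labels frozenset, confidence),
    assumed already sorted by the chosen rule-ordering criterion."""
    for antecedent, labels, conf in rules:
        if antecedent <= instance:   # all items of the rule occur in the instance
            return set(labels)       # multi-label consequent fires as a whole
    return set()                     # default: no rule fires, no label assigned

# Hypothetical example rules for illustration only.
rules = [
    (frozenset({"a", "b"}), frozenset({"L1", "L2"}), 0.9),
    (frozenset({"c"}),      frozenset({"L3"}),       0.7),
]
print(sorted(predict(rules, frozenset({"a", "b", "d"}))))  # ['L1', 'L2']
```

The first matching rule wins, mirroring the ordered-rule-list style of CBA-family classifiers; what changes is only that the consequent is a label set rather than a single label.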
Raed Alazaidah, Mohammad Hassan, Lara Al-Rbabah, Ghassan Samara, Marina Yusof, and Ala'a Saeb Al-Sherideh
IEEE
During the last few years, several real-life applications have attempted to utilize the proven capabilities of artificial intelligence in general and machine learning in particular. Machine learning has been utilized in several domains, such as spam detection, image recognition, recommendation systems, self-driving cars, and medical diagnosis. This paper surveys the work most related to utilizing machine learning in the domain of medical diagnosis. Moreover, the paper proposes a comparative analysis for identifying the best classification model and feature selection method for handling medical datasets. Four different medical datasets were used to train twenty-three classification models and evaluate four well-known feature selection methods with respect to several evaluation metrics, such as accuracy, true positive ratio, false positive ratio, precision, and recall. The results reveal that the RandomForest, J48, and SMO classifiers, in that order, are the best classifiers for handling medical datasets. Furthermore, the Gain Ratio method is the best choice for the feature selection step.
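The Gain Ratio criterion singled out by the comparison can be sketched in a few lines. This is the standard textbook formulation (information gain divided by split information, as in C4.5), not the exact tooling used in the paper:

```python
# Gain Ratio for one categorical feature: information gain normalized by
# the split information, which penalizes features with many distinct values.
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gain_ratio(rows, labels, feature_index):
    """rows: list of tuples of categorical feature values; labels: class per row."""
    n = len(rows)
    partitions = {}
    for row, y in zip(rows, labels):
        partitions.setdefault(row[feature_index], []).append(y)
    gain = entropy(labels) - sum(
        len(p) / n * entropy(p) for p in partitions.values())
    split_info = entropy([row[feature_index] for row in rows])
    return gain / split_info if split_info else 0.0
```

Feature selection then amounts to scoring every feature this way and keeping the top-ranked ones before training the classifiers.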
Ghassan Samara, Mohammad Aljaidi, Raed Alazaidah, Mais Haj Qasem, Mohammad Hassan, Nabeel Al-Milli, Mohammad S. Al-Batah, and Mohammad Kanan
Springer Nature Switzerland
Abrar Mohammad Mowad, Hamed Fawareh, and Mohammad A. Hassan
IEEE
This paper focuses on how to use the continuous integration (CI) and continuous delivery (CD) methodology in DevOps to reduce the developer-operator gap, and shows how CI can serve as a bridge to CD. The paper reviews DevOps and analyzes the strategies, methodologies, issues, and processes identified for the adoption and implementation of continuous practices. The results of our case studies show the benefits and advantages of using CI/CD in software development. Furthermore, this paper presents DevOps as a model for reducing the gap between development (Dev) and operations (Ops). The Azure tool is used as the DevOps CI/CD platform to enable continuous delivery of software through rapid and frequent releases; this enables rapid responses to changing customer requirements and can thus be a decisive competitive advantage. This paper also measures the effectiveness of using CI/CD in reducing the time and effort of software development, focusing on how the DevOps initiative benefits from CI/CD and enhances the flexibility of delivering software with the expected quality on time. The methodology is based on exploring and tracking a project developed by the company using the Azure software development tool; the experiment includes project performance and evaluation.
Mohammad Hassan, Ghassan Samara, and Mohammad Fadda
Alzaytoonah University of Jordan
In the Internet of Things (IoT), millions of electronic items, including automobiles, smoke alarms, watches, eyeglasses, webcams, and other devices, are now connected to the Internet. Aside from the luxury and comfort that individuals obtain in the field of IoT, and its ability to communicate and obtain information easily and quickly, the other concerning aspect is achieving privacy and security in this connection, especially given the rapid increase in the number of existing and new IoT devices. Concerns, threats, and attacks related to IoT security have been regarded as a promising and problematic area of research. This necessitates the rapid development of suitable technologies that match the nature of crimes in the IoT environment. On the other hand, criminal investigation specialists encounter difficulties and hurdles due to varied locations, data types, instruments used, and device recognition. This paper provides an in-depth explanation of the criminal content of the Internet of Things. It compares its stages to the detailed stages of traditional digital forensics in terms of similarities and differences, the frameworks used in dealing with electronic crimes, and the techniques used in both types. The paper reviews previous discussions of researchers in the field of digital forensics for the IoT, which brings us to its most important part: a comprehensive study of the IoT forensic frameworks used to protect communication in the field of IoT, such as the Digital Forensic Investigation Framework (DFIF), the Digital Forensic Framework for Smart Environments (IoTDots), and Forensic State Acquisition from the Internet of Things (FSAIoT). It discusses the challenges in their general frameworks and provides solutions and strategies. Keywords: digital forensics, FSAIoT, IoT, IoT challenges, IoT forensics, IoT framework, IoTDots.
Mohammad A. Hassan
IEEE
For over four decades, relational database management systems (RDBMS) have been the primary model for data storage, retrieval, and management. However, the continuous growth of information in current organizations and the increasing needs for scalability and performance, especially when handling the huge amounts of unstructured or semi-structured data generated by new-generation real-time applications and social networking sites, pose a set of challenges to existing RDBMS vendors. Such challenges have created a need to adopt alternative technologies in the field of data storage and manipulation. NoSQL technology is the alternative category of database management systems that has emerged as the solution to these ever-growing data requirements. In this paper, the advantages and limitations of relational databases are presented. The NoSQL data model, the types of NoSQL data stores, the characteristics of each data store, and the advantages and disadvantages of NoSQL over RDBMS are also discussed. The paper helps interested users review the different database model solutions, which can serve as a basis for selecting the database model that best satisfies their application requirements.
Yaser Al-Lahham and Mohammad Hassan
Zarqa University
This paper proposes a new autonomous, self-organizing, content-based node clustering peer-to-peer Information Retrieval (P2PIR) model. The model uses an incremental, transitive document-to-document similarity technique to build Local Equivalence Classes (LECes) of documents on a source node. A Locality Sensitive Hashing (LSH) scheme is applied to map a representative of each LEC into a set of keys, which are published to the hosting node(s). Similar LECes on different nodes form Universal Equivalence Classes (UECes), which indicate the connectivity between these nodes. The same LSH scheme is used to submit queries to the subset of nodes that most likely hold relevant information. The proposed model has been implemented, and the obtained results indicate efficiency in building connectivity between similar nodes and in correctly allocating and retrieving relevant answers for a high percentage of queries. The system was tested on different network sizes and proved to be scalable, as efficiency degraded gracefully while the network size grew exponentially.
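The LSH step, mapping a class representative to a set of publishable keys so that similar content lands in the same buckets, can be sketched with MinHash banding. The hash family, signature length, and band layout below are illustrative assumptions, not the paper's parameters:

```python
# Minimal MinHash + banding sketch of an LSH key scheme: documents with
# overlapping term sets tend to share band keys, which a node could then
# publish to the hosting node(s). Parameters are illustrative only.
import hashlib

def minhash_signature(terms, num_hashes=16):
    """One min value per hash function, simulated by salting SHA-1."""
    return [min(int(hashlib.sha1(f"{i}:{t}".encode()).hexdigest(), 16)
                for t in terms)
            for i in range(num_hashes)]

def lsh_keys(terms, bands=4, rows=4):
    """Split the signature into bands; each band contributes one key."""
    sig = minhash_signature(terms, bands * rows)
    return {hash(tuple(sig[b * rows:(b + 1) * rows])) for b in range(bands)}
```

With this banding, two representatives collide on a key whenever one whole band of their signatures agrees, so the probability of sharing a key rises sharply with set similarity, which is what lets similar LECes find each other without any global coordination.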
Ela Yildizer, Ali Metin Balci, Mohammad Hassan, and Reda Alhajj
Elsevier BV
Mohammad Hassan and Yaser Hasan
Springer Berlin Heidelberg
Mohammad Hassan, Reda Alhajj, Mick J. Ridley, and Ken Barker
ACM Press
This paper presents a tool that enables non-technical (naive) end-users to use free-form queries to explore distributed relational databases with a simple and direct technique, in a fashion similar to using search engines to search text files on the web. This allows web designers and database developers to publish their databases for exploration through web browsers. The proposed approach can be used in both Internet and intranet application areas. Our approach first identifies the databases that are most likely to provide useful results for the posed query, and then searches only the identified databases. In this work, we developed and extended an estimation technique to assess a usefulness measure for each database. Our technique is borrowed from similar techniques used in information retrieval (IR), mainly for text and document databases, and supports working smoothly with the structured information stored in relational databases. Such a usefulness measure enables naive users to decide which databases to search, and in what order.
M. Hassan, R. Alhajj, M.J. Ridley, and K. Barker
IEEE
The main target of the work described in this paper is to provide a powerful approach for naive users to search structured databases. Such a study is necessary especially to satisfy Web users, who expect the ability to access all Web content in a unified way, regardless of the structure of the available information. Given a set of distributed structured databases and a query consisting of a set of keywords connected by logical operators, the approach proposed in this paper adapts both Web text-search techniques and information retrieval techniques to rank the existing databases based on their relevance to the posed query. For each keyword, the user specifies a level of search, which may be column, record, or table. We developed an estimation method with statistical foundations to estimate the usefulness of individual relational databases. The system gives a hint of which databases might be useful for the user's query, based on word-frequency information kept for each database. Experiments have been conducted to demonstrate the effectiveness of the proposed method in determining promising sources for a given query. As naive end-user satisfaction is a main target and motive, we developed a prototype system with a user-friendly Web-based interface that accomplishes our goals in a simple and powerful way.
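The database-selection idea, scoring each database from its word-frequency statistics and ranking the candidates before searching any of them, can be sketched as follows. The scoring formula here is an idf-weighted frequency sum chosen for illustration in the spirit of IR collection-selection measures; it is not the authors' exact estimator:

```python
# Rank candidate databases by an estimated usefulness score computed from
# per-database keyword-frequency statistics, so only the most promising
# databases are actually searched. The formula is an illustrative stand-in.
import math

def usefulness(db_stats, query_terms, total_dbs, df_across_dbs):
    """db_stats: {term: frequency in this database};
    df_across_dbs: {term: number of databases containing the term}."""
    score = 0.0
    for t in query_terms:
        tf = db_stats.get(t, 0)
        if tf:
            # Rarer terms across the collection of databases weigh more.
            score += tf * math.log(1 + total_dbs / df_across_dbs[t])
    return score

def rank_databases(all_stats, query_terms):
    """all_stats: {db_name: per-database term-frequency dict}."""
    total = len(all_stats)
    df = {t: sum(1 for s in all_stats.values() if t in s) for t in query_terms}
    scores = {name: usefulness(s, query_terms, total, df)
              for name, s in all_stats.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

A query would then be forwarded only to the top-ranked databases, which is the behavior the abstract describes: hinting at which sources to search, and in what order.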