@unilorin.edu.ng
Social Sciences Education/Faculty of Education
University of Ilorin
Education, Multidisciplinary
Jumoke Iyabode Oladele, Tharina Guse, and Henry O Owolabi
Negah Scientific Publisher
Jumoke I. Oladele and Mdutshekelwa Ndlovu
IGI Global
The experience of the COVID-19 pandemic has revealed a need for the educational sector in Africa to adopt innovative, technology-driven approaches to assessment, given the increasing number of student enrolments at all levels of education. A systematic approach is also required to circumvent the challenges faced by developing countries through transdisciplinarity. Transdisciplinarity is still evolving and has proved to be a viable tool for engagement, societal impact, new partnerships, fresh ideas, reality checks, new capacities, and systemic thinking leading to new career possibilities. This study conducted a documentary analysis of standardised examination administration in Africa and proposed a framework for action that adopts a transdisciplinary approach to research for problem-solving. The study used a qualitative approach to collecting and analysing data from reports and documents, supported by empirical findings from the existing literature. The study created a framework for action through which the gains of the transdisciplinary approach can be translated into practice.
Jumoke Iyabode Oladele, Musa Adekunle Ayanwale, and Mdutshekelwa Ndlovu
Universiti Putra Malaysia
The lack of formal technology-embedded teacher training, collaborative learning models, adequate technological know-how, and internet access are barriers to adopting technology-enabled teaching and learning of STEM subjects in the African context. This study examined technology adoption for STEM in higher education while evaluating students' experiences, with evidence and implications for less developed countries. A survey research design was adopted for the study. The study population was students in higher learning institutions in selected countries of the sub-Saharan African region, reached through a multi-stage sampling procedure combining convenience and purposive sampling techniques. A self-developed questionnaire titled the Technology Adoption for Teaching and Learning Questionnaire (TATLQ), premised on the unified theory of acceptance and use of technology (UTAUT) model, was used for data collection. The instrument had an overall reliability coefficient of 0.96. The collated data were analysed using descriptive statistics of the median and a network chart to answer the research questions, while the inferential statistics of the t-test and analysis of variance were used to test the hypotheses generated for the study, implemented in the psych package of the R programming language, version 4.0.2. Findings revealed that students had a positive experience with online teaching and learning. The study concluded that technology adoption for online STEM teaching and learning is feasible in sub-Saharan Africa, with a need for improvements in internet access and technical support, on the basis of which recommendations were made.
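The reliability coefficient of 0.96 reported for the TATLQ is the kind of internal-consistency estimate typically computed as Cronbach's alpha (as done by the `alpha` function of the R psych package the abstract cites). A minimal Python sketch of the calculation, using a small hypothetical score matrix rather than the study's data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical Likert-type responses: 5 respondents x 4 items
scores = np.array([
    [1, 1, 2, 1],
    [2, 3, 2, 3],
    [3, 3, 3, 4],
    [4, 5, 4, 4],
    [5, 5, 5, 5],
], dtype=float)

alpha = cronbach_alpha(scores)  # high alpha: items rank respondents consistently
```

An alpha this close to 1 mirrors the study's 0.96, indicating that the items measure a common construct; the matrix and its value here are illustrative only.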
Jumoke I. Oladele, Mdutshekelwa Ndlovu, and Erica D. Spangenberg
Institute of Advanced Engineering and Science
<p><span lang="EN-US">Computer adaptive testing (CAT) is a technological advancement for educational assessments that requires thorough feasibility studies through computer simulations to ensure strong testing foundations. This is especially germane in Africa, which, as an adopter of technology, should not do so blindly without empirical evidence. A quasi-experimental design was adopted for this study to establish methodological choices for CAT ability estimation. Five thousand candidates and 100 items were simulated through the three-parameter logistic model. The simulation design stipulated a fixed-length test of 30 items, while examinee characteristics were drawn from a normal distribution with a mean of 0 and a standard deviation of 1. Controls for the simulation were set either not to control item exposure or to use the progressive restricted method. Data gathered were analyzed using descriptive statistics (mean and standard deviation) and inferential statistics (two-way multivariate analysis of variance: MANOVA) to test the generated hypotheses. This study provides empirical evidence for choosing ability estimation methods for CAT as part of efforts geared towards designing accurate testing programs for use in higher education.</span></p>
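The simulation the abstract describes, 5,000 examinees with abilities drawn from N(0, 1) responding to 100 items under the three-parameter logistic (3PL) model, can be sketched as follows. The distributions chosen here for the item parameters (lognormal discrimination, normal difficulty, uniform guessing) are common defaults, not the study's specification:

```python
import numpy as np

rng = np.random.default_rng(42)
n_examinees, n_items = 5000, 100

theta = rng.normal(0.0, 1.0, n_examinees)   # abilities ~ N(0, 1), as in the study
a = rng.lognormal(0.0, 0.3, n_items)        # discrimination (assumed distribution)
b = rng.normal(0.0, 1.0, n_items)           # difficulty (assumed distribution)
c = rng.uniform(0.05, 0.25, n_items)        # pseudo-guessing (assumed range)

# 3PL probability of a correct response: P = c + (1 - c) / (1 + exp(-a(theta - b)))
z = a[None, :] * (theta[:, None] - b[None, :])
p = c + (1.0 - c) / (1.0 + np.exp(-z))

# Dichotomous 0/1 response matrix sampled from those probabilities
responses = (rng.uniform(size=p.shape) < p).astype(int)
```

A fixed-length CAT would then administer 30 of these items per examinee, selecting each next item adaptively; the matrix above is the full item pool from which such a test draws.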
Jumoke Oladele
The Academy of Transdisciplinary Learning and Advanced Studies
There is no gainsaying that machines can learn from data to derive patterns and insights that aid various applications, a capability known as artificial intelligence, which is gaining relevance today. This study implemented feature data generation (FDG) as a novel technique for psychometric improvement using the post-hoc simulation approach. A descriptive design of the correlational type was adopted for this study and deployed quantitatively. The instrument for the study was a test aligned to the behavioural objectives of the Postgraduate Certificate Curriculum, with the programme enrollees as the study participants. The test underwent a thorough validation procedure, which yielded a reliability coefficient of 0.98. The item parameters of the test were analysed using XCalibre 4.2 on the real data from 38 respondents, while the WinGen application, through the post-hoc approach, was used to generate simulated data for 500 respondents. The findings revealed that the three-parameter logistic model fit the generated data, as determined using chi-square goodness-of-fit statistics, and that FDG is a viable approach, with a strong and positive correlation between the real and simulated data that enables the generalisation of findings, on the basis of which conclusions were made. The developed FDG method for psychometric improvement has wide applicability, a plus for the novel technique, while strengthening transdisciplinary research.
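The viability check the abstract reports, a strong positive correlation between parameters calibrated from real data and those recovered from simulated data, reduces to a Pearson correlation over matched item parameters. A minimal sketch with hypothetical difficulty (b) estimates for six items (not the study's values):

```python
import numpy as np

def pearson_r(x, y) -> float:
    """Pearson product-moment correlation between two parameter vectors."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float(np.corrcoef(x, y)[0, 1])

# Hypothetical difficulty estimates for the same six items:
# real-data calibration vs. post-hoc simulated-data calibration
b_real = np.array([-1.2, -0.5, 0.0, 0.4, 1.1, 1.8])
b_sim  = np.array([-1.0, -0.6, 0.1, 0.5, 1.0, 1.9])

r = pearson_r(b_real, b_sim)  # near 1: simulation preserves the item ordering
```

A coefficient near 1, as here, is the evidence pattern the study relies on to generalise from the 38 real respondents to the 500 simulated ones.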
Jumoke I. Oladele, Mdutshekelwa Ndlovu, and Musa A. Ayanwale
Springer International Publishing
Musa A. Ayanwale, Mdutshekelwa Ndlovu, and Jumoke I. Oladele
Springer International Publishing
Jumoke I. Oladele
The Academy of Transdisciplinary Learning and Advanced Studies
Test experts and other stakeholders in education are interested in the quality of students' assessments, while traditional reliability indices are giving way to generalisability statistics, encouraging transdisciplinarity in problem-solving. This study aimed at estimating measurement error in a university Students' Work Experience Programme (SWEP) assessment using generalisability theory. The design adopted for the study was a one-facet nested fixed design with assessors nested within persons. The study's target population was all 200-level undergraduate engineering students registered for the 2015/2016 academic session in the Faculty of Engineering of a Nigerian university, and all the technicians who assessed them, with 591 students purposively selected. Their assessment scores were collated using a proforma. Data obtained were analysed using ANOVA statistics. Findings revealed that the assessor effect was confounded with the person-by-assessor interaction. The residual contributed more to measurement error in university engineering SWEP scores; the relative and absolute error variances were equal, with a value of 8.02, so relative and absolute interpretations cannot be distinguished. Based on these findings, educational researchers engaging in generalisability theory analysis should employ crossed-facet G-study designs in their assessment designs to fully harness G-theory's strength in distinguishing between absolute and relative decisions. This is because decision-makers may be interested in one or both decisions when interpreting measurement results, in defining the error and generalisability coefficients to employ, and in synergetic efforts to ensure quality engineering education through a transdisciplinary approach to training.
Jumoke I Oladele and Mdutshekelwa Ndlovu
Society for Research and Knowledge Management
Teaching and learning have gone online in response to the pandemic, revealing the need for accurately tailored educational assessments to ascertain the extent to which learning outcomes or objectives are achieved. Computer adaptive testing (CAT) is a technology-driven form of assessment that tailors items to a candidate's ability level, with empirically proven benefits over the fixed-form computer-based test. A systematic review was employed, which showed that an item bank is a key requirement for CAT and that items must go through a rigorous development process to ensure and maintain quality in terms of content, criterion constructs, and internal consistency, determining the psychometric validation of behavioural measures while leveraging variants of item response theory (IRT). Following the item development stage, validated items must be compiled into administrable forms using advanced computer software for automatic test assembly and administration, such as FastTest, which allows specifying empirically tried algorithms for CAT from the start to the termination of the test. This helps to ensure that assessment properly leverages the advantages that CAT holds. Furthermore, the review revealed that CAT has been widely applied to large-scale testing in various fields by educational, health, and psychological professionals utilising different IRT models, but only in developed countries. This brings to bear the need for adoption in other parts of the world to improve educational assessments. The intersection of 4IR with AI and emerging technologies aids the CAT algorithm in achieving expert and knowledge-based systems, a requirement for survival in today's world.
M. Ogunjimi, M. A. Ayanwale, J. Oladele, D. S. Daramola, I. M. Jimoh, and H. Owolabi
North American Business Press
Like other African countries, Nigeria has seen its high-stakes testing suffer significant setbacks due to the COVID-19 pandemic. Computerised adaptive testing (CAT) is a paradigm shift in educational assessment that ensures accuracy in ability placement. A survey design was employed to describe the psychometric characteristics of a simulated 3-parameter logistic IRT model designed to support off-site assessments. The simulation protocol involved generating examinee and item-pool data, specifying the item selection algorithm, and specifying the CAT administration rules for execution with SimulCAT. Findings revealed that the fixed-length test guarantees higher testing precision, with an observed systematic error less than zero, a CMAE ranging from 0.2 to 0.3, and an RMSE consistently around 0.2. Findings also revealed that the fixed-length test had a higher item exposure rate, which can be handled by falling back on item selection methods that rely less on the a-parameter. Item redundancy was also lower for the fixed-length test than for the variable-length test. The conclusion favours the fixed-length test option for high-stakes assessment in Nigeria.
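The precision indices the abstract reports (systematic error, conditional/mean absolute error, RMSE) are all summaries of the gap between simulated true abilities and the CAT's estimates. A minimal sketch of those three statistics on hypothetical ability values (not SimulCAT output):

```python
import numpy as np

def estimation_errors(theta_true, theta_hat):
    """Bias (systematic error), mean absolute error, and RMSE of
    estimated abilities against the true simulated abilities."""
    e = np.asarray(theta_hat, dtype=float) - np.asarray(theta_true, dtype=float)
    bias = e.mean()                    # negative value = systematic underestimation
    mae = np.abs(e).mean()
    rmse = np.sqrt((e ** 2).mean())
    return bias, mae, rmse

# Hypothetical true vs. estimated abilities for five examinees
true_theta = np.array([-1.5, -0.5, 0.0, 0.5, 1.5])
est_theta  = np.array([-1.7, -0.6, 0.1, 0.4, 1.3])

bias, mae, rmse = estimation_errors(true_theta, est_theta)
```

With these illustrative values the bias comes out below zero, matching the direction of systematic error the study observed for the fixed-length test, while MAE and RMSE land in the same general region as the reported 0.2 to 0.3 band.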