@geethashishu.in
Assistant Professor, Department of Electronics & Communication Engineering
GSSS Institute of Engineering & Technology for Women, Mysuru
Doctor of Philosophy (Ph.D.) in Electrical & Electronics Engineering Sciences (Robotics), Visvesvaraya Technological University (VTU), June 2023.
Master of Technology (M.Tech) in Electronics, Canara Engineering College, Mangalore, under VTU, 2012.
Bachelor of Engineering (B.E) in Electronics & Communication Engineering, Institute of Technology, Bangalore, under VTU, 2010.
Electrical and Electronic Engineering, Control and Systems Engineering, Computer Vision and Pattern Recognition, Artificial Intelligence
Scopus Publications
Scholar Citations
Scholar h-index
M. Basavanna, M. Shivakumar, and K. R. Prakash
Springer Singapore
M. Basavanna, M. Shivakumar, K. R. Prakash, and Pratham Bhomkar
IEEE
Obstacle avoidance and navigation are important tasks for a mobile robot in applications such as industry, space, defense, transportation, and other social sectors. Autonomous systems still have shortcomings in sensing and perceiving the environment around the robot. One approach to improving these systems is to fuse data from multiple sensors. Lidar is widely used for mapping indoor environments, but it senses only objects that intersect its scanning axis and misses objects below it. Because the lidar detects obstacles at a single fixed height, it struggles with non-uniform obstacles such as tables and chairs, and with negative obstacles that lie below its scanning line. To improve perception, an Orbbec Astra (Kinect-style) depth camera is used to map the unknown indoor robotic workspace; it can detect obstacles above and below floor level, as well as obstacles at heights the RPLidar cannot reach. This paper presents the implementation of 3D mapping of an unknown environment using fusion of Orbbec Astra camera and lidar data. Experimental results show that the map built from the fused sensor data is clearer and more complete, which in turn enables collision-free navigation even with multiple small objects in the robot's path. Maps of the indoor robotic environment produced by the RP-Lidar sensor and the Orbbec Astra Pro camera are presented; they show that boundary detection is good with the lidar, which nevertheless fails to capture small intermediate objects within the boundary, whereas the Orbbec Astra camera provides more interior detail. Fusing the data from both sensors is therefore suggested.
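The fusion idea described in the abstract can be illustrated with a minimal sketch: back-project the depth image into 3-D points with the pinhole camera model, flatten those points into a "pseudo laser scan," and merge it with the lidar scan by taking the nearest return per bearing. This is an illustrative simplification, not the paper's implementation; the function names, camera intrinsics, and the simple minimum-fusion rule are assumptions for the example.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres) to 3-D camera-frame points
    with the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]            # drop invalid (zero-depth) pixels

def points_to_scan(points, n_beams=360, max_range=6.0):
    """Flatten 3-D points onto the horizontal plane and keep the nearest
    return per angular bin -- a 'pseudo laser scan' from the camera."""
    angles = np.arctan2(points[:, 0], points[:, 2])   # bearing (z forward, x right)
    ranges = np.hypot(points[:, 0], points[:, 2])     # horizontal distance
    scan = np.full(n_beams, max_range)
    bins = ((angles + np.pi) / (2 * np.pi) * n_beams).astype(int) % n_beams
    np.minimum.at(scan, bins, ranges)                 # nearest return wins per bin
    return scan

def fuse_scans(lidar_scan, camera_scan):
    """Element-wise minimum: an obstacle seen by either sensor survives, so
    camera-only obstacles (low tables, negative obstacles) enter the map."""
    return np.minimum(lidar_scan, camera_scan)
```

Taking the per-beam minimum is a deliberately conservative choice: it keeps any obstacle reported by either sensor, which matches the abstract's goal of recovering the small intermediate objects the lidar alone misses.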