Laszlo Toka

@bme.hu



                 

https://researchid.co/laszlotoka
97 Scopus Publications
1650 Scholar Citations
20 Scholar h-index
43 Scholar i10-index

SCOPUS PUBLICATIONS

  • A career handbook for professional soccer players
    Balázs Ács, Roland Kovács, and László Toka

    SAGE Publications
    The success of a soccer player is not entirely pre-destined by their physical ability, talent, and motivation. There are certain decisions along the way that greatly affect the arc of their career: which skills to develop, and which club to sign a contract with. In this paper, we identify the optimal strategic choices toward multiple potential aims a soccer player can have, and we examine what decisions distinguished the greatest soccer players. Our two main data sources are Transfermarkt and Sofifa, from which we collect data on 29,231 players for the period between 2007 and 2021. We perform time series analysis on the skill features of soccer players, and network analysis of the players’ acquaintance graph, i.e., a graph that indicates whether two given players have ever been teammates. Finally, we create key performance indicators to check the differences in certain features, i.e., individual player skills and connectivity attributes, between top-tier players and the rest, and use dynamic time warping for validation. The outcome of this work is a recommendation tool that helps players find what needs to be improved in order to achieve their desired goals. The source code and the career advisor tool for soccer players that we have implemented are available online.
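
    As an illustration of the validation technique named above, the Python snippet below computes a dynamic time warping (DTW) distance between two skill-rating time series. It is a minimal, self-contained sketch; the rating values and the "finishing" feature are hypothetical and this is not the authors' actual pipeline.

        # Minimal DTW distance between two yearly skill-rating series.
        # Illustrative sketch only; the ratings below are made up.
        def dtw_distance(a, b):
            n, m = len(a), len(b)
            inf = float("inf")
            d = [[inf] * (m + 1) for _ in range(n + 1)]
            d[0][0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(a[i - 1] - b[j - 1])
                    d[i][j] = cost + min(d[i - 1][j],      # skip a step in series a
                                         d[i][j - 1],      # skip a step in series b
                                         d[i - 1][j - 1])  # match the two steps
            return d[n][m]

        top_tier_player = [72, 75, 79, 83, 86, 88]  # hypothetical yearly "finishing" ratings
        average_player  = [70, 71, 73, 74, 74, 75]
        print(dtw_distance(top_tier_player, average_player))  # smaller = more similar trajectories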

  • Towards maximizing expected possession outcome in soccer
    Pegah Rahimian, Jan Van Haaren, and Laszlo Toka

    SAGE Publications
    Soccer players need to make many decisions throughout a match in order to maximize their team’s chances of winning. Unfortunately, these decisions are challenging to measure and evaluate due to the low-scoring, complex, and highly dynamic nature of soccer. This article proposes an end-to-end deep reinforcement learning framework that receives raw tracking data for each situation in a game, and yields the optimal ball destination location on the full surface of the pitch. Using the proposed approach, soccer players and coaches are able to analyze the actual behavior in their historical games, obtain the optimal behavior and plan for future games, and evaluate the outcome of the optimal decisions prior to deployment in a match. Concisely, the results of our optimization model propose more short passes (Tiki-Taka playing style) in all phases of ball possession, and a higher propensity of short-distance shots (i.e., shots in the attack phase). Such a modification would let typical teams increase the likelihood of a possession ending in a goal by 0.025.

  • A data-driven approach to assist offensive and defensive players in optimal decision making
    Pegah Rahimian and Laszlo Toka

    SAGE Publications
    Among all the popular sports, soccer is a relatively long-lasting game with a small number of goals per game. This renders decision-making cumbersome, since it is not straightforward to evaluate the impact of in-game actions apart from goal scoring. Although several action valuation metrics and counterfactual reasoning approaches have been proposed by researchers in recent years, assisting coaches in discovering the optimal actions in different situations of a soccer game has received little attention in soccer analytics. This work proposes the application of deep reinforcement learning on the event and tracking data of soccer matches to discover the most impactful actions at the interrupting point of a possession. Our optimization framework assists players and coaches in inspecting the optimal action, and on a higher level, we indicate the adjustments required of teams in terms of their action frequencies in different pitch zones. The optimization results offer different suggestions for offensive and defensive teams. For the offensive team, the optimal policy suggests more shots from half-spaces (i.e. long-distance shots). For the defending team, the optimal policy suggests that when located in the wings, defensive players should increase the frequency of fouls and ball outs rather than clearances, and when located in the centre, they should increase the frequency of clearances rather than fouls and ball outs.
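
    The paper itself relies on deep reinforcement learning over event and tracking data; as a rough, hedged illustration of the underlying idea of learning zone-dependent action values, the sketch below runs tabular Q-learning on a toy possession model. The zones, actions, transition probabilities, and rewards are all invented for the example and do not come from the paper.

        import random

        # Toy possession MDP: zones, actions and probabilities are invented.
        ZONES = ["own_half", "wing", "centre"]
        ACTIONS = ["short_pass", "long_pass", "shot"]
        GOAL, LOST = "goal", "lost"  # terminal outcomes

        def step(zone, action):
            """Return (next_zone, reward); outcome probabilities are made up."""
            r = random.random()
            if action == "shot":
                p_goal = {"own_half": 0.01, "wing": 0.03, "centre": 0.12}[zone]
                return (GOAL, 1.0) if r < p_goal else (LOST, 0.0)
            if action == "short_pass":
                keep = 0.9
                nxt = {"own_half": "wing", "wing": "centre", "centre": "centre"}[zone]
            else:  # long_pass
                keep = 0.6
                nxt = "centre"
            return (nxt, 0.0) if r < keep else (LOST, 0.0)

        # Tabular Q-learning over (zone, action) pairs.
        q = {(z, a): 0.0 for z in ZONES for a in ACTIONS}
        alpha, gamma, eps = 0.1, 0.95, 0.1
        for _ in range(50_000):
            zone = random.choice(ZONES)
            while zone not in (GOAL, LOST):
                act = (random.choice(ACTIONS) if random.random() < eps
                       else max(ACTIONS, key=lambda x: q[(zone, x)]))
                nxt, reward = step(zone, act)
                future = 0.0 if nxt in (GOAL, LOST) else max(q[(nxt, x)] for x in ACTIONS)
                q[(zone, act)] += alpha * (reward + gamma * future - q[(zone, act)])
                zone = nxt

        for z in ZONES:
            print(z, "->", max(ACTIONS, key=lambda a: q[(z, a)]))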

  • Boat Speed Prediction in SailGP
    Benedek Zentai and László Toka

    Springer Nature Switzerland

  • Pass Receiver and Outcome Prediction in Soccer Using Temporal Graph Networks
    Pegah Rahimian, Hyunsung Kim, Marc Schmid, and Laszlo Toka

    Springer Nature Switzerland

  • Momentum Matters: Investigating High-Pressure Situations in the NBA Through Scoring Probability
    Balazs Mihalyi, Gergely Biczók, and Laszlo Toka

    Springer Nature Switzerland

  • A Survey on Integrating Edge Computing with AI and Blockchain in Maritime Domain, Aerial Systems, IoT, and Industry 4.0
    Amad Alanhdi and László Toka

    Institute of Electrical and Electronics Engineers (IEEE)

  • Minimizing Resource Allocation for Cloud-Native Microservices
    Roland Erdei and Laszlo Toka

    Springer Science and Business Media LLC
    With the continuous progress of cloud computing, many microservices and complex multi-component applications arise for which resource planning is a great challenge. For example, when it comes to data-intensive cloud-native applications, the tenant might be eager to provision cloud resources in an economical manner while ensuring that the application performance meets the requirements in terms of data throughput. However, due to the complexity of the interplay between the building blocks, adequately setting resource limits of the components separately for various data rates is nearly impossible. In this paper, we propose a comprehensive approach that consists of measuring the resource footprint and data throughput performance of such a microservices-based application, analyzing the measurement results by data mining techniques, and finally formulating an optimization problem that aims to minimize the allocated resources given the performance constraints. We illustrate the benefits of the proposed approach on Cortex, an extension to Prometheus for storing monitored metrics data. The data-intensive nature of this illustrative example stems from real-time monitoring of metrics exposed by a multitude of applications running in a data center and the continuous analysis performed on the collected data that can be fetched from Cortex. We present Cortex’s performance vs resource footprint trade-off, and then we build regression models to predict the microservices’ resource consumption and draw a mathematical programming formulation to optimize the most important configuration parameters. Our most important finding is the linear relationship between resource consumption and application performance, which allows for applying linear regression and linear programming models. After the optimization, we compare our results to Cortex’s recommendation, leading to a CPU reservation reduced by 50–80%.
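
    The methodology above combines regression models with a linear program over resource limits. The snippet below reproduces that flow in a heavily simplified form, assuming numpy and scipy are available: it fits a linear model of CPU demand versus data rate for two hypothetical components, then minimizes the total CPU allocation subject to a throughput target. Component names, measurements, and coefficients are illustrative, not the paper's data; the real formulation couples many more components and configuration parameters.

        import numpy as np
        from scipy.optimize import linprog

        # Hypothetical measurements: data rate (kreq/s) vs. CPU cores consumed.
        rates = np.array([1.0, 2.0, 4.0, 8.0])
        cpu_ingester    = np.array([0.3, 0.55, 1.05, 2.1])   # component A
        cpu_distributor = np.array([0.2, 0.35, 0.70, 1.35])  # component B

        # Fit linear models cpu ~= a * rate + b for each component.
        a1, b1 = np.polyfit(rates, cpu_ingester, 1)
        a2, b2 = np.polyfit(rates, cpu_distributor, 1)

        # LP: choose per-component CPU limits x1, x2 minimizing x1 + x2, subject to
        # both limits supporting a target rate of 6 kreq/s, i.e. x_i >= a_i*6 + b_i
        # (rewritten as -x_i <= -(a_i*6 + b_i) for the <= form expected by linprog).
        target = 6.0
        res = linprog(c=[1.0, 1.0],
                      A_ub=[[-1.0, 0.0], [0.0, -1.0]],
                      b_ub=[-(a1 * target + b1), -(a2 * target + b2)],
                      bounds=[(0, None), (0, None)])
        print("CPU limits:", res.x, "total cores:", res.fun)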

  • Real-Time FaaS: Towards a Latency Bounded Serverless Cloud
    Márk Szalay, Péter Mátray, and László Toka

    Institute of Electrical and Electronics Engineers (IEEE)
    Today, Function-as-a-Service is the most promising concept of serverless cloud computing. It makes it possible for developers to focus on application development without any system management effort: FaaS ensures resource allocation, fast response time, schedulability, scalability, resiliency, and upgradability. Applications of 5G, IoT, and Industry 4.0 raise the idea of opening cloud-edge computing infrastructures to time-critical applications as well, i.e., there is a strong desire to pose real-time requirements on computing systems like FaaS. However, multi-node systems make real-time scheduling significantly complex, since guaranteeing real-time task execution and communication is challenging even on one computing node with multi-core processors. In this paper, we present an analytical model and a heuristic partitioning scheduling algorithm suitable for real-time FaaS platforms of multi-node clusters. We show that our task scheduling heuristics could outperform existing algorithms by 55%. Furthermore, we propose three conceptual designs to enable the necessary real-time communications. We present the architecture of the envisioned real-time FaaS platform, emphasize its benefits and the requirements for the underlying network and nodes, and survey the related work that could meet these demands.
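
    As a hedged illustration of partitioned real-time scheduling in general (not the paper's specific heuristic), the snippet below assigns periodic function tasks to worker nodes with a worst-fit-decreasing rule and checks the classic EDF utilization bound (total utilization at most 1) on every node. Task names and parameters are made up.

        # Worst-fit-decreasing partitioning of periodic tasks onto nodes.
        # Each task is (name, wcet_ms, period_ms); utilization = wcet / period.
        tasks = [("f_detect", 4, 20), ("f_fuse", 2, 10), ("f_plan", 5, 50),
                 ("f_log", 1, 100), ("f_ctrl", 3, 10), ("f_map", 8, 40)]
        num_nodes = 2
        nodes = [{"tasks": [], "util": 0.0} for _ in range(num_nodes)]

        for name, wcet, period in sorted(tasks, key=lambda t: t[1] / t[2], reverse=True):
            u = wcet / period
            node = min(nodes, key=lambda n: n["util"])  # worst fit: node with most spare capacity
            if node["util"] + u > 1.0:                  # EDF schedulability bound per node
                raise RuntimeError(f"{name} is not schedulable on any node")
            node["tasks"].append(name)
            node["util"] += u

        for i, n in enumerate(nodes):
            print(f"node {i}: util={n['util']:.2f} tasks={n['tasks']}")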

  • 6G for Connected Sky: A Vision for Integrating Terrestrial and Non-Terrestrial Networks
    Mustafa Ozger, Istvan Godor, Anders Nordlow, Thomas Heyn, Sreekrishna Pandi, Ian Peterson, Alberto Viseras, Jaroslav Holis, Christian Raffelsberger, Andreas Kercek,et al.

    IEEE
    In this paper, we present the vision of our project 6G for Connected Sky (6G-SKY) to integrate terrestrial networks (TNs) and non-terrestrial networks (NTNs) and outline the current research activities in 6G research projects in comparison with our project. From the perspectives of industry and academia, we identify key use case segments connecting both aerial and ground users with our 6G-SKY multi-layer network architecture. We explain functional views of our holistic 6G-SKY architecture addressing the heterogeneity of aerial and space platforms. Architecture elements and communication links are identified. We discuss 6G-SKY network design and management functionalities by considering a set of inherent challenges posed by the multi-layer 3-dimensional networks, which we term the combined airspace and NTN (combined ASN). Finally, we investigate additional research challenges for the 6G-SKY project targets.

  • 5G on the Roads: Latency-Optimized Federated Analytics in the Vehicular Edge
    László Toka, Márk Konrad, István Pelle, Balázs Sonkoly, Marcell Szabó, Bhavishya Sharma, Shashwat Kumar, Madhuri Annavazzala, Sree Teja Deekshitula, and A. Antony Franklin

    Institute of Electrical and Electronics Engineers (IEEE)
    Coordination among vehicular actors becomes increasingly important at the dawn of autonomous driving. With communication serving as the basis for this process, latency emerges as a critical limiting factor in information gathering, processing, and redistribution. While these processes have further implications on data privacy, they are also fundamental in safety and efficiency aspects. In this work, we target exactly these areas: we propose a privacy-preserving system for collecting and sharing data in high-mobility automotive environments that aims to minimize the latency of these processes. Namely, we focus on keeping high definition maps (highly accurate environmental and road maps with dynamic information) up-to-date in a crowd-sourced fashion. We employ federated analytics for privacy-preserving, low-latency, scalable processing and data distribution running over a two-tiered infrastructural layout consisting of mobile vehicular nodes and static nodes leveraging the low latency, high throughput and broadcast capabilities of the 5G edge. We take advantage of this setup by proposing queuing theory based analytical models and optimizations to minimize information delivery latency. As our numerical simulations over wide parameter-ranges indicate, the latency of timely data distribution can be decreased only with careful system planning and 5G infrastructure. We obtain the optimal latency characteristics in densely populated central metropolitan scenarios when Gb/s uplink speeds are achievable and the coverage area (map segment size) can reach a diameter of 1km.
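
    A hedged, back-of-the-envelope version of the queuing-theoretic reasoning above: the snippet models the edge node's map-update processing as a single M/M/1 stage, whose mean sojourn time is 1 / (mu - lambda), and shows how latency grows nonlinearly with the number of served vehicles. All rates are invented; the models and parameter studies in the paper are far richer.

        # M/M/1 building block used in such analytical latency models.
        def mm1_sojourn_ms(arrival_per_s, service_per_s):
            if arrival_per_s >= service_per_s:
                return float("inf")  # queue is unstable
            return 1000.0 / (service_per_s - arrival_per_s)

        service_rate = 5000.0  # hypothetical map-update aggregations per second at the edge node
        for vehicles in (100, 500, 1000, 2000, 2400):
            arrival_rate = vehicles * 2.0  # assumed two updates per vehicle per second
            wait = mm1_sojourn_ms(arrival_rate, service_rate)
            print(f"{vehicles:5d} vehicles -> mean processing latency {wait:7.2f} ms")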

  • Federated learning for vehicular coordination use cases
    László Toka, Márk Konrád, István Pelle, Balázs Sonkoly, Marcell Szabó, Bhavishya Sharma, Shashwat Kumar, Madhuri Annavazzala, Sree Teja Deekshitula, and A Antony Franklin

    IEEE
    Vehicular coordination and communication tasks are crucial aspects of enabling autonomous driving, guaranteeing safety and efficiency. In our present work, we explore methods for collecting and distributing information among participants by employing collaboratively-built high-definition maps that contain fine-grained contextual data. We leverage a hierarchical federated learning structure and anticipatory onboarding of the maps through a mobility-aware content caching scheme and minimize the delay of data delivery in both subsystems. We provide analytical models built on queuing theory and integer linear programming and evaluate essential system parameters in an emulation testbed. Based on our results, we conclude that we can significantly reduce the delay in delivering timely information to vehicular clients by introducing intermediary layers in the federated learning structure and by pre-loading current map tiles corresponding to vehicle paths.

  • 5G on the roads: optimizing the latency of federated analysis in vehicular edge networks
    László Toka, Márk Konrád, István Pelle, Balázs Sonkoly, Marcell Szabó, Bhavishya Sharma, Shashwat Kumar, Madhuri Annavazzala, Sree Teja Deekshitula, and A Antony Franklin

    IEEE
    At the dawn of autonomous driving, vehicular communications and coordination become more vital than ever. Fast information gathering, processing and sharing creates the basis of safety and efficiency, the main promises of conceding the control of vehicles from humans to machines. In this paper we propose to deploy an information gathering and distributing system that aims exactly at minimizing the latency of delivering the essential information to the end clients. We specifically tackle the crowd-sourced maintenance of high definition maps, i.e., road maps with extremely high accuracy and environmental fidelity containing dynamic information about the traffic as well, via a federated analysis scheme, and by broadcasting those maps through a 5G network. The system is designed for minimizing the latency of information delivery: analytical models based on queuing theory and optimization are proposed, and a wide range of system parameters are evaluated in numerical simulations. We find that the latency of delivering timely high quality information to end clients can be reduced with careful dimensioning of the system. According to our measurements, high-speed 5G data connection is a must, as we reach the optimal latency by building map segments with 1km in diameter via Gb/s uplink speeds in densely populated central metropolitan settings.

  • A Comprehensive Performance Analysis of Stream Processing with Kafka in Cloud Native Deployments for IoT Use-cases
    István Pelle, Bence Szőke, Abdulhalim Fayad, Tibor Cinkler, and László Toka

    IEEE
    The constant growth of the number of Internet of Things devices drives a huge increase in data that needs to be analyzed, at times in real time. Multiple platforms are available for delivering such data to analytics engines that can perform various operations on the data with low processing latency. These platforms can find their home in cloud native environments where high availability and scaling to the actual workload can be easily achieved. While the deployment environment is elastic, clusters still need to be adequately dimensioned to accommodate the components of the platforms even under high load. In this paper, we provide an analysis in this regard: we discuss key performance indicators of the popular Kafka message bus and the related Kafka Streams processing engine. Namely, we analyze latency, throughput, and CPU and memory resource footprint aspects of these services under varying load and processing tasks that appear in Internet of Things applications. We find subsecond processing latency, and linear but heavily task-dependent scaling behavior in the other performance indicators.
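
    As a hedged sketch of how the produce-to-consume latency of such a Kafka pipeline can be probed (this is not the measurement harness used in the paper), the snippet below timestamps messages at the producer and computes the delay at the consumer. It assumes the kafka-python client, a broker reachable at localhost:9092, and a pre-created topic named iot-demo.

        import json, time
        from kafka import KafkaProducer, KafkaConsumer

        TOPIC = "iot-demo"  # assumed, pre-created topic
        producer = KafkaProducer(bootstrap_servers="localhost:9092",
                                 value_serializer=lambda v: json.dumps(v).encode())
        consumer = KafkaConsumer(TOPIC, bootstrap_servers="localhost:9092",
                                 auto_offset_reset="latest",
                                 value_deserializer=lambda v: json.loads(v.decode()))
        consumer.poll(timeout_ms=1000)  # force partition assignment before producing

        # Send 100 probe messages carrying their production timestamp.
        for i in range(100):
            producer.send(TOPIC, {"seq": i, "sent_at": time.time()})
        producer.flush()

        # Read them back and report the observed produce-to-consume latency.
        latencies = []
        for msg in consumer:
            latencies.append(time.time() - msg.value["sent_at"])
            if len(latencies) == 100:
                break
        print(f"mean latency: {1000 * sum(latencies) / len(latencies):.1f} ms")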

  • Let’s Penetrate the Defense: A Machine Learning Model for Prediction and Valuation of Penetrative Passes
    Pegah Rahimian, Dayana Grayce da Silva Guerra Gomes, Fanni Berkovics, and Laszlo Toka

    Springer Nature Switzerland

  • Optimizing and dimensioning a data intensive cloud application for soccer player tracking
    Gergely Dobreff, Marton Molnar, and Laszlo Toka

    Walter de Gruyter GmbH
    Cloud-based services revolutionize how applications are designed and provisioned in more and more application domains. Operating a cloud application, however, requires careful choices of configuration settings so that the quality of service is acceptable at all times, while cloud costs remain reasonable. We propose an analytical queuing model for cloud resource provisioning that provides an approximation of end-to-end application latency and of cloud resource usage, and we evaluate its performance. We pick an emerging use case of cloud deployment for validation: sports analytics. We have created a low-cost, cloud-based soccer player tracking system. We present the optimization of the cloud-deployed data processing of this system: we set the parameters with the aim of sacrificing as little as possible on accuracy, i.e., quality of service, while keeping latency and cloud costs low. We demonstrate that the analytical model we propose to estimate the end-to-end latency of a microservice-type cloud native application falls within a close range of what the measurements of the real implementation show. The model is therefore suitable for the planning of the cloud deployment costs for microservice-type applications as well.

  • Optical tracking in team sports: A survey on player and ball tracking methods in soccer and other team sports
    Pegah Rahimian and Laszlo Toka

    Walter de Gruyter GmbH
    Sports analysis has gained paramount importance for coaches, scouts, and fans. Recently, computer vision researchers have taken on the challenge of collecting the necessary data by proposing several methods of automatic player and ball tracking. Building on the gathered tracking data, data miners are able to perform quantitative analysis on the performance of players and teams. With this survey, our goal is to provide a basic understanding for quantitative data analysts about the process of creating the input data and the characteristics thereof. Thus, we summarize the recent methods of optical tracking by providing a comprehensive taxonomy of conventional and deep learning methods, separately. Moreover, we discuss the preprocessing steps of tracking, the most common challenges in this domain, and the application of tracking data to sports teams. Finally, we compare the methods by their cost and limitations, and conclude the work by highlighting potential future research directions.

  • Cost and Latency Optimized Edge Computing Platform
    István Pelle, Márk Szalay, János Czentye, Balázs Sonkoly, and László Toka

    MDPI AG
    Latency-critical applications, e.g., automated and assisted driving services, can now be deployed in fog or edge computing environments, offloading energy-consuming tasks from end devices. Besides the proximity, though, the edge computing platform must provide the necessary operation techniques in order to avoid added delays by all means. In this paper, we propose an integrated edge platform that comprises orchestration methods with such objectives, in terms of handling the deployment of both functions and data. We show how the integration of the function orchestration solution with the adaptive data placement of a distributed key–value store can lead to decreased end-to-end latency even when the mobility of end devices creates a dynamic set of requirements. Along with the necessary monitoring features, the proposed edge platform is capable of serving the nomad users of novel applications with low latency requirements. We showcase this capability in several scenarios, in which we articulate the end-to-end latency performance of our platform by comparing delay measurements with the benchmark of a Redis-based setup lacking the adaptive nature of data orchestration. Our results prove that the stringent delay requisites necessitate the close integration that we present in this paper: functions and data must be orchestrated in sync in order to fully exploit the potential that the proximity of edge resources enables.

  • LSSO: Long Short-Term Scaling Optimizer
    Balazs Fodor, Laszlo Toka, and Balazs Sonkoly

    IEEE
    Demand forecast-based resource allocation is a key element of both application and network service management and resource cost optimization in cloud environments. A central mechanism called scaling or auto-scaling is responsible for adjusting the allocated resources dynamically to current or predicted demands. Most of today's solutions support short-term optimization, where the long-term effect of scaling actions is not considered; therefore, the overall operational cost can be sub-optimal. In this paper, we address this issue and provide a long and short-term scaling optimization method called LSSO. Our contribution is threefold. First, we introduce the formal description of the LSSO scaling problem motivated by real cloud native applications. Second, we transform the problem into a tractable graph representation and show that the optimal solution of the original problem emerges as a shortest path problem in the graph. As a result, the transformation and the optimal solution can be computed in polynomial time. Third, we evaluate and validate the proposed algorithm by numerical analysis on multiple datasets using a cost model which considers the running and scaling costs. Results suggest that if the traffic dynamics are fairly predictable and the scaling cost is not negligible, LSSO is able to reduce the operation costs significantly. The exact cost gain depends on the ratio of the running and scaling costs, but in realistic operation regimes it can even reach 18%, and the method can also tolerate some inaccuracies in the forecast.
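
    To illustrate the kind of graph formulation described above (a simplified stand-in, not LSSO itself), the sketch below plans replica counts over a forecast horizon: states are (time slot, replica count) pairs, edge costs combine running cost and scaling cost, and the cheapest path through the layered graph, found here by dynamic programming, gives the scaling plan. Demand figures and cost coefficients are invented.

        # Plan replica counts over a forecast horizon as a shortest path over
        # (time slot, replica count) states. Simplified stand-in for LSSO.
        forecast = [2, 3, 7, 8, 8, 4, 2, 2]    # replicas needed per slot (demand)
        MAX_REPLICAS = 10
        RUN_COST = 1.0                          # cost of one replica for one slot
        SCALE_COST = 3.0                        # cost of changing the replica count by one

        INF = float("inf")
        best = [0.0 if r == 0 else INF for r in range(MAX_REPLICAS + 1)]  # start with 0 replicas
        prev = []
        for demand in forecast:
            nxt = [INF] * (MAX_REPLICAS + 1)
            choice = [0] * (MAX_REPLICAS + 1)
            for r in range(demand, MAX_REPLICAS + 1):   # the replica count must cover demand
                for p in range(MAX_REPLICAS + 1):
                    if best[p] == INF:
                        continue
                    c = best[p] + r * RUN_COST + abs(r - p) * SCALE_COST
                    if c < nxt[r]:
                        nxt[r], choice[r] = c, p
            best = nxt
            prev.append(choice)

        # Backtrack the cheapest plan from the last slot.
        r = min(range(MAX_REPLICAS + 1), key=lambda x: best[x])
        plan = []
        for choice in reversed(prev):
            plan.append(r)
            r = choice[r]
        plan.reverse()
        print("forecast:", forecast)
        print("plan:    ", plan)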

  • The Shape of Your Cloud: How to Design and Run Polylithic Cloud Applications
    Laszlo Toka

    Institute of Electrical and Electronics Engineers (IEEE)
    Nowadays the major trend in IT dictates deploying applications in the cloud, cutting the monolithic software into small, easily manageable and developable components, and running them in a microservice scheme. With these choices come the questions: which cloud service types to choose from the several available options, and how to distribute the monolith in order to best resonate with the selected cloud features. We propose a model that presents monolithic applications in a novel way and focuses on key properties that are crucial in the development of cloud-native applications. The model focuses on the organization of scaling units, and it accounts for the cost of provisioned resources in scale-out periods and invocation delays among the application components. We analyze disaggregated monolithic applications that are deployed in a cloud offering both Container-as-a-Service (CaaS) and Function-as-a-Service (FaaS) platforms. We showcase the efficiency of our proposed optimization solution by presenting the reduction in operation costs as an illustrative example. We propose grouping components with similarly low scaling demand together in CaaS, while running dynamically scaled components in FaaS. By doing so, the price is decreased as unnecessary memory provisioning is eliminated, while application response time does not show any degradation.
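
    A minimal numeric sketch of the cost trade-off discussed above: for each component, compare the cost of keeping it provisioned on a CaaS platform against paying per invocation on FaaS, then place it on the cheaper side. The prices, memory sizes, and invocation rates are hypothetical and do not come from the paper, and the sketch ignores scale-out dynamics and invocation delays that the actual model accounts for.

        # Place each component on CaaS or FaaS by comparing monthly cost.
        # All prices and workload figures below are hypothetical.
        CAAS_PRICE_GB_HOUR = 0.005          # $ per provisioned GB-hour
        FAAS_PRICE_GB_S = 0.0000166667      # $ per GB-second actually consumed
        HOURS = 730                          # hours per month

        components = [
            # (name, memory_gb, invocations_per_s, avg_duration_s)
            ("auth",     0.25,  2.0,  0.05),
            ("report",   1.00,  0.01, 2.0),
            ("frontend", 0.50, 40.0,  0.02),
            ("billing",  0.50,  0.2,  0.1),
        ]

        for name, mem_gb, rate, duration in components:
            caas = mem_gb * HOURS * CAAS_PRICE_GB_HOUR
            faas = mem_gb * rate * duration * 3600 * HOURS * FAAS_PRICE_GB_S
            where = "CaaS" if caas <= faas else "FaaS"
            print(f"{name:9s} CaaS ${caas:7.2f}  FaaS ${faas:7.2f}  -> {where}")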

  • Optimal Resource Provisioning for Data-intensive Microservices
    Roland Mark Erdei and Laszlo Toka

    IEEE
    With the continuous progress of cloud computing, many microservices and complex multi-component applications arise for which resource planning is a great challenge. For example, when it comes to data-intensive cloud-native applications, the tenant might be eager to provision cloud resources in an economical manner while ensuring that the application performance meets the requirements in terms of data throughput. However, due to the complexity of the interplay between the building blocks, adequately setting resource limits of the components separately for various data rates is nearly impossible. In this paper, we propose a comprehensive approach that consists of measuring the resource footprint and data throughput performance of such a microservices-based application, analyzing the measurement results by data mining techniques, and finally formulating an optimization problem that aims to minimize the allocated resources given the performance constraints. We illustrate the benefits of the proposed approach on Cortex, an extension to Prometheus for storing monitored metrics data. The data-intensive nature of this illustrative example stems from real-time monitoring of metrics exposed by a multitude of applications running in a data center and the continuous analysis performed on the collected data that can be fetched from Cortex. We present Cortex’s performance vs resource footprint trade-off, and then we build regression models to predict the microservices’ resource consumption and draw a mathematical programming formulation to optimize the most important configuration parameters. Our most important finding is the linear relationship between resource consumption and application performance, which allows for applying linear regression and linear programming models. After the optimization, we compare our results to Cortex’s recommendation, leading to a CPU reservation reduced by 50-80%.

  • Optimizing Performance and Resource Consumption of Cloud-Native Logging Application Stacks
    Gergo Csati, Istvan Pelle, and Laszlo Toka

    IEEE
    Nowadays cloud-based applications and Internet of Things use-cases are becoming more and more common in the field of IT benefiting from the virtually limitless resources and microservice-based deployment options available in the cloud. Observability in such environments is key for tracing application execution to detect possible malfunctions and anomalies. Collecting logs can greatly help in this regard, however, a high volume of logging data can add huge costs for the maintenance of the infrastructure gathering monitoring data. In order to increase the profitability of the application, monitoring-related infrastructure needs to have the lowest cost possible while still being able to fully serve the application’s monitoring needs. In this work, we investigate this aspect and provide an evaluation of the resource footprint of one of the most prominent log collection services, Elastic Stack, from the perspective of its write path.


  • Predicting Player Transfers in the Small World of Football
    Roland Kovacs and Laszlo Toka

    Springer International Publishing

  • A Career in Football: What is Behind an Outstanding Market Value?
    Balazs Acs and Laszlo Toka

    Springer International Publishing

RECENT SCHOLAR PUBLICATIONS

  • A Survey on Integrating Edge Computing With AI and Blockchain in Maritime Domain, Aerial Systems, IoT, and Industry 4.0
    A Alanhdi, L Toka
    IEEE Access 12, 28684-28709 2024

  • A career handbook for professional soccer players
    B Ács, R Kovács, L Toka
    International Journal of Sports Science & Coaching 19 (1), 444-458 2024

  • Towards maximizing expected possession outcome in soccer
    P Rahimian, J Van Haaren, L Toka
    International Journal of Sports Science & Coaching 19 (1), 230-244 2024

  • A data-driven approach to assist offensive and defensive players in optimal decision making
    P Rahimian, L Toka
    International Journal of Sports Science & Coaching 19 (1), 245-256 2024

  • Boat Speed Prediction in SailGP
    B Zentai, L Toka
    International Workshop on Machine Learning and Data Mining for Sports 2023

  • Momentum Matters: Investigating High-Pressure Situations in the NBA Through Scoring Probability
    B Mihalyi, G Biczók, L Toka
    International Workshop on Machine Learning and Data Mining for Sports 2023

  • Pass Receiver and Outcome Prediction in Soccer Using Temporal Graph Networks
    P Rahimian, H Kim, M Schmid, L Toka
    International Workshop on Machine Learning and Data Mining for Sports 2023

  • 5g on the roads: Latency-optimized federated analytics in the vehicular edge
    L Toka, M Konrád, I Pelle, B Sonkoly, M Szabó, B Sharma, S Kumar, ...
    IEEE Access 2023

  • Erőforrás-kezelés felhőrendszerekben = Resource provisioning in cloud systems
    L Toka
    Budapesti Műszaki és Gazdaságtudományi Egyetem 2023

  • 6G for Connected Sky: A Vision for Integrating Terrestrial and Non-Terrestrial Networks
    M Ozger, I Godor, A Nordlow, T Heyn, S Pandi, I Peterson, A Viseras, ...
    2023 Joint European Conference on Networks and Communications & 6G Summit 2023

  • 5G on the roads: optimizing the latency of federated analysis in vehicular edge networks
    L Toka, M Konrád, I Pelle, B Sonkoly, M Szabó, B Sharma, S Kumar, ...
    NOMS 2023-2023 IEEE/IFIP Network Operations and Management Symposium, 1-5 2023

  • Federated learning for vehicular coordination use cases
    L Toka, M Konrád, I Pelle, B Sonkoly, M Szabó, B Sharma, S Kumar, ...
    NOMS 2023-2023 IEEE/IFIP Network Operations and Management Symposium, 1-6 2023

  • A Comprehensive Performance Analysis of Stream Processing with Kafka in Cloud Native Deployments for IoT Use-cases
    I Pelle, B Szőke, A Fayad, T Cinkler, L Toka
    NOMS 2023-2023 IEEE/IFIP Network Operations and Management Symposium, 1-6 2023

  • Minimizing Resource Allocation for Cloud-Native Microservices
    R Erdei, L Toka
    Journal of Network and Systems Management 31 (2), 35 2023

  • LSSO: Long short-term scaling optimizer
    B Fodor, L Toka, B Sonkoly
    2022 IEEE Conference on Network Function Virtualization and Software Defined 2022

  • Let’s penetrate the defense: a machine learning model for prediction and valuation of penetrative passes
    P Rahimian, DG da Silva Guerra Gomes, F Berkovics, L Toka
    International Workshop on Machine Learning and Data Mining for Sports 2022

  • The Shape of Your Cloud: How to Design and Run Polylithic Cloud Applications
    L Toka
    IEEE Access 10, 97971-97982 2022

  • Technique for online video-gaming with sports equipment
    I Gódor, G Feher, AB Lajtha, L Toka, A Vidacs
    US Patent 11,318,377 2022

  • Optimal resource provisioning for data-intensive microservices
    RM Erdei, L Toka
    NOMS 2022-2022 IEEE/IFIP Network Operations and Management Symposium, 1-6 2022

  • Optimizing Performance and Resource Consumption of Cloud-Native Logging Application Stacks
    G Csáti, I Pelle, L Toka
    NOMS 2022-2022 IEEE/IFIP Network Operations and Management Symposium, 1-4 2022

MOST CITED SCHOLAR PUBLICATIONS

  • 5g support for industrial iot applications—challenges, solutions, and research gaps
    P Varga, J Peto, A Franko, D Balla, D Haja, F Janky, G Soos, D Ficzere, ...
    Sensors 20 (3), 828 2020
    Citations: 223

  • Traffic analysis for HTTP user agent based device category mapping
    P Kersch, G Nemeth, L Toka
    US Patent 9,755,919 2017
    Citations: 101

  • Machine learning-based scaling management for kubernetes edge clusters
    L Toka, G Dobreff, B Fodor, B Sonkoly
    IEEE Transactions on Network and Service Management 18 (1), 958-972 2021
    Citations: 99

  • Analysis of end‐to‐end multi‐domain management and orchestration frameworks for software defined infrastructures: an architectural survey
    R Guerzoni, I Vaishnavi, D Perez Caparros, A Galis, F Tusa, P Monti, ...
    Transactions on Emerging Telecommunications Technologies 28 (4), e3103 2017
    Citations: 80

  • Online data backup: A peer-assisted approach
    L Toka, M Dell'Amico, P Michiardi
    2010 IEEE Tenth International Conference on Peer-to-Peer Computing (P2P), 1-10 2010
    Citations: 69

  • Survey on placement methods in the edge and beyond
    B Sonkoly, J Czentye, M Szalay, B Németh, L Toka
    IEEE Communications Surveys & Tutorials 23 (4), 2590-2629 2021
    Citations: 53

  • Adaptive AI-based auto-scaling for Kubernetes
    L Toka, G Dobreff, B Fodor, B Sonkoly
    2020 20th IEEE/ACM International Symposium on Cluster, Cloud and Internet 2020
    Citations: 48

  • Transition to SDN is HARMLESS: Hybrid architecture for migrating legacy ethernet switches to SDN
    L Csikor, M Szalay, G Rétvári, G Pongrácz, DP Pezaros, L Toka
    IEEE/ACM Transactions On Networking 28 (1), 275-288 2020
    Citations: 44

  • Orchestration of network services across multiple operators: The 5G exchange prototype
    A Sgambelluri, F Tusa, M Gharbaoui, E Maini, L Toka, JM Perez, ...
    2017 European Conference on Networks and Communications (EuCNC), 1-5 2017
    Citations: 39

  • Manufactured by software: SDN-enabled multi-operator composite services with the 5G exchange
    G Biczok, M Dramitinos, L Toka, PE Heegaard, H Lonsethagen
    IEEE Communications Magazine 55 (4), 80-86 2017
    Citations: 36

  • NFPA: Network Function Performance Analyzer
    L Csikor, M Szalay, B Sonkoly, L Toka
    2015
    Citations: 31

  • Sharpening kubernetes for the edge
    D Haja, M Szalay, B Sonkoly, G Pongracz, L Toka
    Proceedings of the ACM SIGCOMM 2019 Conference Posters and Demos, 136-137 2019
    Citations: 30

  • Automatic protocol signature generation framework for deep packet inspection
    G Szabó, Z Turányi, L Toka, S Molnár, A Santos
    5th International ICST Conference on Performance Evaluation Methodologies 2012
    Citations: 30

  • On incentives in global wireless communities
    G Biczók, L Toka, A Vidács, TA Trinh
    Proceedings of the 1st ACM workshop on User-provided networking: challenges 2009
    Citations: 30

  • Systems and Methods for Identifying Applications in Mobile Networks
    P Hága, Z Kenesi, L Toka, A Veres
    US Patent App. 14/268,529 2014
    Citations: 29

  • Data transfer scheduling for p2p storage
    L Toka, M Dell'Amico, P Michiardi
    2011 IEEE International Conference on Peer-to-Peer Computing, 132-141 2011
    Citations: 29

  • Ultra-reliable and low-latency computing in the edge with kubernetes
    L Toka
    Journal of Grid Computing 19 (3), 31 2021
    Citations: 28

  • Scalable edge cloud platforms for IoT services
    B Sonkoly, D Haja, B Németh, M Szalay, J Czentye, R Szabó, R Ullah, ...
    Journal of Network and Computer Applications 170, 102785 2020
    Citations: 27

  • A resource-aware and time-critical IoT framework
    L Toka, B Lajtha, É Hosszú, B Formanek, D Géhberger, J Tapolcai
    IEEE INFOCOM 2017-IEEE Conference on Computer Communications, 1-9 2017
    Citations: 27

  • Managing a peer-to-peer data storage system in a selfish society
    P Maillé, L Toka
    IEEE Journal on Selected Areas in Communications 26 (7), 1295-1301 2008
    Citations: 24