All tutorials will be held on 22 September 2019.
T1: Learning-based Wireless Positioning and Wireless Sensing: from Meter to Centimeter Precision
Presented by: Kai-Ten Feng (National Chiao Tung Univ.) and Po-Hsuan Tseng (National Taipei Univ. of Technology)
Time: 9:00–12:30
Room: TBA
Abstract—This tutorial presents the fundamental limits of wireless positioning, including non-line-of-sight paths, multipath attenuation, lack of map information, and time-varying interference caused by environmental changes and/or blockage by people. We discuss how signal processing and machine/deep learning techniques enhance positioning performance from meter to centimeter precision. The tutorial focuses on general positioning/sensing problems using channel state information and received signal strength measurements. The described algorithms/implementations can be directly applied to wireless localization and sensing in in-car, roadside, and drone-landing applications.
Tutorial Objectives
First, we will discuss various types of wireless localization and tracking technologies, including ranging/angle-based, database matching-based, and sensor fusion-based techniques.
Secondly, we will examine the factors that limit wireless positioning performance in terms of the theoretical limits. We will take Wi-Fi-based technologies as an example in this tutorial and focus on techniques related to the received signal strength (RSS) and channel state information (CSI). Based on multipath channels, we will explain the problem of non-line-of-sight paths. With the help of map information, the surrounding geometric layout provides further prior information for position estimation. The remaining, unavoidable time-varying interference, caused by environmental changes and/or blockage by people, limits the performance of wireless positioning.
From the signal processing perspective, we examine how to obtain the time-of-arrival (TOA) and the angle-of-arrival (AOA) using the Wi-Fi system. Based on the distance information obtained from TOA or RSS and the angle information from AOA, several recent proposals that achieve decimeter-level accuracy will be presented. Moreover, by utilizing broader bandwidths to enhance time resolution, splicing methods that combine multiple Wi-Fi channels will be discussed.
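To make the ranging-based approach concrete, the following minimal sketch (not taken from the tutorial material) converts RSS measurements into distances with a log-distance path-loss model and then solves a linearized least-squares trilateration; the anchor positions, reference power, and path-loss exponent are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative anchor positions (m) and log-distance path-loss parameters (assumed values).
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
P0, n_exp = -40.0, 2.5            # RSS at 1 m (dBm) and path-loss exponent

def rss_to_distance(rss_dbm):
    """Invert the log-distance model RSS = P0 - 10*n*log10(d)."""
    return 10.0 ** ((P0 - rss_dbm) / (10.0 * n_exp))

def trilaterate(anchors, d):
    """Linearized least squares: subtract the last anchor's range equation from the others."""
    x_n, d_n = anchors[-1], d[-1]
    A = 2.0 * (anchors[:-1] - x_n)
    b = d_n**2 - d[:-1]**2 + np.sum(anchors[:-1]**2, axis=1) - np.sum(x_n**2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Example: noisy RSS observations for a target at (3, 4) m.
true_pos = np.array([3.0, 4.0])
d_true = np.linalg.norm(anchors - true_pos, axis=1)
rss = P0 - 10.0 * n_exp * np.log10(d_true) + rng.normal(0.0, 1.0, size=4)
print(trilaterate(anchors, rss_to_distance(rss)))   # estimate close to (3, 4)
```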
Finally, we discuss how machine learning/deep learning approaches overcome positioning problems that cannot be well modeled. Channel state information-based fingerprinting methods have been proposed for highly accurate positioning down to the centimeter level. The adoption of clustering techniques during database collection can ease the effect of time-varying channels. With map information, skeleton-based location tracking is designed for automatic construction of the walkable region and constrains the possible position at the next time instant using the generalized Voronoi diagram. Moreover, we will address device-free wireless positioning and sensing, which can be utilized for high-precision drone landing and presence detection for multiple targets.
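The core of the fingerprinting idea can be sketched in a few lines: an offline database maps reference positions to CSI (or RSS) signatures, and the online phase matches a new measurement to its k nearest fingerprints in signal space. The snippet below is a toy illustration with synthetic CSI magnitudes and invented dimensions, not the tutorial's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
subcarriers = np.arange(30)
ref_points = np.array([[x, y] for x in range(5) for y in range(5)], dtype=float)

def synth_csi(pos, noise=0.05):
    """Synthetic CSI magnitude profile that varies smoothly with position (illustration only)."""
    x, y = pos
    return (np.cos(0.5 * (x + 1) * np.pi * subcarriers / 30)
            + np.sin(0.3 * (y + 1) * np.pi * subcarriers / 30)
            + rng.normal(scale=noise, size=subcarriers.size))

# Offline phase: collect one fingerprint per reference point.
database = np.array([synth_csi(p) for p in ref_points])

def knn_locate(csi_query, database, ref_points, k=3):
    """Online phase: weighted k-nearest-neighbour match in signal space."""
    dist = np.linalg.norm(database - csi_query, axis=1)
    idx = np.argsort(dist)[:k]
    w = 1.0 / (dist[idx] + 1e-9)          # closer fingerprints get larger weights
    return (w[:, None] * ref_points[idx]).sum(axis=0) / w.sum()

print(knn_locate(synth_csi((2.0, 3.0)), database, ref_points))  # estimate near (2, 3)
```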
Tutorial Outline
- Introduction of Positioning: State of Art Technologies
- Received Signal Strength vs Channel State Information (CSI)
- Fundamental Limits of Wireless Positioning
- Multi-path Attenuation
- Non-line-of-sight Paths
- Lack of Map Information
- Time-varying Interferences Caused by Environmental Changes and/or People Blocking
- Signal Processing Aspects of Positioning Technologies
- Avoiding Multi-path to Revive In-building WiFi Localization – CUPID: RSS/AoA/IMU-based methods
- Decimeter-level Localization Using WiFi – SpotFi: ToA/AoA-based methods
- CSI Splicing Methods: Enhancing Time-Resolution using Multiple Wi-Fi Channels
- Machine/Deep Learning Techniques on Wireless Positioning and Wireless Sensing
- Channel State Information-based Positioning Fingerprinting
- Super-resolution-based CSI Fingerprinting
- Deep Neural Network-based CSI Fingerprinting
- Autoencoder-based CSI Hidden Feature Extraction for Spot Localization
- Time-varying Interferences Caused by Environmental Changes and/or People Blocking
- Map Information-Assisted: Spatial Skeleton-Enhanced Location Tracking
- Device-free CSI-based Wireless Localization for High Precision Drone Landing Applications
- Device-Free Multiple Presence Detection using CSI Information with Machine Learning Methods
Primary Audience
This tutorial targets both academic researchers and industrial engineers who are interested in wireless positioning and sensing. The described algorithms/implementations can be directly applied to wireless localization and sensing in in-car, roadside, and drone-landing applications. The covered methodologies include conventional model-based and emerging AI-based approaches, which are suitable for graduate/post-graduate students in the fields of signal processing and computer science.
Novelty
This tutorial explains the fundamental limits of wireless positioning. It reviews how recent model-based approaches reach their performance limits using signal processing methodologies. To deal with positioning problems that are difficult to model, the design of machine learning/deep learning methods and their corresponding advantages are presented. Moreover, experimental implementations of wireless localization based on machine learning/deep learning methods and the results of field trials will also be demonstrated.
Biography
Kai-Ten Feng received the B.S. degree from the National Taiwan University, Taipei, Taiwan, in 1992, the M.S. degree from the University of Michigan, Ann Arbor, MI, USA, in 1996, and the Ph.D. degree from the University of California at Berkeley, Berkeley, CA, USA, in 2000. Since August 2011, he has been a Full Professor with the Department of Electrical and Computer Engineering, National Chiao Tung University (NCTU), Hsinchu, Taiwan, where he was an Associate Professor and Assistant Professor from August 2007 to July 2011 and from February 2003 to July 2007, respectively. He has served as the Associate Dean of the College of Electrical and Computer Engineering, NCTU, since February 2017. From July 2009 to March 2010, he was a Visiting Research Fellow with the Department of Electrical and Computer Engineering, University of California at Davis. Between 2000 and 2003, he was an In-Vehicle Development Manager/Senior Technologist with OnStar Corporation, a subsidiary of General Motors Corporation, where he worked on the design of future telematics platforms and in-vehicle networks. His research interests include broadband wireless networks, cooperative and cognitive networks, smartphone and embedded system designs, wireless location technologies, and intelligent transportation systems. Dr. Feng was the recipient of the Best Paper Award from the Spring 2006 IEEE Vehicular Technology Conference, which ranked his paper first among the 615 accepted papers. He was also the recipient of the Outstanding Youth Electrical Engineer Award in 2007 from the Chinese Institute of Electrical Engineering, and the Distinguished Researcher Award from NCTU in 2008, 2010, and 2011. Since 2018, he has been serving as the Technical Advisor for the IEEE-HKN Honor Society and the National Academy of Engineering Grand Challenges Scholars Program at NCTU. He has also served on the technical program committees of various international conferences.
Po-Hsuan Tseng received the B.S. and Ph.D. degrees in communication engineering from the National Chiao Tung University, Hsinchu, Taiwan, in 2005 and 2011, respectively. Since Feb. 2017, he has been an Associate Professor with the Department of Electronic Engineering, National Taipei University of Technology, Taipei, Taiwan, where he was an Assistant Professor from Aug. 2012 to Jan. 2017. His current research focuses on signal processing for networking and communications, including location estimation and tracking, and mobile broadband access system design.
T2: Communication Networks Design: Model-Based, Data-Driven, or Both?
Presented by: Alessio Zappone (CentraleSupelec), Marco Di Renzo (Paris-Saclay Univ.), Merouane Debbah (Huawei R&D)
Time: 14:00–17:30
Room: TBA
Abstract—The tutorial will provide the audience with a solid understanding of the fundamentals of deep learning and its use for the design of wireless communications. Artificial neural networks, which are the distinctive feature of deep learning compared to other machine learning methods, will be introduced. The main artificial neural network architectures will be described, focusing in particular on feedforward networks and on the problem of their supervised training. The most widely used methods for neural network training will be described, and tips and tricks to improve the training process will be explained.
After introducing the fundamentals of deep learning, the tutorial will address how deep learning can be merged with more traditional model-based approaches to perform wireless network design, exploiting the frameworks of transfer learning and reinforcement learning. Several relevant applications will be described, quantifying the advantages of embedding expert knowledge from theoretical models into data-driven methods, considering diverse system scenarios such as dense heterogeneous cellular networks, energy-efficient networks, network-slicing systems, and chemical-based communication systems. A main conclusion drawn by the tutorial is that embedding expert knowledge into traditional neural network design can significantly reduce the amount of data necessary for training, thus significantly simplifying the overall system design.
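A minimal sketch of this idea of embedding model-based expert knowledge into a data-driven method (the "learning to optimize" item in the outline below), under illustrative assumptions: a closed-form model-based rule (here, textbook water-filling power allocation over parallel channels) labels the training data, and a small feedforward network is trained by plain gradient descent to approximate it. The sizes, hyperparameters, and the choice of water-filling as the expert are invented for illustration and are not the tutorial's own examples.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P_TOT = 4, 1.0                        # parallel channels and power budget (assumptions)

def water_filling(g, p_tot=P_TOT):
    """Model-based 'expert': classic water-filling power allocation over channel gains g."""
    lo, hi = 0.0, p_tot + 1.0 / g.min()
    for _ in range(60):                  # bisection on the water level mu
        mu = 0.5 * (lo + hi)
        p = np.maximum(mu - 1.0 / g, 0.0)
        lo, hi = (mu, hi) if p.sum() < p_tot else (lo, mu)
    return np.maximum(mu - 1.0 / g, 0.0)

# Expert knowledge generates the labelled training set.
G = rng.uniform(0.2, 2.0, size=(2000, N))
Y = np.array([water_filling(g) for g in G])

# One-hidden-layer feedforward network trained with plain full-batch gradient descent.
W1 = rng.normal(scale=0.5, size=(N, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.5, size=(32, N)); b2 = np.zeros(N)
lr = 0.1
for epoch in range(500):
    H = np.maximum(G @ W1 + b1, 0.0)     # ReLU hidden layer
    Y_hat = H @ W2 + b2
    err = Y_hat - Y                      # gradient of the mean-squared error (up to a factor)
    gW2 = H.T @ err / len(G); gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (H > 0)
    gW1 = G.T @ dH / len(G); gb1 = dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

g_test = rng.uniform(0.2, 2.0, size=N)
h = np.maximum(g_test @ W1 + b1, 0.0)
print("model-based:", np.round(water_filling(g_test), 3))
print("learned    :", np.round(h @ W2 + b2, 3))    # should roughly track the expert's output
```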
Tutorial Objectives
Data-driven approaches are not new to wireless communications, but their implementation through deep learning techniques had not been seriously considered until recently, even though deep learning is the most widely used machine learning approach in fields other than wireless communication. In our opinion, this is mainly because, unlike other fields of science where theoretical modeling is particularly hard, thus motivating the use of data-driven approaches, wireless communications could always rely on strong mathematical models for system design. However, the situation is rapidly changing, and the use of deep learning has recently started to be envisioned for wireless communications as well. Indeed, the increasing complexity of wireless networks makes it harder and harder to come up with theoretical models that are at the same time accurate and tractable. The rising complexity of 5G and beyond-5G networks is exceeding the modeling and optimization capabilities of standard mathematical tools.
Nevertheless, purely data-driven approaches require a huge amount of data to operate, which might be difficult and/or expensive to acquire in practical large-scale scenarios. This issue has been acknowledged also by the deep learning community, for which one major research trend lies precisely in the development of techniques able to exploit any prior information that is available about the problem at hand in order to reduce the amount of data needed to achieve given performance levels. In this context, the specific field of communication theory presents a major opportunity thanks to the availability of many more theoretical models compared to other fields of science. Indeed, despite being usually inaccurate and/or cumbersome, available communication models still provide important prior information that should be exploited. Accordingly, the overall aim of this tutorial is to put forth the idea that theoretical modeling and data-driven approaches are not two contrasting paradigms, but should rather be used jointly to get the most out of them.
This tutorial will cover the most recent approaches to merge advanced deep learning techniques with the latest model-based methodologies for system-level design and optimization of wireless networks. Specifically, the following objectives are pursued:
1) Provide the foundations of deep learning through artificial neural networks and introduce the main concepts of supervised training of neural networks.
2) Show how deep learning can complement, rather than replace, traditional wireless network design methodologies, to develop novel design approaches with reduced complexity and improved performance.
3) Present a wide range of applications to demonstrate the gain that a joint data-driven/model-based approach can bring compared to using either approach alone.
Tutorial Outline
- Fundamentals of machine learning for communications
- A definition of machine learning
- Supervised vs. unsupervised learning
- Underfitting, overfitting, and capacity
- Deep learning vs. Machine learning
- Artificial neural networks
- Feedforward neural networks
- Training artificial neural networks
- Embedding models into deep learning
- Model-based or data-driven?
- Expert knowledge into artificial neural networks
- Learning to optimize
- Transfer learning
- Deep reinforcement learning
- Applications
- Resource allocation in dense heterogeneous cellular networks
- Resource allocation in network-slicing systems
- Resource allocation in energy-efficient cellular networks
- Optimization of beyond-RF wireless networks
- Concluding remarks
Primary Audience
The tutorial is aimed at academic researchers wishing to learn the fundamentals of this emerging field, as well as wireless engineers and industry practitioners wishing to employ deep learning to improve their products.
Novelty
This tutorial is the first to:
1) Focus specifically on deep learning considering the cross-fertilization between data-driven methods and model-based approaches.
2) Discuss emerging machine learning tools like deep transfer learning and deep reinforcement learning.
3) Focus on the use of deep learning for the resource management of wireless networks, presenting many relevant applications for several instances of communication systems.
Biography
Dr. A. Zappone is currently an experienced Marie Curie Fellow at CentraleSupelec, France, working in the field of resource allocation for 5G wireless networks and beyond. He is an IEEE Senior Member, an Associate Editor of the IEEE Signal Processing Letters, and has been a Guest Editor of the IEEE JSAC Special issue on “Energy-Efficient Techniques for 5G Wireless Communication Systems”. He was appointed exemplary reviewer for both the IEEE Transactions on Communications and IEEE Transactions on Wireless Communications in 2017.
Dr. M. Di Renzo is an Associate Professor with the Laboratory of Signals and Systems of Paris-Saclay University – CNRS, CentraleSupelec, Univ Paris Sud, France. He is a Distinguished Visiting Fellow of the Royal Academy of Engineering (UK), and a co-founder of the university spin-off company WEST Aquila s.r.l., Italy. He serves as Associate Editor-in-Chief of the IEEE Communications Letters, and as an Editor of the IEEE Transactions on Communications (Heterogeneous Networks Modeling and Analysis) and the IEEE Transactions on Wireless Communications. He is an IEEE Senior Member, a EURACON Member, and a Distinguished Lecturer of the IEEE Communications and IEEE Vehicular Technology Societies.
Dr. M. Debbah is Vice-President of the Huawei France R&D center and Director of the Mathematical and Algorithmic Sciences Lab, as well as a full professor at CentraleSupelec. He is an IEEE Fellow, a WWRF Fellow, and a member of the academic senate of Paris-Saclay. He has received several awards, among which the 2015 IEEE Communications Society Leonard G. Abraham Prize, the 2015 IEEE Communications Society Fred W. Ellersick Prize, the 2016 IEEE Communications Society Best Tutorial Paper Award, the 2016 European Wireless Best Paper Award, the 2017 Eurasip Best Paper Award, the 2018 IEEE Marconi Prize Paper Award, the Mario Boella Award in 2005, the IEEE Glavieux Prize Award in 2011, and the Qualcomm Innovation Prize Award in 2012.
All speakers have extensive tutorial experience, being regular tutorial/keynote speakers.
T3: 5G New Radio (NR) Protocols and Architecture
Presented by: Icaro Leonardo Da Silva, Gunnar Mildh, Paul Schliwa-Bertling, and Magnus Stattin (Ericsson Research)
Time: 9:00–12:30
Room: TBA
Abstract—In this tutorial the authors explain the fundamentals of the 5th Generation (5G) New Radio (NR) protocols and architecture, recently standardized by the 3rd Generation Partnership Project (3GPP). The focus of the tutorial is on the higher layer protocols (i.e., almost everything in the radio network except the physical layer). More emphasis will be given to the Radio Resource Control (RRC) protocol, which is responsible for fundamental functions in the user equipment (UE) such as the state model (e.g., IDLE, CONNECTED, and INACTIVE states), connection control procedures (e.g., state transitions such as IDLE to CONNECTED and INACTIVE to CONNECTED), measurement configuration and reporting (the different types of measurements the UE performs, the impact of beamforming, etc.), mobility, and dual connectivity with 4G (the first version of the 5G standard).
Tutorial Objectives
In the next few years we expect the research community to develop algorithms and optimizations based on the 3GPP 5G NR standards. Hence, understanding the recently standardized 5G NR protocols, which is the objective of this tutorial, is essential for the research community.
The tutorial, held by 5G RAN protocol experts, explains the fundamentals of the 5th Generation (5G) New Radio (NR) protocols and architecture. The main focus is on the higher layer protocols (i.e., almost everything in the radio network except the physical layer). More emphasis will be given to the Radio Resource Control (RRC) protocol, which is responsible for fundamental functions in the user equipment (UE) such as the state model (e.g., IDLE, CONNECTED, and INACTIVE states), connection control procedures (e.g., state transitions such as IDLE to CONNECTED and INACTIVE to CONNECTED), measurement configuration and reporting (the different types of measurements the UE performs, the impact of beamforming, etc.), mobility, and dual connectivity with 4G (the first version of the 5G standard).
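To make the state model concrete, here is a minimal, illustrative sketch of the three RRC states and the setup/release/suspend/resume transitions between them. It is a teaching toy under simplified assumptions, not the normative TS 38.331 procedure logic covered in the tutorial.

```python
from enum import Enum, auto

class RrcState(Enum):
    IDLE = auto()
    CONNECTED = auto()
    INACTIVE = auto()      # new state introduced in 5G NR

# Allowed transitions and the (simplified) procedures that trigger them.
# This is a didactic abstraction, not the normative 3GPP state machine.
TRANSITIONS = {
    (RrcState.IDLE, "setup"): RrcState.CONNECTED,        # e.g. UE switched on, data to send
    (RrcState.CONNECTED, "release"): RrcState.IDLE,       # no more traffic, full release
    (RrcState.CONNECTED, "suspend"): RrcState.INACTIVE,   # keep UE context, cheap to resume
    (RrcState.INACTIVE, "resume"): RrcState.CONNECTED,    # resume procedure
    (RrcState.INACTIVE, "release"): RrcState.IDLE,
}

class Ue:
    def __init__(self):
        self.state = RrcState.IDLE

    def handle(self, procedure: str) -> RrcState:
        key = (self.state, procedure)
        if key not in TRANSITIONS:
            raise ValueError(f"{procedure!r} not allowed in {self.state.name}")
        self.state = TRANSITIONS[key]
        return self.state

ue = Ue()
for proc in ["setup", "suspend", "resume", "release"]:
    print(proc, "->", ue.handle(proc).name)
```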
Tutorial Outline
The tutorial will be divided into three parts.
The first part of the tutorial will be on the overall 5G architecture, including the different options being standardized and the envisioned deployments for each of these options. That part will include an explanation of the two main alternatives: Evolved Universal Terrestrial Radio Access Network (E-UTRAN) – New Radio (NR) Dual Connectivity (EN-DC), where E-UTRAN connected to the Evolved Packet Core (EPC) serves as the anchor for NR carrier frequencies, so that UEs use dual connectivity with a 5G carrier; and the 5G NR standalone architecture.
This part is divided as follows:
- Fundamentals of 5G architecture
- EN-DC architecture
- 5G NR Standalone Architecture
The second part is an overview of the 5G NR protocols. It contains a discussion of some design choices in NR compared to LTE, e.g., due to specific challenges such as deployment at higher frequencies (such as mmWave frequencies), the extensive use of beamforming, and the need to address a diversity of use cases, from very low latency to machine-type devices.
This part is divided as follows:
- Overview of Physical layer (PHY)
- Overview of Medium Access Control (MAC)
- Overview of Radio Link Control (RLC)
- Overview of Packet Data Convergence Protocol (PDCP)
- Overview of Radio Resource Control (RRC)
In the third part, the authors do a deep dive into the control plane procedures specified in the RRC protocol (3GPP TS 38.331) [3], in particular some details concerning the design of the RRC INACTIVE state, as proposed by the authors in [4]. The authors expect to spend half of the tutorial on this part.
This part consists of the following topics:
- RRC state model (CONNECTED, IDLE and INACTIVE states)
- IDLE to CONNECTED transition (e.g. what happens when the UE is turned on, etc.)
- CONNECTED to IDLE transition (e.g. what happens when the UE stops transmitting/receiving data, etc.)
- INACTIVE state introduced in 5G NR (e.g. properties, advantages compared to IDLE, RAN paging) [4]
- INACTIVE to CONNECTED transition (resume procedure)
- CONNECTED to INACTIVE transition (suspend procedure)
- CONNECTED measurement configuration and measurement reporting
- Reestablishment and Reject
- Mobility procedures
- Dual Connectivity procedures
Primary Audience
Researchers, students, engineers, and research leaders from industry or academia interested in learning and discussing the fundamental aspects of the 5G New Radio (NR) architecture and protocols recently standardized in 3GPP. We also expect the audience to highlight aspects that could be improved in the future.
Novelty
This tutorial is a unique opportunity for attendees.
First of all, the 5G NR standards have just been completed and commercial deployments are expected in 2019. Hence, we expect many conference attendees to be interested in the tutorial, as it could not come at a better time.
Second, it is a chance to interact with industry experts on 5G NR protocols and architecture standardized by 3GPP. They have been involved in early 5G research external and internal projects, and actively participated in 5G standardization.
Third, there is currently a lack of material/books in tutorial format on 5G NR architecture and protocols.
Biography
Icaro Da Silva received his M.Sc. in electrical engineering from the Universidade Federal do Ceara (UFC), Fortaleza, Brazil, in 2009. In 2010, he joined Ericsson Research, Ericsson AB, Stockholm, and has since been working on standardization and concept development for LTE and 5G NR, in particular driving control plane topics in 3GPP RAN2. He has also worked as a 3GPP delegate in RAN2 and actively participated in the drafting of the 5G NR RRC specification (TS 38.331). His focus areas have been 5G NR radio network architecture and protocols, in particular the control plane design and the RRC protocol. Icaro led the 5G control plane work in the EU project on 5G RAN architecture, METIS-II, part of the 5G-PPP framework. He has also participated as a panelist on 5G design and 5G architecture at VTC and EuCNC. He is currently a senior researcher in radio network architecture and protocols at Ericsson Research.
Magnus Stattin graduated and received his Ph.D. degree in radio communication systems from the Royal Institute of Technology, Stockholm, Sweden in 2005. He joined Ericsson Research in Stockholm, Sweden, in June 2005. At Ericsson Research he has been working with research in the areas of radio resource management and radio protocols of various wireless technologies. He is active in concept development and 3GPP standardization of LTE, NB-IoT, NR and future wireless technologies.
Gunnar Mildh received his M.Sc. in electrical engineering from the Royal Institute of Technology (KTH), Stockholm, Sweden, in 2000. In the same year, he joined Ericsson Research, Ericsson AB, Stockholm, and has since been working on standardization and concept development for GSM/EDGE, HSPA, LTE(-A) and 5G NR. His focus areas have been radio network architecture and protocols, and recently 5G architecture including RAN and Packet Core. He is currently an expert in radio network architecture at Ericsson Research.
Paul Schliwa-Bertling received his B.Sc. in electrical engineering from the University of Duisburg-Essen, Essen, Germany. He joined Ericsson Research, Ericsson AB, Stockholm, and has since been working on standardization and concept development for GSM/EDGE, HSPA, LTE(-A) and 5G NR. He has also worked for many years as a 3GPP delegate in SA2. His focus areas have been radio network architecture and protocols, and recently 5G architecture including RAN and Packet Core. He is currently an expert in mobile network architecture and signaling at Ericsson Research.
T4: Networking and Communications for Autonomous Driving
Presented by: Jiajia Liu (Northwestern Polytechnical University, China) and Nei Kato (Tohoku University)
Time: 14:00–17:30
Room: TBA
Abstract—The development of LiDAR, radar, cameras, and other advanced sensor technologies has inaugurated a new era in autonomous driving. However, due to the intrinsic limitations of these sensors, autonomous vehicles are prone to making erroneous decisions and causing serious accidents. Networking and communication technologies can greatly compensate for sensor deficiencies and promote information exchange in a more reliable, feasible, and efficient way, thereby improving the autonomous vehicle's perception and planning capabilities as well as realizing better vehicle control. In this tutorial we provide a comprehensive review of recent research on networking and communication technologies for autonomous driving from two aspects: intra-vehicle and inter-vehicle. The intra-vehicle network, as the basis for realizing autonomous driving, connects the on-board electronic components. The inter-vehicle network is the medium for interaction between vehicles and the outside world. In addition, we present the new trends of communication technologies in autonomous driving, investigate the current mainstream verification methods, and emphasize the challenges and open issues of networking and communications in autonomous driving.
Tutorial Objectives
(1) Intra-vehicle Networking and Communications: Intra-vehicle networking and communications, as the basis of autonomous driving, realize the transmission of status information and control signals among sensors, actuators, and electronic units in the autonomous vehicle, combine wired and wireless technologies to form an extensible connection backbone, and actualize advanced functions in a centralized system. In this part, we review the state of the art of intra-vehicle networking and communication technologies that are already in use in autonomous vehicles or have the potential to be used.
(2) Inter-Vehicle Networking and Communications: Due to the limitations of sensors and the autonomous vehicle's dependence on environmental information, inter-vehicle networking and communications are of particular importance and can greatly benefit the perception, planning, and interaction of autonomous driving. In this part, we focus on the autonomous vehicle's inter-vehicle networking forms, promising architectures, and effective management schemes, as well as promising inter-vehicle communication technologies.
(3) New Trends of Communication Technologies in Autonomous Driving: In this part, we introduce the new trends of communication technologies that have great potential for the foreseeable future and will be more in line with the requirements of autonomous driving.
(4) Verification Methodologies: Research on networking and communications for autonomous driving requires extensive experimental support. In this part, we review three kinds of simulators as well as several real-world experiments used to test and evaluate networking and communication technologies in autonomous driving.
(5) Challenges and Open Issues: Networking and communications for autonomous driving still have a long way to go and require the joint efforts of academia and industry. In this part, we analyze the challenges and open issues in autonomous vehicle’s networking and communications from the perspectives of requirement, management, standard, and security.
Tutorial Outline
- Intra-vehicle Networking and Communications
- Intra-vehicle networking (wired and wireless interconnection)
- Intra-vehicle communications (wired and wireless technologies)
- Inter-vehicle Networking and Communications
- Inter-vehicle networking (characteristics, ICN architecture, clustering, platooning, routing protocol)
- Inter-vehicle communications (low power technologies, IEEE 802.11 family technologies, base station driven technologies, auxiliary technologies)
- New Trends of Communication Technologies in Autonomous Driving
- The emerging 5G technology
- Computing technologies
- Simultaneous wireless information and power transfer
- Visible light communications
- Deep learning
- Verification Methodologies
- Simulators (network, traffic and integrated)
- Real-world experiments
- Challenges and Open Issues
- Strict requirements
- Network management
- Computing systems
- Development of standards
- Security issues
- Information transmission priority
- Data sharing
Primary Audience
The target audience of this tutorial will be researchers, engineers, and regulators in academia and the vehicular technology industry who are interested in understanding the latest research progress in autonomous vehicular networks as well as autonomous vehicular communication systems, vehicular wireless communication technologies, and vehicle automation.
Novelty
Networking and communication technologies can greatly compensate for sensor deficiencies and promote information exchange in a more reliable, feasible, and efficient way, ultimately greatly improving the security of autonomous vehicles. We aim to provide an up-to-date tutorial on the networking and communication technologies available for autonomous driving as well as to present relevant challenges and suggestions, enabling both researchers and beginners to keep abreast of the latest global achievements in this field and to gain a quick and comprehensive mastery of it.
Biography
Jiajia Liu (SM’15) received his Ph.D. degree in information sciences from Tohoku University in 2012. Since Jan. 2019, he has been a full professor at the School of Cybersecurity, Northwestern Polytechnical University. He has published more than 130 peer-reviewed papers in many high-quality venues, including prestigious IEEE journals and conferences. He received the IEEE ComSoc Asia-Pacific Outstanding Young Researcher Award in 2017, the IEEE TVT Top Editor Award in 2017, and Best Paper Awards from IEEE GLOBECOM 2016, IEEE WCNC 2012 and 2014, and IEEE IC-NIDC 2018. He was also the recipient of the prestigious Niwa Yasujiro Outstanding Paper Award in 2012, the Tohoku University President Award in 2013, and the Professor Genkuro Fujino Award in 2012. He is the Secretary of the IEEE AHSN TC and a Distinguished Lecturer of the IEEE ComSoc.
Nei Kato (F'13) is a full professor (Deputy Dean) with Graduate School of Information Sciences (GSIS) and the Director of Research Organization of Electrical Communication (ROEC), Tohoku University, Japan. He has been engaged in research on computer networking, wireless mobile communications, satellite communications, ad hoc & sensor & mesh networks, smart grid, AI, IoT, Big Data, and pattern recognition. He has published more than 400 papers in prestigious peer-reviewed journals and conferences. He is the Vice-President (Member & Global Activities) of IEEE Communications Society (2018-2019), the Editor-in-Chief of IEEE Transactions on Vehicular Technology (2017-), and the Chair of IEEE Communications Society Sendai Chapter. He served as the Editor-in-Chief of IEEE Network Magazine (2015-2017), a Member-at-Large on the Board of Governors, IEEE Communications Society (2014-2016), a Vice Chair of Fellow Committee of IEEE Computer Society (2016), and a member of IEEE Communications Society Award Committee (2015-2017). Nei Kato is a Distinguished Lecturer of IEEE Communications Society and Vehicular Technology Society. He is also a fellow of The Engineering Academy of Japan and IEICE.
T5: Towards UAV-Based Airborne Computing: Applications, Design, and Prototype
Presented by: Kejie Lu (U Puerto Rico), Yan Wan (U Texas), Shengli Fu (U North Texas), Junfei Xie (Texas A&M U)
Time: 9:00–12:30
Room: TBA
Abstract—In recent years, unmanned aerial vehicles (UAVs) have attracted significant attention from industry, federal agencies, and academia. To design and implement future UAV systems and applications, many researchers and engineers have been working on different UAV functions in various domains, such as control, communications, networking, etc. While all these UAV functions require advanced on-board computing capabilities, they are usually designed separately and there is a lack of a general framework to exploit airborne computing for all on-board UAV functions. In this tutorial, our objective is to address this timely and important issue by exploring a new and cross-disciplinary area: UAV-based airborne computing.
To this end, we will first systematically analyze existing and emerging UAV applications and then use case studies to demonstrate how airborne computing can help facilitate advanced UAV functions and UAV applications. Based on such analysis, we will discuss and summarize important design guidelines for future generations of UAV systems with airborne computing capabilities. We will then introduce our recent design of a general UAV-based airborne computing platform and the latest version of our UAV-based airborne computing prototype. Using our prototype, we will explain and demonstrate a number of advanced UAV functions, including reinforcement-learning based antenna heading control, image-processing based 3D mapping, and deep-learning based object detection. Finally, we will invite the audience to participate in some hands-on exercises using our prototype, and we will discuss open issues and important future directions before concluding the tutorial.
Tutorial Objectives
In this tutorial, our main objective is to address a timely and important issue: UAV-based airborne computing.
The specific objectives include:
+ To give the audience an overview of state-of-the-art UAV applications and UAV functions.
+ To discuss how airborne computing can help to facilitate advanced UAV functions, including control, communication, networking and computing functions.
+ To investigate how airborne computing can help to facilitate novel UAV applications, for which we will also use case studies, such as advanced precision agriculture and emergency response, to demonstrate the potential of airborne computing.
+ To discuss and summarize important design guidelines for future generations of UAV systems that integrate airborne computing capabilities.
+ To introduce our recent design of a general UAV-based airborne computing platform, which consists of a team of networked smart UAVs that integrate communication, control, computing and storage capabilities.
+ To explain our design and implementation of a prototype, including processor selection and carrier board design, directional antenna-enabled UAV-to-UAV communication, intelligent heading control for directional antennas, etc.
+ To demonstrate how our prototype can facilitate a number of advanced UAV functions, including (1) reinforcement-learning based long-range broadband communication, (2) on-board three-dimensional terrain map generation, (3) deep-learning based real-time object detection, and (4) advanced coded distributed computing for distributed machine learning (a brief coded-computing sketch follows this list).
+ To invite the audience to participate in some hands-on exercises using our prototype.
+ To discuss open issues and important future directions.
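As referenced in the function list above, the snippet below is a minimal, hypothetical illustration of the coded distributed computing idea: a matrix-vector product is split into two blocks plus one coded (parity) task, so the result can be recovered from any two of three workers even if one straggles or fails. It is the simplest possible scheme, sketched for intuition only, and is not the design used in the tutorial's prototype.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(6, 4))          # workload: compute y = A @ x across the UAVs
x = rng.normal(size=4)

# Split A into two blocks and add one parity block -> three coded tasks.
A1, A2 = A[:3], A[3:]
tasks = {"w1": A1, "w2": A2, "w3": A1 + A2}

# Each (simulated) worker UAV computes its coded partial product.
partials = {w: M @ x for w, M in tasks.items()}

def recover(partials, straggler):
    """Recover y = A @ x from any two of the three workers."""
    got = {w: r for w, r in partials.items() if w != straggler}
    if "w1" in got and "w2" in got:
        return np.concatenate([got["w1"], got["w2"]])
    if "w1" in got:                          # w2 missing: A2 @ x = parity - A1 @ x
        return np.concatenate([got["w1"], got["w3"] - got["w1"]])
    return np.concatenate([got["w3"] - got["w2"], got["w2"]])   # w1 missing

for straggler in ["w1", "w2", "w3"]:
    assert np.allclose(recover(partials, straggler), A @ x)
print("y recovered despite any single straggler")
```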
Tutorial Outline
1. Introduction
1.1 The market trend for UAV-based applications
1.2 Regulation and policy
1.3 UAV-based applications
1.4 UAV functions and the needs for on-board computing
1.5 Motivation for UAV-based airborne computing
2. Comprehensive Analysis for UAV-based applications
2.1 A classification for UAV-based applications
2.2 A layered model for analysis and design
2.2.1 The mission layer
2.2.2 The task layer
2.2.3 The function layer
2.3 Computing-enabled UAV functions
2.3.1 Control
2.3.2 Communication
2.3.3 Networking
2.3.4 Computing
2.4 Computing-enabled UAV applications
2.4.1 UAV-based precision agriculture
2.4.2 UAV-based emergency response
3. Design guidelines and a general platform
3.1 Design guidelines
4. A general UAV-based airborne computing platform
4.1 The system overview
4.2 The components
4.2.1 The quadcopter unit
4.2.2 The control unit
4.2.3 The communication unit
4.2.4 The computing unit
4.3 The prototype
4.3.1 The processor and carrier board design
4.3.2 The directional antenna system design
4.4 Computing-enabled UAV functions
4.4.1 Reinforcement-learning based antenna heading control
4.4.2 Image processing based three-dimensional terrain map generation
4.4.3 Deep-learning based on-board object detection
4.4.4 Coded distributed computing
5. Demonstration and hands-on exercises
5.1 Demonstration of a networked airborne computing prototype
5.1.1 Processor and carrier board
5.1.2 Directional antenna system
5.1.3 Virtualization technologies
5.2 Demonstration of advanced UAV functions
5.2.1 Reinforcement-learning based antenna heading control
5.2.2 Image processing based three-dimensional terrain map generation
5.2.3 Deep-learning based on-board object detection
5.2.4 Coded distributed computing
5.3 Hands-on exercises
6. Summary, discussion, and feedback
Primary Audience
Students, researchers, and developers interested in the development of advanced UAV functions and novel UAV applications, with a background in aerospace, control, communication, networking, or computing.
Novelty
Over the past decade, UAVs have been a very hot topic in multiple domains, such as aerospace, control, communications, and networking. While most recent designs require advanced on-board computing capabilities, they generally consider computing separately in different domains. Clearly, there is a lack of a general framework to exploit airborne computing for all on-board UAV functions. In this tutorial, we will address this timely and important issue by presenting a comprehensive tutorial on UAV-based airborne computing.
Biography
Dr. Kejie Lu is a professor in the Department of Computer Science and Engineering, University of Puerto Rico at Mayagüez (UPRM). He received his Ph.D. degree in Electrical Engineering from the University of Texas at Dallas in 2003. Since July 2005, he has been a faculty member in UPRM. His research interests include architecture and protocol design for computer and communication networks, cyber-physical system, network-based computing, and network testbed development.
Dr. Yan Wan is currently an Associate Professor in the Electrical Engineering Department at the University of Texas at Arlington. She received her Ph.D. degree in Electrical Engineering from Washington State University in 2009. From 2009 to 2016, she was an assistant professor and then an associate professor at the University of North Texas. Her research interests lie in developing fundamental theories and tools for the modeling, evaluation, and control tasks in large-scale dynamic networks and cyber-physical systems.
Dr. Shengli Fu is currently a professor and the Chair in the Department of Electrical Engineering, University of North Texas (UNT), Denton, TX. He received his Ph.D. degree in Electrical Engineering from the University of Delaware, Newark, DE, in 2005, before he joined UNT. His research interests include coding and information theory, wireless communications and sensor networks, aerial networks, and drone systems design.
Dr. Junfei Xie is an Assistant Professor at the Department of Computing Sciences of Texas A&M University - Corpus Christi. She received her Ph.D. degree in Computer Science and Engineering in 2016 from University of North Texas. Her current research interests include airborne networks, unmanned systems, spatiotemporal data mining, dynamical system modeling and control, and complex information systems.
T7: Orbital Angular Momentum for Wireless Communications: Theory, Challenges, and Progress
Presented by: Wenchi Cheng (Xidian University) and Wei Zhang (University of New South Wales)
Time: 9:00–12:30
Room: TBA
Abstract—It is now very difficult for traditional plane-electromagnetic (PE) wave based wireless communications to satisfy the ever-growing capacity demand. Fortunately, the electromagnetic (EM) wave possesses not only linear momentum but also angular momentum, which includes orbital angular momentum (OAM). OAM, a wave front with a helical phase that has not yet been well studied, is another important property of the EM wave. The OAM-based vortex wave has different topological charges, which are independent and orthogonal to each other, opening a new way to significantly increase the capacity of wireless communications. This tutorial discusses the fundamental theory of using OAM for wireless communications. It starts with background on what OAM based wireless communication is and why OAM is important in current and future wireless communications. Then, the fundamental theory of OAM will be elaborated in detail, including OAM versus MIMO, OAM signal generation/reception, and OAM beam converging. Moreover, we will share our latest research progress on how to apply OAM to wireless communications, including mode modulation, OAM mode convergence, mode hopping, OAM based MIMO, orthogonal mode division multiplexing, concentric-UCA based low-order OAM transmission, the degrees of freedom in the mode domain, and the orthogonality of OAM modes. More importantly, new results on how to solve the beam-hollow problem and support misaligned UCA transceivers will also be presented. Finally, the applications of OAM based wireless communication are discussed.
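As a small numerical illustration of the mode orthogonality mentioned above (our own toy example, not material from the tutorial), the snippet below generates OAM modes with an N-element uniform circular array (UCA) by applying the phase exp(j·2πln/N) to element n and checks that different topological charges l are mutually orthogonal over the array.

```python
import numpy as np

N = 8                                     # UCA elements (illustrative)
modes = range(-3, 4)                      # topological charges l resolvable by an 8-element UCA

def uca_weights(l, n_elements=N):
    """Per-element excitation that generates OAM mode l with a uniform circular array."""
    n = np.arange(n_elements)
    return np.exp(1j * 2 * np.pi * l * n / n_elements) / np.sqrt(n_elements)

# Inner products between modes: ~1 on the diagonal, ~0 off-diagonal (orthogonality).
G = np.array([[np.vdot(uca_weights(l1), uca_weights(l2)) for l2 in modes] for l1 in modes])
print(np.round(np.abs(G), 3))
```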
Tutorial Objectives
In future wireless communications, the amount of traffic will be larger than ever, and the already crowded spectrum will face even higher pressure. Although OAM has the potential to increase spectrum efficiency, we observe that it has not received sufficient attention in the study of new vortex wireless communication techniques. To this end, this tutorial will highlight the importance, the modeling, and the solutions from our latest research progress on OAM based radio vortex wireless communication.
Tutorial Outline
Part I: Background of OAM
1. What is OAM based wireless communication: background and motivation
2. Mode domain versus frequency/time domain
Part II: Fundamental Theory of Using OAM for Wireless Communications
1. High Spectrum efficiency radio vortex wireless communication: multiple-mode OAM signal generation/adaptation/reception;
2. Non-hollow-OAM based wireless transmissions;
3. Long distance radio vortex wireless communications: OAM beam converging;
4. Mobility issues regarding radio vortex wireless communication;
5. OAM versus MIMO: degree of freedom, orthogonality, and capacity;
6. Anti-Jamming: Mode hopping;
7. Orthogonal mode division multiplexing;
8. Concentric UCAs based low-order OAM.
Part III: Application of Using OAM for Wireless Communications
1. Practical OAM Based Wireless Communications With Non-Aligned Transceiver;
2. Mode-Division-Multiple-Access Based MAC Protocol for Radio-Vortex Wireless Networks;
3. OAM for ultra-dense wireless networks.
Primary Audience
As wireless communication networks move from 5G to beyond-5G or even 6G, it is urgent to develop fundamental technologies for next generation wireless networks. OAM based wireless communications, although facing critical challenges, can offer spectrum efficiency enhancement for LOS transmission, ultra-reliability with different modes, and anti-jamming with new dimensions. VTC audiences who are concerned with wireless transmission and wireless networks can learn how to use OAM based wireless communication in future wireless systems.
Novelty
Although OAM offers a promising capacity enhancement capability, it is very challenging to develop OAM based wireless communications. The first problem is that regular OAM beams are centrally hollow and divergent, which means that ultra-low energy exists in the center of OAM beams. The second problem is that most existing schemes are designed for the transceiver-aligned scenario, where different OAM modes can be easily distinguished at the receiver. In this tutorial, not only the previously studied problems but also these practical, communications-driven problems will be discussed.
Biography
Wenchi Cheng (M’14-SM’18) received the B.S. and Ph.D. degrees in telecommunication engineering from Xidian University, Xi'an, China, in 2008 and 2014, respectively, where he is an Associate Professor. He was a Visiting Scholar with the Networking and Information Systems Laboratory, Department of Electrical and Computer Engineering, Texas A&M University, College Station, TX, USA, from 2010 to 2011. His current research interests include 5G wireless networks and orbital-angular-momentum based wireless communications. He has published more than 70 international journal and conference papers in the IEEE Journal on Selected Areas in Communications, IEEE magazines, IEEE INFOCOM, GLOBECOM, and ICC, etc. He received the Young Elite Scientist Award of CAST, the Best Paper Award at IEEE/CIC ICCC 2018, a Best Paper Nomination at IEEE GLOBECOM 2014, and the Outstanding Contribution Award from Xidian University. He has served or is serving as an Associate Editor for IEEE Access, the IoT Session Chair for the IEEE 5G Roadmap, the Wireless Communications Symposium Co-Chair for IEEE GLOBECOM 2020, the Publicity Chair for IEEE ICC 2019, the Next Generation Networks Symposium Chair for IEEE ICCC 2019, the Workshop Chair for the IEEE ICC 2019 Workshop on Intelligent Wireless Emergency Communications Networks, and the Workshop Chair for the IEEE ICCC 2017 Workshop on Internet of Things.
Wei Zhang (S’01–M’06–SM’11–F’15) received the Ph.D. degree in electronic engineering from the Chinese University of Hong Kong, Hong Kong, in 2005. In May 2008, he joined the School of Electrical Engineering and Telecommunications, The University of New South Wales, Sydney, NSW, Australia, where he is currently a full Professor. His current research interests include cognitive radio, energy harvesting communications, and massive multiple-input multiple-output. He has been the Editor-in-Chief of the IEEE Wireless Communications Letters since January 2016. He is an Editor of the IEEE Transactions on Communications and an Editor of the IEEE Transactions on Cognitive Communications and Networking. He is currently the Chair of the IEEE Communications Society Wireless Communications Technical Committee and the Vice-Director of the IEEE Communications Society Asia Pacific Board. He is also an Elected Member of the IEEE Signal Processing Society SPCOM Technical Committee, a Distinguished Lecturer of the IEEE Communications Society, and a Fellow of the IET.
T8: V2X Communications and Security
Presented by: Yi Qian (University of Nebraska-Lincoln)
Time: 14:00–17:30
Room: TBA
Abstract—A wide variety of work has been done on vehicle-to-everything (V2X) communications to enable various applications for road safety, traffic efficiency, and passenger infotainment. Although IEEE 802.11p used to be considered the main technology for V2X, new research trends are now considering cellular technology as the future of V2X due to its rapid development and ubiquitous presence. This tutorial surveys the recent developments and challenges in 4G LTE and 5G mobile wireless networks for supporting efficient V2X communications, as well as security for V2X communications. In the first part, we highlight the 4G LTE V2X architecture and operating scenarios for V2X communications. In the second part, we discuss the challenges and new trends in 4G and 5G for supporting V2X communications, such as the physical layer structure, synchronization, resource allocation, and multimedia broadcast multicast services (MBMS), as well as possible solutions to these challenges. In the third part, we survey state-of-the-art solutions for security in V2X communications. Finally, we discuss some open research issues for future 5G based V2X communications and security.
Tutorial Objectives
There have been many recent research activities to address the communication capabilities of vehicles and transportation infrastructure, which mainly include vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), vehicle-to-pedestrian (V2P), and vehicle-to-network (V2N) communications, collectively termed vehicle-to-everything (V2X) communications. V2X communications can improve the efficiency and safety of transportation systems. Together with existing vehicle sensing capabilities, V2X communications provide support for enhanced safety use cases, passenger infotainment, and vehicle traffic optimization. V2X communications should support a variety of use cases such as forward collision warning, do-not-pass warning, queue warning, parking discovery, optimal speed advisory, curve speed warning, etc.
Currently, there exist two main technologies to support V2X communications: dedicated short-range communications (DSRC) and cellular network technologies. DSRC technology is mainly considered to support intelligent transportation system (ITS) applications in V2V scenarios. LTE based V2X communications can exploit high capacity, large cell coverage, and widely deployed infrastructure to support vehicular communications. For this reason, the 3rd Generation Partnership Project (3GPP) is currently working on cellular technology based V2X and aims to provide a variety of V2X services. 3GPP has already completed its Release 14 with LTE based V2X service as one of the main features, alongside other features such as licensed-assisted access, machine-type communications, and massive MIMO. Cellular V2X Release 14 provides highly reliable, real-time communications for automotive safety use cases. It will continue to evolve in Release 15 along with 5G to provide complementary and new capabilities, such as sensor sharing, while maintaining backward compatibility. Technical organizations such as 3GPP and Qualcomm have already prepared the roadmap towards 5G based V2X services. There is also active research being conducted on interworking between DSRC and cellular technology to support efficient V2X communications.
In this tutorial, we provide a comprehensive survey of the state of the art of work on 4G LTE and 5G to support V2X communications, as well as of security for V2X communications. We show that several challenges lie ahead before LTE can be massively deployed in vehicular environments. The main challenges identified in supporting V2X services are high relative mobility, which causes a large Doppler effect, and dense UE deployments. LTE systems, especially the physical layer structure, need to be enhanced to address this Doppler effect. Resource allocation is another challenge: resources used by the vehicular system should not conflict with the resources used by cellular users, and interference from vehicular users to existing cellular users needs to be taken care of when assigning resources. Another main challenge is security. As the V2X network will be controlled by the operator, the operator can easily track vehicular users; several solutions have been proposed in 3GPP to address this privacy problem. The MBMS broadcast system should be enhanced in order to better support safety message dissemination. 3GPP has already completed the work for Release 14 and is currently working on further LTE evolution and a new air interface design to support vehicular communication based on 5G.
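As a quick, illustrative back-of-the-envelope calculation of the Doppler challenge mentioned above (the numbers are assumptions, not from the tutorial): at a 5.9 GHz ITS carrier and a relative speed of 500 km/h (two vehicles approaching at highway speed), the Doppler shift is a few kHz, which is what motivates the physical-layer enhancements listed in the outline.

```python
# Illustrative Doppler-shift calculation for LTE/5G V2X (assumed numbers).
C = 3.0e8                     # speed of light, m/s
f_c = 5.9e9                   # ITS carrier frequency, Hz
v_rel = 500 / 3.6             # relative speed of 500 km/h, converted to m/s

f_doppler = v_rel * f_c / C
print(f"Doppler shift ~ {f_doppler:.0f} Hz")   # roughly 2.7 kHz
```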
Tutorial Outline
Syllabus:
1. Motivation for 4G LTE based V2X Communications (10 minutes)
a. DSRC based V2X communications
b. LTE based V2X communications and the advantages
2. LTE V2X infrastructure and operating scenarios (60 minutes)
a. 4G LTE V2X communication model
b. 3GPP LTE V2X communication architecture
c. Operating scenarios
i. Multiple operators for a given area with each UE using spectrum of its own operator
ii. Multiple operators for a given area with dedicated spectrum for V2X
iii. Single operator for a given area
iv. Out of cellular coverage
3. Challenges and solutions in 4G and 5G for supporting V2X communications (40 minutes)
a. Physical layer structure
b. Synchronization
c. Resource allocation
d. Multimedia broadcast multicast services
4. Security and privacy for V2X communications (60 minutes)
a. Attacks and security services in V2X communications
b. State-of-the-art security solutions
i. Group signature based schemes
ii. Identity based schemes
iii. Hybrid schemes
iv. Trust based schemes
v. Solutions for privacy issues
5. Open research issues for future 5G based V2X communications (30 minutes)
a. Emerging 5G technologies and V2X communications
b. Vehicular cloud computing
c. Vehicular fog computing
d. Security and privacy in 5G V2X communications
6. Conclusion (10 minutes)
Primary Audience
Graduate students, professors, researchers, scientists, practitioners, engineers, industry managers, consultants, and government agencies.
Novelty
This tutorial covers not only the current research and development on 4G LTE based V2X communications and security, but also the latest developments in V2X communications and security for 5G mobile wireless systems, together with unique discussions of the challenges and open research issues in the area, based on the tutorial speaker's own research experience and comprehensive surveys of the subject.
Biography
Yi Qian received a Ph.D. degree in electrical engineering from Clemson University, South Carolina. He is currently a professor in the Department of Electrical and Computer Engineering, University of Nebraska-Lincoln (UNL). Prior to joining UNL, he worked in the telecommunications industry, academia, and government. Some of his previous professional positions include serving as a senior member of scientific staff and a technical advisor at Nortel Networks, a senior systems engineer and a technical advisor at several startup companies, an assistant professor at the University of Puerto Rico at Mayaguez, and a senior researcher at the National Institute of Standards and Technology. His research interests include communications and systems, and information and communication network security.
Prof. Yi Qian is a Fellow of IEEE. He was previously Chair of the IEEE Technical Committee for Communications and Information Security. He was the Technical Program Chair for IEEE International Conference on Communications 2018. He serves on the Editorial Boards of several international journals and magazines, including as the Editor-in-Chief for IEEE Wireless Communications. He was a Distinguished Lecturer for IEEE Vehicular Technology Society. He is currently a Distinguished Lecturer for IEEE Communications Society.
Prof. Qian received the Henry Y. Kleinkauf Family Distinguished New Faculty Teaching Award in 2011, the Holling Family Distinguished Teaching Award in 2012, Holling Family Distinguished Teaching Award for Innovative Use of Instructional Technology in 2018, and Holling Family Distinguished Teaching/Advising/Mentoring Award in 2018, all from University of Nebraska-Lincoln. In the recent years, he has been a frequent speaker on many topics in his research areas in various venues and forums, as a keynote speaker, a tutorial presenter, and an invited lecturer.
T10: Reinforcement Learning for Optimization of Wireless Systems: Methods, Exploration and Sensing
Presented by: Haris Gacanin (Department Head, Nokia Bell Labs)
Time: 14:00–17:30
Room: TBA
Abstract—This tutorial discusses the technology and opportunities to embrace artificial intelligence (AI) in the design of autonomous wireless systems. We aim to provide the audience with the motivation for and general AI methodology of autonomous agents in the context of real-time self-organization, unifying sensing, perception, reasoning, and learning. We discuss the differences between training-based and training-free AI methodologies for matching and dynamic problems, respectively. We then introduce the conceptual functions of an autonomous agent with knowledge management. Finally, a practical case study illustrates the application and the potential gains.
Tutorial Objectives
The design of autonomous wireless systems with simultaneous service delivery cannot be accomplished by incremental changes to the present deterministic control and optimization methodologies. It requires a fundamental leap in systems thinking by embedding active learning and sensing strategies into the wireless infrastructure itself. This means that the infrastructure will become aware of the way it is being used, anticipating what is actually required at a specific moment and what is likely to be required later. As such, it will make wireless a true application-aware platform for a plethora of novel applications. These issues capture upcoming challenges and unveil necessary future applications of AI in wireless systems along with their design challenges. We address the shortcomings of contemporary rule-based optimization protocols and rethink wireless operations to boost system autonomy. We discuss the paradigm shift from data-driven knowledge discovery with ML toward knowledge-driven wireless operation with AI in real time.
Different participants are expected to find their own interests within this tutorial. The tutorial provides a diverse viewpoint on the topic and facilitates discussions that enable individuals to think beyond the technology itself.
Tutorial Outline
1) Introduction (15 minutes)
a. The relevance and rise of autonomous systems
b. User experience systems
c. Wireless systems complexity
2) The System-of-Systems (SoS) (35 minutes)
a. General theory (systems, interactions, environment)
b. Machine learning vs. artificial intelligence
c. ML-driven knowledge discovery vs. AI-driven knowledge management
3) Traditional optimization theory vs. machine learning (35 minutes)
a. Convex optimization and iterative algorithms
b. Deep neural networks
c. Reinforcement learning
d. Use case study: bit loading, channel estimation, nonlinear distortion
4) Intelligent agent design (40 minutes)
a. Type of intelligent agents and environments
b. Sensing and percept design
c. Reasoning, planning and searching with uncertainty
d. Supervised and unsupervised learning
e. Learning to act (Reinforcement learning)
5) Case studies: Self-optimization (40 minutes)
a. Reinforcement learning optimization framework
b. Knowledge Base Representation
c. Exploitation vs. exploration dilemma
6) Conclusions and Future Directions (15 minutes)
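As a concrete illustration of the reinforcement learning framework and the exploitation-versus-exploration dilemma covered in the case study above, here is a minimal, single-state (bandit-style) Q-learning sketch with epsilon-greedy exploration applied to a toy channel-selection problem. The environment, reward model, and hyperparameters are invented for illustration and are not part of the tutorial's case study.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy environment: pick one of 4 channels; each has an unknown success probability.
success_prob = np.array([0.2, 0.5, 0.8, 0.4])     # hidden from the agent (illustrative)

n_actions = len(success_prob)
Q = np.zeros(n_actions)          # single-state problem -> the Q-table is just one row
alpha, epsilon = 0.1, 0.1        # learning rate and exploration rate (assumptions)

for step in range(5000):
    # Epsilon-greedy: mostly exploit the best known channel, sometimes explore.
    if rng.random() < epsilon:
        a = rng.integers(n_actions)
    else:
        a = int(np.argmax(Q))
    reward = float(rng.random() < success_prob[a])   # 1 if the transmission succeeded
    Q[a] += alpha * (reward - Q[a])                  # incremental value update

print("learned Q-values:", np.round(Q, 2))           # approach the success probabilities
print("preferred channel:", int(np.argmax(Q)))       # should settle on channel 2
```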
Primary Audience
Entry-level graduate students (PhD level) who are seeking to pursue dissertation research in 5G and beyond-5G systems, as well as industry practitioners who need to upgrade their skill sets and rethink how to view future communications from a network operations perspective. Hence, it is ideally suited for attendees of the conference. No knowledge of communication protocols is required to attend the tutorial.
Novelty
The issues addressed above raise upcoming challenges and unveil necessary directions across multidisciplinary research areas related to applications of AI and ML in future wireless systems. The following aspects are important:
1) We present wireless system requirements and relate them to machine learning properties.
2) We present a comparison of training-based and training-free AI methods for solving fundamental problems in wireless communications. As an example, we compare traditional optimization, deep learning, and reinforcement learning.
3) We discuss the importance of training-free AI methods for time-sensitive communications.
Biography
Haris Gacanin received his Dipl.-Ing. degree in electrical engineering from the University of Sarajevo, Bosnia and Herzegovina, in 2000. He received his M.E.E. and Ph.D. degrees from Tohoku University, Japan, in 2005 and 2008, respectively. He was with Tohoku University from April 2008 until May 2010, first as a Japan Society for the Promotion of Science postdoctoral fellow and then as an Assistant Professor. Since 2010, he has been with Alcatel-Lucent (now Nokia), where he is currently a Department Head at Nokia Bell Labs, leading research activities related to the application of artificial intelligence to network optimization, with a focus on mobile/wireless/wireline physical (L1) and media access (L2) layer technologies and network architectures. He has more than 200 publications (journals, conferences, and patents) and invited/tutorial talks. He has organized and hosted several tutorials and industry panels at IEEE conferences. He is a senior member of the Institute of Electrical and Electronics Engineers (IEEE) and the Institute of Electronics, Information and Communication Engineers (IEICE).