All tutorials will be held on 28 April 2019.
9:00–12:30, Room: Starhill 8
T1: Embracing Non-Orthogonal Multiple Access in Future Wireless Networks
Presented by: Zhiguo Ding, Univ. of Manchester; Wei Liang, Northwestern Polytechnical Univ.; Jia Shi, Xidian Univ.
Time: 9:00–12:30
Room: Starhill 8
Abstract—Non-orthogonal multiple access (NOMA) is an essential enabling technology for future wireless networks to meet the heterogeneous demands on low latency, high reliability, massive connectivity, improved fairness, and high throughput. The key idea behind NOMA is to serve multiple users in the same resource block, such as a time slot, subcarrier, or spreading code. The NOMA principle provides a general framework in which various recently proposed 5G multiple access techniques can be viewed as special cases. Recent demonstrations by industry show that the use of NOMA can significantly improve the spectral efficiency of mobile networks. Because of its superior performance, NOMA has also recently been proposed for downlink transmission in 3rd Generation Partnership Project Long-Term Evolution (3GPP-LTE) systems and included in the next-generation digital TV standard ATSC (Advanced Television Systems Committee) 3.0. This tutorial provides an overview of the latest research results and innovations in NOMA technologies as well as their applications. Future research challenges regarding NOMA in 5G and beyond are also presented.
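To make the key idea concrete, the following minimal numerical sketch (our illustration, not part of the tutorial material) shows two-user power-domain NOMA: the base station superposes both users' signals with power coefficients a1 > a2, and the stronger user removes the weaker user's signal via successive interference cancellation (SIC) before decoding its own. All channel gains and parameters below are assumed values chosen only for illustration.

```python
import numpy as np

# Minimal two-user downlink power-domain NOMA sketch (illustrative values only).
# User 1 is the weak (far) user and gets the larger power share a1; user 2 is the
# strong (near) user and performs successive interference cancellation (SIC).
P, N0 = 1.0, 1e-2            # total transmit power and noise power (assumed)
a1, a2 = 0.8, 0.2            # power-allocation coefficients, a1 + a2 = 1
h1, h2 = 0.3, 1.0            # channel gains: |h1| < |h2|

# Weak user decodes its own message, treating the strong user's signal as noise
R1 = np.log2(1 + a1 * P * h1**2 / (a2 * P * h1**2 + N0))

# Strong user first decodes the weak user's message (SIC feasibility check) ...
R1_at_2 = np.log2(1 + a1 * P * h2**2 / (a2 * P * h2**2 + N0))
# ... removes it, then decodes its own message interference-free
R2 = np.log2(1 + a2 * P * h2**2 / N0)

print(f"R1 = {R1:.2f} b/s/Hz, R2 = {R2:.2f} b/s/Hz, SIC feasible: {R1_at_2 >= R1}")
```

Both users are thus served in the same resource block, rather than being separated orthogonally in time or frequency.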
Tutorial Objectives
1. Review of the overall 5G requirements regarding the support of massive connectivity and the realization of spectrally and energy-efficient communications.
2. Single-carrier NOMA and multi-carrier (MC) NOMA are introduced first, and their special cases, including power-domain NOMA, cognitive-radio (CR) inspired NOMA, SCMA, PDMA, etc., are discussed and compared.
3. The combination of orthogonal MIMO technologies and NOMA will be investigated. A few MIMO-NOMA designs with different trade-offs between system performance and complexity will be discussed.
4. The design of cooperative NOMA schemes will be explained. In a NOMA system, successive interference cancellation is used, which means that some users know the other users' information perfectly. Such a priori information can be exploited; a few examples of cooperative NOMA protocols will be introduced and their advantages and disadvantages will be illustrated (a toy numerical sketch of this idea is given after this list).
5. The application of NOMA in mmWave networks will be investigated. We will show that the use of NOMA is still important in mmWave networks, in order to fully exploit the bandwidth resources available at very high frequencies.
6. The practical implementation of NOMA will be discussed. The existing coding and modulation designs for NOMA will be described first, where another practical form of NOMA based on lattice coding, termed lattice partition multiple access (LPMA), will be introduced. The impact of imperfect channel state information (CSI) on the design of NOMA will then be investigated.
7. The application of NOMA to wireless caching will be illustrated as well. Particularly, several NOMA assisted caching strategies will be described, and their capabilities to improve the spectral efficiency of the two caching phases, content pushing and content delivery, will be demonstrated.
8. The combination of mobile edge computing (MEC) and NOMA is introduced. A few recently developed NOMA-assisted MEC offloading strategies are described, and the impact of NOMA on the delay and energy consumption of MEC offloading is illustrated using OMA-MEC as a benchmark.
9. Grant-free NOMA transmission is described. We will show that grant-free NOMA transmission provides a more efficient way to reduce system signalling and ensures that users are served in a more timely manner.
10. Recent standardization activities related to NOMA will be reviewed as well. In particular, the tutorial will focus on the implementation of multi-user superposition transmission (MUST), a technique which has been included in 3GPP LTE Release 13. Different designs of MUST and their relation to the basic form of NOMA will be illustrated. In addition, the application of NOMA in the digital TV standard ATSC 3.0 will also be illustrated.
11. Finally, challenges and open problems for realizing spectrally efficient NOMA communications in the next generation of wireless networks will be discussed.
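As mentioned in item 4 above, the a priori information obtained through SIC can be exploited by letting the strong user act as a relay. The toy sketch below (our illustration with assumed channel gains, not one of the tutorial's protocols) compares the weak user's rate with and without such a second, cooperative phase.

```python
import numpy as np

# Toy two-phase cooperative NOMA sketch (assumed parameters, illustration only).
# Phase 1: the base station broadcasts the superposed NOMA signal; via SIC the
#          strong user already decodes the weak user's symbol.
# Phase 2: the strong user re-transmits that symbol over the inter-user channel,
#          and the weak user combines both observations (maximum-ratio combining).
P, N0 = 1.0, 1e-2
a1, a2 = 0.8, 0.2                               # power split, a1 for the weak user
h_weak, h_strong, h_inter = 0.3, 1.0, 0.7       # BS->weak, BS->strong, strong->weak gains

sinr_direct = a1 * P * h_weak**2 / (a2 * P * h_weak**2 + N0)   # phase-1 SINR at weak user
snr_relay = P * h_inter**2 / N0                                # phase-2 SNR of relayed copy

R_weak_direct = np.log2(1 + sinr_direct)                       # no cooperation
R_weak_coop = 0.5 * np.log2(1 + sinr_direct + snr_relay)       # MRC over two channel uses

print(f"weak-user rate without cooperation: {R_weak_direct:.2f} b/s/Hz")
print(f"weak-user rate with cooperation:    {R_weak_coop:.2f} b/s/Hz")
```

The 1/2 pre-log factor accounts for the two channel uses consumed by cooperation; even so, the relayed copy can substantially improve the weak user's rate when its direct link is poor.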
Tutorial Outline
- Overview and Motivation
- A General Framework
- Single-Carrier NOMA
- Multi-carrier NOMA
- Cooperative NOMA
- User Cooperation
- Employing Dedicated Relays
- MIMO-NOMA
- General Principles
- Practical Designs
- When Users’ Channels Are Similar
- MmWave-NOMA
- General Principles
- FRAB-mmWave-NOMA
- NOMA Assisted Caching
- Motivation and Introduction
- NOMA Caching Strategies
- NOMA Assisted MEC
- Performance Analysis
- Optimization
- Grant-Free NOMA
- Practical Implementation Issues
- Coding and Modulation
- Imperfect CSI
- SWIPT + NOMA
- Security Provisioning for NOMA
- Future Research Directions
Primary Audience
Graduate students, senior undergraduate students, researchers, and engineers in telecommunications
Novelty
This tutorial presents a timely overview of how to achieve high spectral efficiency, the holy grail of modern wireless communications, particularly for emerging 5G networks. The tutorial will shed light on some fundamental challenges in the design of such spectrally efficient networks from a system engineering perspective. All concepts are built up from the basics, so the audience requires only a moderate level of prior knowledge in communications and signal processing.
Biography
Zhiguo Ding is a Professor in Communications at the University of Manchester. From Oct. 2012 to Sept. 2019, he has also been an academic visitor at Princeton University. Dr. Ding's research interests are 5G networks, game theory, cooperative and energy-harvesting networks, and statistical signal processing. He is serving as an Editor for IEEE TCOM and TVT, and was an Editor for IEEE WCL and CL from 2013 to 2016. He received the Best Paper Award at IET ICWMC-2009 and IEEE WCSP-2014, the EU Marie Curie Fellowship (2012-2014), recognition as a Top IEEE TVT Editor (2017), the IEEE Heinrich Hertz Award (2018), and the IEEE Jack Neubauer Memorial Award (2018).
Wei Liang received the M.Sc. and Ph.D. degrees from the Wireless Communications Group at the University of Southampton, Southampton, U.K., in 2010 and 2015, respectively. From 2015 to 2018, she was a Research Fellow with Lancaster University, and since 2018 she has been an associate professor with Northwestern Polytechnical University. Her research interests include adaptive coded modulation, network coding, matching theory, game theory, cooperative communication, cognitive radio networks, non-orthogonal multiple access schemes, and machine learning.
Jia Shi received the M.Sc. and Ph.D. degrees from the University of Southampton, U.K., in 2010 and 2015, respectively. He was a Research Associate with Lancaster University, U.K., from 2015 to 2017, and a Research Fellow with the 5GIC, University of Surrey, U.K., from 2017 to 2018. Since 2018, he has been with Xidian University, China, where he is currently an Associate Professor with the State Key Laboratory of Integrated Services Networks. His current research interests include mm-wave communications, non-orthogonal multiple access (NOMA) techniques, artificial intelligence, resource allocation in wireless systems, covert communications, and physical layer security. He is serving as an editor for Electronics Letters and as a guest editor for China Communications.
14:00–17:30, Room: Starhill 8
T4: Machine Learning Radio Resource Management for Future Mobile Networks
Presented by: Li-Chun Wang, Chair Professor, Dept. of Electrical and Computer Engineering, National Chiao Tung University
Time: 14:00–17:30
Room: Starhill 8
Abstract—We are witnessing the transition into the fifth generation (5G) of cellular mobile systems. Is there any need for beyond 5G? A significant change in recent wireless networks is that much more data are collected from various sources, including channels, locations, radio access options, and network states. The availability of this large volume and variety of data can potentially transform the current knowledge-driven mobile network into a more powerful data-driven, cognitive, and learning-assisted mobile network.
Tutorial Objectives
In this tutorial, we first discuss various types of learning algorithms: supervised learning, unsupervised learning, and reinforcement learning.
Second, we examine how these learning algorithms can address the performance issues of high-capacity ultra-dense small cells in an environment with dynamic traffic patterns and time-varying channel conditions. In particular, we introduce a data-driven bi-adaptive self-organizing network (Bi-SON) which can exploit the power of data-driven resource management to address the performance issues of ultra-dense small cells (UDSC).
On top of the Bi-SON framework, we will examine how a polynomial-regression supervised learning algorithm and an affinity-propagation unsupervised learning algorithm can improve energy efficiency and reduce interference in operator-deployed small cells and customer plug-and-play small cells, respectively. We will also examine how reinforcement learning can further improve the performance of UDSC.
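As a minimal, hypothetical sketch of the unsupervised-learning ingredient mentioned above, the snippet below applies scikit-learn's affinity propagation to group randomly placed plug-and-play small cells into clusters that could then coordinate their resources. The deployment geometry and the use of cell positions as the clustering feature are our assumptions, not the Bi-SON design itself.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

# Hypothetical example: cluster plug-and-play small cells by location so that
# cells in the same cluster can coordinate their resource allocation.
rng = np.random.default_rng(0)
cell_positions = rng.uniform(0, 500, size=(40, 2))   # 40 small cells in a 500 m x 500 m area

ap = AffinityPropagation(random_state=0).fit(cell_positions)
n_clusters = len(ap.cluster_centers_indices_)
print("number of coordination clusters:", n_clusters)
print("cluster index of each small cell:", ap.labels_)
```

Affinity propagation does not require the number of clusters in advance, which is convenient when the small-cell deployment is unplanned.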
Finally, we discuss deep reinforcement learning (DRL) approaches proposed to address various challenges in modern networks that are more decentralized, ad hoc, and autonomous in nature, such as the Internet of Things (IoT), vehicle-to-vehicle networks, unmanned aerial vehicle (UAV) networks, and heterogeneous networks (HetNets).
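In the same hedged spirit, a single-state (bandit-style) Q-learning toy, far simpler than the Bi-SON or DRL methods the tutorial actually covers, illustrates how a small cell could learn which channel to use from success/failure feedback; the reward model is invented for illustration.

```python
import numpy as np

# Toy single-state Q-learning for channel selection (invented reward model).
rng = np.random.default_rng(0)
true_quality = np.array([0.2, 0.5, 0.9, 0.4])      # unknown success probability per channel
Q = np.zeros(len(true_quality))                    # learned value of each channel
alpha, epsilon = 0.1, 0.1                          # learning rate and exploration rate

for t in range(2000):
    a = rng.integers(len(Q)) if rng.random() < epsilon else int(np.argmax(Q))
    reward = float(rng.random() < true_quality[a]) # 1 if the transmission succeeded
    Q[a] += alpha * (reward - Q[a])                # incremental Q-value update

print("learned channel values:", np.round(Q, 2))
print("preferred channel     :", int(np.argmax(Q)))
```

DRL replaces the table Q with a deep neural network so that much larger state spaces (traffic, channel, and network conditions) can be handled.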
Tutorial Outline
- Introduction to Data-Driven Resource Management
- Background of Machine Learning
- Supervised Learning for Operator Deployed Ultra-Dense Small Cells
- Unsupervised Learning for Customer Plug-and-Play Ultra-Dense Small Cells
- Reinforcement Learning for Ultra-Dense Small Cells
- Deep Reinforcement Learning for Mobile 5G and Beyond
Primary Audience
This tutorial is targeted at both academic researchers and industrial engineers in the field of cellular mobile communications.
Novelty
This tutorial provides a new perspective on future mobile networks, where AI meets mobile big data, from the viewpoint of cellular architecture and radio resource management.
Biography
Li-Chun Wang (M'96 -- SM'06 -- F'11) received his Ph.D. degree from the Georgia Institute of Technology, Atlanta, in 1996. From 1996 to 2000, he was with AT&T Laboratories, where he was a Senior Technical Staff Member in the Wireless Communications Research Department. Since August 2000, he has been with the Department of Electrical and Computer Engineering of National Chiao Tung University in Taiwan, where he is jointly appointed by the Department of Computer Science and Information Engineering of the same university.
Dr. Wang was elected an IEEE Fellow in 2011 for his contributions to cellular architectures and radio resource management in wireless networks. He won the Distinguished Research Award of the National Science Council, Taiwan (2012). He was a co-recipient of the IEEE Communications Society Asia-Pacific Board Best Award (2015), the Y. Z. Hsu Scientific Paper Award (2013), and the IEEE Jack Neubauer Best Paper Award (1997). His current research interests are in the areas of software-defined mobile networks, heterogeneous networks, and data-driven intelligent wireless communications. He holds 19 US patents, has published over 200 journal and conference papers, and co-edited the book “Key Technologies for 5G Wireless Systems” (Cambridge University Press, 2017).
9:00–12:30, Room: Bintang 7
T5: Artificial Intelligence in Wireless Signal Processing: from Compressive Sensing to Deep Learning
Presented by: Yue Gao, Queen Mary University of London, London, UK
Time: 9:00–12:30
Room: Bintang 7
Abstract—This tutorial aims to discuss sparse signal processing in wireless communications, with particular focus on the most recent developments in compressive sensing, embedded Artificial Intelligence (AI), and Deep Learning (DL) enabled approaches, from theory to practice. By exploiting the sparsity property, sub-Nyquist sampling can be achieved through compressive sensing. Moreover, DL has rebooted intelligent signal processing in wireless communications, for tasks such as signal detection and channel estimation. The tutorial starts with a brief introduction to, and a general framework for, sparse signal processing in wireless communications. The second part of the tutorial presents a framework for compressive spectrum sensing which guarantees noise robustness, low complexity, and security. Moreover, real-world signals and data collected in the in-field tests carried out during the TV white space and millimetre-wave pilot trials will be presented to verify the algorithm designs and provide significant insights into the potential of bringing compressive spectrum sensing from theory to practice through an embedded AI approach. The third part of the tutorial shows the power of DL in sparse signal processing for physical layer communications, with particular focus on signal detection and channel estimation. Finally, the tutorial will identify challenges in DL-enabled signal processing via the example of an AI-enabled cognitive radio framework. We believe this tutorial will give the audience a clear picture of how to exploit DL, embedded AI, and compressive sensing to process wireless signals.
Tutorial Objectives
Sparse representation can efficiently model signals with a reduced number of parameters to facilitate processing. It has been widely used in different applications, such as image processing, audio signal processing, and wireless signal processing. In this tutorial, we will discuss sparse signal processing in wireless communications, with a focus on the most recent compressive sensing (CS) and deep learning (DL) enabled sparse representation. The tutorial starts from the general framework of sparse representation, including CS and DL-based approaches. Then we will present recent research progress on applying CS to address major issues and challenges in wireless communications, with wideband spectrum sensing provided as a sub-Nyquist sampling example. In particular, both the latest theoretical contributions and practical implementation platforms will be discussed through a GHz-bandwidth sensing system as an embedded artificial intelligence (AI) approach. The third part of this tutorial will cover the AI-enabled cognitive radio framework, with particular emphasis on DL-enabled wireless signal detection.
This tutorial will benefit researchers looking for cross-pollination of their experience with other areas, such as data-driven channel estimation and wideband spectrum sensing, and will give the audience a clear picture of how to exploit sparsity, DL, and embedded AI to process wireless signals in different scenarios.
• The first objective is to provide a general introduction to the different tools used for sparse representation in wireless signal processing, such as deep learning and compressive sensing. We will present the theoretical background of sparse representation and the research challenges faced by wireless communications.
• The second objective is to demonstrate the basic framework of compressive spectrum sensing, which is used to reduce the sampling cost and improve spectrum utilization for a GHz-bandwidth sensing system with an embedded AI approach. It covers the most advanced developments of compressive spectrum sensing from theory to practice (a toy sparse-recovery sketch is given after this list).
• The third objective is to illustrate the power of DL in wireless signal processing, with particular focus on an artificial intelligence enabled (AI-enabled) cognitive radio framework. The most advanced developments in interference mitigation for wideband radios using spectrum correlation and neural networks will be introduced.
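As referenced in the second objective above, the following toy sketch (our illustration, not the tutorial's GHz-bandwidth system) recovers a sparse spectrum-occupancy vector from far fewer random measurements than spectrum bins using orthogonal matching pursuit; the dimensions, sparsity level, and measurement matrix are assumed values.

```python
import numpy as np

# Toy compressive spectrum sensing sketch: recover which of N spectrum bins are
# occupied from M << N random projections, assuming only K bins are active.
rng = np.random.default_rng(1)
N, M, K = 256, 64, 4

x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.uniform(1, 2, K)   # sparse spectrum occupancy
A = rng.standard_normal((M, N)) / np.sqrt(M)                # random measurement matrix
y = A @ x                                                   # sub-Nyquist measurements

# Orthogonal matching pursuit (OMP)
support, r = [], y.copy()
for _ in range(K):
    support.append(int(np.argmax(np.abs(A.T @ r))))         # most correlated column
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ coef                            # update the residual

x_hat = np.zeros(N)
x_hat[support] = coef
print("recovered occupied bins:", sorted(support))
print("true occupied bins:     ", sorted(np.flatnonzero(x).tolist()))
```

The DL-based approaches discussed in the tutorial replace such iterative recovery with a learned mapping from measurements to the sparse signal.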
Tutorial Outline
- Background and general framework of sparse representation in wireless communications
- Compressive spectrum sensing
- Robust compressive spectrum sensing: introducing compressive spectrum sensing with robustness to channel noise and low complexity during sparse signal recovery
- Data-driven compressive spectrum sensing: presenting compressive spectrum sensing by utilizing prior information from a geo-location database for performance enhancement
- Secure compressive spectrum sensing: discussing malicious user detection in compressive spectrum sensing to enhance network security
- GHz bandwidth sensing system from compressive sensing to embedded AI
- Research directions in sparse signal processing from theory to practice: introducing the key challenges in theory as well as implementation
- Framework from compressive sensing to embedded artificial intelligence: discussing examples to illustrate a framework from compressive sensing to learning as an embedded AI approach
- Deep learning based sparse signal recovery: presenting deep learning solutions for signal recovery in compressive sensing
- DL in sparse signal processing
- Power of DL for signal processing in communications.
- Data-driven DL-based channel estimation and signal detection: introducing data-driven DL enabled channel estimation and signal detection
- Model-driven DL-based signal compression: presenting model-driven signal compression and detection in millimetre-wave systems.
- Research challenges in DL-enabled sparse signal processing
Primary Audience
Whilst this overview is ambitious in terms of providing a research-oriented outlook, potential attendees require only a modest background in signal processing and wireless communications. The mathematical content is kept to a minimum and a conceptual approach is adopted. Postgraduate students, researchers, and signal processing practitioners, as well as managers looking for cross-pollination of their experience with other topics, may find the coverage of the presentation beneficial. Participants will receive the set of slides as supporting material and can find the detailed mathematical analysis in the referenced papers and books.
Novelty
This tutorial will explain sparse signal processing principles and AI and machine learning techniques in various wireless applications at both sub-6 GHz and millimetre-wave frequency bands. We will also explain implementable sparse signal processing and machine learning techniques. Moreover, the field trials carried out in the UK and Italy for data collection will be introduced, and the collected real-world datasets will be used as examples to explain the principles of sparse signal processing and machine learning techniques in wireless communications.
Biography
Dr. Yue Gao has rich experience in signal processing for wireless communications and hardware system implementation gained over the past 15 years. Dr. Gao leads a team developing fundamental research into practice in the interdisciplinary area of embedded artificial intelligence, using smart antennas and sparse signal processing for spectrum sharing, the Internet of Things, and millimetre-wave systems. He is an Engineering and Physical Sciences Research Council Fellow from 2018 to 2023. He was a co-recipient of the EU Horizon Prize on Collaborative Spectrum Sharing in 2016.
Dr. Zhijin Qin has been working on sparse signal recovery, with particular focus on compressive sensing and matrix completion in wireless communications, for more than six years since her Ph.D. She has carried out extensive research in the related areas from theory to practice. Many of her published journal papers in this area have been ranked among the most popular articles in their respective journals, and four of them have been ranked as ESI highly cited papers.
Dr. Geoffrey Li has performed research in machine learning and statistical signal processing for wireless communications over the past two decades. His recent work includes sparse signal compression for channel estimation and feedback in massive MIMO networks.
14:00–17:30, Room: Bintang 7
T6: Adversarial ML and Vehicular Networks: Strategies for Attack and Defense
Presented by: Junaid Qadir, ITU, Pakistan; Muhammad Shafique, TU Wien, Austria; Ala Al Fuqaha, HBKU, Qatar
Time: 14:00–17:30
Room: Bintang 7
Abstract—Machine learning (ML) has seen a lot of recent success in a wide variety of applications and industries. Despite this success, researchers have shown that ML algorithms are easy to fool and susceptible to well-known security attacks. In particular, many modern algorithms (especially those based on deep neural networks, or DNNs) are susceptible to adversarial attacks, such as a targeted misclassification attack on a self-driving car that aims to misclassify traffic signs. The increased importance of ML and AI, and the broad uptake and incorporation of these technologies in modern autonomous vehicles and vehicular networking, place a premium on building robust and secure AI and ML algorithms.
Our experience with the Internet has shown that it is very difficult to retroactively embed security in systems that were not designed with security in mind in the first place. Although ML vulnerabilities in domains such as vision, image, and audio are now well known, little attention has been paid to adversarial attacks on vehicular networking ML models. For the practical success of vehicular networking, it is extremely important that the underlying technology be robust to all kinds of potential problems, be they accidental, intentional, or adversarial.
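As a concrete, self-contained illustration of this attack family (a generic fast-gradient-sign-method example on a toy linear classifier, not a model from any actual vehicular system), consider the sketch below; the weights, input, and attack budget are invented for illustration.

```python
import numpy as np

# Minimal FGSM-style adversarial example against a toy linear classifier.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
w, b = rng.standard_normal(8), 0.1         # a hypothetical trained linear classifier
x = rng.standard_normal(8)                 # a clean input feature vector
y = 1.0                                    # its true label in {-1, +1}

# Gradient of the logistic loss -log(sigmoid(y*(w.x+b))) with respect to the input
grad_x = -y * sigmoid(-y * (w @ x + b)) * w

eps = 0.3                                  # attack budget (L-infinity norm)
x_adv = x + eps * np.sign(grad_x)          # fast gradient sign method (FGSM)

print("clean score      :", w @ x + b)
print("adversarial score:", w @ x_adv + b)  # pushed toward misclassification
```

A small, norm-bounded perturbation of the input is enough to move the classifier's decision, which is exactly the kind of fragility the tutorial examines for ML models deployed in vehicular networks.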
Tutorial Objectives
Our tutorial at VTC will serve to caution practitioners by discussing how the use of AI and ML in vehicular networking opens up potential security risks while also motivating the development of appropriate defenses and identifying promising directions of future work.
By the end of the tutorial, we expect that the attendees will be able to:
— Understand the major components of the ML pipeline, which will likely be used in future VANETs;
— Identify common attacks on ML models;
— Understand the dangers of adversarial machine learning in general;
— Understand the dangers of adversarial machine learning for self-driving cars and VANETs;
— Identify some promising directions for enabling robust ML.
Tutorial Outline
- Introduction to adversarial ML (~30 minutes)
- Taxonomy of attacks on ML and DNN models (~30 minutes)
- Introduction to the ML pipeline in self-driving cars and vehicular networks and the associated security risks (~30 minutes)
- Potential Robust ML defenses (~30 minutes)
- Open issues and future work (~30 minutes)
Primary Audience
The tutorial will be useful for graduate students, researchers, as well as practitioners. The tutorial will be self-contained and attendees with basic understanding of VANETs and ML will be able to benefit from this tutorial.
Novelty
The major contributions of this tutorial will be in highlighting the vulnerability of ML-based functionality in modern vehicular networks to adversarial attacks and in providing useful insights for developing robust ML-based vehicular networking applications. This will be the first tutorial focused on adversarial ML for the particular setting of vehicular networks, which is the principal audience of IEEE VTC.
Biography
Junaid Qadir is an Associate Professor at the Information Technology University (ITU), Punjab, Lahore. He is the Director of the IHSAN (ICTD; Human Development; Systems; Big Data Analytics; Networks) Research Lab at ITU (http://ihsanlab.itu.edu.pk/). His primary research interests are in computer systems, applied ML for computer systems and networks, and the use of ICT for development (ICT4D). He has published more than 100 peer-reviewed scholarly papers and has given invited talks at multiple conferences (e.g., IWCMC 2017, IEEE ICOSST 2017, IEEE INTELLECT 2017).
Muhammad Shafique has been a full professor (Univ. Prof.) at the Vienna University of Technology (TU Wien) since Nov. 2016. Dr. Shafique has given several invited talks, tutorials, and keynotes. He is a senior member of the IEEE and the IEEE Signal Processing Society (SPS), and a member of the ACM, SIGARCH, SIGDA, and SIGBED. He holds one US patent and has (co-)authored 4 books, 4 book chapters, and over 180 papers in premier journals and conferences.
Ala Al-Fuqaha is a professor with the Information and Computing Technology (ICT) Division, College of Science and Engineering (CSE), Hamad Bin Khalifa University, and with the Department of Computer Science, Western Michigan University, Kalamazoo, MI, 49008, USA. His research interests include the use of machine learning in general, and deep learning in particular, in support of data-driven and self-driven management of large-scale deployments of IoT and smart city infrastructure and services, wireless vehicular networks (VANETs), cooperation and spectrum access etiquette in cognitive radio networks, and management and planning of software-defined networks (SDN). He is a senior member of the IEEE and an ABET Program Evaluator (PEV). He serves on the editorial boards and technical program committees of multiple international journals and conferences.