Document Type: BL
Record Number: 860821
Main Entry: Yu, F. Richard
Title & Author: Deep reinforcement learning for wireless networks / F. Richard Yu, Ying He.
Publication Statement: Cham, Switzerland : Springer, [2019]
Series Statement: SpringerBriefs in Electrical and Computer Engineering
Physical Description: 1 online resource (78 pages)
ISBN: 3030105466
ISBN: 3030105474
ISBN: 9783030105464
ISBN: 9783030105471
ISBN: 3030105458
ISBN: 9783030105457
Contents: Intro; Preface; A Brief Journey Through "Deep Reinforcement Learning for Wireless Networks"; Contents; 1 Introduction to Machine Learning; 1.1 Supervised Learning; 1.1.1 k-Nearest Neighbor (k-NN); 1.1.2 Decision Tree (DT); 1.1.3 Random Forest; 1.1.4 Neural Network (NN); Random NN; Deep NN; Convolutional NN; Recurrent NN; 1.1.5 Support Vector Machine (SVM); 1.1.6 Bayes' Theory; 1.1.7 Hidden Markov Models (HMM); 1.2 Unsupervised Learning; 1.2.1 k-Means; 1.2.2 Self-Organizing Map (SOM); 1.3 Semi-supervised Learning; References; 2 Reinforcement Learning and Deep Reinforcement Learning; 2.1 Reinforcement Learning; 2.2 Deep Q-Learning; 2.3 Beyond Deep Q-Learning; 2.3.1 Double DQN; 2.3.2 Dueling DQN; References; 3 Deep Reinforcement Learning for Interference Alignment Wireless Networks; 3.1 Introduction; 3.2 System Model; 3.2.1 Interference Alignment; 3.2.2 Cache-Equipped Transmitters; 3.3 Problem Formulation; 3.3.1 Time-Varying IA-Based Channels; 3.3.2 Formulation of the Network's Optimization Problem; System State; System Action; Reward Function; 3.4 Simulation Results and Discussions; 3.4.1 TensorFlow; 3.4.2 Simulation Settings; 3.4.3 Simulation Results and Discussions; 3.5 Conclusions and Future Work; References; 4 Deep Reinforcement Learning for Mobile Social Networks; 4.1 Introduction; 4.1.1 Related Works; 4.1.2 Contributions; 4.2 System Model; 4.2.1 System Description; 4.2.2 Network Model; 4.2.3 Communication Model; 4.2.4 Cache Model; 4.2.5 Computing Model; 4.3 Social Trust Scheme with Uncertain Reasoning; 4.3.1 Trust Evaluation from Direct Observations; 4.3.2 Trust Evaluation from Indirect Observations; Belief Function; Dempster's Rule of Combining Belief Functions; 4.4 Problem Formulation; 4.4.1 System State; 4.4.2 System Action; 4.4.3 Reward Function; 4.5 Simulation Results and Discussions; 4.5.1 Simulation Settings; 4.5.2 Simulation Results; 4.6 Conclusions and Future Work; References.
Abstract: This SpringerBrief presents a deep reinforcement learning approach to improving the performance of wireless systems. In particular, the approach is applied to cache-enabled opportunistic interference alignment wireless networks and to mobile social networks. Simulation results under different network parameters are presented to show the effectiveness of the proposed scheme. There has been a phenomenal burst of research activity in artificial intelligence, deep reinforcement learning, and wireless systems. Deep reinforcement learning has been used successfully to solve many practical problems; for example, Google DeepMind has applied it to several artificial intelligence projects involving big data (e.g., AlphaGo) with impressive results. Graduate students in electrical and computer engineering, as well as in computer science, will find this brief useful as a study guide. Researchers, engineers, computer scientists, programmers, and policy makers will also find it a useful tool.
Subject: Reinforcement learning.
Subject: Wireless communication systems.
Dewey Classification: 006.3/1
LC Classification: Q325.6.Y84 2019
Added Entry: He, Ying