2016
DOI: 10.1007/s10825-016-0842-1

Prior knowledge input neural network method for GFET description

Abstract: For circuit design, various compact models for graphene field-effect transistors (GFETs) have been developed. However, the consistency between them is poor, since study on the mechanism of operation of GFETs is still immature and the models were derived based on different understandings. Herein, we propose another approach for circuit-level description of GFETs based on a prior knowledge input neural network modeling method. By virtue of the neural network's learning ability, it can accurately describe differe…
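As a hedged illustration of the prior-knowledge-input idea described in the abstract, the sketch below feeds the output of a coarse, placeholder GFET drain-current model into a small neural network alongside the bias voltages and fits the network to toy "measured" data. The prior model, network size, training data, and hyperparameters are all assumptions made for illustration; the paper's actual formulation is not reproduced here.

```python
# Minimal sketch of a prior-knowledge-input (PKI) neural network for GFET
# I-V description. The prior model, data, and network below are illustrative
# placeholders, not the formulation used in the paper.
import numpy as np

rng = np.random.default_rng(0)

def prior_model_ids(vgs, vds, k=1e-3, v_dirac=0.0):
    """Hypothetical coarse drain-current estimate used as prior knowledge."""
    return k * (np.abs(vgs - v_dirac) + 0.1) * vds

def forward(params, x):
    """One hidden tanh layer; x columns are [vgs, vds, prior_ids]."""
    w1, b1, w2, b2 = params
    h = np.tanh(x @ w1 + b1)
    return h @ w2 + b2

# Toy "measured" characteristics standing in for GFET measurements.
vgs = rng.uniform(-1.0, 1.0, size=200)
vds = rng.uniform(0.0, 1.0, size=200)
ids_meas = prior_model_ids(vgs, vds) * (1.0 + 0.2 * np.sin(3.0 * vgs))

# Prior-knowledge input: the coarse model output is appended to the biases.
x = np.column_stack([vgs, vds, prior_model_ids(vgs, vds)])
y = ids_meas[:, None]

n_hidden = 8
params = [rng.normal(0.0, 0.5, (3, n_hidden)), np.zeros(n_hidden),
          rng.normal(0.0, 0.5, (n_hidden, 1)), np.zeros(1)]

lr = 0.05
for step in range(2000):
    w1, b1, w2, b2 = params
    h = np.tanh(x @ w1 + b1)
    err = (h @ w2 + b2) - y
    # Hand-written backpropagation for the two-layer network.
    g_w2 = h.T @ err / len(x)
    g_b2 = err.mean(axis=0)
    g_h = (err @ w2.T) * (1.0 - h ** 2)
    g_w1 = x.T @ g_h / len(x)
    g_b1 = g_h.mean(axis=0)
    params = [w1 - lr * g_w1, b1 - lr * g_b1, w2 - lr * g_w2, b2 - lr * g_b2]

print("final MSE:", float(np.mean((forward(params, x) - y) ** 2)))
```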

Cited by 5 publications (2 citation statements)
References: 28 publications
“…It has been claimed as the best function generating weights and biases that increase the speed of the training [44]. It is also claimed that a better starting point to the algorithm used can be created by using the generated weights and biases for input to hidden layer nodes, obtained from this function [45].…”
Section: Weights Initialization (mentioning)
confidence: 99%
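The excerpt above does not name the initialization function from refs. [44] and [45]. As one hedged, hypothetical illustration of an input-to-hidden weight and bias initializer of that kind, the Nguyen-Widrow-style sketch below rescales randomly drawn weights so that each hidden neuron starts in its active region, which is the usual rationale for such a "better starting point".

```python
# Hypothetical sketch of an input-to-hidden weight/bias initializer in the
# spirit of the excerpt; the Nguyen-Widrow rule is assumed here, since the
# excerpt does not name the specific function from refs. [44]/[45].
import numpy as np

def nguyen_widrow_init(n_inputs, n_hidden, rng=None):
    """Return (weights, biases) for the input-to-hidden layer."""
    rng = rng if rng is not None else np.random.default_rng()
    beta = 0.7 * n_hidden ** (1.0 / n_inputs)                 # scaling factor
    w = rng.uniform(-0.5, 0.5, size=(n_hidden, n_inputs))
    w *= beta / np.linalg.norm(w, axis=1, keepdims=True)      # per-neuron norm
    b = rng.uniform(-beta, beta, size=n_hidden)
    return w, b

w1, b1 = nguyen_widrow_init(n_inputs=3, n_hidden=8)
print(w1.shape, b1.shape)  # (8, 3) (8,)
```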
“…Therefore it is important to reduce the computational time required for NEGF device simulations; for instance, by applying information-scientific viewpoint [11,12,13,14]. As a representative example, consider double-gate MOSFET (DG-MOSFET) [15,16,17,18,19,20,21].…”
Section: Introduction (mentioning)
confidence: 99%