2020 IEEE 8th International Conference on Information, Communication and Networks (ICICN)
DOI: 10.1109/icicn51133.2020.9205073

A Knowledge Reasoning Algorithm Based on Network Structure and Representation Learning

Cited by 1 publication (3 citation statements)
References 4 publications
“…By actively engaging with the environment and utilizing feedback, extrospective reasoning offers a more flexible and responsive approach to generating plans, which is particularly suitable for complex and dynamic situations where the ability to adapt and learn from experience is crucial. Several related works in the field of extrospective reasoning with LLMs include Self-Ask (Press et al., 2023), ReAct (Yao et al., 2023c), ToolFormer (Schick et al., 2023), and LLM-Planner (Song et al., 2023a). Self-Ask (Press et al., 2023) proactively generates and responds to its own follow-up queries before addressing the original question.…”
Section: Extrospective Reasoning
confidence: 99%
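The Self-Ask pattern this statement describes can be sketched as a short decomposition loop. The `llm` callable and `self_ask` helper below are hypothetical illustrations of the idea, not the implementation of Press et al. (2023):

```python
# Sketch of the Self-Ask pattern: the model generates its own follow-up
# questions, answers each one, and only then answers the original question.
# `llm` is a hypothetical stand-in for any text-completion API.

def llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real completion API."""
    raise NotImplementedError

def self_ask(question: str, max_followups: int = 3) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_followups):
        # Ask the model whether it needs a follow-up question first.
        followup = llm(transcript +
                       "Are follow-up questions needed? "
                       "If yes, state one; if no, reply NO.\n")
        if followup.strip().upper() == "NO":
            break
        # The model answers its own follow-up; both are kept in context.
        answer = llm(transcript +
                     f"Follow-up: {followup}\nIntermediate answer:")
        transcript += f"Follow-up: {followup}\nIntermediate answer: {answer}\n"
    # Only now is the original question addressed, with the sub-answers
    # accumulated in the prompt.
    return llm(transcript + "So the final answer is:")
```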
“…Self-Ask (Press et al., 2023) proactively generates and responds to its own follow-up queries before addressing the original question. Meanwhile, ReAct (Yao et al., 2023c) leverages large language models to concurrently produce reasoning traces and task-specific actions. This dual approach enhances the interaction between these elements, with reasoning traces aiding in the development, monitoring, and modification of action plans, as well as managing unexpected situations.…”
Section: Extrospective Reasoning
confidence: 99%
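The interleaving of reasoning traces and actions that this statement attributes to ReAct can be illustrated with a minimal agent loop. The `llm` stub, the `tools` registry, and the `Thought:`/`Action:`/`Observation:` markers below are assumptions made for this sketch, not the API of Yao et al. (2023c):

```python
# Sketch of a ReAct-style loop: at each step the model emits a reasoning
# trace ("Thought") and one action; the action's result ("Observation")
# is fed back so the model can monitor and revise its plan.

from typing import Callable, Dict

def llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real completion API."""
    raise NotImplementedError

def react(question: str,
          tools: Dict[str, Callable[[str], str]],
          max_steps: int = 8) -> str:
    context = f"Question: {question}\n"
    for _ in range(max_steps):
        # Expected completion shape (an assumption of this sketch):
        # "Thought: ...\nAction: tool_name[argument]"
        step = llm(context + "Thought:")
        context += f"Thought:{step}\n"
        if "Action: finish[" in step:
            # A terminal action carries the final answer.
            return step.split("Action: finish[", 1)[1].rstrip("]\n")
        if "Action: " in step:
            name, _, rest = step.split("Action: ", 1)[1].partition("[")
            observation = tools[name.strip()](rest.split("]", 1)[0])
            # Environment feedback is appended, letting the next
            # reasoning trace adapt to what was observed.
            context += f"Observation: {observation}\n"
    return "No answer within the step budget."
```

The handling of unexpected situations mentioned in the statement falls out of this structure naturally: a failed tool call can be surfaced as an Observation, and the next Thought can re-plan around it.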