2023
DOI: 10.1007/978-3-031-18950-0_14
Mobile-Assisted Language Assessment for Adult EFL Learners: Recommendations from a Systematic Review

Cited by 8 publications (6 citation statements)
References 34 publications
“…Furthermore, our approach is based on linear prompt execution, with no opportunity for feedback or revision. Iterative prompting strategies such as chain-of-thought [109] or graph-of-thoughts prompting [110], self-debugging [111], or self-adapting approaches [112] can potentially enhance results. These strategies enable dynamic interaction with LLMs, allowing responses to be continuously improved and adapted through successive refinements.…”
Section: Discussion (mentioning)
confidence: 99%
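To make the contrast between linear and iterative prompting concrete, the sketch below shows one way such a feedback loop could look. It is a minimal illustration only, assuming a hypothetical call_llm(prompt) helper rather than any particular API; the chain-of-thought wording and the critique/revise steps are illustrative stand-ins for the cited strategies, not their actual implementations.

```python
# Minimal sketch of an iterative (non-linear) prompting loop.
# call_llm is a hypothetical stand-in for whatever LLM client is in use.

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM API call (assumption, not a real client)."""
    raise NotImplementedError

def iterative_refinement(task: str, max_rounds: int = 3) -> str:
    # First pass: chain-of-thought style prompt asking for explicit reasoning.
    answer = call_llm(f"{task}\nLet's think step by step before answering.")
    for _ in range(max_rounds):
        # Ask the model to critique its own draft (self-debugging style feedback).
        critique = call_llm(
            f"Task: {task}\nDraft answer: {answer}\n"
            "List any errors or omissions in the draft. Reply 'OK' if none."
        )
        if critique.strip().upper() == "OK":
            break
        # Revise the draft using the critique as feedback.
        answer = call_llm(
            f"Task: {task}\nDraft answer: {answer}\nCritique: {critique}\n"
            "Produce a corrected answer."
        )
    return answer
```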
“…Instruction-following language models have emerged as a promising direction for enabling LLMs to generate targeted and controlled outputs based on user instructions. Recent studies have explored various methods for fine-tuning LLMs with instruction data, enhancing their performance on specific tasks and domains (Koleva et al, 2022; Qiao et al, 2022; Wu et al, 2023; Chen et al, 2023). Prior research on Vietnamese language processing has pre-trained Vietnamese monolingual language models (Duong et al, 2021), with downstream applications to tasks such as question answering (Phan et al, 2022; Tran et al, 2023), named entity recognition (Vu et al, 2019; Tran et al, 2023), and text summarization (Phan et al, 2022), exploring challenges specific to the Vietnamese language.…”
Section: Related Work (mentioning)
confidence: 99%
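As a rough illustration of what fine-tuning with instruction data involves, the sketch below turns (instruction, input, output) records into training prompts. The template, field names, and sample record are assumptions made for illustration; they are not the formats used in the cited studies.

```python
# Minimal sketch of preparing instruction data for supervised fine-tuning.
# The prompt template and record fields are illustrative assumptions.

from typing import Dict, List

PROMPT_TEMPLATE = (
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n"
)

def build_training_examples(records: List[Dict[str, str]]) -> List[Dict[str, str]]:
    examples = []
    for rec in records:
        prompt = PROMPT_TEMPLATE.format(
            instruction=rec["instruction"], input=rec.get("input", "")
        )
        # The model is fine-tuned to continue the prompt with the reference output.
        examples.append({"prompt": prompt, "completion": rec["output"]})
    return examples

# Hypothetical record, loosely in the spirit of the Vietnamese tasks mentioned above.
samples = [{
    "instruction": "Translate the sentence into English.",
    "input": "Xin chào, bạn khỏe không?",
    "output": "Hello, how are you?",
}]
print(build_training_examples(samples)[0]["prompt"])
```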
“…For instance, some have fine-tuned Llama2 (Touvron et al, 2023) and achieved superior performance on SQL generation tasks compared to GPT-4. Additionally, there are techniques such as Self-Debug (Chen et al, 2023) that optimize inference during the prompting phase. The security and robustness of these approaches deserve further investigation.…”
Section: Limitations (mentioning)
confidence: 99%
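To show what optimizing inference at prompting time can look like, the sketch below gives a Self-Debug-flavoured loop for SQL generation: the candidate query is executed and any error message is returned to the model as feedback. The call_llm helper, prompts, and database path are hypothetical; this is a sketch of the general idea, not the implementation from Chen et al. (2023).

```python
# Minimal sketch of execution-feedback "self-debugging" for SQL generation.
# call_llm is a hypothetical stand-in for a real LLM client.

import sqlite3

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM API call (assumption)."""
    raise NotImplementedError

def generate_sql_with_self_debug(question: str, db_path: str, max_tries: int = 3) -> str:
    conn = sqlite3.connect(db_path)
    prompt = f"Write a SQLite query answering: {question}\nReturn only SQL."
    sql = call_llm(prompt)
    for _ in range(max_tries):
        try:
            conn.execute(sql)   # dry-run the candidate query
            return sql          # executed without error, accept it
        except sqlite3.Error as err:
            # Feed the execution error back so the model can repair the query.
            sql = call_llm(
                f"{prompt}\nPrevious query: {sql}\nExecution error: {err}\n"
                "Return a corrected SQL query only."
            )
    return sql
```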
“…The emergence of powerful large language models (LLMs) has recently enabled the development of highly effective parsers with minimal demonstration examples (Chen et al, 2023), indicating the potential for LLM-based parsers to serve as novel interfaces for databases (Li et al, 2023). Nevertheless, the exponential growth of LLM-based applications, coupled with inadequate regulation, creates an environment in which certain malicious service providers (MSPs) could exploit the invisibility of the prompt-engineering process to offer users services that contain hidden backdoors.…”
Section: Introduction (mentioning)
confidence: 99%
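The sketch below illustrates what a parser built from minimal demonstration examples might look like as a database interface: a handful of question–SQL pairs and a schema string are assembled into a few-shot prompt for the model to complete. The demonstrations, schema, and template are illustrative assumptions, not taken from the cited works.

```python
# Minimal sketch of a few-shot prompt for an LLM-based text-to-SQL parser.
# Demonstration pairs and schema are hypothetical examples.

DEMOS = [
    ("How many users signed up in 2023?",
     "SELECT COUNT(*) FROM users WHERE strftime('%Y', signup_date) = '2023';"),
    ("List the names of active users.",
     "SELECT name FROM users WHERE active = 1;"),
]

SCHEMA = "users(id, name, signup_date, active)"

def build_few_shot_prompt(question: str) -> str:
    # A few demonstration pairs are often enough to steer the parser.
    demo_text = "\n\n".join(f"Q: {q}\nSQL: {sql}" for q, sql in DEMOS)
    return f"Schema: {SCHEMA}\n\n{demo_text}\n\nQ: {question}\nSQL:"

print(build_few_shot_prompt("Which users signed up after June 2024?"))
```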