Structured knowledge grounding (SKG) leverages structured knowledge to complete user requests, such as semantic parsing over databases and question answering over knowledge bases. Since the inputs and outputs of SKG tasks are heterogeneous, they have been studied separately by different communities, which limits systematic and compatible research on SKG. In this paper, we overcome this limitation by proposing the UNIFIEDSKG framework, which unifies 21 SKG tasks into a text-to-text format, aiming to promote systematic SKG research instead of being exclusive to a single task, domain, or dataset. We use UNIFIEDSKG to benchmark T5 at different sizes and show that T5, with simple modifications when necessary, achieves state-of-the-art performance on almost all of the 21 tasks. We further demonstrate that multi-task prefix-tuning improves performance on most tasks, substantially improving the overall performance. UNIFIEDSKG also facilitates the investigation of zero-shot and few-shot learning, and we show that T0, GPT-3, and Codex struggle in zero-shot and few-shot learning for SKG. We also use UNIFIEDSKG to conduct a series of controlled experiments on structured knowledge encoding variants across SKG tasks. UNIFIEDSKG is easily extensible to more tasks, and it is open-sourced at https://github.com/hkunlp/unifiedskg.
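To illustrate the text-to-text unification, the following is a minimal sketch of how a heterogeneous SKG input (a question plus a table) can be serialized into a single flat string; the `linearize` helper and its template are hypothetical and do not reproduce UNIFIEDSKG's exact format.

```python
def linearize(question: str, table: dict) -> str:
    """Serialize a question and a table into one flat text-to-text input.

    Hypothetical template for illustration; UnifiedSKG's actual
    linearization may differ in delimiters and field order.
    """
    header = " | ".join(table["header"])
    rows = " ".join(
        "row {}: {}".format(i + 1, " | ".join(r))
        for i, r in enumerate(table["rows"])
    )
    return f"question: {question} ; table: col: {header} {rows}"

table = {
    "header": ["city", "country"],
    "rows": [["Paris", "France"], ["Kyoto", "Japan"]],
}
print(linearize("Which city is in Japan?", table))
```

Once every task is rendered this way, a single text-to-text model such as T5 can consume all 21 tasks with one interface; only the serialization differs per knowledge type.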
Recently, methods utilizing pulsed power technology for the disintegration of rocks have aroused great interest among researchers. In this paper, an improved method based on a magnetic switch is presented; the results show that uniform dielectrics such as plastic can be broken down in water, and a plausible mechanism explaining the breakdown of solids is proposed and verified experimentally. A high-voltage pulse of 120 kV with a rise time of 0.2 μs was used to ignite the discharge channel in the solid. Once the plasma channel is formed in the solid, the resistance of the channel is quite small; even a relatively low voltage applied across the channel at this stage drives a high current that heats the plasma channel rapidly and eventually disintegrates the solid. The experimental results in this paper verify the feasibility of the presented method for promising industrial applications in the drilling and demolition of natural and artificial solid materials.
Though end-to-end neural approaches have recently been dominating NLP tasks in both performance and ease of use, they lack interpretability and robustness. We propose BINDER, a training-free neural-symbolic framework that maps the task input to a program, which (1) allows binding a unified API of language model (LM) functionalities to a programming language (e.g., SQL, Python) to extend its grammar coverage and thus tackle more diverse questions, (2) adopts an LM as both the program parser and the underlying model called by the API during execution, and (3) requires only a few in-context exemplar annotations. Specifically, we employ GPT-3 Codex as the LM. In the parsing stage, with only a few in-context exemplars, Codex is able to identify the part of the task input that cannot be answered by the original programming language, correctly generate API calls to prompt Codex to solve the unanswerable part, and identify where to place the API calls while remaining compatible with the original grammar. In the execution stage, Codex can perform versatile functionalities (e.g., commonsense QA, information extraction) given proper prompts in the API calls. BINDER achieves state-of-the-art results on the WIKITABLEQUESTIONS and TABFACT datasets, with explicit output programs that benefit human debugging. Note that the previous best systems are all fine-tuned on tens of thousands of task-specific samples, while BINDER only uses dozens of annotations as in-context exemplars without any training. Our code is available at https://github.com/hkunlp/binder.
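The two stages described above can be sketched end to end with a toy executor: an LM-backed API call embedded in SQL is materialized as an extra column, after which the remaining program runs as ordinary SQL. The `QA(...)` syntax, the `qa_api` lookup, and the single-call regex are all simplifying assumptions for illustration, with a hard-coded dictionary standing in for Codex so the sketch runs offline.

```python
import re
import sqlite3

# Toy stand-in for the LM called through the API during execution
# (BINDER uses Codex here; a fixed lookup keeps this sketch runnable).
EUROPE = {"France": "yes", "Japan": "no", "Brazil": "no"}

def qa_api(question: str, value: str) -> str:
    """Answer `question` about `value` -- the LM-backed API call."""
    return EUROPE.get(value, "no")

def execute_binder(program, rows):
    """Resolve one QA(...) call into a column, then run the plain SQL."""
    match = re.search(r"QA\('([^']*)';\s*(\w+)\)", program)
    question, _col = match.group(1), match.group(2)
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (country TEXT, qa_col TEXT)")
    for (country,) in rows:
        conn.execute("INSERT INTO t VALUES (?, ?)",
                     (country, qa_api(question, country)))
    # Rewrite the BINDER program into executable SQL over the new column.
    sql = program.replace(match.group(0), "qa_col")
    return [r[0] for r in conn.execute(sql)]

program = "SELECT country FROM t WHERE QA('is it in Europe?'; country) = 'yes'"
rows = [("France",), ("Japan",), ("Brazil",)]
print(execute_binder(program, rows))  # ['France']
```

The design point is that the symbolic part (SQL) stays fully inspectable, while only the sub-question that SQL cannot express is delegated to the LM.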