In a blockchain system, an entity called an ordering service (or miner) is responsible for ordering transactions, building a new block, and broadcasting the new block to all blockchain peers. In a distributed system, the safety property is violated when the ordering service sends inconsistent blocks to peers, and the liveness property is not guaranteed when the ordering service stops sending blocks to peers. Therefore, the ordering service must behave correctly to guarantee safety and liveness in blockchain operations. Hyperledger Fabric, a permissioned blockchain platform, assumes correct behavior of the ordering service. It also adopts an endorsement policy to deal with malicious behavior of peers. However, a malicious ordering service can easily render the endorsement policy ineffective. Therefore, Hyperledger Fabric is not reliable in an environment where the ordering service may be compromised. In this paper, we propose PeerBFT, which enables peers to handle Byzantine faults in the ordering service of Hyperledger Fabric, assuming that n ≥ 3f + 1, where n is the total number of peers and f is the maximum number of malicious peers. In PeerBFT, each peer audits the behavior of the ordering service. When a group of peers detects any incorrect operation by the ordering service, they collectively migrate to a new ordering service. We have built a prototype of PeerBFT on Hyperledger Fabric 1.2 and shown that it can handle Byzantine faults in the ordering service. The experimental evaluation shows that PeerBFT achieves approximately 90.8% of the TPS (transactions per second) of Hyperledger Fabric with the Solo ordering service.
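The fault-tolerance bound n ≥ 3f + 1 can be sketched as follows. This is a minimal illustration, not the paper's implementation: the helper names are hypothetical, and the f + 1 complaint threshold is an assumption (f + 1 matching complaints guarantee at least one honest complainer under the stated bound; PeerBFT's actual migration quorum may differ).

```python
def max_faulty_peers(n: int) -> int:
    """Largest f satisfying n >= 3f + 1, i.e. f = floor((n - 1) / 3)."""
    return (n - 1) // 3

def migration_quorum_reached(n: int, complaints: int) -> bool:
    """Assumed trigger: with more than f complaints about the ordering
    service, at least one must come from an honest peer, so the peers
    can safely migrate to a new ordering service."""
    f = max_faulty_peers(n)
    return complaints >= f + 1
```

For example, a network of 4 peers tolerates f = 1 malicious peer, and 2 matching complaints would justify migration under this assumed threshold.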
Recent works have shown that attaching prompts to the input is effective at conditioning Language Models (LMs) to perform specific tasks. However, prompts are always included in the input text during inference, incurring substantial computational and memory overhead. Also, there is currently no straightforward method of utilizing prompts that are longer than the maximum input length of the LM without incurring additional costs during inference. We propose Prompt Injection (PI), a novel formulation that injects the prompt into the parameters of an LM as an efficient alternative to attaching fixed prompts to the input. We show that in scenarios with long fixed prompts, PI can be up to 280 times more efficient in terms of total FLOPs than previous approaches. We further explore methodologies for PI and show promising results in persona-dependent conversation, semantic parsing, and zero-shot learning with task instructions. Through these explorations, we show that PI can be a promising direction for conditioning language models, especially in scenarios with long and fixed prompts.
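The FLOPs saving from dropping a fixed prompt at inference can be sketched with a back-of-the-envelope model. This is illustrative only: the `2 * params * tokens` forward-pass approximation (which ignores attention's quadratic term) and the example token counts are assumptions, not the paper's accounting, and the constant cancels in the ratio anyway.

```python
def inference_flops(num_params: int, seq_len: int) -> float:
    # Rough approximation: forward-pass FLOPs ~ 2 * parameters * tokens.
    return 2.0 * num_params * seq_len

def attach_vs_inject_ratio(num_params: int, prompt_len: int, input_len: int) -> float:
    """FLOPs of prepending a fixed prompt to every input, relative to
    running only the input through a prompt-injected model."""
    attached = inference_flops(num_params, prompt_len + input_len)
    injected = inference_flops(num_params, input_len)
    return attached / injected
```

Under this model the ratio reduces to (prompt_len + input_len) / input_len, so the saving grows with how much longer the fixed prompt is than the per-request input; the paper's 280x figure corresponds to a specific long-prompt setting.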
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and indicate whether the citing article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.