Abstract: FPGA overlays are commonly implemented as coarse-grained reconfigurable architectures that aim to improve designer productivity by balancing the flexibility and ease of configuration of the underlying fabric. To truly facilitate full application acceleration, it is often necessary to also include a highly efficient processor that integrates and collaborates with the accelerators while retaining the benefits of being implemented within the same overlay framework. This paper presents an open-source soft processor designed to tightly couple with FPGA accelerators as part of an overlay framework. RISC-V is chosen as the instruction set for its openness and portability, and the soft processor is designed as a 4-stage pipeline to balance resource consumption and performance when implemented on FPGAs. The processor is implemented generically to promote design portability and compatibility across different FPGA platforms. Experimental results show that integrated software-hardware applications using the proposed tightly coupled architecture achieve performance comparable to hardware-only accelerators, while the proposed architecture provides additional run-time flexibility. The processor has been synthesized for both low-end and high-performance FPGA families from different vendors, achieving a maximum frequency of 268.67 MHz and resource consumption comparable to existing RISC-V designs.
Worst-case delay analysis is important for avionics full duplex switched Ethernet (AFDX), standardised as ARINC 664. The flow model of a virtual link (VL) used in the worst-case delay analysis of AFDX is inaccurate: it depends only on the parameters of the VL and ignores the impact of the network, which makes the worst-case delay analysis of AFDX impractical. A worst-case flow model of the VL, which takes the worst impact of the network into account, is proposed to mend the worst-case delay analysis of AFDX. This worst-case flow model is applied in one of the main theoretical approaches for worst-case delay analysis, the network calculus approach, helping it obtain the real upper bound of delay for a VL.

Introduction: Avionics full duplex switched Ethernet (AFDX) [1] is an upgrade of Ethernet for avionics demands. It is a time-critical network for aerospace applications, and worst-case delay analysis is significant for this network. Worst-case delay analysis of AFDX is performed by theoretically calculating the upper bound of delay for each virtual link (VL). The main approaches for worst-case delay analysis include the trajectory approach [2] and the network calculus approach [3]. The network calculus approach is the most mature one; it was first proposed in [3] and applied to AFDX in [2,4,5]. The flow model of a VL in the worst-case delay analysis of AFDX is constructed from the parameters of the VL. This flow model ignores the impact of the network, so it cannot represent the worst situation of the flow, and the worst-case delay analysis of AFDX cannot be rigorous. To overcome this problem, a worst-case flow model of the VL is proposed in this Letter, which takes the worst impact of the network into account. This worst-case flow model is applied to the network calculus approach, which can then obtain the real upper bound of delay for a VL.
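As a sketch of the network calculus machinery this Letter builds on (not the proposed worst-case flow model itself): a VL with bandwidth allocation gap BAG and maximum frame size Lmax is commonly modelled by a leaky-bucket arrival curve α(t) = Lmax + (Lmax/BAG)·t, and passing such a flow through a rate-latency service curve β(t) = R·(t − T)+ gives the delay bound T + Lmax/R. The function names and parameter values below are illustrative assumptions, not taken from the Letter.

```python
def vl_arrival_curve(bag_s, lmax_bits):
    """Leaky-bucket arrival curve alpha(t) = b + r*t for a VL.

    bag_s: bandwidth allocation gap in seconds; lmax_bits: max frame size in bits.
    Returns (burst b in bits, sustained rate r in bit/s)."""
    return lmax_bits, lmax_bits / bag_s

def delay_bound(b, r, rate_R, latency_T):
    """Delay bound for a leaky-bucket flow crossing a rate-latency server
    beta(t) = R*(t - T)+: bound = T + b/R, valid only when r <= R."""
    if r > rate_R:
        raise ValueError("flow rate exceeds service rate; no finite bound")
    return latency_T + b / rate_R

# Illustrative numbers: BAG = 2 ms, Lmax = 1518-byte frame, 100 Mb/s link,
# and an assumed switch latency T = 16 us.
b, r = vl_arrival_curve(2e-3, 1518 * 8)
print(delay_bound(b, r, 100e6, 16e-6))
```

The worst-case flow model proposed in the Letter would tighten the arrival curve fed into this computation; the service-curve side is unchanged.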
Parallel TCP, which opens multiple TCP connections over a single direct path, and multi-pathing, which concurrently uses multiple disjoint paths to transfer data, have both been proven effective at improving end-to-end throughput. How much throughput can ultimately be achieved between a source and a destination if we use multiple overlay paths and open multiple TCP connections on each used path?

In order to find all overlay paths of good quality between a source and a destination, a path-probing process similar to the path discovery protocol of IEEE 802.5 is started by the destination. A probing packet (a TCP connection request followed by padding data) is flooded across an overlay between the destination and the source. Intermediate overlay nodes selectively accept and forward probing packets; if a probing packet is accepted, a corresponding TCP connection is created. Trade-offs are then made between reducing the probing traffic and keeping multiple TCP connections on each path. The source stripes the data into small packets and adaptively assigns them to the selected overlay paths according to the changing quality of each path.

The proposed data transfer technique is evaluated on an overlay consisting of 15 servers on the Internet in China, spanning 3 different autonomous systems. Experiments show that with this technique, 54% of the measured samples yield a throughput above 60 Mb/s, which is 60% of the bandwidth that could possibly be obtained (the access bandwidth is 100 Mb/s for all servers). With the direct path and with Parallel TCP, fewer than 1% and 25% of the measured samples, respectively, reach the same level of throughput.
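The adaptive packet-to-path assignment described above can be sketched as an earliest-finish-time scheduler: each small packet is sent on the path that would finish transmitting it soonest under that path's currently estimated rate. This is an illustrative sketch under assumed names, not the paper's exact algorithm; `path_rates` stands in for whatever per-path quality estimate the source maintains and updates.

```python
import heapq

def stripe(packet_sizes, path_rates):
    """Assign each packet to the path with the earliest estimated finish time.

    packet_sizes: packet sizes in bits; path_rates: estimated rate per path
    in bit/s. Returns the list of chosen path indices, one per packet."""
    # Min-heap of (time the path becomes free, path index).
    heap = [(0.0, i) for i in range(len(path_rates))]
    heapq.heapify(heap)
    schedule = []
    for size in packet_sizes:
        free_at, i = heapq.heappop(heap)          # least-loaded path
        done = free_at + size / path_rates[i]     # when this packet completes
        schedule.append(i)
        heapq.heappush(heap, (done, i))
    return schedule
```

With one path twice as fast as the other, e.g. `stripe([1.0] * 4, [2.0, 1.0])`, the fast path receives proportionally more packets, which is the load-balancing behaviour the adaptive assignment aims for.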
The covariance matrix associated with multiple financial returns plays foundational roles in many empirical applications, for example, quantifying risks and managing investment portfolios. Such covariance matrices are well known to be dynamic, that is, their structures change with the underlying market conditions. To incorporate such dynamics in a setting with high-frequency noisy data contaminated by measurement errors, we propose a new approach for estimating the covariances of a high-dimensional return series. Using an appropriate localization, our approach exploits generic variables that are informative in accounting for the dynamic changes. We then investigate the properties and performance of the high-dimensional minimum-variance sparse portfolio constructed by employing the proposed dynamic covariance estimator. Our theory establishes the validity of the proposed covariance estimation methods in handling high-dimensional, high-frequency noisy data. The promising applications of our methods are demonstrated by extensive simulations and empirical studies showing the satisfactory accuracy of the covariance estimation and the substantially improved portfolio performance.
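As a sketch of the downstream portfolio step (not the proposed covariance estimator itself): given any estimate Σ̂ of the return covariance, the global minimum-variance portfolio solves min_w wᵀΣ̂w subject to wᵀ1 = 1, with closed form w = Σ̂⁻¹1 / (1ᵀΣ̂⁻¹1). The numpy sketch below uses this unconstrained closed form; the paper's high-dimensional setting additionally imposes sparsity, which this closed form does not capture.

```python
import numpy as np

def min_variance_weights(cov):
    """Global minimum-variance portfolio weights:
    w = cov^{-1} 1 / (1' cov^{-1} 1)."""
    ones = np.ones(cov.shape[0])
    x = np.linalg.solve(cov, ones)  # avoids forming the explicit inverse
    return x / x.sum()

# Toy 3-asset covariance: independent assets with unequal variances.
cov = np.diag([0.04, 0.01, 0.01])
w = min_variance_weights(cov)  # lower-variance assets receive more weight
```

On this toy input the weights are [1/9, 4/9, 4/9]: the asset with fourfold variance gets a quarter of the weight of each low-variance asset, illustrating why an accurate covariance estimate directly drives portfolio quality.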