In time-interleaved analog-to-digital converter (ADC) architectures, offset mismatch, gain mismatch, and timing errors between channels degrade performance. This paper focuses on the timing error and proposes a simple calibration algorithm based on the Hilbert transform to estimate and then correct it. With a cosine input, the algorithm estimates the timing error efficiently and accurately. Fractional delay filters are then used to correct the timing errors, which simplifies the design and reduces cost. Numerical simulations verify the proposed estimation and correction algorithm.
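A minimal sketch of the idea for a two-channel interleave, assuming a pure cosine test tone, a Hilbert-transform phase comparison for estimation, and a windowed-sinc fractional-delay FIR for correction; the parameter values and filter choice are illustrative assumptions, not the paper's exact implementation.

```python
# Hedged sketch: Hilbert-transform timing-error estimation and
# fractional-delay correction for a 2-channel time-interleaved ADC.
import numpy as np
from scipy.signal import hilbert

fs = 1.0e6          # aggregate sample rate of the interleaved ADC (assumed)
f0 = 37.3e3         # cosine test-tone frequency (assumed)
N = 4096            # samples per channel
dt_err = 0.08 / fs  # injected timing error on channel 1

# Two-channel interleaving: channel 0 takes even slots, channel 1 takes
# odd slots but is shifted by the unknown timing error.
t0 = np.arange(N) * 2 / fs
t1 = np.arange(N) * 2 / fs + 1 / fs + dt_err
x0 = np.cos(2 * np.pi * f0 * t0)
x1 = np.cos(2 * np.pi * f0 * t1)

# Estimate: with a cosine input, the mean instantaneous-phase difference
# between the channels' analytic signals, minus the ideal one-slot offset,
# is proportional to the timing error.
phi0 = np.unwrap(np.angle(hilbert(x0)))
phi1 = np.unwrap(np.angle(hilbert(x1)))
dphi = np.mean(phi1[N//4:-N//4] - phi0[N//4:-N//4])   # trim Hilbert edge effects
dt_est = dphi / (2 * np.pi * f0) - 1 / fs

# Correct: delay channel 1 by the estimated error with a windowed-sinc
# fractional-delay FIR (delay expressed in channel-rate samples).
taps = 33
n = np.arange(taps) - (taps - 1) / 2
d = dt_est * fs / 2
h = np.sinc(n - d) * np.hamming(taps)
h /= np.sum(h)
x1_corr = np.convolve(x1, h, mode="same")

print(f"injected error : {dt_err*1e9:.2f} ns")
print(f"estimated error: {dt_est*1e9:.2f} ns")
```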
The frequency of a weak signal is used for fault diagnosis and target identification in various fields. By introducing particle swarm optimization (PSO) and spectral entropy (SE), an automated weak-signal frequency estimation method based on the Duffing oscillator is proposed. The method uses a differential structure to enhance the difference between the Duffing oscillator's chaotic and large-scale periodic states, which is distinguished quantitatively by the SE. The frequency of the internal driving force is then adjusted adaptively by the PSO until the SE reaches its minimum, at which point the driving frequency equals the weak-signal frequency. A group of weak signals with different frequencies was tested; the maximum relative frequency error is only 0.68%. Unlike other chaotic-oscillator-based frequency estimation methods, the proposed method does not require manual determination of the phase state; a rough initial frequency search range is sufficient for automatic frequency measurement.
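A small sketch of the spectral-entropy measure that the method uses to separate the chaotic and large-scale periodic states; the proxy signals below stand in for the oscillator's differential output, and the normalization is an assumption for illustration (in the paper's scheme, PSO would adjust the driving frequency to minimize this quantity).

```python
# Hedged sketch: spectral entropy as a quantitative state indicator.
import numpy as np

def spectral_entropy(x: np.ndarray) -> float:
    """Shannon entropy of the normalized power spectrum, scaled to [0, 1]."""
    psd = np.abs(np.fft.rfft(x - np.mean(x))) ** 2
    p = psd / np.sum(psd)                  # treat the spectrum as a probability mass
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)) / np.log(len(psd)))

t = np.linspace(0, 1, 4096, endpoint=False)
periodic = np.sin(2 * np.pi * 50 * t)                              # periodic-state proxy
broadband = np.random.default_rng(0).standard_normal(t.size)       # chaotic-state proxy

print(f"SE, periodic-like signal : {spectral_entropy(periodic):.3f}")   # low
print(f"SE, broadband signal     : {spectral_entropy(broadband):.3f}")  # high
```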
Random-demodulation-based compressive sampling is a novel technique that can break through the Shannon sampling limit when capturing sparse signals. A major challenge in a random-demodulation-based sampling system is generating the random sequence. In this paper, we introduce an approach to generate a high-speed random sequence that meets the incoherence requirement of compressive sampling, implemented on a field-programmable gate array (FPGA). First, the random sequence is stored in parallel in the FPGA memory and read out byte by byte with a low-speed clock. Second, the low-speed byte sequence is converted into a high-speed bit sequence by dedicated circuitry. This approach allows the random sequence to be reprogrammed dynamically without any change to the circuitry. Experimental results indicate that the random sequence generated by the proposed approach is suitable for sensing the signal, and that the constructed system can compressively sample and reconstruct sparse signals.
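A software sketch of the byte-to-bit idea and the random-demodulation front end it feeds: random bytes stand in for the FPGA memory, are "read out" byte by byte, and are unpacked into a high-rate ±1 chipping sequence that mixes and integrates a sparse multitone input. All parameter values and signal choices are assumptions for illustration.

```python
# Hedged sketch: byte-stored random sequence -> +/-1 chips -> random demodulation.
import numpy as np

rng = np.random.default_rng(1)
fs = 1.0e6                      # Nyquist-rate chip frequency (assumed)
N = 1024                        # chips per frame
R = 16                          # decimation: one low-rate measurement per R chips

# "Memory" of random bytes -> byte-by-byte readout -> serialized bit sequence.
byte_mem = rng.integers(0, 256, size=N // 8, dtype=np.uint8)
bits = np.unpackbits(byte_mem)
pn = 2.0 * bits - 1.0           # map {0, 1} -> {-1, +1} chips

# Sparse (two-tone) input sampled at the chip rate.
t = np.arange(N) / fs
x = np.cos(2 * np.pi * 110e3 * t) + 0.5 * np.cos(2 * np.pi * 233e3 * t)

# Random demodulation: mix with the chipping sequence, then integrate-and-dump.
y = (x * pn).reshape(-1, R).sum(axis=1) / R
print(f"{N} Nyquist-rate chips compressed to {y.size} measurements")
# Recovering x from y requires a sparse solver (e.g. OMP or basis pursuit)
# applied to the equivalent measurement matrix; omitted here for brevity.
```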
In high-resolution analog circuits, chip performance is critical and must be determined by testing, and the test requirements call for a stimulus signal of sufficient quality and performance. The main research question is how to generate a high-resolution, high-speed analog signal when no suitable high-resolution, high-speed digital-to-analog converter (DAC) chip is available. Taking the test of high-resolution analog-to-digital converter (ADC) chips as an example, this paper uses high-resolution DAC chips and multiplexers, based on the principle of time-interleaved sampling, to generate high-resolution, high-speed signals suitable for testing high-resolution ADC chips. The paper explains the method, analyzes its errors, and proposes a digital pre-processing method to reduce them. Finally, an actual circuit is designed and the method is verified on it. The test results demonstrate the effectiveness of this method for generating high-resolution ADC test signals.
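A numerical sketch of the time-interleaving idea: several slower high-resolution DAC channels each convert every M-th sample, a multiplexer recombines them at the full rate, and a digital pre-processing step pre-distorts each channel's codes against its measured gain and offset. The mismatch values and the specific correction scheme are illustrative assumptions, not the paper's measured figures or exact method.

```python
# Hedged sketch: interleaved multi-DAC signal generation with digital pre-correction.
import numpy as np

M = 4                                   # number of interleaved DAC channels (assumed)
fs = 1.0e6                              # aggregate output rate (assumed)
N = 4096
t = np.arange(N) / fs
ideal = np.sin(2 * np.pi * 12.3e3 * t)  # desired high-speed test waveform

# Assumed per-channel gain/offset mismatch.
gain = 1.0 + np.array([0.0, 1e-3, -8e-4, 5e-4])
offs = np.array([0.0, 2e-4, -1e-4, 3e-4])

def interleaved_output(codes):
    """Channel k converts samples k, k+M, k+2M, ...; the mux recombines them."""
    out = np.empty_like(codes)
    for k in range(M):
        out[k::M] = gain[k] * codes[k::M] + offs[k]
    return out

raw = interleaved_output(ideal)

# Digital pre-processing: pre-distort each channel's codes with the inverse of
# its (previously measured) gain and offset before loading the DACs.
pre = np.empty_like(ideal)
for k in range(M):
    pre[k::M] = (ideal[k::M] - offs[k]) / gain[k]
corrected = interleaved_output(pre)

print(f"rms error, uncorrected  : {np.std(raw - ideal):.2e}")
print(f"rms error, pre-corrected: {np.std(corrected - ideal):.2e}")
```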
For most high-precision power analyzers, measurement accuracy can be affected by the nonlinear relationship between the input and output signals, so calibration before measurement is essential to ensure accuracy. However, traditional calibration methods usually involve complicated structures, a cumbersome calibration process, and difficult selection of calibration points, making them unsuitable when there are many measurement points. To address these issues, this paper proposes a nonlinear calibration method based on sinusoidal excitation and the discrete Fourier transform (DFT). By obtaining the effective (RMS) value of the sinusoidal current excitation from the calibration source, the calibration can be performed accurately and its efficiency improved. First, the Fourier transform is used to compute the phase of the fundamental at the initial instant. Then, the mapping between the sampled values and the theoretical values is established from the resulting theoretical discrete expression, and cubic spline interpolation is used to further reduce the calibration error. Simulations and experiments show that the proposed calibration method achieves high accuracy, with post-calibration compensation deviations within ±3 × 10⁻⁴.
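A minimal sketch of that calibration flow: recover the fundamental's initial phase by DFT, reconstruct the theoretical samples from the known excitation amplitude and phase, and fit a cubic-spline correction curve mapping raw samples to theoretical values. The static nonlinearity model and all parameter values are assumptions for illustration, not the paper's hardware or error figures.

```python
# Hedged sketch: sinusoidal-excitation nonlinear calibration via DFT phase
# recovery and cubic-spline mapping.
import numpy as np
from scipy.interpolate import CubicSpline

fs, f0, N = 10_000.0, 50.0, 2000           # coherent sampling: 10 full cycles (assumed)
A = 1.0                                     # known excitation amplitude from the source's RMS value
t = np.arange(N) / fs

# Acquisition through a mild (unknown) static nonlinearity.
true = A * np.sin(2 * np.pi * f0 * t + 0.7)
raw = true + 0.02 * true**3 - 0.01 * true**2

# 1) DFT at the fundamental bin gives the initial phase of the excitation.
k = int(round(f0 * N / fs))
X = np.fft.rfft(raw)[k]
phase = np.angle(X) + np.pi / 2             # for a sine, DFT angle = phase - pi/2

# 2) Theoretical samples reconstructed from the known amplitude and phase.
theory = A * np.sin(2 * np.pi * f0 * t + phase)

# 3) Cubic spline mapping raw samples -> theoretical values = correction curve.
order = np.argsort(raw)
raw_s, th_s = raw[order], theory[order]
keep = np.concatenate(([True], np.diff(raw_s) > 1e-9))   # spline needs strictly increasing x
cal = CubicSpline(raw_s[keep], th_s[keep])

corrected = cal(raw)
print(f"rms error before calibration: {np.std(raw - true):.2e}")
print(f"rms error after calibration : {np.std(corrected - true):.2e}")
```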