U-statistics are fundamental in modeling statistical measures that involve responses from multiple subjects. They generalize the empirical mean of a random variable $X$ to sums over all $m$-tuples of distinct observations of $X$. W. Stute introduced conditional U-statistics as an extension of the Nadaraya–Watson estimator of the regression function, and proved their strong pointwise consistency for the conditional expectation $r^{(m)}(\varphi,\mathbf{t}) = \mathbb{E}\bigl[\varphi(Y_1,\ldots,Y_m)\mid (X_1,\ldots,X_m)=\mathbf{t}\bigr]$, $\mathbf{t}\in\mathcal{X}^m$. This paper focuses on estimating functional single index (FSI) conditional U-processes for regular time series data. We propose a novel, automatic, and location-adaptive procedure for estimating these processes based on $k$-Nearest Neighbor ($k$NN) ideas. Our asymptotic analysis allows for data-driven selection of the number of neighbors, which makes the method highly practical. The local nature of the $k$NN approach improves predictive power compared with classical kernel estimators. In addition, we establish new uniform-in-bandwidth results for kernel estimators of FSI conditional U-processes, including almost complete convergence rates and weak convergence under general conditions. These results hold for both bounded and unbounded classes of functions satisfying suitable moment conditions, and are proved under standard Vapnik–Chervonenkis structural conditions and mild model assumptions. Furthermore, we establish uniform consistency of the nonparametric inverse probability of censoring weighted (I.P.C.W.) estimators of the regression function under random censorship. This result is of independent interest and has potential applications to set-indexed conditional U-statistics, the Kendall rank correlation coefficient, and discrimination problems.
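To fix ideas, a standard kernel-type form of Stute's conditional U-statistic estimator of $r^{(m)}(\varphi,\mathbf{t})$, written here for a finite-dimensional covariate with a kernel $K$ and bandwidth $h_n$ (both introduced purely for illustration), is
\[
\widehat{r}^{(m)}_n(\varphi,\mathbf{t})
= \frac{\displaystyle\sum_{(i_1,\ldots,i_m)\in I_n^m} \varphi(Y_{i_1},\ldots,Y_{i_m}) \prod_{j=1}^{m} K\!\left(\frac{t_j - X_{i_j}}{h_n}\right)}
       {\displaystyle\sum_{(i_1,\ldots,i_m)\in I_n^m} \prod_{j=1}^{m} K\!\left(\frac{t_j - X_{i_j}}{h_n}\right)},
\qquad \mathbf{t}=(t_1,\ldots,t_m)\in\mathcal{X}^m,
\]
where $I_n^m=\{(i_1,\ldots,i_m): 1\le i_j\le n,\ i_j\neq i_k \text{ for } j\neq k\}$. In the functional single index setting considered here, the argument $(t_j - X_{i_j})/h_n$ is typically replaced by a projection-based semi-metric distance between $X_{i_j}$ and $t_j$, and in the $k$NN version the deterministic bandwidth $h_n$ is replaced by a local, data-driven radius given by the distance from $t_j$ to its $k$-th nearest neighbor among the observations; this sketch is meant only to illustrate the general form, not the exact estimator studied in the paper.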