
Department: Mathematics and Statistics


Keywords: Asymptotic theory, Change-points, Neuro-imaging, Shrinkage estimators, Tensor regression, Tensors


Supervisor: S. Nkurunziza





This work is licensed under a Creative Commons Attribution 4.0 International License.


In this dissertation, we consider two estimation problems in tensor regression models. The first concerns the tensor coefficient in a tensor regression model with multiple unknown change-points. We generalize recent findings in five ways. First, the problem studied is more general than that of a matrix parameter with multiple change-points. Second, we develop asymptotic results for the tensor estimators in the context of tensor regression with unknown change-points. Third, we construct a class of shrinkage tensor estimators that encompasses the unrestricted estimator (UE) and the restricted estimator (RE). Fourth, we generalize some identities that are crucial in deriving the asymptotic distributional risk (ADR) of the tensor estimators. Fifth, we show that the proposed shrinkage estimators (SEs) outperform the UE. Finally, the theoretical results are corroborated by simulation findings and by applying our methods to real MRI and fMRI datasets.
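As a rough illustration of the shrinkage idea above, the sketch below combines an unrestricted and a restricted estimator through a data-driven Stein-type factor. The vector setting, the function name, and the simple shrinkage constant `p - 2` are illustrative assumptions; the dissertation's tensor estimators and their risk analysis are considerably more general.

```python
import numpy as np

def stein_shrinkage(ue, re, test_stat, p):
    """Stein-type shrinkage: pull the unrestricted estimator (UE)
    toward the restricted estimator (RE) by a factor driven by a
    test statistic for the restriction.  Illustrative sketch only."""
    c = p - 2                      # classical Stein constant (needs p > 2)
    factor = 1.0 - c / test_stat   # large test_stat => little shrinkage
    return re + factor * (ue - re)
```

When the test statistic is large (strong evidence against the restriction), the factor approaches 1 and the estimator stays close to the UE; when it is small, the estimator is pulled toward the RE.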

The second estimation problem concerns the tensor regression coefficient in a generalized tensor regression model with multi-mode covariates. We generalize the main results in the recent literature in four ways. First, we weaken the assumptions underlying the main results of previous works. In particular, the dependence structure of the errors and covariates is as weak as an L2-mixingale array, and the error term need not be uncorrelated with the regressors. Second, we consider a more general constraint than those considered in previous works. Third, we establish the asymptotic properties of the tensor estimators; specifically, we derive the joint asymptotic distribution of the unrestricted tensor estimator (UE) and the restricted tensor estimator (RE). Fourth, we propose a class of shrinkage-type estimators in the context of tensor regression and, under a general loss function, derive sufficient conditions under which the shrinkage estimators dominate the UE. In addition to these contributions, we derive a functional central limit theorem for vector-valued mixingales and establish some identities that are useful in studying the risk dominance of shrinkage-type tensor estimators. Finally, to illustrate the proposed methods, we corroborate the results with simulation studies of binary, Normal and Poisson data, and we analyze multi-relational network and neuro-imaging datasets.
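For intuition about the UE/RE pair discussed above, the sketch below works in the classical linear-regression analogue, where the restriction is a linear constraint R beta = r. The formula is the standard restricted least-squares estimator; the variable names are illustrative, and the dissertation's constrained tensor setting is more general.

```python
import numpy as np

def restricted_ls(X, y, R, r):
    """Unrestricted (UE) and restricted (RE) least-squares estimators
    under the linear constraint R @ beta = r.  Classical analogue of
    the tensor UE/RE pair; illustrative sketch only."""
    XtX_inv = np.linalg.inv(X.T @ X)
    ue = XtX_inv @ X.T @ y                       # ordinary least squares
    A = R @ XtX_inv @ R.T
    # project the UE onto the constraint set R @ beta = r
    re = ue - XtX_inv @ R.T @ np.linalg.solve(A, R @ ue - r)
    return ue, re
```

By construction the RE satisfies the constraint exactly, while the UE typically does not; shrinkage-type estimators interpolate between the two.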