It is reasonably well known that you can get a statistical significance test "for free" by constructing a confidence interval around an obtained statistic and seeing whether the corresponding hypothesized parameter is "captured" by the interval. If the hypothesized value falls outside the 95% confidence interval, for example, reject it at the .05 significance level and conclude that the sample finding is statistically significant; if it falls inside, do not reject it, and the sample finding is not statistically significant at that level. Therefore, if you want a significance test you can either carry it out directly or get it indirectly via the corresponding confidence interval. Conversely, if you want a confidence interval you can construct it directly (the usual way) or get it indirectly by carrying out significance tests for all of the possible "candidates" for the hypothesized parameter (not very practical, as there is an infinite number of them). A sketch of this equivalence appears at the end of this section.

But should you ever carry out a hybrid combination of hypothesis testing and interval estimation, for example by reporting the 95% confidence interval and also reporting the actual p value that "goes with" the obtained statistic, whether that p value turns out to be greater or less than .05? Some researchers do that. Some journals (e.g., The Journal of Managed Care and Specialty Pharmacy) require it. At least one journal (Basic and Applied Social Psychology) has banned both.

It is also reasonably well known that if you do not have a random sample, you really should not make any statistical inferences. Instead, report the descriptive statistic(s) and make whatever nonstatistical inferences are warranted. Exception: if you have random assignment but not random sampling for an experiment, randomization tests (permutation tests) are fine, but the inference is to all possible re-randomizations of the particular participants at hand, not to any larger population. A sketch of such a test follows the confidence-interval example below.
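Here is a minimal sketch of the test-from-interval duality, using a one-sample t test of H0: mu = mu0. The data, the hypothesized mean of 100, and the use of SciPy are assumptions made purely for illustration; any two-sided test and its corresponding interval would behave the same way.

```python
import numpy as np
from scipy import stats

# Hypothetical data: 25 made-up observations and a hypothesized mean of 100.
rng = np.random.default_rng(0)
sample = rng.normal(loc=102.0, scale=15.0, size=25)
mu0 = 100.0

n = sample.size
mean = sample.mean()
se = sample.std(ddof=1) / np.sqrt(n)

# Route 1: the significance test itself.
result = stats.ttest_1samp(sample, popmean=mu0)
p_value = result.pvalue

# Route 2: the 95% confidence interval around the sample mean.
t_crit = stats.t.ppf(0.975, df=n - 1)
ci_low, ci_high = mean - t_crit * se, mean + t_crit * se

# The two routes agree: p < .05 exactly when mu0 lies outside the interval.
reject_by_test = p_value < 0.05
reject_by_ci = not (ci_low <= mu0 <= ci_high)
print(f"p = {p_value:.4f}, 95% CI = ({ci_low:.2f}, {ci_high:.2f})")
print(f"reject via test: {reject_by_test}, reject via CI: {reject_by_ci}")
```

Whichever route you take, the decision is the same: the p value dips below .05 exactly when the hypothesized value lands outside the 95% interval.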
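And here is a minimal sketch of a randomization (permutation) test for the random-assignment-without-random-sampling case. The two groups of scores are hypothetical; the p value is simply the proportion of re-randomizations that produce a mean difference at least as extreme as the observed one.

```python
import numpy as np

# Hypothetical outcome scores for a randomly assigned two-group experiment.
rng = np.random.default_rng(1)
treatment = np.array([12.1, 9.8, 11.4, 13.0, 10.7, 12.6])
control = np.array([9.9, 10.2, 8.7, 9.5, 11.0, 8.9])

observed = treatment.mean() - control.mean()
pooled = np.concatenate([treatment, control])
n_treat = treatment.size

# Re-shuffle the group labels many times and count the re-randomizations
# whose mean difference is at least as extreme as the observed one.
n_perm = 10_000
count = 0
for _ in range(n_perm):
    shuffled = rng.permutation(pooled)
    diff = shuffled[:n_treat].mean() - shuffled[n_treat:].mean()
    if abs(diff) >= abs(observed):
        count += 1

p_value = count / n_perm
print(f"observed difference = {observed:.2f}, randomization p = {p_value:.4f}")
```

The inference licensed by this p value concerns only these particular participants under re-assignment, not any larger population.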