Functional connectome-based predictive models continue to grow in popularity and predictive performance. As these models become more widely used, researchers have begun to examine bias in them, a crucial component of ethics in artificial intelligence. However, we show that model trustworthiness is a more important but largely overlooked component of the ethics of functional connectome-based predictive models. In this work, we define "trust" as robustness to adversarial attacks, that is, data alterations designed to trick a model. We show that typical implementations of connectome-based models are untrustworthy and can easily be manipulated through adversarial attacks. Using classification of self-reported biological sex across three datasets (the Adolescent Brain Cognitive Development Study, the Human Connectome Project, and the Philadelphia Neurodevelopmental Cohort) and three types of predictive models (support vector machine (SVM), logistic regression, and kernel SVM) as a benchmark, we show that many forms of adversarial attacks are effective against connectome-based models. These attacks include flipping predictions by altering data at test time, inducing real-world changes at the time of scanning, and artificially improving performance by injecting a pattern into the data. Despite drastic changes in prediction performance after the attacks, the corrupted connectomes appear nearly identical to the original ones and perform similarly in downstream analyses. These findings demonstrate a need to evaluate the trustworthiness and ethics of connectome-based models before they can be applied broadly, as well as a need to develop methods that are robust to a wide range of adversarial attacks.
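To give a concrete flavor of a test-time attack on a connectome-based classifier, the following minimal sketch applies a gradient-sign-style perturbation to a linear SVM trained on vectorized connectome edges. It uses synthetic data, and the perturbation budget, data dimensions, and attack step are illustrative assumptions only; it is not the specific pipeline or attack evaluated in this work.

    import numpy as np
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)

    # Toy stand-in for vectorized functional connectomes:
    # n_subjects x n_edges matrix of edge weights with a binary label (e.g., sex).
    n_subjects, n_edges = 200, 4950   # 4950 edges ~ upper triangle of a 100-node parcellation
    X = rng.normal(size=(n_subjects, n_edges))
    w_true = rng.normal(size=n_edges)
    y = (X @ w_true > 0).astype(int)

    clf = LinearSVC(C=1.0, max_iter=10000).fit(X, y)

    # Test-time attack on one connectome: nudge every edge by a small amount
    # against the decision boundary (the gradient-sign step for a linear model).
    x = X[0]
    epsilon = 0.05                               # hypothetical per-edge perturbation budget
    direction = -np.sign(clf.coef_[0]) * (2 * y[0] - 1)
    x_adv = x + epsilon * direction

    print("original prediction:", clf.predict(x[None, :])[0])
    print("attacked prediction:", clf.predict(x_adv[None, :])[0])
    print("max per-edge change:", np.abs(x_adv - x).max())

Because each edge moves by at most epsilon, the attacked connectome remains visually and statistically close to the original even though the predicted label can flip, which is the core trustworthiness concern raised here.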