Unchecked sex bias in machine learning (ML) algorithms used in healthcare can exacerbate disparities in care and treatment. We aimed to assess the acknowledgment and mitigation of sex bias in studies using supervised ML to improve clinical outcomes in rheumatoid arthritis (RA). For this systematic review, we searched PubMed and Embase for original, English-language studies published between 2018 and November 2023. We scored papers on whether the authors reported, attempted to mitigate, or successfully mitigated the following types of bias: training data bias, test data bias, input variable bias, output variable bias, and analysis bias; we also assessed the quality of ML research in all papers. This study is registered on PROSPERO (ID CRD42023431754). We identified 52 papers for inclusion in our review. All but one had a female skew in their study participants, yet 42 papers did not acknowledge any potential sex bias. Three papers assessed bias in model performance by sex-disaggregating their results. One paper acknowledged potential sex bias in its input variables, and six papers in their output variables, predominantly disease activity scores. No paper attempted to mitigate any type of sex bias. These findings demonstrate the need for greater promotion of inclusive and equitable ML practices in healthcare.
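To illustrate the kind of sex-disaggregated performance check that only three of the reviewed papers performed, the sketch below compares a model's accuracy overall and within each sex subgroup. It is a minimal, hypothetical example: the record structure, field names (`sex`, `label`, `pred`), and toy data are illustrative assumptions, not drawn from any reviewed study, and real evaluations would use clinically appropriate metrics (e.g. AUC, calibration) rather than raw accuracy.

```python
def accuracy(pairs):
    """Fraction of (label, prediction) pairs that agree."""
    return sum(y == p for y, p in pairs) / len(pairs)

def disaggregate_by_sex(records):
    """Report overall and per-sex accuracy to surface performance gaps.

    Each record is a dict with illustrative keys: "sex", "label", "pred".
    """
    overall = accuracy([(r["label"], r["pred"]) for r in records])
    by_sex = {}
    for sex in sorted({r["sex"] for r in records}):
        subset = [(r["label"], r["pred"]) for r in records if r["sex"] == sex]
        by_sex[sex] = accuracy(subset)
    return overall, by_sex

# Toy, fabricated data with a female skew, mirroring the participant
# skew noted in the review; not real patient data.
records = [
    {"sex": "F", "label": 1, "pred": 1},
    {"sex": "F", "label": 0, "pred": 0},
    {"sex": "F", "label": 1, "pred": 1},
    {"sex": "F", "label": 0, "pred": 1},
    {"sex": "M", "label": 1, "pred": 0},
    {"sex": "M", "label": 0, "pred": 0},
]
overall, by_sex = disaggregate_by_sex(records)
```

Reporting `by_sex` alongside the pooled score makes a subgroup performance gap visible instead of letting the majority group's results dominate the headline metric.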