Background/Objectives: The goal of this study was to develop an Enco-Standardization technique that produces accurate data and improves the diagnosis of Autism Spectrum Disorder (ASD). The method replaces missing values in a dataset with mean values and refines the data by combining label encoding with standard scaling.

Methods: The study uses an ASD dataset of 704 instances and 21 attributes, split into training and testing sets (80%/20%). As an imputation strategy, missing values in the dataset are located and replaced with the mean value. The Enco-Standardization technique then encodes attributes with label encoding, which converts non-numeric variables into numeric ones, and subsequently scales the data into a standardized, machine-readable form. This hybrid encoding-and-scaling strategy is compared across several machine learning classifier models, and the dataset produced by the Enco-Standardization technique is assessed by the accuracy those models achieve.

Findings: A dataset must be accurate and relevant to increase accuracy and reduce computing time. The Enco-Standardization technique proved to be an effective pre-processing method, yielding accuracy values of 98% for Naive Bayes (NB), 71% for K-Nearest Neighbour (KNN), 74% for Support Vector Machine (SVM), 97% for Linear Regression (LR), 100% for Decision Tree (DT), and 100% for Random Forest (RF). Deleting missing values instead improves performance for KNN (94%), SVM (95.9%), and LR, DT, and RF (100%), but reduces the number of instances in the dataset, rendering the model ineffective.

Novelty: The proposed Enco-Standardization pre-processing technique transforms and encodes the data in a dataset, increasing the precision of the data analysis process in ASD prediction. Using this Enco-Standardization technique avoids data discrepancies.
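The three preprocessing steps the abstract describes (mean imputation of missing values, label encoding of non-numeric attributes, then standard scaling) might be sketched as follows. This is a minimal illustration only: the column names and toy values are assumptions for demonstration, not the actual ASD dataset or the authors' implementation.

```python
# Sketch of the Enco-Standardization preprocessing pipeline described
# in the abstract: (1) replace missing values with the column mean,
# (2) label-encode categorical attributes, (3) standard-scale numerics.
# All data below is illustrative, not the real 704-instance ASD dataset.
from statistics import mean, pstdev

def impute_mean(values):
    """Replace None (missing) entries with the mean of the present values."""
    present = [v for v in values if v is not None]
    m = mean(present)
    return [m if v is None else v for v in values]

def label_encode(values):
    """Map each distinct non-numeric category to an integer code."""
    codes = {v: i for i, v in enumerate(sorted(set(values)))}
    return [codes[v] for v in values]

def standardize(values):
    """Scale to zero mean and unit variance (standard scaling)."""
    m, s = mean(values), pstdev(values)
    return [(v - m) / s for v in values]

# Hypothetical columns: a numeric attribute with one missing value,
# and a categorical attribute to be encoded.
age = impute_mean([20.0, None, 30.0, 26.0])
gender = label_encode(["m", "f", "f", "m"])
age_scaled = standardize(age)
```

In practice the same pipeline is commonly built from library components (e.g. a mean imputer, a label encoder, and a standard scaler chained together), which is presumably how the 80%/20% train/test evaluation in the study was run.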