To train a machine learning model, you need a dataset. Fitting the model to this dataset means finding a mathematical representation of the relationships between the variables in the data. In general, the more data you have, the better the model can capture these relationships.
However, not all datasets are created equal: some are much larger than others, and some contain more information. When training a machine learning model, the dataset should be large enough to capture the relationships in the data, but not so large that training takes an impractical amount of time.
It is also important to fit all the models to datasets of the same size. If one model is fitted to a dataset of 100 samples and another to a dataset of 1,000 samples, the second model will generally have an advantage over the first. To compare machine learning models fairly, fit them all to datasets of the same size; any differences in performance can then be attributed to the models themselves rather than to the amount of training data.
One way to do this is to randomly select a subset of the same size from each model’s training set and fit each model to its subset. The models can then be evaluated on the test set in the usual way. This approach is simple to implement, but it can be difficult to know how large a subset to select, and the results may be biased if the chosen subsets are not representative of the full datasets.
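The random-subsampling idea can be sketched in plain Python. The `subsample` helper and the toy datasets below are illustrative assumptions, not part of the study being described:

```python
import random

def subsample(X, y, n_samples, seed=0):
    """Draw a reproducible random subset of n_samples rows,
    so that every model sees a training set of the same size."""
    rng = random.Random(seed)
    idx = rng.sample(range(len(X)), n_samples)
    return [X[i] for i in idx], [y[i] for i in idx]

# Hypothetical training sets of different sizes.
rng = random.Random(42)
X_a = [[rng.random() for _ in range(10)] for _ in range(500)]
y_a = [rng.random() for _ in range(500)]
X_b = [[rng.random() for _ in range(10)] for _ in range(1200)]
y_b = [rng.random() for _ in range(1200)]

# Both models now get training subsets of identical size (400 rows each),
# so any performance gap reflects the model, not the data volume.
X_a_sub, y_a_sub = subsample(X_a, y_a, 400)
X_b_sub, y_b_sub = subsample(X_b, y_b, 400)
```

Fixing the seed makes the subsets reproducible; in practice you would also repeat the draw with several seeds to reduce the risk of an unrepresentative subset.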
Another approach is cross-validation, which fits the model multiple times on different subsets of the data (called ‘folds’). In each round, the model is trained on all folds but one and evaluated on the held-out fold. This provides a more reliable estimate of how well a model will perform on unseen data, but it is computationally more expensive than fitting the model just once.
The results of the study showed that, with the exception of the KNN model, the models were not all fitted to datasets of the same size. The R2 values ranged from 0.72 to 0.96, with the KNN model having the lowest; the RMSE values ranged from 2.89 to 4.56, with the SVR model having the lowest.
It is clear from these results that the models were not all fitted to datasets of the same size. The Ridge and Lasso models were each fitted to only 75% of the data, whereas the Random Forest was fitted to 100%. This is likely to have affected each model’s performance, with the Random Forest most likely holding an advantage due to having more data to train on.
In summary, this study showed that the models were not all fitted to datasets of the same size, which can lead to misleading comparisons and overfitting. In addition, the data used in the study were imbalanced, which can further reduce model accuracy.