<akhunti1[m]>
I have created a random forest model using the mlpack CLI. When I ran inference with the mlpack CLI, I got a prediction score of 0.256234749665911. However, when I loaded this model using the mlpack C++ load function and performed inference, I got a prediction score of 0.5053640966677341. Could you please let me know why there is a difference in the prediction scores?
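(For reference, a minimal sketch of what inference through the mlpack C++ API can look like, assuming mlpack 4.x; the file names, the serialized object name "model", and the assumption that the model deserializes directly as a RandomForest<> are illustrative, not taken from the conversation. CLI-produced models may be stored inside a wrapper object, so the type and name may need adjusting.)

```cpp
// Minimal sketch: load a serialized random forest and compute class
// probabilities with the mlpack C++ API. File names and the serialized
// object name are assumptions.
#include <mlpack.hpp>

int main()
{
  // mlpack's data::Load() transposes CSVs, so each column is one point.
  arma::mat testData;
  mlpack::data::Load("test.np.without_label.csv", testData, true /* fatal */);

  // Load the serialized forest.
  mlpack::RandomForest<> rf;
  mlpack::data::Load("model.bin", "model", rf, true /* fatal */);

  // Predicted classes and per-class probabilities (one column per point).
  arma::Row<size_t> predictions;
  arma::mat probabilities;
  rf.Classify(testData, predictions, probabilities);

  probabilities.col(0).print("class probabilities for the first point");
  return 0;
}
```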
<akhunti1[m]>
I also tried on another computer and got the same output as mentioned above.
<akhunti1[m]>
I just wanted to inform you that I have not observed this type of difference for all the records. Out of the 6,000 records I tested, I found only one record that had this type of difference in probability.
<akhunti1[m]>
Could you please guide me on any possible solutions or workarounds for this issue?
<akhunti1[m]>
* I just wanted to inform you that I have not observed this type of difference for all the records. Out of the 6,000 records I tested, I found only one hundred records that had this type of difference in probability.
<rcurtin[m]>
the first difference, the very small one, didn't concern me much---that's probably a tiny floating point difference or something. this second one seems a bit more serious but I have a feeling it is a similar cause. can you tell us how you are loading the data and model in each case? and what format the data and model are saved in?
<rcurtin[m]>
okay, I see... can you show what one of the points that has the difference in probability is, both when calling from C++ and in test.np.without_label.csv?
<akhunti1[m]>
Did I answer your question correctly?
<akhunti1[m]>
Yes, both in C++ and in the test.np.without_label.csv file.
<rcurtin[m]>
okay, thanks... those points are simple enough that there should not be any floating point errors involved
<rcurtin[m]>
are you sure that your JSON parsing code is correct? I think that you should carefully check that the point you are reading into your C++ model is exactly the same as what you just pasted above
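(One way to do that check: print the point exactly as the C++ code sees it, at full double precision, and compare it digit-for-digit with the corresponding row of the CSV. A sketch, assuming the data ended up in an arma::mat called testData with one point per column and i is the index of the point in question; both names are placeholders.)

```cpp
// Print a single loaded point at full double precision so it can be
// compared digit-for-digit with the corresponding row of the CSV.
#include <armadillo>
#include <iomanip>
#include <iostream>

void PrintPoint(const arma::mat& testData, const size_t i)
{
  std::cout << std::setprecision(17);
  for (size_t d = 0; d < testData.n_rows; ++d)
    std::cout << testData(d, i) << (d + 1 == testData.n_rows ? "\n" : ", ");
}
```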