F1 score from sklearn.metrics.f1_score() differs from classification_report()
I built my first AI model and need to evaluate its accuracy. I used the built-in confusion_matrix() function and then classification_report() to get the accuracy-related metrics for the model. Out of curiosity, I also calculated the F1 score with f1_score(), and to my astonishment it did not match the value shown in the classification_report() output. I am not sure whether f1_score() is computing something different or whether there is an issue in the classification report produced by the Python code. I'd appreciate help understanding this difference. A minimal sketch of the calls is below.
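For reproducibility, here is a minimal sketch of what I ran (the y_test and y_pred arrays below are synthetic placeholders, reconstructed from the confusion matrix further down so the snippet runs standalone; my real labels come from the model and the train/test split):

```python
import numpy as np
from sklearn.metrics import confusion_matrix, classification_report, f1_score

# Synthetic labels that reproduce the confusion matrix below:
# 94 true negatives, 13 false positives, 15 false negatives, 32 true positives
y_test = np.array([0] * 107 + [1] * 47)
y_pred = np.array([0] * 94 + [1] * 13 + [0] * 15 + [1] * 32)

print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))

# Called with no `average` argument, as in my code
print(f1_score(y_test, y_pred))  # 0.6956...
```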
Confusion Matrix (from the Python code):

```
[[94 13]
 [15 32]]
```
Classification Report:

```
              precision    recall  f1-score   support

           0       0.86      0.88      0.87       107
           1       0.71      0.68      0.70        47

    accuracy                           0.82       154
   macro avg       0.79      0.78      0.78       154
weighted avg       0.82      0.82      0.82       154
```

F1 Score (calculated using sklearn.metrics.f1_score()): 0.6956
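For reference, working the class-1 numbers out by hand from the confusion matrix gives the same value that f1_score() returned:

```
precision_1 = 32 / (32 + 13) ≈ 0.7111
recall_1    = 32 / (32 + 15) ≈ 0.6809
F1_1        = 2 × 0.7111 × 0.6809 / (0.7111 + 0.6809) ≈ 0.6957
```

By contrast, the report's macro-average F1 is (0.87 + 0.70) / 2 ≈ 0.78 and its weighted average is (107 × 0.87 + 47 × 0.70) / 154 ≈ 0.82. So the 0.6956 agrees with the class-1 row of the report rather than with either average, and I don't understand why f1_score() reports only that number.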