
Dealing With Negative Values In Sklearn MultinomialNB

I am normalizing my text input before running MultinomialNB in sklearn like this:

vectorizer = TfidfVectorizer(max_df=0.5, stop_words='english', use_idf=True)
lsa = TruncatedSVD(n_
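For context, MultinomialNB requires non-negative feature values, while TruncatedSVD components generally mix signs. A minimal sketch of the clash, using a made-up toy corpus:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = ["the cat sat", "the dog barked", "cats and dogs play"]  # toy corpus

tfidf = TfidfVectorizer().fit_transform(docs)
reduced = TruncatedSVD(n_components=2).fit_transform(tfidf)

# The SVD projection is dense and typically contains negative values,
# which is exactly what MultinomialNB.fit() rejects with a ValueError.
print((reduced < 0).any())  # usually True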

Solution 1:

I recommend that you don't use Naive Bayes with SVD or other matrix factorizations, because Naive Bayes is based on applying Bayes' theorem with strong (naive) independence assumptions between the features. Use another classifier instead, for example RandomForest.

I tried this experiment, with these results:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF
from sklearn.naive_bayes import MultinomialNB
from sklearn.preprocessing import Normalizer

vectorizer = TfidfVectorizer(max_df=0.5, stop_words='english', use_idf=True)
lsa = NMF(n_components=100)  # non-negative factorization, unlike TruncatedSVD
mnb = MultinomialNB(alpha=0.01)

train_text = vectorizer.fit_transform(raw_text_train)
train_text = lsa.fit_transform(train_text)
train_text = Normalizer(copy=False).fit_transform(train_text)

mnb.fit(train_text, train_labels)
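To reproduce accuracy figures like the ones below, the fitted transforms would then be applied to held-out text; a sketch assuming raw_text_test and test_labels exist alongside the training variables:

# Reuse the transforms fitted on the training data
test_text = vectorizer.transform(raw_text_test)  # transform, not fit_transform
test_text = lsa.transform(test_text)
test_text = Normalizer(copy=False).fit_transform(test_text)  # stateless, so fit_transform is safe

print(mnb.score(test_text, test_labels))  # mean accuracy on the held-out set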

This is the same case, but using NMF (non-negative matrix factorization) instead of SVD, and I got 0.04% accuracy.

Changing the classifier from MultinomialNB to RandomForest, I got 79% accuracy.

Therefore, either change the classifier or don't apply a matrix factorization.
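A sketch of the RandomForest variant this answer describes, keeping the TruncatedSVD step from the question (the n_estimators value is illustrative, not from the answer):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.preprocessing import Normalizer
from sklearn.ensemble import RandomForestClassifier

vectorizer = TfidfVectorizer(max_df=0.5, stop_words='english', use_idf=True)
lsa = TruncatedSVD(n_components=100)  # fine here: trees have no sign restriction

train_text = vectorizer.fit_transform(raw_text_train)
train_text = lsa.fit_transform(train_text)
train_text = Normalizer(copy=False).fit_transform(train_text)

rf = RandomForestClassifier(n_estimators=100)
rf.fit(train_text, train_labels)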

Solution 2:

Try converting the sparse matrix to a dense one before calling fit():

train_text = train_text.todense()
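If a dense array is the goal, scipy's .toarray() is a small variant worth noting, since .todense() returns a numpy.matrix rather than the plain ndarray scikit-learn estimators expect; a sketch reusing mnb and train_labels from Solution 1:

# .toarray() yields a plain 2-D ndarray (.todense() gives a numpy.matrix)
mnb.fit(train_text.toarray(), train_labels)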

Solution 3:

I had the same issue running NB, and indeed using sklearn.preprocessing.MinMaxScaler() as suggested by gobrewers14 works. However, it actually reduced the accuracy of my Decision Tree, Random Forest and KNN models by 0.2% on the same standardized dataset.
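For reference, a minimal sketch of that MinMaxScaler approach, rescaling the (dense) SVD features into [0, 1] so MultinomialNB accepts them; the variable names follow the question:

from sklearn.preprocessing import MinMaxScaler
from sklearn.naive_bayes import MultinomialNB

# Rescale every feature to [0, 1] so no negative entries remain
train_scaled = MinMaxScaler().fit_transform(train_text)

mnb = MultinomialNB(alpha=0.01)
mnb.fit(train_scaled, train_labels)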

Solution 4:

Try creating a pipeline with Normalization as the first step and model fitting as the second step.

from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.naive_bayes import MultinomialNB

p = Pipeline([('Normalizing', MinMaxScaler()), ('MultinomialNB', MultinomialNB())])
p.fit(X_train, y_train)
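The same idea extends to the full chain from the question, with the scaler sitting between the SVD step and the classifier; a sketch assuming raw_text_train and train_labels as before:

from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.preprocessing import MinMaxScaler
from sklearn.naive_bayes import MultinomialNB

p = Pipeline([
    ('tfidf', TfidfVectorizer(max_df=0.5, stop_words='english', use_idf=True)),
    ('svd', TruncatedSVD(n_components=100)),
    ('scale', MinMaxScaler()),  # shift the SVD output into [0, 1]
    ('nb', MultinomialNB()),
])
p.fit(raw_text_train, train_labels)  # raw documents in, labels out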
