I am classifying the followers of a Twitter account as positive or negative based on their tweets, and I have collected the data.
Which of the two approaches below is correct? If neither... is there a better way?
My approach 1 for user classification:
Code for my approach 1:
users = set(classify_followers['users'])
user_to_classify = []
classify = []
for user in users:
    user_to_classify.append(user)
    temp = classify_followers[classify_followers['users'] == user]
    if temp.shape[0] > 1:
        if ('positive' in set(temp['sentiment'])) and ('negative' in set(temp['sentiment'])):
            # User has both positive and negative tweets: apply a 34% threshold.
            positive_count = temp[temp['sentiment'] == 'positive']['sentiment'].count()
            negative_count = temp[temp['sentiment'] == 'negative']['sentiment'].count()
            positive_percent = (positive_count / temp.shape[0]) * 100
            negative_percent = (negative_count / temp.shape[0]) * 100
            if negative_percent >= 34:
                classify.append('negative')
            else:
                classify.append('positive')
        else:
            # Several tweets, but only one of the two sentiments is present.
            if 'positive' in set(temp['sentiment']):
                classify.append('positive')
            else:
                classify.append('negative')
    else:
        # User has a single tweet.
        if 'positive' in set(temp['sentiment']):
            classify.append('positive')
        else:
            classify.append('negative')
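For reference, the same threshold rule can be written more compactly with a pandas groupby. This is only a sketch, assuming every user has at least one positive or negative tweet (the edge cases handled explicitly above then reduce to the same rule):

import pandas as pd

# Sketch: apply the ">= 34% negative => negative" rule once per user.
def classify_user(sentiments):
    negative_percent = (sentiments == 'negative').mean() * 100
    return 'negative' if negative_percent >= 34 else 'positive'

user_labels = classify_followers.groupby('users')['sentiment'].apply(classify_user)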
My approach 2 for user classification:
Code for my approach 2:
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Keep only the tweets that already have a positive/negative label.
df = data[(data['sentiment'] == 'negative') | (data['sentiment'] == 'positive')]
vectorizer = TfidfVectorizer(stop_words='english')
X = vectorizer.fit_transform(df['tweets'])

true_k = 2
model = KMeans(n_clusters=true_k, init='k-means++', max_iter=10000, n_init=1)
model.fit(X)

print("Top terms per cluster:")
order_centroids = model.cluster_centers_.argsort()[:, ::-1]
terms = vectorizer.get_feature_names_out()  # get_feature_names() in older scikit-learn
for i in range(true_k):
    print("Cluster %d:" % i)
    for ind in order_centroids[i, :20]:
        print(' %s' % terms[ind])
    print()

labels = model.labels_
print(np.bincount(labels))

# Compare the cluster labels with the known tweet sentiments.
label_df = pd.DataFrame()
label_df['labels'] = labels
label_df['senti'] = df['sentiment'].values  # .values avoids index misalignment after filtering
label_df['labels'].value_counts()
# To know which cluster has more data (the larger one is taken as positive, the smaller as negative).
a = [i for i in range(len(labels)) if label_df['senti'][i] == 'positive' and label_df['labels'][i] == 1]
b = [i for i in range(len(labels)) if label_df['senti'][i] == 'positive' and label_df['labels'][i] == 0]
c = [i for i in range(len(labels)) if label_df['senti'][i] == 'negative' and label_df['labels'][i] == 1]
d = [i for i in range(len(labels)) if label_df['senti'][i] == 'negative' and label_df['labels'][i] == 0]
print(len(a), len(b), len(c), len(d))

# Classify each user by the majority cluster of their tweets.
users = set(df['users'])
prediction = []
for user in users:
    temp = df[df['users'] == user]['tweets']
    Y = vectorizer.transform(temp)
    tweet_predictions = model.predict(Y)
    no_one = np.count_nonzero(tweet_predictions == 1)
    no_zero = np.count_nonzero(tweet_predictions == 0)
    if no_one > no_zero:
        prediction.append('positive')
    else:
        prediction.append('negative')
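Note that KMeans assigns cluster IDs arbitrarily, so "cluster 1 = positive" should be verified rather than assumed. A minimal sketch of checking that mapping with a contingency table, assuming the labels and df from the code above:

import pandas as pd

# Sketch: cross-tabulate cluster IDs against the known sentiment labels
# to see which cluster actually corresponds to 'positive'.
contingency = pd.crosstab(pd.Series(labels, name='cluster'),
                          df['sentiment'].reset_index(drop=True).rename('sentiment'))
print(contingency)
positive_cluster = contingency['positive'].idxmax()  # cluster ID with the most positive tweets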
Is this the right way to classify users based on their tweets? If not... is there a better way?
Answer 0 (score: 0):
The correct approach would be a third one:
Have some judges evaluate a subset of your users and assign them a sentiment score based on their tweets. Then, using some machine-learning technique (I would suggest an SVM), train a model on these labelled examples, feeding it input features such as the content of their tweets, the sentiment scores, the absolute numbers of positive and negative tweets, the percentage of negative tweets, or other (aggregated) relevant features. Finally, apply the model to unseen users to learn their polarity.
I won't even go into the train/validation/test split here, but that is how it should be done. Your current approaches do not really use machine learning to discriminate between users; you only use it as a black box to collect the polarity of individual tweets.
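A minimal sketch of that supervised approach, assuming a hand-labelled table with one row per user, where user_tweets holds the user's concatenated tweet text and label is the judge-assigned polarity (the file name and column names are placeholders):

import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical labelled data: one row per user, polarity assigned by human judges.
labelled = pd.read_csv('labelled_users.csv')  # placeholder file; columns: user_tweets, label

X_train, X_test, y_train, y_test = train_test_split(
    labelled['user_tweets'], labelled['label'],
    test_size=0.2, stratify=labelled['label'], random_state=42)

# TF-IDF features on each user's concatenated tweets, then a linear SVM.
clf = make_pipeline(TfidfVectorizer(stop_words='english'), LinearSVC())
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))

# The fitted pipeline can then be applied to unseen users' concatenated tweets:
# clf.predict(unseen_users['user_tweets'])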