I have a dataframe with roughly 4000 records containing an "Id" and a description that has been tokenized into single words.
>>> df.iloc[:, 0:1]
Output:
Id one_word_tokenize
1952043 [Swimming, Pool, in, the, roof, top,…
1918916 [Luxury, Apartments, consisting, 11, towers, B...
1645751 [Flat, available, sale, Medavakkam, Modular, k…
1270503 [Toddler, Pool, with, Jogging, Tracks, for people…
1495638 [near, medavakkam, junction, calm, area, near,...
How can I iterate over the rows and look up matching values from Categories? The Categories.py file contains the following word classification:
category = [('Luxury', 'IN', 'Recreation_Ammenities'),
('Swimming', 'IN','Recreation_Ammenities'),
('Toddler', 'IN', 'Recreation_Ammenities'),
('Pool', 'IN', 'Recreation_Ammenities')]
Recreation = [e1 for (e1, rel, e2) in category if e2=='Recreation_Ammenities']
I have tried specifying a row number, but I want it to check every row:
example = df['one_word_tokenize'].iloc[1]  # .ix is deprecated; use .iloc
for val in example:
    for am in Categories.Recreation:
        if am == val:
            print(am, "~", "Recreation")
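(For reference, the same per-row check can be extended to every row with apply. A minimal sketch, assuming df holds the token lists and Recreation is the word list built from Categories.py; the sample data here is an illustration, not the real dataframe:)

```python
import pandas as pd

# Sample data standing in for the real dataframe (assumption for illustration)
df = pd.DataFrame({
    "Id": [1952043, 1495638],
    "one_word_tokenize": [["Swimming", "Pool", "in", "the", "roof", "top"],
                          ["near", "medavakkam", "junction"]],
})
Recreation = ["Luxury", "Swimming", "Toddler", "Pool"]

# For each row, keep only the tokens that appear in the Recreation list
df["Recreation_Ammenities"] = df["one_word_tokenize"].apply(
    lambda tokens: ", ".join(t for t in tokens if t in Recreation))

print(df[["Id", "Recreation_Ammenities"]])
```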
The output I want is:
Id one_word_tokenized_text Recreation_Ammenities
1952043 [Swimming, Pool, in, the, roof, top,… Swimming, Pool
1918916 [Luxury, Apartments B... Luxury
1645751 [Flat, available, sale, k…
1270503 [Toddler, Pool, with, Jogging, Tracks,… Toddler,Pool,Jogging
1495638 [near, medavakkam, junction,...
Please help.
Answer 0 (score: 1)
It is not clear whether you want ["Swimming", "Pool"] to match the category "Swimming Pool". If so, the operation at hand is considerably more expensive, because you would need to specify the n-gram level at which each token list has to be evaluated.
If you only want to match single tokens against the categories, you can use extractall() for long-format output, or count() for wide-format output.
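(To illustrate the n-gram point: matching a multi-word category against a token list means generating candidate phrases first. A minimal sketch; the helper name is hypothetical:)

```python
def ngrams(tokens, n):
    # Generate all contiguous n-word phrases from a token list
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = ["Swimming", "Pool", "in", "the", "roof", "top"]
# Bigrams would be needed to match the two-word category "Swimming Pool"
print(ngrams(tokens, 2))
```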
extractall
import numpy as np
import pandas as pd
# Note: "Swimming" and "Pool" from OP is combined in first row for example purposes
# Additionally, one "Luxury" is added to the first entry, to consider repeat matches
tokens = pd.Series([["Swimming Pool", "in", "Luxury", "roof", "top", "Luxury"],
["Luxury", "Apartments", "consisting", "11", "towers"],
["near", "medavakkam", "junction", "calm", "area", "near"]])
category = [('Luxury', 'IN', 'Recreation_Ammenities'),
('Swimming Pool', 'IN','Recreation_Ammenities'),
('Toddler Pool', 'IN', 'Recreation_Ammenities'),
('Pool Table', 'IN', 'Recreation_Ammenities')]
recreation = [e1 for (e1, rel, e2) in category if e2=='Recreation_Ammenities']
# check each token set for matches against any category in recreation
pattern = "|".join("({})".format(cat) for cat in recreation)
matches = tokens.apply(lambda x: pd.Series(x).str.extractall(pattern))
# report results
match_list = [[m for m in match.values.ravel() if isinstance(m, str)] for match in matches]
match_df = pd.DataFrame({"tokens":tokens, "matches":match_list})
The long-format match_df:
matches tokens
0 [Swimming Pool, Luxury, Luxury] [Swimming Pool, in, Luxury, roof, top, Luxury]
1 [Luxury] [Luxury, Apartments, consisting, 11, towers]
2 [] [near, medavakkam, junction, calm, area, near]
count
matches = {cat: tokens.apply(lambda x: pd.Series(x).str.count(cat).sum())
           for cat in recreation}
match_df = pd.DataFrame(matches)
match_df["tokens"] = tokens
The wide-format match_df:
Luxury Pool Table Swimming Pool Toddler Pool tokens
0 2 0 1 0 [Swimming Pool, in, Luxury, roof, top, Luxury]
1 1 0 0 0 [Luxury, Apartments, consisting, 11, towers]
2 0 0 0 0 [near, medavakkam, junction, calm, area, near]
Answer 1 (score: -1)
Wouldn't a boolean slice using apply do the trick here?
df[df['one_word_tokenize'].apply(lambda ls: 'Recreation_Ammenities' in ls)]
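(As written, that slice tests whether the literal string 'Recreation_Ammenities' appears as a token, which will rarely be true. A sketch of what was presumably intended, keeping rows whose tokens overlap the Recreation word list; the sample data is an assumption:)

```python
import pandas as pd

df = pd.DataFrame({
    "Id": [1952043, 1495638],
    "one_word_tokenize": [["Swimming", "Pool", "roof", "top"],
                          ["near", "medavakkam", "junction"]],
})
Recreation = ["Luxury", "Swimming", "Toddler", "Pool"]

# Keep rows where at least one token is a Recreation amenity word
mask = df["one_word_tokenize"].apply(lambda ls: any(t in Recreation for t in ls))
print(df[mask])
```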