I was hoping to pick your brains on optimization. I'm still learning more and more about Python and use it in my day-to-day operations analyst position. One of my tasks is to sort through roughly 60k unique record identifiers and search another dataframe containing roughly 120k interaction records for the employee who authored the interaction and when it occurred.
For reference, the two dataframes at this point look like this:

main_data = unique identifiers only
nok_data = authored-by name, unique identifier (known as the case file identifier), note text, and created-on time.
My setup currently sorts and matches the data at roughly 2,500 rows per minute, so a full run takes about 25-30 minutes. What I'm curious about is whether any of the steps I'm performing could be done more efficiently.
Here is my code:
import csv
import pandas as pd

nok_data = pd.read_csv("raw nok data.csv")   # Data set from warehouse
main_data = pd.read_csv("exampledata.csv")   # Data set taken from iTx ids from referral view
row_count = 0
error_count = 0
print(nok_data.columns.values.tolist())
print(main_data.columns.values.tolist())     # Commented out, used to grab header titles if needed.
data_length = len(main_data)                 # Used for counting how many records are left.
earliest_nok = {}
nok_data["Created On"] = pd.to_datetime(nok_data["Created On"])  # Convert all dates to datetime at the beginning.
for row in main_data["iTx Case ID"]:
    nok = nok_data["Case File Identifier"] == row
    matching_dates = nok_data[["Created On", "Authored By Name"]][nok]  # Takes the created-on date only where nok is True.
    if len(matching_dates) > 0:
        try:
            min_dates = matching_dates.min(axis=0)
            earliest_nok[row] = [min_dates[0], min_dates[1]]
        except ValueError:
            error_count += 1
            earliest_nok[row] = None
    row_count += 1
    print("{} out of {} records".format(row_count, data_length))

with open('finaloutput.csv', 'w', newline='') as csv_file:
    writer = csv.writer(csv_file)
    for key, value in earliest_nok.items():
        writer.writerow([key, value])
Looking for any advice or expertise from anyone who has been doing this longer than I have. I appreciate anyone who even takes the time to read this. Happy Tuesday,
Andy M.
**** Edit: asked to show the data. Apologies for the newbie move of not including any data samples.
Sample of main_data:
ITX Case ID
2017-023597
2017-023594
2017-023592
2017-023590
Sample of nok_data, a.k.a. "raw nok data.csv":
Authored By: Case File Identifier: Note Text: Authored on
John Doe 2017-023594 Random Text 4/1/2017 13:24:35
John Doe 2017-023594 Random Text 4/1/2017 13:11:20
Jane Doe 2017-023590 Random Text 4/3/2017 09:32:00
Jane Doe 2017-023590 Random Text 4/3/2017 07:43:23
Jane Doe 2017-023590 Random Text 4/3/2017 7:41:00
John Doe 2017-023592 Random Text 4/5/2017 23:32:35
John Doe 2017-023592 Random Text 4/6/2017 00:00:35
Answer (score: 1):
You want to group by Case File Identifier and get the earliest date along with the corresponding author.
# Sort the data by `Case File Identifier:` and `Authored on` date
# so that you can easily get the author corresponding to the min date using `first`.
nok_data.sort_values(['Case File Identifier:', 'Authored on'], inplace=True)

df = (
    nok_data[nok_data['Case File Identifier:'].isin(main_data['ITX Case ID'])]
    .groupby('Case File Identifier:')[['Authored on', 'Authored By:']]
    .first()
)

d = {k: [v['Authored on'], v['Authored By:']] for k, v in df.to_dict('index').items()}
>>> d
{'2017-023590': ['4/3/17 7:41', 'Jane Doe'],
'2017-023592': ['4/5/17 23:32', 'John Doe'],
'2017-023594': ['4/1/17 13:11', 'John Doe']}
>>> df
Authored on Authored By:
Case File Identifier:
2017-023590 4/3/17 7:41 Jane Doe
2017-023592 4/5/17 23:32 John Doe
2017-023594 4/1/17 13:11 John Doe
It is probably easier to just use df.to_csv(...).
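For example, a minimal sketch of writing the grouped result straight to CSV (using made-up rows shaped like the sample data above, so the exact values are illustrative only):

```python
import pandas as pd

# Toy data shaped like the question's "raw nok data.csv" sample.
nok_data = pd.DataFrame({
    'Case File Identifier:': ['2017-023594', '2017-023594', '2017-023590'],
    'Authored on': ['4/1/2017 13:24:35', '4/1/2017 13:11:20', '4/3/2017 09:32:00'],
    'Authored By:': ['John Doe', 'John Doe', 'Jane Doe'],
})
nok_data['Authored on'] = pd.to_datetime(nok_data['Authored on'])

# Sort, then take the first (earliest) note per case, as in the answer.
nok_data.sort_values(['Case File Identifier:', 'Authored on'], inplace=True)
df = nok_data.groupby('Case File Identifier:')[['Authored on', 'Authored By:']].first()

# The index (the case id) becomes the first CSV column automatically.
df.to_csv('finaloutput.csv')
```

This skips the manual `csv.writer` loop entirely; the grouped frame's index is written as the leading column.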
Items in main_data['ITX Case ID'] with no matching records have been ignored, but they could be included if required.
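If the unmatched ids should appear in the output too, one option (a sketch, with toy frames standing in for the real ones) is to reindex the grouped result on the full id list, which gives NaN rows for ids that have no notes:

```python
import pandas as pd

# Toy stand-ins: one id with a note, one without.
main_data = pd.DataFrame({'ITX Case ID': ['2017-023594', '2017-023597']})
nok_data = pd.DataFrame({
    'Case File Identifier:': ['2017-023594'],
    'Authored on': [pd.Timestamp('2017-04-01 13:11:20')],
    'Authored By:': ['John Doe'],
})

df = (
    nok_data.sort_values(['Case File Identifier:', 'Authored on'])
    .groupby('Case File Identifier:')[['Authored on', 'Authored By:']]
    .first()
)

# Reindex on every id from main_data; ids with no notes get NaN rows.
full = df.reindex(main_data['ITX Case ID'])
```

After this, `full` has one row per id in main_data, so nothing is silently dropped from the final CSV.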