I have a dataframe, df_input1, with 10 million rows. One of the columns is "geolocations". For every record I have to derive the state name from the geolocation and fill the "State" column of another dataframe, df_final. For this I created a function, convert_to_state, which I apply as follows:
df_final['State'] = df_input1['geolocations'].apply(convert_to_state)
Is there a faster way to achieve this? It currently takes a very long time.
Sample data, df_input1:
vehicle-no  start                end                  geolocations
123         10/12/2019 09:00:12  10/12/2019 11:00:78  fghdrf3245@bafd
456         12/10/2019 06:09:12  10/10/2019 09:23:12  {098ddc76yhfbdb7877]
Custom function:
import reverse_geocoder as rg
import polyline
def convert_to_state(geoloc):
    # Decode the polyline and take the first (lat, lon) pair
    long_lat = polyline.decode(geoloc)[0]
    # Reverse-geocode and pull the state ("admin1") field
    state_name = rg.search(long_lat)[0]["admin1"]
    return state_name
Answer 0 (score: 1)
I would suggest creating a vectorized function with numpy:
import numpy as np
import pandas as pd
import reverse_geocoder as rg
import polyline
def convert_to_state(geoloc):
    long_lat = polyline.decode(geoloc)[0]
    state_name = rg.search(long_lat)[0]["admin1"]
    return state_name
convert_to_state = np.vectorize(convert_to_state)  # vectorize the method
col = df_input1['geolocations'].values  # a numpy array of the column
df_final['State'] = pd.Series(convert_to_state(col))
Running the vectorized function over a numpy array, and then converting the result back into a pandas Series, should give a noticeable performance improvement.
I strongly suggest timing this approach against the regular .apply method with the %timeit magic in IPython, and comparing runtimes on a smaller subset first:
In [1]: import pandas as pd
In [2]: import numpy as np
In [3]: x = pd.DataFrame(
...: [
...: [1,2,"Some.Text"],
...: [3,4,"More.Text"]
...: ],
...: columns = ["A","B", "C"]
...: )
In [4]: x
Out[4]:
A B C
0 1 2 Some.Text
1 3 4 More.Text
In [5]: def foo_split(t):
...: return t.split(".")[0]
...:
In [6]: %timeit y = x.C.apply(foo_split)
248 µs ± 4.09 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [7]: c = x.C.values # numpy array of the column
In [8]: foo_split_vect = np.vectorize(foo_split)
In [9]: %timeit z = pd.Series(foo_split_vect(c))
159 µs ± 624 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In this case you can see the speed roughly doubles.
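One caveat worth adding: np.vectorize still calls the Python function once per element, so the win is modest. If many rows share the same geolocation string, memoizing the converter with functools.lru_cache can skip repeated decode/geocode work entirely. This is a minimal sketch with a hypothetical convert_cached placeholder standing in for the expensive polyline/reverse_geocoder pipeline:

```python
from functools import lru_cache

import pandas as pd

# Hypothetical converter standing in for convert_to_state; the real one
# would call polyline.decode and rg.search, which are far more expensive.
@lru_cache(maxsize=None)
def convert_cached(geoloc):
    return geoloc[:2].upper()  # placeholder for the slow geocoding lookup

s = pd.Series(["ny1", "ca2", "ny1", "ny1"])
states = s.map(convert_cached)
print(states.tolist())               # ['NY', 'CA', 'NY', 'NY']
print(convert_cached.cache_info().hits)  # 2 (the two repeated 'ny1' rows)
```

With 10 million rows, the payoff depends entirely on how many geolocation strings repeat; if they are all unique the cache only adds overhead.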
Answer 1 (score: 0)
Since the subroutine is effectively a pure function (processing one row does not depend on any other row), we can take advantage of multithreading to make it run faster.
You can use swifter for this. Install it first:
Command Prompt: pip install swifter
import swifter
df_final['State'] = df_input1['geolocations'].swifter.apply(convert_to_state)
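If you would rather avoid an extra dependency, the same pure-function property lets you use a thread pool from the standard library. This is a minimal sketch with a hypothetical fake_convert standing in for convert_to_state; note that for CPU-bound pure-Python work the GIL limits what threads can gain, so swifter's parallel backend may still do better on 10 million rows:

```python
from multiprocessing.pool import ThreadPool

import pandas as pd

# Hypothetical per-row converter standing in for convert_to_state.
def fake_convert(geoloc):
    return geoloc.upper()

df_demo = pd.DataFrame({"geolocations": ["abc", "def", "ghi"]})

# Map the function over the column in parallel; pool.map preserves order.
with ThreadPool(4) as pool:
    result = pool.map(fake_convert, df_demo["geolocations"])

df_demo["State"] = result
print(df_demo["State"].tolist())  # ['ABC', 'DEF', 'GHI']
```

ThreadPool works well when the per-row function spends its time in I/O or in C extensions that release the GIL; otherwise a process-based pool is the safer bet.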