I need to update every record in a spatial database, where I have a point dataset overlaying a polygon dataset. For each point feature I want to assign a key relating it to the polygon feature it lies within. So if my point 'New York City' is inside the polygon USA, and the USA polygon has 'GID = 1', I will assign 'gid_fkey = 1' to my point New York City.
To do this I created the following query:
procQuery = 'UPDATE city SET gid_fkey = gid FROM country WHERE ST_within((SELECT the_geom FROM city WHERE wp_id = %s), country.the_geom) AND city_id = %s' % (cityID, cityID)
At the moment I get the cityID information from another query that simply selects all cityIDs where gid_fkey IS NULL. Essentially I just need to loop through these and run the query shown earlier. Since the query depends only on static information in another table, in theory all of these processes could run at once. I have implemented the threaded program below, but I can't seem to make the jump to multiprocessing:
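As an aside, interpolating cityID into the SQL string with `%` is fragile; psycopg2 can bind the parameters itself. A minimal sketch of that style — `update_city` and `FakeCursor` are hypothetical names introduced here, and the fake cursor only stands in for a real psycopg2 cursor so the snippet runs without a database:

```python
# Sketch: let psycopg2 bind parameters instead of %-formatting them into SQL.
UPDATE_SQL = (
    "UPDATE city SET gid_fkey = country.gid "
    "FROM country "
    "WHERE ST_Within((SELECT the_geom FROM city WHERE city_id = %s), country.the_geom) "
    "AND city_id = %s"
)

def update_city(cursor, city_id):
    # psycopg2's execute() substitutes the tuple safely (quoting/escaping)
    cursor.execute(UPDATE_SQL, (city_id, city_id))

class FakeCursor(object):
    """Stand-in for a psycopg2 cursor so the sketch runs without a database."""
    def execute(self, sql, params=None):
        self.sql, self.params = sql, params

cur = FakeCursor()
update_city(cur, 42)
print(cur.params)  # -> (42, 42)
```

With a real connection, `update_city(pyConn.cursor(), some_id)` would issue the same statement with the values bound server-side rather than pasted into the string.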
import psycopg2, pprint, threading, time, Queue

queue = Queue.Queue()

pyConn = psycopg2.connect("dbname='geobase_1' host='localhost'")
pyConn.set_isolation_level(0)
pyCursor1 = pyConn.cursor()

getGID = 'SELECT cityID FROM city'
pyCursor1.execute(getGID)
gidList = pyCursor1.fetchall()

class threadClass(threading.Thread):

    def __init__(self, queue):
        threading.Thread.__init__(self)
        self.queue = queue

    def run(self):
        while True:
            # use the id pulled off the queue, not a variable from outside the thread
            gid = self.queue.get()
            procQuery = 'UPDATE city SET gid_fkey = gid FROM country WHERE ST_within((SELECT the_geom FROM city WHERE wp_id = %s), country.the_geom) AND city_id = %s' % (gid[0], gid[0])
            pyCursor2 = pyConn.cursor()
            pyCursor2.execute(procQuery)
            print gid[0]
            print 'Done'
            # mark the task finished so queue.join() can eventually return
            self.queue.task_done()

def main():
    for i in range(4):
        t = threadClass(queue)
        t.setDaemon(True)
        t.start()

    for gid in gidList:
        queue.put(gid)

    queue.join()

main()
I'm not even sure whether multithreading is optimal here, but it is certainly faster than doing these one by one.
The machine I will be using has four cores (quad-core) and a minimal Linux OS without a GUI, running only PostgreSQL, PostGIS and Python, if that makes any difference.
What do I need to change to get this painfully simple multiprocessing task going?
Answer 0 (score: 4)
OK, here is the answer to my own post. Well done me =D
On my system this gives roughly a 150% speed increase going from single-core threading to quad-core multiprocessing.
import multiprocessing, time, psycopg2

class Consumer(multiprocessing.Process):

    def __init__(self, task_queue, result_queue):
        multiprocessing.Process.__init__(self)
        self.task_queue = task_queue
        self.result_queue = result_queue

    def run(self):
        proc_name = self.name
        while True:
            next_task = self.task_queue.get()
            if next_task is None:
                # a None "poison pill" tells this worker to shut down
                print 'Tasks Complete'
                self.task_queue.task_done()
                break
            answer = next_task()
            self.task_queue.task_done()
            self.result_queue.put(answer)
        return

class Task(object):

    def __init__(self, a):
        self.a = a

    def __call__(self):
        # each task opens its own connection; see the follow-up question below
        pyConn = psycopg2.connect("dbname='geobase_1' host='localhost'")
        pyConn.set_isolation_level(0)
        pyCursor1 = pyConn.cursor()
        procQuery = 'UPDATE city SET gid_fkey = gid FROM country WHERE ST_within((SELECT the_geom FROM city WHERE city_id = %s), country.the_geom) AND city_id = %s' % (self.a, self.a)
        pyCursor1.execute(procQuery)
        print 'What is self?'
        print self.a
        return self.a

    def __str__(self):
        return 'ARC'

if __name__ == '__main__':
    tasks = multiprocessing.JoinableQueue()
    results = multiprocessing.Queue()

    num_consumers = multiprocessing.cpu_count() * 2
    consumers = [Consumer(tasks, results) for i in xrange(num_consumers)]
    for w in consumers:
        w.start()

    pyConnX = psycopg2.connect("dbname='geobase_1' host='localhost'")
    pyConnX.set_isolation_level(0)
    pyCursorX = pyConnX.cursor()

    pyCursorX.execute('SELECT count(*) FROM city WHERE gid_fkey IS NULL')
    temp = pyCursorX.fetchall()
    num_job = temp[0]
    num_jobs = num_job[0]

    pyCursorX.execute('SELECT city_id FROM city WHERE gid_fkey IS NULL')
    cityIdListTuple = pyCursorX.fetchall()

    cityIdList = []
    for x in cityIdListTuple:
        cityIdList.append(x[0])

    for i in xrange(num_jobs):
        tasks.put(Task(cityIdList[i]))

    # one poison pill per consumer so every worker exits
    for i in xrange(num_consumers):
        tasks.put(None)

    while num_jobs:
        result = results.get()
        print result
        num_jobs -= 1
I have now posted another question here:
Create DB connection and maintain on multiple processes (multiprocessing)
Hopefully we can get rid of some of the overhead and make this baby even faster.
Answer 1 (score: 0)
In plain SQL, one could do something like:
UPDATE city ci
SET gid_fkey = co.gid
FROM country co
WHERE ST_within(ci.the_geom , co.the_geom)
AND ci.city_id = _some_parameter_
;
There could be a problem if a city fits into more than one country (resulting in multiple updates to the same target row), but that is probably not the case for your data.
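Taking that one step further, the Python loop can be dropped entirely: a single set-based statement can key every unmapped city in one pass (a sketch assuming the same column names as above):

```sql
UPDATE city ci
SET gid_fkey = co.gid
FROM country co
WHERE ST_Within(ci.the_geom, co.the_geom)
  AND ci.gid_fkey IS NULL;
```

Letting the database do the join usually beats issuing one UPDATE per row from the client, whatever the client-side parallelism.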