I have to read data from a spreadsheet with 20000 rows. When I try to fetch the records with gspread, I get a timeout error. To avoid the timeout, I tried splitting the column I need to read into chunks:
def update_items_spreadsheet(spreadsheet_id, sheet, column_source, column_dest):
    # gc is an authorized gspread client created elsewhere
    # (e.g. gc = gspread.service_account()).
    sht = gc.open_by_key(spreadsheet_id)
    worksheet = sht.worksheet(sheet)
    rows_count = worksheet.row_count

    # Build inclusive [start, end] row ranges of at most chunk_size rows,
    # starting at row 2.
    ranges = []
    offset = 2
    chunk_size = 5000
    while offset < rows_count:
        offset += chunk_size
        if offset > rows_count:
            ranges.append([offset - chunk_size, rows_count])
        else:
            ranges.append([offset - chunk_size, offset - 1])

    for rows_range in ranges:
        # Read one chunk of the source column (returns a list of Cell objects).
        list_title = worksheet.range(f'{column_source}{rows_range[0]}:{column_source}{rows_range[1]}')
        list_items = []
        for item in list_title:
            value_to_store = get_dest_value(item)
            list_items.append([value_to_store])
        # Write the computed values back to the matching chunk of the destination column.
        range_to_update = f'{column_dest}{rows_range[0]}:{column_dest}{rows_range[1]}'
        worksheet.update(range_to_update, list_items)
However, the timeout still occurs. If anyone has a suggestion on how to read a spreadsheet with a large number of records without timing out, I would really appreciate it.
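For reference, here is a minimal sketch of the same chunked read done with Worksheet.batch_get and Worksheet.batch_update, which move all ranges in a single read call and a single write call instead of one range()/update() round-trip per chunk. The function name update_items_in_batches is made up for illustration; gc is assumed to be an authorized gspread client, and get_dest_value is the existing helper (here it receives a plain cell value rather than a Cell object). Whether fewer API round-trips is enough to avoid the timeout is not guaranteed.

import gspread

gc = gspread.service_account()  # assumption: a service-account credential is configured

def update_items_in_batches(spreadsheet_id, sheet, column_source, column_dest, chunk_size=5000):
    worksheet = gc.open_by_key(spreadsheet_id).worksheet(sheet)
    rows_count = worksheet.row_count

    # Same chunking as above: inclusive [start, end] row ranges, starting at row 2.
    bounds = [(start, min(start + chunk_size - 1, rows_count))
              for start in range(2, rows_count + 1, chunk_size)]

    # One read call for all source ranges.
    source_ranges = [f'{column_source}{s}:{column_source}{e}' for s, e in bounds]
    value_ranges = worksheet.batch_get(source_ranges)

    # One write call for all destination ranges.
    updates = []
    for (s, e), rows in zip(bounds, value_ranges):
        # Each row is a list with the cell value, or empty if the cell is blank.
        values = [[get_dest_value(row[0] if row else '')] for row in rows]
        updates.append({'range': f'{column_dest}{s}:{column_dest}{e}', 'values': values})
    worksheet.batch_update(updates)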