Why are only 1000 rows inserted?

Time: 2019-07-16 01:41:37

Tags: python python-3.x postgresql csv

I want to insert one million rows into the database with this code, but only 1000 rows get inserted, and I don't know why.

I have 2 CSV files with 1000 rows each, which look like this:

Katherina,Rasmus,82-965-3140,29 / 09/1962,krasmus8thetimescouk

import psycopg2
import csv
print("\n")


csv_file1=open('/home/oscarg/Downloads/base de datos/archivo1.csv', "r")
csv_file2=open('/home/oscarg/Downloads/base de datos/archivo2.csv', "r")

try:
    connection = psycopg2.connect(user = "oscar",
                                  password = "",
                                  host = "127.0.0.1",
                                  port = "5432",
                                  database = "challenge6_7")
    cursor = connection.cursor()
    csv_reader1 = csv.reader(csv_file1, delimiter=',')
    for row in csv_reader1:
        csv_reader2 = csv.reader(csv_file2, delimiter=',')
        contador=+1
        for row2 in csv_reader2:
            nombre=row[0]+" "+row2[0]
            apellido=row[1]+" "+row2[1]
            cedula_id=row[2]+row2[2]
            if not(contador%1000):
                fecha_nacimiento="'"+row[3]+"'"
            else:
                fecha_nacimiento="'"+row2[3]+"'"
            if not (contador%3):
                email=row[4]+"@hotmail.com"
            else:
                email=row2[4]+"@gmail.com"
            postgres_insert_query = " INSERT INTO cliente (nombre, apellido, cedula_id,fecha_nacimiento, cliente_email) VALUES (%s,%s, %s, %s,%s)"
            record_to_insert = (nombre, apellido, cedula_id, fecha_nacimiento, email)
            cursor.execute(postgres_insert_query, record_to_insert)
            connection.commit()
            if (contador==1000):
                contador=0

except (Exception, psycopg2.Error) as error :
    print(error.pgerror)

finally:
    #closing database connection.
    if(connection):
        cursor.close()
        connection.close()
        print("PostgreSQL connection is closed")

csv_file1.close()
csv_file2.close()

It inserts 1000 rows and then stops. Is the problem in my code, in psycopg, or in my database?

1 Answer:

Answer 0 (score: 1)

On the second iteration of the outer loop, the reader of the second CSV file is already exhausted (its file pointer is at end-of-file), so nothing is read.

You may want to store the rows in lists first and then iterate over those.

See: Python import csv to list

Edit: this is indeed the problem. I ran a small test myself.

import csv

csv_file1=open("a.csv", "r")
csv_file2=open("1.csv", "r")

csv_reader1 = csv.reader(csv_file1, delimiter=',')
for row in csv_reader1:
    csv_file2=open("1.csv", "r") # Removing this line makes the code run N times
                                 # Instead of N x N (a million in your example.)
    csv_reader2 = csv.reader(csv_file2, delimiter=',')
    for row2 in csv_reader2:
        print(row, row2)

I tested it by re-opening the file (not just re-creating the reader) inside the first loop. However, opening the file over and over again is not best practice. If memory is not a constraint, you should store the rows in lists instead.
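The same behaviour can be demonstrated without PostgreSQL or files on disk at all. This is a minimal sketch (using `io.StringIO` as a stand-in for the CSV files) that shows the exhausted-reader problem and the list-based fix side by side:

```python
import csv
import io

data1 = "a,b\nc,d\n"   # stands in for archivo1.csv (2 rows)
data2 = "1,2\n3,4\n"   # stands in for archivo2.csv (2 rows)

# Broken version: the inner reader is consumed on the first outer
# iteration, so later outer rows see an empty file.
f2 = io.StringIO(data2)
pairs_exhausted = []
for row in csv.reader(io.StringIO(data1)):
    for row2 in csv.reader(f2):  # second pass over f2 reads nothing
        pairs_exhausted.append((row, row2))

# Fixed version: materialize both files as lists once, then iterate
# over the lists as many times as needed.
rows1 = list(csv.reader(io.StringIO(data1)))
rows2 = list(csv.reader(io.StringIO(data2)))
pairs = [(r1, r2) for r1 in rows1 for r2 in rows2]

print(len(pairs_exhausted))  # 2: only the first outer row saw any data
print(len(pairs))            # 4: the full N x N cross product
```

With 1000-row files this is exactly the difference between 1000 inserts and the intended 1,000,000.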