Is there a way to store a pandas DataFrame in a Teradata table?

Time: 2019-01-10 22:43:03

Tags: python dataframe teradata teradata-sql-assistant

I have created a pandas DataFrame `df` and am trying to store it in a table using Teradata SQL Assistant.

Connection string:

conn = pyodbc.connect(
    "DRIVER=Teradata;DBCNAME=tdprod;Authentication=LDAP;UID=" + username + ";PWD=" + password + ";QUIETMODE=YES",
    autocommit=True, unicode_results=True)

cursor = conn.cursor().execute(sql)

Tried using: df.to_sql('table', con=conn)

This does not work.

Is there an easy way to store a DataFrame in a table?

Any help is appreciated.

Thanks.

Traceback (most recent call last):
  File "C:\Users\tripata\PycharmProjects\NLP\venv\lib\site-packages\sqlalchemy\engine\base.py", line 2158, in _wrap_pool_connect
    return fn()
  File "C:\Users\tripata\PycharmProjects\NLP\venv\lib\site-packages\sqlalchemy\pool.py", line 410, in connect
    return _ConnectionFairy._checkout(self, self._threadconns)
  File "C:\Users\tripata\PycharmProjects\NLP\venv\lib\site-packages\sqlalchemy\pool.py", line 788, in _checkout
    fairy = _ConnectionRecord.checkout(pool)
  File "C:\Users\tripata\PycharmProjects\NLP\venv\lib\site-packages\sqlalchemy\pool.py", line 529, in checkout
    rec = pool._do_get()
  File "C:\Users\tripata\PycharmProjects\NLP\venv\lib\site-packages\sqlalchemy\pool.py", line 1096, in _do_get
    c = self._create_connection()
  File "C:\Users\tripata\PycharmProjects\NLP\venv\lib\site-packages\sqlalchemy\pool.py", line 347, in _create_connection
    return _ConnectionRecord(self)
  File "C:\Users\tripata\PycharmProjects\NLP\venv\lib\site-packages\sqlalchemy\pool.py", line 474, in __init__
    self.__connect(first_connect_check=True)
  File "C:\Users\tripata\PycharmProjects\NLP\venv\lib\site-packages\sqlalchemy\pool.py", line 671, in __connect
    connection = pool._invoke_creator(self)
  File "C:\Users\tripata\PycharmProjects\NLP\venv\lib\site-packages\sqlalchemy\engine\strategies.py", line 106, in connect
    return dialect.connect(*cargs, **cparams)
  File "C:\Users\tripata\PycharmProjects\NLP\venv\lib\site-packages\sqlalchemy\engine\default.py", line 412, in connect
    return self.dbapi.connect(*cargs, **cparams)
  File "C:\Users\tripata\PycharmProjects\NLP\venv\lib\site-packages\teradata\tdodbc.py", line 454, in __init__
    checkStatus(rc, hDbc=self.hDbc, method="SQLDriverConnectW")
  File "C:\Users\tripata\PycharmProjects\NLP\venv\lib\site-packages\teradata\tdodbc.py", line 231, in checkStatus
    raise DatabaseError(i[2], u"[{}] {}".format(i[0], msg), i[0])
teradata.api.DatabaseError: (8017, '[28000] [Teradata][ODBC Teradata Driver][Teradata Database] The UserId, Password or Account is invalid. , [Teradata][ODBC Teradata Driver][Teradata Database] The UserId, Password or Account is invalid. ')

3 Answers:

Answer 0 (score: 0):

From the to_sql documentation:

Parameters
----------
name : string
    Name of SQL table.
con : sqlalchemy.engine.Engine or sqlite3.Connection
    Using SQLAlchemy makes it possible to use any DB supported by that
    library. Legacy support is provided for sqlite3.Connection objects.

You can see that it expects a sqlalchemy engine or a sqlite3 connection, not pyodbc.

You need to create an engine for Teradata, with something like:

from sqlalchemy import create_engine

engine = create_engine(f'teradata://{username}:{password}@tdprod:22/')

Then you would use it like this:

df.to_sql('table', engine)
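
If the table already exists, to_sql raises an error by default; its if_exists parameter controls that. A minimal end-to-end sketch, assuming the sqlalchemy-teradata dialect is installed and reusing the tdprod host and port from above (credentials are placeholders):

import pandas as pd
from sqlalchemy import create_engine

username, password = 'user', 'pass'  # placeholder credentials
engine = create_engine(f'teradata://{username}:{password}@tdprod:22/')

df = pd.DataFrame({'id': [1, 2], 'name': ['a', 'b']})

# if_exists='append' adds rows to an existing table instead of raising;
# index=False keeps the DataFrame index from becoming an extra column
df.to_sql('table', con=engine, if_exists='append', index=False)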

Answer 1 (score: 0):

Adding the host name and the authentication mechanism to the connection string worked for me:

create_engine('teradata://' + user + ':' + password + '@' + host + ':1025/' + '?authentication=LDAP')
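
The same URL is easier to read as an f-string; a sketch assuming user, password, and host are already defined (placeholder values shown):

from sqlalchemy import create_engine

user, password, host = 'user', 'pass', 'tdprod'  # placeholders

# the LDAP mechanism is passed as a query parameter on the URL
engine = create_engine(f'teradata://{user}:{password}@{host}:1025/?authentication=LDAP')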

Answer 2 (score: 0):

I did some digging, and this solution does the job, and does it quickly - using the Python teradata module:

import teradata
import numpy as np
import pandas as pd


num_of_chunks = 100  # breaking the data into chunks is optional - use it if you have many rows or would like to view status updates

df = someDataframe

# executemany needs one "?" placeholder per column in the target table
placeholders = ', '.join(['?'] * len(df.columns))
query = 'insert into SomeDB.SomeTeraDataTable values (' + placeholders + ')'

# set host, user, password params
host, username, password = 'hostName_or_IPaddress', 'username', 'password'

# connect to the DB using UdaExec
udaExec = teradata.UdaExec(appName="IMC", version="1.0", logConsole=False)

with udaExec.connect(method="odbc", system=host, username=username,
                     password=password, driver="Teradata") as connect:

    # split the frame into chunks and send each chunk as one batch insert
    df_chunks = np.array_split(df, num_of_chunks)

    for i, _ in enumerate(df_chunks):
        data = [tuple(x) for x in df_chunks[i].to_records(index=False)]
        connect.executemany(query, data, batch=True)

Solution based on: this stackoverflow post
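
To sanity-check the load afterwards, a quick row count over the same kind of connection (a sketch; SomeDB.SomeTeraDataTable is the hypothetical table from above):

# count rows after the load to confirm the batch inserts landed
with udaExec.connect(method="odbc", system=host, username=username,
                     password=password, driver="Teradata") as connect:
    for row in connect.execute('select count(*) from SomeDB.SomeTeraDataTable'):
        print(row[0])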