Strange SQL error in Python Flask - multiple connections to the DB

Asked: 2018-03-25 22:04:20

Tags: python sql flask

I am getting a strange error when using multiple connections to the same database. My application takes in a .csv file, creates 2 SQL tables from it, and allows users to favourite items from this dataset.

Problem: When I use only one connection for the dataframe upload, the csv → dataframe → SQL step works. However, when the other connection used for favourites is currently open, the dataframe-to-SQL step fails.

If I uncomment #cnx3.close() everything works, but then I can no longer retrieve the data needed for favourites.

The same data source is used for the user favourites. I have found a temporary workaround, but I would like some guidance on how to fix the problem properly so that I can use both parts of my application.
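For reference, below is a minimal sketch of one possible stop-gap (not necessarily the temporary workaround mentioned above): close the idle favourites connection before writing the dataframe and reconnect afterwards. It assumes the global cnx3, config3, and engine defined in the code further down; the write_dataframe helper is purely illustrative.

# Hypothetical stop-gap: release the idle global connection before the
# dataframe write, then reconnect so the favourites route still works.
import mysql.connector

def write_dataframe(reorder_data, engine):
    global cnx3
    cnx3.close()                               # free the second connection first
    reorder_data.to_sql(name='title_data', con=engine,
                        if_exists='replace', index=False)
    cnx3 = mysql.connector.connect(**config3)  # reopen for the favourites route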

Relevant code:

Imports:

# Imported Modules
from flask import Flask, render_template, flash, redirect, url_for, session, logging, request, jsonify
from flask_wtf import RecaptchaField
from wtforms import Form,StringField, TextAreaField, PasswordField, validators
from functools import wraps
#### SQL connection
from sqlalchemy import create_engine
## Useful - Needed for Python 3 as MySQLDB Does Not Support This !
import pymysql
pymysql.install_as_MySQLdb()

import numpy as np
import json
from werkzeug import secure_filename
import pandas as pd
import tempfile



import mysql.connector

SQL connectors:

config3 = {
'user':'root',
'password':'', 
'host':'localhost', 
'raise_on_warnings':True,

}

cnx = mysql.connector.connect(**config)
cnx2 = mysql.connector.connect(**config2)
cnx3 = mysql.connector.connect(**config3)
engine = create_engine('mysql://root:@localhost/tableau_data?charset=utf8' ,encoding='utf-8')

Dataframe upload:

@app.route('/upload', methods =['GET', 'POST'])
@auth
def csv_input():
    tempfile_path = tempfile.NamedTemporaryFile().name
    #file.save(tempfile_path)
    #sheet = pd.read_csv(tempfile_path)
    if request.method == 'POST':
        file = request.files['file']
        if file:
            try:
                #allowed_filename(file.filename):
                #filename = secure_filename(file.filename)
                file.save(tempfile_path)
                input_csv = pd.read_csv(tempfile_path,sep=",", engine='python')

                #### Data Cleansing From Uploaded Data
                col_titles = ['id','title','vote_average','w_average','vote_count','year','runtime',
                  'budget','revenue','profit']
                # Only Keep Data where the Original Language is English
                input_csv = input_csv[input_csv['original_language']=='en']
                # New Dataframe that only contains data with vote count > 10 
                input_csv = input_csv[input_csv['vote_count'] >= 10]
                # Fill all NA values to 0 - Needed to set datatypes
                input_csv = input_csv.fillna(0)
                # Remove all Rows with no Runtime
                input_csv = input_csv[input_csv['runtime']!=0]
                # Remove all duplicate Rows
                input_csv = input_csv.drop_duplicates()

                input_csv['vote_average'] = input_csv.vote_average.astype(float).round(1)
                input_csv.vote_average.round(1)
                input_csv['runtime'] = input_csv.runtime.astype(int)
                input_csv['vote_count'] = input_csv.vote_count.astype(int)
                input_csv['revenue'] = input_csv.revenue.astype('int64')
                input_csv['budget'] = input_csv.budget.astype('int64')

                profit_cal(input_csv,'revenue','budget','profit')

                input_csv['profit']=input_csv.profit.astype('int64')
                input_csv['profit']=input_csv.profit.replace(0,'No Data')

                #reorder_data = pd.DataFrame(input_csv)
                # Year Cleaning
                input_csv['year'] = pd.to_datetime(input_csv['release_date'], errors='coerce').apply(lambda x: str(x).split('-')[0] if x != np.nan else np.nan)
                #C = reorder_data['vote_average'].mean()
                #m = reorder_data['vote_count'].quantile(0.10)
                #w_average = org_data.copy().loc[reorder_data['vote_count'] >= m]

                #### IMDB Data Calculation
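                # Weighted rating below follows the IMDB formula:
                #   w_average = (V/(V+m)) * R + (m/(m+V)) * C
                # where V = vote count, R = the title's average vote,
                # C = the mean vote across the dataset, and m = the
                # vote-count cutoff (the 10th percentile here).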
                V = input_csv['vote_count']
                R = input_csv['vote_average']
                C = input_csv['vote_average'].mean()
                m = input_csv['vote_count'].quantile(0.10)
                input_csv['w_average'] = (V/(V+m) * R) + (m/(m+V) * C)

                input_csv = input_csv[input_csv['vote_count'] >m]

                #C = input_csv['vote_average'].mean()
                #m = input_csv['vote_count'].quantile(0.10)

                #input_csv['w_average'] = input_csv.apply(weighted_rating, axis = 1)
                input_csv['w_average'] = input_csv.w_average.astype(float).round(1)

                #cursor = cnx3.cursor(dictionary=True,buffered=True)
                #cnx3.close()

                reorder_data = input_csv[col_titles]
                reorder_data.to_sql(name='title_data', con=engine, if_exists = 'replace', index=False)    
                # Reorder the data and output in the correct order

                ##### Genre Loads == DataFrame 2
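                # The 'genres' column holds JSON-encoded lists of
                # {'id': ..., 'name': ...} objects; parse each list and
                # explode it into one (id, genre) row per genre.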
                df = input_csv
                v = df.genres.apply(json.loads)

                df = pd.DataFrame(
                {
                    'id' : df['id'].values.repeat(v.str.len(), axis=0),
                    'genre' : np.concatenate(v.tolist())
                })

                df['genre'] = df['genre'].map(lambda x: x.get('name'))

                genre_data = df.genre.str.get_dummies().sum(level=0)

                genre_data = df.loc[(df!=0).any(1)]
                #genre_data = genre_data.set_index('id')

                genre_order = ['id','genre']

                ## Dataframe to SQL
                genre_data[genre_order].to_sql(name='genre_data', con=engine, if_exists = 'replace', index=False) 
                ####### Keyword Search ### Dataframe

                #genre_data.to_csv("genre_data.csv")

                #return genre_data[genre_order].to_html()

                flash('Database has been updated successfully','success')
                #return reorder_data[col_titles].to_html()
                #stream = io.StringIO(file.stream.read().decode("UTF8"), newline=None)
                #csv_input = csv.reader(stream)
                #return reorder_data.to_html(index=False)
                #flash('File Uploaded Successfully')
                #return redirect(url_for('index'))
            except pd.errors.EmptyDataError as ex:
                flash('No File Selected','danger')
            except pd.errors.ParserError as ex:
                flash('Invalid File Format','danger')
            except Exception as ex:
                flash('Invalid File Format','danger')
    return render_template('upload.html')

Favourites

@app.route('/my_f')
def my_f():
    # Create Cursor
    cursor = cnx3.cursor(dictionary=True)

    cursor.execute("SELECT favourites.id,favourites.rating,title_data.title,title_data.w_average,title_data.runtime,title_data.vote_count,title_data.year from tableau_data.title_data inner join webapp.favourites on webapp.favourites.film_id = tableau_data.title_data.id WHERE webapp.favourites.username = %s",([session['username']]))

    ## Fetch all Results - Need to figure out why this is not displaying
    results = cursor.fetchall()

    if results is not None:
        flash('Data Found','success')
        cursor.close()
        #cnx3.close()
        #Here I can close this connection, but then this function does not work
        return render_template('my_f.html', results=results)

        #cnx3.close() #/// Need to sort out the dual connection - When Updating the data

    else:
        # Message if the sql query does not return a value
        flash('Nothing Found', 'danger')
        return render_template('my_f.html')
    return render_template('my_f.html')

1 Answer:

Answer 0: (score: 0)

Consider opening and closing connections inside each method rather than at global scope, where they can be left open indefinitely; the same applies to the SQLAlchemy engine used for the pandas operations. Additionally, for the HTML template rendering, since an empty cursor fetch returns an empty list rather than None, conditionally reassign the results to an actual None.

Dataframe operations

@app.route('/upload', methods =['GET', 'POST'])
@auth
def csv_input():
   # ... same code in method

   # OPEN ENGINE
   engine = create_engine('mysql://root:@localhost/tableau_data?charset=utf8', encoding='utf-8')

   # RUN REPLACE AND APPEND
   reorder_data.to_sql(name='title_data', con=engine, if_exists = 'replace', index=False) 

   # CLOSE ENGINE
   engine.dispose()

   # ... same code in method

Template rendering

@app.route('/my_f')
def my_f():
    # OPEN CONNECTION
    cnx3 = pymysql.connect(***)

    # Create Cursor (pymysql requests a dict cursor via DictCursor,
    # not mysql.connector's dictionary=True argument)
    my_cursor = cnx3.cursor(pymysql.cursors.DictCursor)

    sql = """SELECT f.id, f.rating, t.title, t.w_average, t.runtime, 
                    t.vote_count, t.year
             FROM tableau_data.title_data t
             INNER JOIN webapp.favourites f on f.film_id = t.id 
             WHERE f.username = %s"""

    my_cursor.execute(sql, (session['username'],))

    ## Fetch all results into local list
    results = my_cursor.fetchall()

    # CLOSE CURSOR AND CONNECTION
    my_cursor.close()
    cnx3.close() 

    if len(results) > 0:
        flash('Data Found', 'success')    
    else:
        # Message if the sql query does not return a value
        flash('Nothing Found', 'danger')
        results = None

    return render_template('my_f.html', results=results)