Defining exact mini-batches in TensorFlow

Asked: 2018-07-22 10:36:33

Tags: tensorflow

I have a database from which I have extracted, with numpy, the feature matrix X and the variable to guess, or "predict": Y.

My question is about how the training data batches should be defined.

My database has a column called "classe" that indicates which class each student belongs to.

If I want TensorFlow to make its predictions class by class, do I have to define the distinct values found in this column (e.g. class_1, class_2, class_3) as the training mini-batches? But those classes will obviously never be the same size :/ So how do I tell it to take only the rows holding one value of the "classe" column and to cut the data class by class, so that it ends up with all the students of one class and can make predictions relative to that class, for example: which student is progressing fastest in chemistry, who shows a promising future in literature, and so on.
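One way to do what is described above (a sketch using pandas `groupby` rather than any TensorFlow-specific API; the toy values and the `moyenne`/`target` column names below are made up for illustration, only `classe` comes from the question) is to let each distinct value of the class column define its own mini-batch, so batch sizes simply follow class sizes:

```python
import pandas as pd

# Hypothetical toy stand-in for the "performances" table:
df = pd.DataFrame({
    'classe':  ['class_1', 'class_1', 'class_2', 'class_2', 'class_2', 'class_3'],
    'moyenne': [12.0, 14.5, 9.0, 11.0, 15.0, 13.0],
    'target':  [0, 1, 0, 1, 1, 0],
})

def batches_by_class(df, class_col='classe', target_col='target'):
    """Yield one (name, X, Y) mini-batch per class; the batch size is
    simply the number of students in that class."""
    for name, group in df.groupby(class_col):
        X = group.drop(columns=[class_col, target_col]).values
        Y = group[target_col].values
        yield name, X, Y

for name, X, Y in batches_by_class(df):
    print(name, X.shape, Y.shape)
# class_1 (2, 1) (2,)
# class_2 (3, 1) (3,)
# class_3 (1, 1) (1,)
```

Each (X, Y) pair can then be fed as one training step; this also addresses the unequal-size concern, since nothing forces mini-batches to share a size.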

I hope I have expressed myself clearly despite my approximate English, and thank you for taking the time to explain a few things.

Feel free to ask for clarification if that helps you answer.

Here is the code; I am also showing you the accuracy, which will not budge :(:

    # coding: utf-8
    # Import the necessary libraries:
    import statsmodels as stat
    import seaborn as sbrn
    import pandas as pd
    import matplotlib.pyplot as plt
    import numpy as np
    import sqlite3
    from sqlalchemy import create_engine
    from tensorflow import keras
    import tensorflow as tf

    # Connect to the database:
    conn = sqlite3.connect("t15.db")

    engine = create_engine('sqlite:///t15.db')

    def init_variables():

        weights = np.random.normal(size=18)
        bias = 0 
        return weights, bias

    def pre_activation(features,weights,bias):

        return np.dot(features, weights) + bias

    def activation(z):

        return 1 / (1 + np.exp(-z))

    def derivative_activation(z):

        return activation(z) * (1 - activation(z))

    def predict(features, weights, bias):

        z = pre_activation(features, weights, bias)
        y = activation(z)
        return np.round(y)

    def cost(predictions, targets):

        return np.mean((predictions - targets)**2)

    def train(features, targets, weights, bias):

        epochs = 100
        learning_rate = 0.1

        # Print the accuracy:
        predictions = predict(features, weights, bias)
        print("Accuracy = ", np.mean(predictions == targets))

        for epoch in range(epochs):
            if epoch % 10 == 0:
                predictions = activation(pre_activation(features, weights, bias))
                print("Cost = {0}".format(cost(predictions, targets)))

            # Initialize the gradients:
            weights_gradients = np.zeros(weights.shape)
            bias_gradients = 0

            # On each iteration, walk over every row of the DB:
            for feature, target in zip(features, targets):

                # Compute the prediction:
                z = pre_activation(feature, weights, bias)
                y = activation(z)
                # Update the gradients:
                weights_gradients += (y - target) * derivative_activation(z) * feature
                bias_gradients += (y - target) * derivative_activation(z)
            # Update the variables:
            weights = weights - learning_rate * weights_gradients
            bias = bias - learning_rate * bias_gradients
            # Print the accuracy:
            predictions = predict(features, weights, bias)
            print("Accuracy = ", np.mean(predictions == targets))


    # Build our dataset:
    df = pd.read_sql_table("performances", engine, index_col='id', columns=['nom_cours','name_student','date_cours','prof','frere_soeur','option','parent','age','heure_cours','nb_de_cours','moyenne','meilleur_note','temps_controle','nb_controle','sexe','place'])
    df.loc[:, 'date_cours'] = pd.to_datetime(df.date_cours)

    X = df.iloc[:,-16:-1].values
    Y = df.iloc[:,-1].values

    # Encode the independent variables:
    from sklearn.preprocessing import LabelEncoder, OneHotEncoder
    labEncr_X = LabelEncoder()

    # Turn the categorical columns into integer labels:
    X[:,0] = labEncr_X.fit_transform(X[:,0])
    X[:,1] = labEncr_X.fit_transform(X[:,1])
    X[:,2] = labEncr_X.fit_transform(X[:,2])
    X[:,3] = labEncr_X.fit_transform(X[:,3])
    X[:,4] = labEncr_X.fit_transform(X[:,4])
    X[:,12] = labEncr_X.fit_transform(X[:,12])
    X[:,13] = labEncr_X.fit_transform(X[:,13])

    # One-hot encode all categorical columns with a single encoder
    # (constructing a new OneHotEncoder per column would keep only
    # the last one, leaving the other columns unexpanded):
    onehotEncr = OneHotEncoder(categorical_features=[0, 1, 2, 3, 4, 12, 13])

    X = onehotEncr.fit_transform(X).toarray()

    # Dependent variable:
    labEncr_Y = LabelEncoder()
    Y = labEncr_Y.fit_transform(Y)

    features = X
    targets = Y.reshape(-1, 1)

    # Build the training and test samples:
    from sklearn.model_selection import train_test_split
    X_train,X_test,Y_train,Y_test = train_test_split(X,Y, test_size= 0.2, random_state = 0)

    if __name__ == '__main__':
        # dataset

        weights, bias = init_variables()

        print(features[0])
        print(features[1])
        train(features,targets,weights,bias)

Output:

[0.0000e+00 0.0000e+00 1.0000e+00 0.0000e+00 5.1000e+01 3.1000e+01
 0.0000e+00 1.8900e+02 2.2100e+02 2.1000e+03 7.4100e+01 7.0000e+00
 1.3470e+01 1.0000e+00 4.8000e+04 1.7024e+05 1.0000e+00 1.8000e+01]
[0.0000e+00 0.0000e+00 0.0000e+00 1.0000e+00 5.1000e+01 6.0000e+01
 0.0000e+00 1.6700e+02 6.8000e+01 2.1000e+03 7.3600e+01 7.0000e+00
 1.3470e+01 2.0000e+00 4.8000e+04 1.7436e+05 1.0000e+00 1.8000e+01]
Accuracy =  0.20898258478460127
Cost = 0.7910174152153987
Accuracy =  0.20898258478460127
Accuracy =  0.20898258478460127
Accuracy =  0.20898258478460127
Accuracy =  0.20898258478460127
Accuracy =  0.20898258478460127
Accuracy =  0.20898258478460127
Accuracy =  0.20898258478460127
Accuracy =  0.20898258478460127
Accuracy =  0.20898258478460127
Accuracy =  0.20898258478460127
Cost = 0.7910174152153987
Accuracy =  0.20898258478460127
Accuracy =  0.20898258478460127
Accuracy =  0.20898258478460127
Accuracy =  0.20898258478460127
Accuracy =  0.20898258478460127
Accuracy =  0.20898258478460127
Accuracy =  0.20898258478460127
Accuracy =  0.20898258478460127
Accuracy =  0.20898258478460127
Accuracy =  0.20898258478460127
Cost = 0.7910174152153987
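A likely reason the accuracy never moves: the feature rows printed above mix 0/1 dummies with values up to ~1.7e5, so the pre-activation `z = np.dot(features, weights)` is enormous in magnitude, the sigmoid saturates at exactly 0 or 1, and `derivative_activation(z)` underflows to zero, which zeroes both gradients. A minimal sketch of the effect, together with a standardization helper that is my own addition, not part of the original code:

```python
import numpy as np

# One raw feature row from the printed output above: values span 0 to ~1.7e5.
x = np.array([0., 0., 1., 0., 51., 31., 0., 189., 221., 2100., 74.1, 7.,
              13.47, 1., 48000., 170240., 1., 18.])

# With weights drawn from N(0, 1), z = x @ w is typically in the thousands,
# and the sigmoid derivative sigmoid(z) * (1 - sigmoid(z)) underflows to 0:
z = 1000.0
s = 1 / (1 + np.exp(-z))
print(s * (1 - s))  # 0.0 -- the gradient carries no learning signal

def standardize(X):
    """Scale each column to zero mean and unit variance (hypothetical helper)
    so that z stays in the range where the sigmoid still has a gradient."""
    return (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)
```

Calling `train(standardize(features), targets, weights, bias)` instead of passing the raw features should let the cost start decreasing; a smaller initial weight scale has a similar effect.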

Thank you very much to anyone who takes an interest! :)

0 answers:

No answers