Train-test split without using scikit-learn

Time: 2017-11-09 12:43:40

Tags: python

I have a house price prediction dataset. I have to split the dataset into train and test sets. I would like to know whether I can do this using numpy or scipy? I can't use scikit-learn at the moment.

5 answers:

Answer 0: (score: 4)

I know your question was only about doing a train_test_split with numpy or scipy, but there is actually a very simple way to do it with pandas:

import pandas as pd

# Shuffle your dataset 
shuffle_df = df.sample(frac=1)

# Define a size for your train set 
train_size = int(0.7 * len(df))

# Split your dataset 
train_set = shuffle_df[:train_size]
test_set = shuffle_df[train_size:]

For anyone who wants a quick and easy solution.
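As a quick sanity check of the approach above, here is a minimal sketch with a hypothetical 10-row DataFrame (the column names are made up; `random_state` is optional and only makes the shuffle reproducible):

```python
import pandas as pd

# Hypothetical toy data standing in for the house-price DataFrame
df = pd.DataFrame({"sqft": range(100, 110), "price": range(10)})

shuffle_df = df.sample(frac=1, random_state=0)  # shuffle the rows
train_size = int(0.7 * len(df))                 # 7 of 10 rows for training

train_set = shuffle_df[:train_size]
test_set = shuffle_df[train_size:]

print(len(train_set), len(test_set))  # 7 3
```

Because `sample` returns a shuffled copy, the original `df` is left untouched.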

Answer 1: (score: 0)

import numpy as np
import pandas as pd

X_data = pd.read_csv('house.csv')
Y_data = X_data["prices"]
X_data.drop(["offers", "brick", "bathrooms", "prices"], 
            axis=1, inplace=True) # important to drop prices as well

# create a random train/test split
indices = np.arange(X_data.shape[0])  # a range() cannot be shuffled in place in Python 3
num_training_instances = int(0.8 * X_data.shape[0])
np.random.shuffle(indices)
train_indices = indices[:num_training_instances]
test_indices = indices[num_training_instances:]

# split the actual data
X_data_train, X_data_test = X_data.iloc[train_indices], X_data.iloc[test_indices]
Y_data_train, Y_data_test = Y_data.iloc[train_indices], Y_data.iloc[test_indices]

This assumes you want a random split. What happens is that we create a list of indices as long as the number of data points you have, i.e. the first axis of X_data (or Y_data). We then put them in random order and simply take the first 80% of those random indices as training data and the rest for testing. [:num_training_instances] just selects the first num_training_instances entries from the list. After that you use the lists of random indices to pull rows out of your data, and your data is split. Remember to drop the prices from X_data, and to set a seed if you want the split to be reproducible (np.random.seed(some_integer) at the start).
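The reproducibility point can be sketched like this (toy indices only; seeding before each shuffle makes the order repeat):

```python
import numpy as np

np.random.seed(42)               # fix the seed before shuffling
indices = np.arange(10)
np.random.shuffle(indices)
first_run = indices.copy()

np.random.seed(42)               # same seed -> identical shuffle order
indices = np.arange(10)
np.random.shuffle(indices)

print(np.array_equal(first_run, indices))  # True
```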

Answer 2: (score: 0)

This code should work (assuming X_data is a pandas DataFrame):

import numpy as np

# Take the first number from X_data.shape (the row count) to size the train set
num_of_rows = int(X_data.shape[0] * 0.8)

shuffled = X_data.sample(frac=1)          # shuffle the rows to make the split random
train_data = shuffled.iloc[:num_of_rows]  # rows for the training data
test_data = shuffled.iloc[num_of_rows:]   # rows for the test data
train_data = train_data.sort_index()      # restore the original row order
test_data = test_data.sort_index()

Hope this helps!

Answer 3: (score: 0)

This solution uses only pandas and numpy:

import numpy as np

def split_train_valid_test(data, valid_ratio, test_ratio):
    shuffled_indices = np.random.permutation(len(data))
    valid_set_size = int(len(data) * valid_ratio)
    test_set_size = int(len(data) * test_ratio)
    valid_indices = shuffled_indices[:valid_set_size]
    test_indices = shuffled_indices[valid_set_size:valid_set_size + test_set_size]
    # the train set must skip BOTH the validation and the test indices
    train_indices = shuffled_indices[valid_set_size + test_set_size:]
    return data.iloc[train_indices], data.iloc[valid_indices], data.iloc[test_indices]

train_set, valid_set, test_set = split_train_valid_test(dataset, valid_ratio=0.2, test_ratio=0.2)
print(len(train_set), len(valid_set), len(test_set))
## out: 16512 4128 4128
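A useful sanity check on a three-way split like this is that the parts are disjoint and together cover every row. A self-contained sketch, assuming a hypothetical 100-row frame and the same 0.2/0.2 ratios:

```python
import numpy as np
import pandas as pd

data = pd.DataFrame({"x": range(100)})
shuffled = np.random.permutation(len(data))
valid_size = int(len(data) * 0.2)
test_size = int(len(data) * 0.2)

valid = data.iloc[shuffled[:valid_size]]
test = data.iloc[shuffled[valid_size:valid_size + test_size]]
train = data.iloc[shuffled[valid_size + test_size:]]  # skip valid AND test indices

# the three parts are disjoint and cover every row exactly once
assert set(train.index) | set(valid.index) | set(test.index) == set(range(100))
print(len(train), len(valid), len(test))  # 60 20 20
```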

Answer 4: (score: 0)

Although this is an old question, this answer might still help.

This is how sklearn implements train_test_split; the method given below takes arguments similar to sklearn's.

import numpy as np
from itertools import chain

def _indexing(x, indices):
    """
    :param x: array from which indices have to be fetched
    :param indices: indices to be fetched
    :return: sub-array from given array and indices
    """
    # np array indexing
    if hasattr(x, 'shape'):
        return x[indices]

    # list indexing
    return [x[idx] for idx in indices]

def train_test_split(*arrays, test_size=0.25, shuffle=True, random_seed=1):
    """
    Splits arrays into train and test data.
    :param arrays: arrays to split into train and test
    :param test_size: size of the test set, in range (0, 1)
    :param shuffle: whether to shuffle the arrays or not
    :param random_seed: random seed value
    :return: 2 * len(arrays) arrays, divided into train and test
    """
    # checks
    assert 0 < test_size < 1
    assert len(arrays) > 0
    length = len(arrays[0])
    for i in arrays:
        assert len(i) == length

    n_test = int(np.ceil(length*test_size))
    n_train = length - n_test

    if shuffle:
        perm = np.random.RandomState(random_seed).permutation(length)
        test_indices = perm[:n_test]
        train_indices = perm[n_test:]
    else:
        train_indices = np.arange(n_train)
        test_indices = np.arange(n_train, length)

    return list(chain.from_iterable((_indexing(x, train_indices), _indexing(x, test_indices)) for x in arrays))

Of course, sklearn's implementation supports stratification, k-fold splitting, splitting pandas Series, etc. This one only handles splitting lists and numpy arrays, which I think will work for your case.
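The core mechanism of the function above can be shown on toy parallel arrays (the variable names here are made up): a seeded permutation is sliced into test and train indices, then each array is indexed the appropriate way for its type.

```python
import numpy as np

X = np.arange(20).reshape(10, 2)  # 10 samples, 2 features
y = list(range(10))               # matching labels as a plain Python list

n_test = int(np.ceil(len(X) * 0.25))               # ceil(2.5) -> 3 test samples
perm = np.random.RandomState(1).permutation(len(X))
test_idx, train_idx = perm[:n_test], perm[n_test:]

X_train, X_test = X[train_idx], X[test_idx]        # numpy arrays: fancy indexing
y_train = [y[i] for i in train_idx]                # plain lists: a comprehension
y_test = [y[i] for i in test_idx]

print(X_train.shape, X_test.shape, len(y_train), len(y_test))  # (7, 2) (3, 2) 7 3
```

Because the same index arrays are reused for every input, rows and labels stay aligned after the split.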