Random sampling from a DataFrame in PySpark

Date: 2019-09-26 05:33:08

Tags: pyspark pyspark-sql pyspark-dataframes

My dataset has 73 billion rows. I want to apply a classification algorithm to it, and I need a sample of the original data so I can test the model.

I want to do a train-test split.

The DataFrame looks like this:

id    age   gender    salary    bonus  area   churn
1      38    m        37654      765    bb     1
2      48    f        3654       365    bb     0
3      33    f        55443      87     uu     0
4      27    m        26354      875    jh     0
5      58    m        87643      354    vb     1

How can I do random sampling with PySpark so that the class ratio of my dependent variable (churn) stays the same? Any suggestions?

2 Answers:

Answer 0 (score: 0)

To look at a sample of the original data, we can use sample in Spark:

df.sample(fraction).show()

The fraction should be in the range [0.0, 1.0].

Example:

df.sample(0.2).show(10) -> run this command repeatedly and it will show a different sample of the original data each time.
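
If you need the sample to be reproducible between runs, sample also accepts a seed. A minimal sketch (the fraction and seed values here are only illustrative, not from the answer):

# the same seed returns the same sample on every run
df.sample(fraction=0.2, seed=42).show(10)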

Answer 1 (score: 0)

You will find examples in the linked documentation.

Spark supports Stratified Sampling:

# an RDD of any key value pairs
data = sc.parallelize([(1, 'a'), (1, 'b'), (2, 'c'), (2, 'd'), (2, 'e'), (3, 'f')])

# specify the exact fraction desired from each key as a dictionary
fractions = {1: 0.1, 2: 0.6, 3: 0.3}

approxSample = data.sampleByKey(False, fractions)
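
Since the question works with a DataFrame rather than an RDD, a minimal sketch using the DataFrame method sampleBy may fit better here. It samples the same fraction from each class of the questioner's churn column, so the churn ratio is roughly preserved (the fraction and seed values are only illustrative):

# sampling an identical fraction from each churn class keeps the class ratio stable
fractions = {0: 0.001, 1: 0.001}
sampled = df.sampleBy("churn", fractions=fractions, seed=42)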

You can also use TrainValidationSplit.

For example:

from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.ml.regression import LinearRegression
from pyspark.ml.tuning import ParamGridBuilder, TrainValidationSplit

# Prepare training and test data.
data = spark.read.format("libsvm")\
    .load("data/mllib/sample_linear_regression_data.txt")
train, test = data.randomSplit([0.9, 0.1], seed=12345)

lr = LinearRegression(maxIter=10)

# We use a ParamGridBuilder to construct a grid of parameters to search over.
# TrainValidationSplit will try all combinations of values and determine best model using
# the evaluator.
paramGrid = ParamGridBuilder()\
    .addGrid(lr.regParam, [0.1, 0.01]) \
    .addGrid(lr.fitIntercept, [False, True])\
    .addGrid(lr.elasticNetParam, [0.0, 0.5, 1.0])\
    .build()

# In this case the estimator is simply the linear regression.
# A TrainValidationSplit requires an Estimator, a set of Estimator ParamMaps, and an Evaluator.
tvs = TrainValidationSplit(estimator=lr,
                           estimatorParamMaps=paramGrid,
                           evaluator=RegressionEvaluator(),
                           # 80% of the data will be used for training, 20% for validation.
                           trainRatio=0.8)

# Run TrainValidationSplit, and choose the best set of parameters.
model = tvs.fit(train)

# Make predictions on test data. model is the model with combination of parameters
# that performed best.
model.transform(test)\
    .select("features", "label", "prediction")\
    .show()
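
For a train-test split that keeps the churn ratio stable in both parts, one possible sketch is to draw the training rows per class with sampleBy and treat the remaining rows as the test set. This assumes the questioner's df with its churn column and Spark 2.4+ for exceptAll; the fractions and seed are only illustrative:

# take roughly 80% of each churn class for training
train = df.sampleBy("churn", fractions={0: 0.8, 1: 0.8}, seed=12345)
# everything not drawn into train becomes the test set
test = df.exceptAll(train)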