I am trying to predict the following:

list( [ close price (current day) - open price (current day) ] )

using the following as input:

list( [ open price (current day) - close price (yesterday) ] )
However, my test_prediction ends up being a prediction of the wrong thing. The predictions from both the sklearn and the statsmodels linear regression models show a correlation of roughly 100% with the input data (test_data), whereas the prediction results should instead correlate with test_result.

What am I doing wrong (or missing) here, and how can I fix it?

The code below generates 4 plots showing the correlations between the different lists.
###### Working usable example and code below ######
import numpy as np
from plotly.offline import plot
import plotly.graph_objs as go
from sklearn import linear_model
import statsmodels.api as sm
def xy_corr( x, y, fname ):
    # scatter-plot two series against each other and save the figure as HTML
    trace1 = go.Scatter( x          = x,
                         y          = y,
                         mode       = 'markers',
                         marker     = dict( size  = 6,
                                            color = 'black'
                                            ),
                         showlegend = False
                         )
    layout = go.Layout( title = fname )
    fig    = go.Figure( data   = [trace1],
                        layout = layout
                        )
    plot( fig, filename = fname + '.html' )
open_p = [23215, 23659, 23770, 23659, 23659, 23993, 23987, 23935, 24380, 24271, 24314, 24018, 23928, 23240, 24193, 23708, 23525, 23640, 23494, 23333, 23451, 23395, 23395, 23925, 23936, 24036, 24008, 24248, 24249, 24599, 24683, 24708, 24510, 24483, 24570, 24946, 25008, 24880, 24478, 24421, 24630, 24540, 24823, 25090, 24610, 24866, 24578, 24686, 24465, 24225, 24526, 24645, 24780, 24538, 24895, 24921, 24743, 25163, 25163, 25316, 25320, 25158, 25375, 25430, 25466, 25231, 25103, 25138, 25138, 25496, 25502, 25610, 25625, 25810, 25789, 25533, 25785, 25698, 25373, 25558, 25594, 25026, 24630, 24509, 24535, 24205, 24465, 23847, 24165, 23840, 24216, 24355, 24158, 23203, 23285, 23423, 23786, 23729, 23944, 23637]
close_p = [23656, 23758, 23663, 23659, 23989, 23978, 24142, 24152, 24279, 24271, 24393, 23942, 23640, 24102, 23710, 23708, 23705, 23693, 23561, 23441, 23395, 23395, 23990, 23900, 24158, 24188, 24241, 24248, 24699, 24678, 24715, 24523, 24486, 24483, 24947, 24904, 24923, 24478, 24434, 24421, 24409, 24705, 25047, 24642, 24875, 24866, 24698, 24463, 24262, 24396, 24633, 24645, 24528, 24895, 24895, 24839, 25178, 25163, 25315, 25323, 25149, 25387, 25375, 25469, 25231, 25073, 25138, 25138, 25448, 25611, 25705, 25623, 25813, 25798, 25560, 25518, 25743, 25305, 25654, 25579, 25315, 24783, 24508, 24532, 24208, 24176, 24047, 24148, 24165, 24159, 24286, 24249, 23635, 23128, 23438, 23869, 23420, 23756, 23705, 24018]
# feature: overnight gap = today's open - yesterday's close (as a column vector)
open_prev_close_diff    = np.array( [ open_p[i] - close_p[i-1] for i in range( 1, len( open_p ) ) ] )[np.newaxis].T
# target:  candle body   = today's close - today's open
open_current_close_diff = np.array( [ close_p[i] - open_p[i] for i in range( 1, len( open_p ) ) ] )

train_data   = open_prev_close_diff[ :80]
test_data    = open_prev_close_diff[80: ]
train_result = open_current_close_diff[ :80]
test_result  = open_current_close_diff[80: ]
# sklearn linear regression
regressor = linear_model.LinearRegression()
regressor.fit( train_data, train_result )
test_prediction = np.array( [ int( i ) for i in regressor.predict( test_data ) ] )

xy_corr( [ int( i ) for i in test_result ], test_prediction, 'known_result_and_prediction_result_sklearn' )
xy_corr( [ int( i ) for i in test_data ],   test_prediction, 'input_data_and_prediction_result_sklearn' )

# statsmodels OLS
olsmod = sm.OLS( train_result, train_data )
olsres = olsmod.fit()
test_prediction = np.array( [ int( i ) for i in olsres.predict( test_data ) ] )

xy_corr( [ int( i ) for i in test_result ], test_prediction, 'known_result_and_prediction_result_smOLS' )
xy_corr( [ int( i ) for i in test_data ],   test_prediction, 'input_data_and_prediction_result_smOLS' )
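For a quick numeric companion to the four plots, the same correlations can also be printed directly; a small sketch, reusing the variables above:

# Pearson correlations matching the last two plots ( statsmodels branch )
print( np.corrcoef( test_data.ravel(), test_prediction )[0, 1] )   # ~ +/-1.0 - by construction
print( np.corrcoef( test_result,       test_prediction )[0, 1] )   # ~  0.0  - the one that matters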
Answer 0 (score: 1)
Hoping no one will consider this impolite and/or harmful, let me quote a lovely point from the very basics of {{3}}, one that many contemporary quantitative-finance modellers ignore or abstract away from:
For any two correlated events, A and B, the following relationships are possible:

- A causes B ( direct causation );
- B causes A ( reverse causation );
- A and B are both consequences of a common cause, but do not cause each other;
- A causes B and B causes A ( bidirectional or cyclic causation );
- A causes C, which causes B ( indirect causation );
- there is no connection at all between A and B; the correlation is a coincidence ( Correlation does not imply Causation ).

Thus no conclusion can be made about the existence or the direction of a cause-and-effect relationship merely from the fact that A and B are correlated.
LinearRegression() produces nothing but a straight line - that is to say, every single one of its predictions does lie on the model's line. For such a predictor, half of the plots are therefore bound to come out exactly as they do - Q.E.D.

The other half shows nothing else than the over-simplification of using a linear model in spite of the observable reality.
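A minimal sketch of why this is inevitable for a single-feature linear model ( the synthetic data below is an assumption for demonstration, not from the question ): predictions are an affine map a * x + b of the input, and Pearson correlation is invariant under affine maps, so corr( input, prediction ) comes out as +/-1 by construction:

import numpy as np
from sklearn import linear_model

rng  = np.random.default_rng( 0 )
x    = rng.normal( size = ( 200, 1 ) )               # any single feature
y    = rng.normal( size =   200 )                    # even a pure-noise target
pred = linear_model.LinearRegression().fit( x, y ).predict( x )

print( np.corrcoef( x.ravel(), pred )[0, 1] )        # +/-1.0 : pred is just a * x + b
print( np.corrcoef( y,         pred )[0, 1] )        # ~ 0.0  : zero predictive power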
Sure, the actually experienced behaviour is NOT linear, but do not blame the predictor for "not fitting well" - it was tasked to build an MSE-minimisation-driven linear model, and it found no better linear fit on the training part of the DataSET. If it were trained on a synthetic y = x^2 DataSET ( where one has the a priori knowledge of the parabolic shape ), it would again yield just a linear model, the one with the least achievable MSE score on the training part of the DataSET, and we all know beforehand that any straight line will therefore produce completely flawed predictions OoS - not because the predictor failed to work properly, but because of the principally nonsensical attempt to use a linear-model predictor in a knowingly non-linear ( here intentionally quadratic ) environment, which the ( known ) reality does not follow.
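A tiny illustration of that paragraph ( the synthetic parabola is an assumption for demonstration only ): the best MSE-minimising straight line through a symmetric y = x^2 cloud has a slope of ~0 and explains essentially nothing:

import numpy as np
from sklearn import linear_model

x   = np.linspace( -3, 3, 100 )[:, np.newaxis]
y   = x.ravel()**2                                   # knowingly parabolic "reality"
lin = linear_model.LinearRegression().fit( x, y )

print( lin.coef_, lin.intercept_ )                   # slope ~ 0, intercept ~ mean( y )
print( lin.score( x, y ) )                           # R^2 ~ 0 : the best line is still useless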
As an elementary quantitative view, much simpler than a rigorous Kolmogorov-Smirnov test of the unexpressed hypothesis, check the share of negative overnight gaps ( Open[i] - Close[i-1] ), about [ 75% ] in this rather shallow DataSET of just 100 samples, against the share of negative candle bodies ( Close[i] - Open[i] ), only about [ 55% ] in the same rather shallow DataSET of 100 samples.
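Both percentages can be re-checked from the question's own price lists; a minimal sketch, reusing open_p and close_p from the code above:

import numpy as np

gaps   = np.array( [ open_p[i]  - close_p[i-1] for i in range( 1, len( open_p ) ) ] )
bodies = np.array( [ close_p[i] - open_p[i]    for i in range( 1, len( open_p ) ) ] )

print( ( gaps   < 0 ).mean() )   # share of negative overnight gaps ( ~0.75 here )
print( ( bodies < 0 ).mean() )   # share of negative candle bodies  ( ~0.55 here )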
In any case, even a better-engineered prediction model will generalise poorly if trained on a sample of just ~80 days, so one ought to care not only about better generalisation abilities, but also about avoiding seasonal biases and the like.
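As a side note on validation ( my suggestion, not something from the original post ): sklearn's TimeSeriesSplit gives walk-forward train/test splits, so the model is at least never trained on data from its own future; a minimal sketch, reusing the variables from the question's code:

from sklearn import linear_model
from sklearn.model_selection import TimeSeriesSplit

# walk-forward evaluation: each fold trains strictly on the past
for train_idx, test_idx in TimeSeriesSplit( n_splits = 5 ).split( open_prev_close_diff ):
    model = linear_model.LinearRegression()
    model.fit( open_prev_close_diff[train_idx], open_current_close_diff[train_idx] )
    print( model.score( open_prev_close_diff[test_idx],        # R^2 per fold,
                        open_current_close_diff[test_idx] ) )  # expect ~ 0 or worse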
To put ML in this domain into perspective: my best-performing AI/ML models have about 0k3 features ( many of them highly non-linear, synthetic features ) and were trained in depth across 30k+ DataSETs, with careful attention paid to the risks of over-fitting and to searching the vast StateSPACE of the learner engines' hyperparameters.
|
|>>> QuantFX.get_LDF_GDF_fromGivenRANGE( [ open_PRICE[i] - close_PRICE[i-1] for i in range( 1, len( close_PRICE ) ) ], nBINs_ = 31, aPrefixTEXT_ = "" )
0: ~ -432.00 LDF = 1 |____ 1.0 % _||____ 1 %
1: ~ -408.10 LDF = 1 |____ 1.0 % _||____ 2 %
2: ~ -384.19 LDF = 1 |____ 1.0 % _||____ 3 %
3: ~ -360.29 LDF = 0 |____ 0.0 % _||____ 3 %
4: ~ -336.39 LDF = 1 |____ 1.0 % _||____ 4 %
5: ~ -312.48 LDF = 1 |____ 1.0 % _||____ 5 %
6: ~ -288.58 LDF = 1 |____ 1.0 % _||____ 6 %
7: ~ -264.68 LDF = 0 |____ 0.0 % _||____ 6 %
8: ~ -240.77 LDF = 1 |____ 1.0 % _||____ 7 %
9: ~ -216.87 LDF = 3 |____ 3.0 % _||___ 10 %
10: ~ -192.97 LDF = 2 |____ 2.0 % _||___ 12 %
11: ~ -169.06 LDF = 1 |____ 1.0 % _||___ 13 %
12: ~ -145.16 LDF = 1 |____ 1.0 % _||___ 14 %
13: ~ -121.26 LDF = 2 |____ 2.0 % _||___ 16 %
14: ~ -97.35 LDF = 5 |____ 5.1 % _||___ 21 %
15: ~ -73.45 LDF = 3 |____ 3.0 % _||___ 24 %
16: ~ -49.55 LDF = 5 |____ 5.1 % _||___ 29 %
17: ~ -25.65 LDF = 18 |___ 18.2 % _||___ 47 %
18: ~ -1.74 LDF = 28 |___ 28.3 % _||___ 75 %
19: ~ 22.16 LDF = 5 |____ 5.1 % _||___ 80 %
20: ~ 46.06 LDF = 5 |____ 5.1 % _||___ 85 %
21: ~ 69.97 LDF = 2 |____ 2.0 % _||___ 87 %
22: ~ 93.87 LDF = 1 |____ 1.0 % _||___ 88 %
23: ~ 117.77 LDF = 4 |____ 4.0 % _||___ 92 %
24: ~ 141.68 LDF = 1 |____ 1.0 % _||___ 93 %
25: ~ 165.58 LDF = 1 |____ 1.0 % _||___ 94 %
26: ~ 189.48 LDF = 1 |____ 1.0 % _||___ 95 %
27: ~ 213.39 LDF = 1 |____ 1.0 % _||___ 96 %
28: ~ 237.29 LDF = 0 |____ 0.0 % _||___ 96 %
29: ~ 261.19 LDF = 1 |____ 1.0 % _||___ 97 %
30: ~ 285.10 LDF = 2 |____ 2.0 % _||__ 100 %
+0:00:06.234000
|
|
|>>> QuantFX.get_LDF_GDF_fromGivenRANGE( [ close_PRICE[i] - open_PRICE[i] for i in range( 1, len( close_PRICE ) ) ], nBINs_ = 31, aPrefixTEXT_ = "" )
0: ~ -523.00 LDF = 2 |____ 2.0 % _||____ 2 %
1: ~ -478.32 LDF = 1 |____ 1.0 % _||____ 3 %
2: ~ -433.65 LDF = 3 |____ 3.0 % _||____ 6 %
3: ~ -388.97 LDF = 1 |____ 1.0 % _||____ 7 %
4: ~ -344.29 LDF = 1 |____ 1.0 % _||____ 8 %
5: ~ -299.61 LDF = 2 |____ 2.0 % _||___ 10 %
6: ~ -254.94 LDF = 7 |____ 7.1 % _||___ 17 %
7: ~ -210.26 LDF = 3 |____ 3.0 % _||___ 20 %
8: ~ -165.58 LDF = 2 |____ 2.0 % _||___ 22 %
9: ~ -120.90 LDF = 5 |____ 5.1 % _||___ 27 %
10: ~ -76.23 LDF = 6 |____ 6.1 % _||___ 33 %
11: ~ -31.55 LDF = 22 |___ 22.2 % _||___ 55 %
12: ~ 13.13 LDF = 7 |____ 7.1 % _||___ 62 %
13: ~ 57.81 LDF = 5 |____ 5.1 % _||___ 67 %
14: ~ 102.48 LDF = 4 |____ 4.0 % _||___ 71 %
15: ~ 147.16 LDF = 8 |____ 8.1 % _||___ 79 %
16: ~ 191.84 LDF = 6 |____ 6.1 % _||___ 85 %
17: ~ 236.52 LDF = 2 |____ 2.0 % _||___ 87 %
18: ~ 281.19 LDF = 3 |____ 3.0 % _||___ 90 %
19: ~ 325.87 LDF = 2 |____ 2.0 % _||___ 92 %
20: ~ 370.55 LDF = 2 |____ 2.0 % _||___ 94 %
21: ~ 415.23 LDF = 3 |____ 3.0 % _||___ 97 %
22: ~ 459.90 LDF = 0 |____ 0.0 % _||___ 97 %
23: ~ 504.58 LDF = 0 |____ 0.0 % _||___ 97 %
24: ~ 549.26 LDF = 0 |____ 0.0 % _||___ 97 %
25: ~ 593.94 LDF = 1 |____ 1.0 % _||___ 98 %
26: ~ 638.61 LDF = 0 |____ 0.0 % _||___ 98 %
27: ~ 683.29 LDF = 0 |____ 0.0 % _||___ 98 %
28: ~ 727.97 LDF = 0 |____ 0.0 % _||___ 98 %
29: ~ 772.65 LDF = 0 |____ 0.0 % _||___ 98 %
30: ~ 817.32 LDF = 1 |____ 1.0 % _||__ 100 %
+0:01:13.172000
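QuantFX.get_LDF_GDF_fromGivenRANGE() is a function from a private toolkit, not a public package; a rough functional equivalent of the LDF / GDF view above ( the text formatting is only approximate ) can be sketched with numpy.histogram:

import numpy as np

def ldf_gdf( values, n_bins = 31 ):
    # per-bin counts ( LDF ) plus a running cumulative percentage ( GDF )
    counts, edges = np.histogram( values, bins = n_bins )
    ldf = 100. * counts / counts.sum()
    gdf = np.cumsum( ldf )
    for i in range( n_bins ):
        print( "{0:2d}: ~ {1:8.2f} LDF = {2:3d} | {3:5.1f} % || {4:4.0f} %".format(
                i, edges[i], counts[i], ldf[i], gdf[i] ) )

ldf_gdf( [ open_p[i] - close_p[i-1] for i in range( 1, len( open_p ) ) ] )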