This is a beginner question, since I have no experience with NEventStore and am trying it out for the first time.
The point of the question relates to the concept of optimistic concurrency checking as envisioned by Greg Young in this document, with a practical example given here.
So, in my application I have the following interface, which is the event store abstraction I'm going to use in my repository implementation:
public interface IEventStore
{
    void SaveEvents(Guid aggregateId, IEnumerable<Event> events, int expectedVersion);
    List<Event> GetEventsForAggregate(Guid aggregateId);
}

My goal is to provide an implementation of IEventStore by using the NEventStore library. Here is a naive implementation, in which I perform a sort of poor man's optimistic concurrency check:
public class EventStore : IEventStore
{
    private readonly IStoreEvents store;

    public EventStore(IStoreEvents store)
    {
        this.store = store ?? throw new ArgumentNullException(nameof(store));
    }

    public List<Event> GetEventsForAggregate(Guid aggregateId)
    {
        // implementation omitted because I'm only interested in understanding how to implement the save method right now...
        throw new NotImplementedException();
    }

    public void SaveEvents(Guid aggregateId, IEnumerable<Event> events, int expectedVersion)
    {
        using (var stream = this.store.OpenStream(aggregateId, 0, int.MaxValue))
        {
            // here, by following Greg Young's paper, I should perform the optimistic concurrency check...
            int currentAggregateRevision = stream.StreamRevision;
            bool isNewStream = currentAggregateRevision == 0;
            if (!isNewStream && expectedVersion != currentAggregateRevision)
            {
                // DANGER: optimistic concurrency check failed!
                throw new ConcurrencyException("The guy that issued the command did not work on the latest version of the aggregate. We cannot commit these events.");
            }

            foreach (var @event in events)
            {
                stream.Add(new EventMessage { Body = @event });
            }

            stream.CommitChanges(Guid.NewGuid()); // Is there a best practice to generate a commit id? Is it ok to use a new guid?
        }
    }
}
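For context on how the expectedVersion parameter is meant to be used, here is a sketch of a repository sitting on top of IEventStore. The AggregateRoot base class and the Repository<T> class are hypothetical helpers invented for this illustration (only IEventStore and Event come from the code above): the repository remembers the version it loaded and passes it back as expectedVersion on save.

using System;
using System.Collections.Generic;

// Hypothetical aggregate base class, invented for this example.
public abstract class AggregateRoot
{
    private readonly List<Event> uncommittedEvents = new List<Event>();

    public Guid Id { get; protected set; }

    // Revision of the aggregate as loaded from the store; used as expectedVersion on save.
    public int Version { get; private set; }

    public IEnumerable<Event> GetUncommittedEvents() => this.uncommittedEvents;

    public void LoadFromHistory(IEnumerable<Event> history)
    {
        foreach (var @event in history)
        {
            Apply(@event);
            Version++;
        }
    }

    protected abstract void Apply(Event @event);

    protected void RaiseEvent(Event @event)
    {
        Apply(@event);
        this.uncommittedEvents.Add(@event);
    }
}

// Hypothetical repository built on top of IEventStore.
public class Repository<T> where T : AggregateRoot, new()
{
    private readonly IEventStore eventStore;

    public Repository(IEventStore eventStore)
    {
        this.eventStore = eventStore ?? throw new ArgumentNullException(nameof(eventStore));
    }

    public T GetById(Guid aggregateId)
    {
        var aggregate = new T();
        // Rebuild aggregate state by replaying its committed events.
        aggregate.LoadFromHistory(this.eventStore.GetEventsForAggregate(aggregateId));
        return aggregate;
    }

    public void Save(T aggregate)
    {
        // The version captured at load time is what we expect to still be the current
        // stream revision; SaveEvents throws ConcurrencyException otherwise.
        this.eventStore.SaveEvents(aggregate.Id, aggregate.GetUncommittedEvents(), aggregate.Version);
    }
}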
Here is my question: is it possible to avoid manually checking the expected aggregate version before saving the new commit, by relying on a feature of NEventStore instead? I am assuming here that NEventStore is able to check the expected aggregate version at commit time, so that a ConcurrencyException is thrown in case another thread or node has modified the same stream between the call to OpenStream and the call to CommitChanges. Is this assumption correct?
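To make the assumption concrete, here is a minimal sketch of what SaveEvents could shrink to if the assumption holds, i.e. if NEventStore re-validates the stream revision inside CommitChanges and throws its own NEventStore.ConcurrencyException on a conflicting commit. This is an illustration of the question's premise, not verified NEventStore behaviour.

public void SaveEvents(Guid aggregateId, IEnumerable<Event> events, int expectedVersion)
{
    // Sketch only: lean on NEventStore's own commit-time check instead of the manual
    // StreamRevision comparison. Assumes NEventStore.ConcurrencyException is thrown by
    // CommitChanges when the stream was written to after OpenStream.
    using (var stream = this.store.OpenStream(aggregateId, 0, int.MaxValue))
    {
        foreach (var @event in events)
        {
            stream.Add(new EventMessage { Body = @event });
        }

        try
        {
            stream.CommitChanges(Guid.NewGuid());
        }
        catch (NEventStore.ConcurrencyException)
        {
            // Surface the conflict through the application's own exception type,
            // matching the naive implementation above.
            throw new ConcurrencyException("The aggregate was modified concurrently; the command was issued against a stale version.");
        }
    }
}

Note that this only guards against writes that happen between OpenStream and CommitChanges; whether that is equivalent to checking expectedVersion depends on how the aggregate was loaded in the first place.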
UPDATE 3 JUNE 2019

For all the readers interested in this topic: I asked the same question on the NEventStore github repo. If you are interested in the discussion, take a look at the issue I opened.

UPDATE 7 JUNE 2019

Here is the final version of the code, based on the suggestions I got from the github issue: