PHP string comparison when the end of the string can contain a random number

Asked: 2016-06-30 06:09:22

Tags: php

I need to get the PHP below working. Basically, I need to take a string stored in $sshCommandResponseString and check whether it contains one of 4 possible strings:

start: Job is already running: mysql
mysql start/running, process 8019
mysql stop/waiting mysql start/running, process 8348
stop: Unknown instance: mysql start/running, process 8693

The last 3 strings end with a 4-digit number that is always a different value, so I can't do a simple if/else check like the function below, because the string sometimes ends with a 4-digit number!

So what is the best way to check whether my string matches one of these 4 strings?

function didMySqlDbRestart($sshCommandResponseString){

    if( $sshCommandResponseString == 'start: Job is already running: mysql' ){
        return true;
    }else if( $sshCommandResponseString == 'mysql start/running, process 8019' ){
        return true;
    }else if( $sshCommandResponseString == 'mysql stop/waiting mysql start/running, process 8348' ){
        return true;
    }else if( $sshCommandResponseString == 'stop: Unknown instance: mysql start/running, process 8693' ){
        return true;
    }else{
        return false;
    }
}

if(didMySqlDbRestart($sshCommandResponseString)){
    echo 'SUCCESS: MySQL Database Rebooted';
}else{
    echo 'ERROR: MySQL Database Did Not Reboot';
}

Update

Adding to user laser's answer, perhaps something like this would do the trick...

function didMySqlDbRestart($sshCommandResponseString){

    if( preg_match('/^.*\d{4}$/', trim($sshCommandResponseString)) > 0 ){
        return true;
    }else if( $sshCommandResponseString == 'start: Job is already running: mysql' ){
        return true;
    }else{
        return false;
    }
}
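
As a quick sanity check (my own test sketch, not part of the original post), the updated function can be run against the four sample responses from the question; all of them should return true:

$responses = array(
    'start: Job is already running: mysql',
    'mysql start/running, process 8019',
    'mysql stop/waiting mysql start/running, process 8348',
    'stop: Unknown instance: mysql start/running, process 8693',
);

foreach ($responses as $sshCommandResponseString) {
    // The first string passes via the exact comparison, the other three via
    // the trailing-4-digit regex applied after trim().
    var_dump(didMySqlDbRestart($sshCommandResponseString)); // bool(true)
}

Note that the regex branch accepts any trimmed string ending in 4 digits, not only the mysql responses, so a stricter pattern (as in the answers below) may be preferable.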

2 Answers:

Answer 0 (score: 4)

Use strpos to check for the constant part of the string, or use preg_match to test it with a regular expression.
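
For illustration, a minimal sketch of both suggestions (my own code, not the answerer's; it assumes the four responses listed in the question):

function didMySqlDbRestart($sshCommandResponseString){
    $response = trim($sshCommandResponseString);

    // strpos: look for the constant fragment shared by the three responses
    // that end in a process number. The !== false check matters, because
    // the fragment can sit at position 0.
    if( strpos($response, 'mysql start/running, process ') !== false ){
        return true;
    }

    // preg_match alternative: the same constant text followed by digits.
    if( preg_match('/mysql start\/running, process \d+$/', $response) === 1 ){
        return true;
    }

    // The only expected response without a trailing process number.
    return $response === 'start: Job is already running: mysql';
}

Either check on its own is enough here; both are shown only to mirror the two options in the answer.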

Answer 1 (score: 0)

I'm thinking of a regular expression here.
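
For example (my sketch, not the answerer's code), a single anchored pattern can cover all four expected responses, with \d+ standing in for the varying process number:

function didMySqlDbRestart($sshCommandResponseString){
    // One pattern for all four responses; only the process number varies.
    $pattern = '/^(start: Job is already running: mysql'
             . '|(mysql stop\/waiting |stop: Unknown instance: )?mysql start\/running, process \d+)$/';

    return preg_match($pattern, trim($sshCommandResponseString)) === 1;
}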
