How to change a single character after a specified character? (PHP)

Asked: 2018-01-10 03:19:13

Tags: php

I want to capitalize the character after every opening parenthesis, i.e. change strings like "Apple (juice)" to "Apple (Juice)" and "Steve (hobs) Ra" to "Steve (Hobs) Ra".

How can I do this in PHP?

Thanks

3 Answers:

Answer 0 (score: 4)

You can use preg_replace_callback() to match the text inside the parentheses, then ucfirst() to change its first letter to uppercase:

$pattern = '/\((.*?)\)/';   // capture the text between parentheses
$string = '"Apple (juice)" "Steve (hobs) Ra"';

$newstr = preg_replace_callback($pattern, function ($matches) {
    // $matches[1] is the captured text; uppercase its first letter
    return '(' . ucfirst($matches[1]) . ')';
}, $string);

echo $newstr;
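
This prints "Apple (Juice)" "Steve (Hobs) Ra".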

Another approach is to use a while loop with strpos() to find each opening parenthesis, then strtoupper() to change the letter after it to uppercase:

$string = '"Apple (juice)" "Steve (hobs) Ra"';

$pos = 0;
while (($pos = strpos($string, '(', $pos)) !== false) {
    $pos++;                                    // move to the character after '('
    $string[$pos] = strtoupper($string[$pos]); // uppercase it in place
}

echo $string;
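
Note the strict !== false comparison in the loop condition: strpos() returns 0 when the match sits at the start of the string, and a plain truthiness check would end the loop early in that case. The output is the same as above.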

Answer 1 (score: 0)

Use the PHP explode() function, then use ucfirst() to convert the first character of each piece. See the code below:
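
A minimal sketch of this approach, assuming the string is split on '(' and rejoined after capitalizing each piece (variable names are illustrative):

$string = '"Apple (juice)" "Steve (hobs) Ra"';

// Every piece after the first begins with the character
// that followed an opening parenthesis.
$parts = explode('(', $string);
foreach ($parts as $i => $part) {
    if ($i > 0) {
        $parts[$i] = ucfirst($part);
    }
}
$newstr = implode('(', $parts);

echo $newstr; // "Apple (Juice)" "Steve (Hobs) Ra"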


Answer 2 (score: 0)

Here is a simple trick you can try; it might help you.
