How can I use multiple threads to speed up a computation?

Asked: 2011-06-07 19:27:05

Tags: java multithreading performance

I am trying to calculate Pi, but what I am really after is efficiency when using several threads. The algorithm is simple: I generate random points in the unit square and count how many of them fall inside the circle; that fraction approaches π/4, so π ≈ 4 · inCircle / total. (More here: http://math.fullerton.edu/mathews/n2003/montecarlopimod.html) My idea is to split the square into horizontal strips and run a different thread for each strip. But instead of a speed-up, all I get is a slow-down. Any ideas why? Here is the code:

public class TaskManager {

    public static void main(String[] args) {

        int threadsCount = 3;
        int size = 10000000;
        boolean isQuiet = false;

        PiCalculator pi = new PiCalculator(size);
        Thread tr[] = new Thread[threadsCount];

        long time = -System.currentTimeMillis();

        int i;
        double s = 1.0 / threadsCount;  // width of each strip
        int p = size / threadsCount;    // points per thread

        for (i = 0; i < threadsCount; i++) {
            PiRunnable r = new PiRunnable(pi, s * i, s * (1.0 + i), p, isQuiet);
            tr[i] = new Thread(r);
        }

        for (i = 0; i < threadsCount; i++) {
            tr[i].start();
        }

        for (i = 0; i < threadsCount; i++) {
            try {
                tr[i].join();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }

        double myPi = 4.0 * pi.getPointsInCircle() / pi.getPointsInSquare();
        System.out.println(myPi + " time = " + (System.currentTimeMillis() + time));
    }
}

public class PiRunnable implements Runnable {

    PiCalculator pi;
    private double minX;
    private double maxX;
    private int pointsToSpread;

    public PiRunnable(PiCalculator pi, double minX, double maxX, int pointsToSpread, boolean isQuiet) {
        super();
        this.pi = pi;
        this.minX = minX;
        this.maxX = maxX;
        this.pointsToSpread = pointsToSpread;
    }

    @Override
    public void run() {
        int n = countPointsInAreaInCircle(minX, maxX, pointsToSpread);
        pi.addToPointsInCircle(n);
    }

    // Spreads pointsCount random points over the strip [minX, maxX) x [0, 1)
    // and returns how many of them fall inside the unit circle.
    public int countPointsInAreaInCircle(double minX, double maxX, int pointsCount) {
        double x;
        double y;

        int inCircle = 0;

        for (int i = 0; i < pointsCount; i++) {
            x = Math.random() * (maxX - minX) + minX;
            y = Math.random();

            if (x * x + y * y <= 1) {
                inCircle++;
            }
        }

        return inCircle;
    }
}


public class PiCalculator {

    private int pointsInSquare;
    private int pointsInCircle;

    public PiCalculator(int pointsInSquare) {
        super();
        this.pointsInSquare = pointsInSquare;
    }

    public synchronized void addToPointsInCircle(int pointsCount) {
        this.pointsInCircle += pointsCount;
    }

    public synchronized int getPointsInCircle() {
        return this.pointsInCircle;
    }

    public synchronized void setPointsInSquare(int pointsInSquare) {
        this.pointsInSquare = pointsInSquare;
    }

    public synchronized int getPointsInSquare() {
        return this.pointsInSquare;
    }
}

Some results:
- 3 threads: "3.1424696 time = 2803"
- 1 thread: "3.1416192 time = 2337"

2 Answers:

Answer 0 (score: 10)

Your threads are most likely fighting over, and waiting on, the synchronized Math.random(); you should create one java.util.Random instance per thread. Also keep in mind that a multithreaded speed-up will only happen if you actually have more than one core/CPU.

From the javadoc of Math.random():

    This method is properly synchronized to allow correct use by more than one
    thread. However, if many threads need to generate pseudorandom numbers at a
    great rate, it may reduce contention for each thread to have its own
    pseudorandom-number generator.
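
A minimal sketch of what this answer suggests, applied to the PiRunnable from the question (the rnd field name is an assumption, not from the original code): each Runnable keeps its own java.util.Random and only touches the shared PiCalculator once, at the end.

import java.util.Random;

public class PiRunnable implements Runnable {

    private final PiCalculator pi;
    private final double minX;
    private final double maxX;
    private final int pointsToSpread;
    private final Random rnd = new Random();  // per-thread generator, no shared lock

    // isQuiet is kept only so the constructor signature matches the original call sites
    public PiRunnable(PiCalculator pi, double minX, double maxX, int pointsToSpread, boolean isQuiet) {
        this.pi = pi;
        this.minX = minX;
        this.maxX = maxX;
        this.pointsToSpread = pointsToSpread;
    }

    @Override
    public void run() {
        int inCircle = 0;
        for (int i = 0; i < pointsToSpread; i++) {
            double x = rnd.nextDouble() * (maxX - minX) + minX;
            double y = rnd.nextDouble();
            if (x * x + y * y <= 1) {
                inCircle++;
            }
        }
        pi.addToPointsInCircle(inCircle);  // single synchronized update of the shared total
    }
}

With this change the only remaining contention is the one addToPointsInCircle call per thread.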

Answer 1 (score: 1)

Here is an alternative main method that uses the java.util.concurrent package instead of managing the threads manually and waiting for them to finish.

public static void main(final String[] args) throws InterruptedException
{

    final int threadsCount = Runtime.getRuntime().availableProcessors();
    final int size = 10000000;
    boolean isQuiet = false;

    final PiCalculator pi = new PiCalculator(size);
    final ExecutorService es = Executors.newFixedThreadPool(threadsCount);

    long time = -System.currentTimeMillis();

    int i;
    double s = 1.0 / threadsCount;
    int p = size / threadsCount;

    for (i = 0; i < threadsCount; i++)
    {
        es.submit(new PiRunnable(pi, s * i, s * (1.0 + i), p, isQuiet));
    }

    es.shutdown();

    while (!es.isTerminated()) { /* do nothing waiting for threads to complete */ }

    double myPi = 4.0 * pi.getPointsInCircle() / pi.getPointsInSquare();
    System.out.println(myPi + " time = " + (System.currentTimeMillis() + time));
}

I also changed Math.random() so that each Runnable uses its own local instance of Random.

final private Random rnd;

...    

x = this.rnd.nextDouble() * (maxX - minX) + minX;
y = this.rnd.nextDouble();
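
The snippet does not show where rnd is created; presumably it is initialized once per Runnable, for example in the constructor:

this.rnd = new Random();  // assumption: one Random created per Runnable instance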

Here is the new output I get...

3.1419284 time = 235

I think you could shave off some more time by using Futures, so you would not have to synchronize so much on the PiCalculator.
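
A rough sketch of that idea, assuming each task samples the whole unit square with its own Random and returns its count through a Future, so no shared synchronized counter is needed (class and variable names here are illustrative, not from the answer):

import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PiWithFutures {

    public static void main(String[] args) throws InterruptedException, ExecutionException {
        final int threadsCount = Runtime.getRuntime().availableProcessors();
        final int size = 10000000;
        final int pointsPerTask = size / threadsCount;

        ExecutorService es = Executors.newFixedThreadPool(threadsCount);
        List<Future<Integer>> results = new ArrayList<Future<Integer>>();

        long start = System.currentTimeMillis();

        for (int i = 0; i < threadsCount; i++) {
            results.add(es.submit(new Callable<Integer>() {
                @Override
                public Integer call() {
                    Random rnd = new Random();  // one generator per task
                    int inCircle = 0;
                    for (int j = 0; j < pointsPerTask; j++) {
                        double x = rnd.nextDouble();
                        double y = rnd.nextDouble();
                        if (x * x + y * y <= 1) {
                            inCircle++;
                        }
                    }
                    return inCircle;
                }
            }));
        }

        int totalInCircle = 0;
        for (Future<Integer> f : results) {
            totalInCircle += f.get();  // blocks until that task has finished
        }
        es.shutdown();

        double myPi = 4.0 * totalInCircle / (pointsPerTask * threadsCount);
        System.out.println(myPi + " time = " + (System.currentTimeMillis() - start));
    }
}

Summing the Future results in the main thread replaces the synchronized PiCalculator entirely; splitting the square into strips is not needed here because the tasks are independent either way.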