How can I increase the number of concurrent asynchronous calls?

Asked: 2015-08-21 19:49:58

Tags: javascript c# node.js asynchronous

I'm trying to increase the number of concurrent asynchronous calls I can make in C#. Specifically, I'm trying to match the performance I'm seeing in Node.js.

Below are two sample programs, one in C# and one in JavaScript. Both read a random number of bytes from a 10 MB file of random numbers into a buffer, and do so asynchronously 2,000 times. The JavaScript program manages about 60,000 iterations per second on my machine, while the C# program manages only about 3,000. As far as I can tell they are doing the same thing, but there must be some difference I'm missing.

I've run similar tests with different bodies, including sleeping for a short time and performing inserts against a Cassandra cluster with the DataStax Cassandra driver. The latter is the one I really want to improve, since I get only about 80 inserts per second in C# but around 5,000 in Node.
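The other test bodies looked roughly like the sketch below (not the exact code; the table name is a placeholder, and the Cassandra part assumes the DataStax driver's Prepare/Bind/ExecuteAsync methods):

// Rough sketches only. Assumes "using System.Threading.Tasks;" and
// "using Cassandra;" (DataStax C# driver); the table name is made up.

// "Sleep a short time" variant: same harness as above, body swapped for a delay.
static async Task SleepOnce()
{
    await Task.Delay(10);
}

// Cassandra insert variant: a statement prepared once up front, bound per call.
static async Task InsertOnce(ISession session, PreparedStatement insert, int value)
{
    // insert = session.Prepare("INSERT INTO demo.numbers (id, value) VALUES (uuid(), ?)");
    await session.ExecuteAsync(insert.Bind(value));
}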

Can someone help explain this difference?

// C#

using System;
using System.Diagnostics;
using System.IO;
using System.Linq;
using System.Threading.Tasks;

class Program
{
    static Random r = new Random();

    static void Main(string[] args)
    {
        int iterations = 2000;

        Stopwatch s = new Stopwatch();
        s.Start();

        // Start all 2000 reads up front, then wait for every task to finish.
        var tasks = Enumerable.Range(1, iterations).Select(i => ReadFile()).ToArray();
        Task.WaitAll(tasks);

        s.Stop();

        Console.WriteLine("Total elapsed milliseconds was {0}.", s.ElapsedMilliseconds);
        Console.WriteLine("Total iterations were {0}.", iterations);
        Console.WriteLine("Total iterations per second was {0}.", iterations / s.Elapsed.TotalSeconds);
    }

    static async Task ReadFile()
    {
        string path = @"C:\Temp\random.txt";
        int readSize = r.Next(512, 10 * 1024);

        using (StreamReader reader = new StreamReader(path))
        {
            await reader.ReadAsync(new char[readSize], 0, readSize);
        }
    }
}
// JavaScript

var fs = require('fs');
var now = require('performance-now');

var iterations = 2000;
var startTime = now();
var runningIterations = 0;

for (var i = 1; i <= iterations; i++) {
    readFile(function () {
        runningIterations++;

        if (iterations == runningIterations) {
            var totalTime = (now() - startTime);
            console.log("Total iterations: " + iterations);
            console.log("Total time: " + totalTime);
            console.log("Iterations per second: " + (iterations / (totalTime / 1000)));
        }
    });
}

function readFile(callback) {
    var path = "/Temp/random.txt";

    fs.open(path, 'r', function (err, fd) {
        var readSize = Math.floor(Math.random() * 10 * 1024) + 512;
        var buffer = new Buffer(readSize);
        fs.read(fd, buffer, 0, readSize, 0, function (err, bytesRead, buffer) {
            // Close the descriptor so we don't leak fds across 2000 iterations.
            fs.close(fd, function () {});
            callback();
        });
    });
}

1 Answer:

Answer 0 (score: 0)

Given the following, I'm not sure I'd agree that C# is slower:

Running your Node script, I get:

Total iterations: 2000
Total time: 1199.8234639999998
Iterations per second: 1666.9118916314178

Running your C# program as-is, I get:

Total elapsed milliseconds was 1041.
Total iterations were 2000.
Total iterations per second was 1920.61576477911.

If I refactor it to use Parallel.For:

static Random r = new Random();

void Main()
{
    int iterations = 2000;

    Stopwatch s = new Stopwatch();
    s.Start();

    Parallel.For(0, iterations, ReadFile);

    s.Stop();

    Console.WriteLine("Total elapsed milliseconds was {0}.", s.ElapsedMilliseconds);
    Console.WriteLine("Total iterations were {0}.", iterations);
    Console.WriteLine("Total iterations per second was {0}.", iterations / s.Elapsed.TotalSeconds);
}

// Define other methods and classes here
static async void ReadFile(int x)
{
    string path = @"C:\Temp\random.txt";
    int readSize = r.Next(512, 10 * 1024);
    using (StreamReader reader = new StreamReader(path))
    {
        await reader.ReadAsync(new char[readSize], 0, readSize);
    }
}

I get a fairly substantial performance improvement:

Total elapsed milliseconds was 389.
Total iterations were 2000.
Total iterations per second was 5134.14889288497.

I'd also bet that if I cached the buffer there would be another significant gain in all cases. But all I have to go on is the performance on my local machine, and from what I'm seeing, nothing makes me think "oh wow, something is way off here." I also don't know anything about your environment. *shrug*
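To illustrate what I mean by caching the buffer, something along these lines (a sketch only, not code I benchmarked; the cachedBuffer/ReadFileCached names are mine):

// Sketch of the buffer-caching idea (not benchmarked). Because the benchmark
// never inspects the data it reads, one shared pre-allocated buffer can be
// reused, even though concurrent reads will overwrite each other's contents.
static readonly char[] cachedBuffer = new char[10 * 1024];

static async Task ReadFileCached()
{
    string path = @"C:\Temp\random.txt";
    int readSize = r.Next(512, cachedBuffer.Length);

    using (StreamReader reader = new StreamReader(path))
    {
        // Read into the shared buffer instead of allocating a new array per call.
        await reader.ReadAsync(cachedBuffer, 0, readSize);
    }
}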