I have some code that iterates over a list that was queried from a database and makes an HTTP request for each element in that list. The list can sometimes be quite large (in the thousands), and I want to make sure I am not hitting a web server with thousands of concurrent HTTP requests.
An abbreviated version of that code currently looks something like this...
function getCounts() {
  return users.map(user => {
    return new Promise(resolve => {
      remoteServer.getCount(user) // makes an HTTP request
        .then(() => {
          /* snip */
          resolve();
        });
    });
  });
}
Promise.all(getCounts()).then(() => { /* snip */});
This code is running on Node 4.3.2. To reiterate: can Promise.all be managed so that only a certain number of promises are in progress at any given time?
Answer 0 (score: 33)
Note that Promise.all() doesn't trigger the promises to start their work; creating the promise itself does.
With that in mind, one solution would be to check, whenever a promise settles, whether a new promise should be started or whether you are already at the limit.
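For illustration, here is a minimal hand-rolled sketch of that idea (the runLimited name and its makeRequest argument are hypothetical; makeRequest would wrap something like the question's remoteServer.getCount):

function runLimited(items, makeRequest, limit) {
  return new Promise(function (resolve, reject) {
    var index = 0;    // next item to start
    var inFlight = 0; // requests currently running
    var results = [];
    if (items.length === 0) return resolve(results);
    function startNext() {
      // top up to the limit while work remains
      while (inFlight < limit && index < items.length) {
        (function (i) {
          inFlight++;
          makeRequest(items[i]).then(function (res) {
            results[i] = res;
            inFlight--;
            if (index === items.length && inFlight === 0) {
              resolve(results); // everything started and everything finished
            } else {
              startNext();
            }
          }, reject);
        })(index++);
      }
    }
    startNext();
  });
}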
That said, there is really no need to reinvent the wheel here. One library that you could use for this purpose is es6-promise-pool. From their example:
// On the Web, leave out this line and use the script tag above instead.
var PromisePool = require('es6-promise-pool')
var promiseProducer = function () {
// Your code goes here.
// If there is work left to be done, return the next work item as a promise.
// Otherwise, return null to indicate that all promises have been created.
// Scroll down for an example.
}
// The number of promises to process simultaneously.
var concurrency = 3
// Create a pool.
var pool = new PromisePool(promiseProducer, concurrency)
// Start the pool.
var poolPromise = pool.start()
// Wait for the pool to settle.
poolPromise.then(function () {
  console.log('All promises fulfilled')
}, function (error) {
  console.log('Some promise rejected: ' + error.message)
})
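For the question's use case, the producer could be as simple as this (a sketch assuming the users array and remoteServer.getCount from the question):

var userIndex = 0
var promiseProducer = function () {
  if (userIndex < users.length) {
    // return the next request as a promise
    return remoteServer.getCount(users[userIndex++])
  }
  // no work left: returning null tells the pool to stop asking
  return null
}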
Answer 1 (score: 21)
p-limit
I have compared promise concurrency limiting with custom scripts, bluebird, es6-promise-pool, and p-limit. I believe p-limit has the simplest, most stripped-down implementation for this need. See their documentation.
Requirements
To be compatible with the async code in the example.
My example
In this example, we need to run a function for every URL in the array (like, say, an API request). Here this is called fetchData(). If we had an array of thousands of items to process, concurrency would definitely be useful to save on CPU and memory resources.
const pLimit = require('p-limit');

// Example concurrency of 3 promises at once
const limit = pLimit(3);

let urls = [
  "http://www.exampleone.com/",
  "http://www.exampletwo.com/",
  "http://www.examplethree.com/",
  "http://www.examplefour.com/",
]

// Create an array of our promises using map (fetchData() returns a promise)
let promises = urls.map(url => {
  // wrap the function we are calling in the limit function we defined above
  return limit(() => fetchData(url));
});

(async () => {
  // Only three promises are run at once (as defined above)
  const result = await Promise.all(promises);
  console.log(result);
})();
The console log result is an array of your resolved promises' response data.
Answer 2 (score: 11)
bluebird's Promise.map can take a concurrency option to control how many promises run in parallel. Sometimes it is easier than .all because you don't need to create the promise array yourself.
const Promise = require('bluebird')

function getCounts() {
  return Promise.map(users, user => {
    return new Promise(resolve => {
      remoteServer.getCount(user) // makes an HTTP request
        .then(() => {
          /* snip */
          resolve();
        });
    });
  }, {concurrency: 10}); // <---- at most 10 http requests at a time
}
Answer 3 (score: 6)
Instead of using promises to limit HTTP requests, use Node's built-in http.Agent.maxSockets. This removes the requirement of using a library or writing your own pooling code, and has the added advantage of giving you more control over what you're limiting.
agent.maxSockets
By default set to Infinity. Determines how many concurrent sockets the agent can have open per origin. Origin is either a 'host:port' or 'host:port:localAddress' combination.
For example:
var http = require('http');
var agent = new http.Agent({maxSockets: 5}); // 5 concurrent connections per origin
var request = http.request({..., agent: agent}, ...);
It may also benefit you to set keepAlive to true if you are making many requests to the same origin (see the docs above for more info).
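For instance, a keep-alive agent capped at 5 sockets might look like this (a sketch; the option names are from the Node http docs, the hostname is a placeholder):

var http = require('http');

// Reuse sockets across requests to the same origin, at most 5 at once.
var keepAliveAgent = new http.Agent({ keepAlive: true, maxSockets: 5 });

http.get({ hostname: 'example.com', path: '/', agent: keepAliveAgent }, function (res) {
  res.resume(); // drain the response so the socket can be reused
});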
Answer 4 (score: 4)
If you know how iterators work and how they are consumed, you don't need any extra library, since it becomes very easy to build your own concurrency yourself. Let me demonstrate:
/* [Symbol.iterator]() is equivalent to .values()
const iterator = [1,2,3][Symbol.iterator]() */
const iterator = [1,2,3].values()

// loop over all items with for..of
for (const x of iterator) {
  console.log('x:', x)

  // notice how this loop continues the same iterator
  // and consumes the rest of the iterator, making the
  // outer loop not log any more x's
  for (const y of iterator) {
    console.log('y:', y)
  }
}
We can use the same iterator and share it across workers.
If you use .entries() instead of .values(), you get an iterator that yields [index, value] pairs, demonstrated below with a concurrency of 2:
const sleep = n => new Promise(rs => setTimeout(rs,n))

async function doWork(iterator) {
  for (let [index, item] of iterator) {
    await sleep(1000)
    console.log(index + ': ' + item)
  }
}

const arr = Array.from('abcdefghij')
const workers = new Array(2).fill(arr.entries()).map(doWork)
// ^--- starts two workers sharing the same iterator

Promise.all(workers).then(() => console.log('done'))
Note: what makes this different from, say, async-pool is that it spawns two workers, so if one worker throws an error for some reason at, say, index 5, it won't stop the other worker from doing the rest. You just go from 2 concurrent workers down to 1 (it doesn't stop there). The hard part with this is knowing when all workers are done, since Promise.all bails out early when one fails. So I suggest you catch all errors inside the doWork function.
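For example, a hedged variant of doWork that records per-item errors instead of letting them kill the worker might look like this:

async function doWorkSafe(iterator) {
  for (let [index, item] of iterator) {
    try {
      await sleep(1000)
      console.log(index + ': ' + item)
    } catch (err) {
      // swallow the error so this worker keeps consuming the iterator
      console.error(index + ' failed: ' + err)
    }
  }
}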
Answer 5 (score: 3)
A semaphore is a well-known concurrency primitive designed to solve exactly this kind of problem. It is a very universal construct, and semaphore implementations exist in many languages. This is how one would use a semaphore to solve this problem:
const { Semaphore } = require('async-mutex');

async function main() {
  const s = new Semaphore(100);
  const res = await Promise.all(
    users.map((user) =>
      s.runExclusive(() => remoteServer.getCount(user))
    )
  );
  return res;
}
I'm using the Semaphore implementation from async-mutex; it has decent documentation and TypeScript support.
If you want to dig deep into topics like this, you can read "The Little Book of Semaphores", which is freely available as a PDF here.
Answer 6 (score: 2)
Here is my ES7 solution, copy-paste friendly, featuring a full-fledged Promise.all()/map() alternative with a concurrency limit.
Similar to Promise.all(), it maintains return order as well as a fallback for non-promise return values.
I also included a comparison of the different implementations, as it illustrates some aspects a few of the other solutions have missed.
Usage
const asyncFn = delay => new Promise(resolve => setTimeout(() => resolve(), delay));
const args = [30, 20, 15, 10];
await asyncPool(args, arg => asyncFn(arg), 4); // concurrency limit of 4
Implementation
async function asyncBatch(args, fn, limit = 8) {
  // Copy arguments to avoid side effects
  args = [...args];
  const outs = [];
  while (args.length) {
    const batch = args.splice(0, limit);
    const out = await Promise.all(batch.map(fn));
    outs.push(...out);
  }
  return outs;
}

async function asyncPool(args, fn, limit = 8) {
  return new Promise((resolve) => {
    // Copy arguments to avoid side effects; reverse queue as
    // pop is faster than shift
    const argQueue = [...args].reverse();
    let count = 0;
    const outs = [];
    const pollNext = () => {
      if (argQueue.length === 0 && count === 0) {
        resolve(outs);
      } else {
        while (count < limit && argQueue.length) {
          const index = args.length - argQueue.length;
          const arg = argQueue.pop();
          count += 1;
          const out = fn(arg);
          const processOut = (out, index) => {
            outs[index] = out;
            count -= 1;
            pollNext();
          };
          if (typeof out === 'object' && out.then) {
            out.then(out => processOut(out, index));
          } else {
            processOut(out, index);
          }
        }
      }
    };
    pollNext();
  });
}
Comparison
// A simple async function that returns after the given delay
// and prints its value to allow us to determine the response order
const asyncFn = delay => new Promise(resolve => setTimeout(() => {
  console.log(delay);
  resolve(delay);
}, delay));
// List of arguments to the asyncFn function
const args = [30, 20, 15, 10];
// As a comparison of the different implementations, a low concurrency
// limit of 2 is used in order to highlight the performance differences.
// If a limit greater than or equal to args.length is used the results
// would be identical.
// Vanilla Promise.all/map combo
const out1 = await Promise.all(args.map(arg => asyncFn(arg)));
// prints: 10, 15, 20, 30
// total time: 30ms
// Pooled implementation
const out2 = await asyncPool(args, arg => asyncFn(arg), 2);
// prints: 20, 30, 15, 10
// total time: 40ms
// Batched implementation
const out3 = await asyncBatch(args, arg => asyncFn(arg), 2);
// prints: 20, 30, 20, 30
// total time: 45ms
console.log(out1, out2, out3); // prints: [30, 20, 15, 10] x 3
// Conclusion: Execution order and performance is different,
// but return order is still identical
Conclusion
asyncPool() should be the best solution, as it allows new requests to start as soon as any previous one finishes.
asyncBatch() is included as a comparison, since its implementation is simpler to understand, but it should perform more slowly, since all requests in the same batch have to finish before the next batch can start.
In this contrived example, the unlimited vanilla Promise.all() is of course the fastest, while the others could perform more desirably in a real-world congestion scenario.
Update
The async-pool library that others have already suggested is probably a better alternative to my implementation, as it works almost identically and has a more concise implementation with a clever usage of Promise.race(): https://github.com/rxaviers/async-pool/blob/master/lib/es7.js
Hopefully my answer can still serve an educational value.
Answer 7 (score: 1)
I suggest the library async-pool: https://github.com/rxaviers/async-pool
npm install tiny-async-pool
Description:
Run multiple promise-returning & async functions with limited concurrency using native ES6/ES7.
asyncPool runs multiple promise-returning & async functions in a limited concurrency pool. It rejects immediately as soon as one of the promises rejects. It resolves when all the promises complete. It calls the iterator function as soon as possible (under the concurrency limit).
Usage:
const timeout = i => new Promise(resolve => setTimeout(() => resolve(i), i));
await asyncPool(2, [1000, 5000, 3000, 2000], timeout);
// Call iterator (i = 1000)
// Call iterator (i = 5000)
// Pool limit of 2 reached, wait for the quicker one to complete...
// 1000 finishes
// Call iterator (i = 3000)
// Pool limit of 2 reached, wait for the quicker one to complete...
// 3000 finishes
// Call iterator (i = 2000)
// Iteration is complete, wait until running ones complete...
// 5000 finishes
// 2000 finishes
// Resolves, results are passed in given array order `[1000, 5000, 3000, 2000]`.
Answer 8 (score: 1)
Use Array.prototype.splice:
while (funcs.length) {
  // 100 at a time
  await Promise.all( funcs.splice(0, 100).map(f => f()) )
}
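Note that funcs must hold functions rather than promises, so that nothing starts before its batch runs; a sketch of how it might be built from the question's data:

// wrap each call in a thunk so it only starts when its batch runs
const funcs = users.map(user => () => remoteServer.getCount(user))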
Answer 9 (score: 1)
Here is a basic example with streams and p-limit. It pipes an HTTP read stream to a Mongo database.
const stream = require('stream');
const util = require('util');
const pLimit = require('p-limit');
const es = require('event-stream');
const JSONStream = require('JSONStream');
const streamToMongoDB = require('stream-to-mongo-db').streamToMongoDB;

const pipeline = util.promisify(stream.pipeline)

const outputDBConfig = {
  dbURL: 'yr-db-url',
  collection: 'some-collection'
};
const limit = pLimit(3);

const yrAsyncStreamingFunction = async (readStream) => {
  const mongoWriteStream = streamToMongoDB(outputDBConfig);
  const mapperStream = es.map((data, done) => {
    let someDataPromise = limit(() => yr_async_call_to_somewhere())

    someDataPromise.then(
      function handleResolve(someData) {
        data.someData = someData;
        done(null, data);
      },
      function handleError(error) {
        done(error)
      }
    );
  })

  await pipeline(
    readStream,
    JSONStream.parse('*'),
    mapperStream,
    mongoWriteStream
  );
}
Answer 10 (score: 0)
This can be solved using recursion.
The idea is that initially you send the maximum allowed number of requests, and each of these requests recursively continues to send the next one upon its completion.
function batchFetch(urls, concurrentRequestsLimit) {
  return new Promise(resolve => {
    var documents = [];
    var index = 0;

    function recursiveFetch() {
      if (index === urls.length) {
        return;
      }
      fetch(urls[index++])
        .then(r => r.text())
        .then(text => {
          documents.push(text);
          if (documents.length === urls.length) {
            resolve(documents);
          } else {
            recursiveFetch();
          }
        });
    }

    for (var i = 0; i < concurrentRequestsLimit; i++) {
      recursiveFetch();
    }
  });
}
var sources = [
  'http://www.example_1.com/',
  'http://www.example_2.com/',
  'http://www.example_3.com/',
  ...
  'http://www.example_100.com/'
];

batchFetch(sources, 5).then(documents => {
  console.log(documents);
});
Answer 11 (score: 0)
If you don't want to use an external library, then recursion is the answer.
downloadAll(someArrayWithData){
  var self = this;

  var tracker = function(next){
    return self.someExpensiveRequest(someArrayWithData[next])
      .then(function(){
        next++; // This updates the next in the tracker function parameter
        if(next < someArrayWithData.length){ // Did I finish processing all my data?
          return tracker(next); // Go to the next promise
        }
      });
  }

  return tracker(0);
}
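As written, this processes one item at a time. To get a concurrency of N with the same pattern, one option is to start several tracker chains over a shared cursor; a standalone sketch (someExpensiveRequest is assumed to return a promise, as above):

function downloadAllLimited(someArrayWithData, limit) {
  var nextIndex = 0; // cursor shared by all trackers
  function tracker() {
    if (nextIndex >= someArrayWithData.length) {
      return Promise.resolve();
    }
    var current = nextIndex++;
    // process one item, then chain onto the next unclaimed one
    return someExpensiveRequest(someArrayWithData[current]).then(tracker);
  }
  var trackers = [];
  for (var i = 0; i < Math.min(limit, someArrayWithData.length); i++) {
    trackers.push(tracker());
  }
  return Promise.all(trackers);
}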
Answer 12 (score: 0)
Whenever possible, I build things like this myself rather than reaching for a library. You end up learning a lot of concepts that seemed daunting before.
What do you all think of this attempt?
(I put a lot of thought into it and I believe it works, but do point out if it doesn't or if there is something fundamentally wrong.)
class Pool {
  constructor(maxAsync) {
    this.maxAsync = maxAsync;
    this.asyncOperationsQueue = [];
    this.currentAsyncOperations = 0
  }

  runAnother() {
    if (this.asyncOperationsQueue.length > 0 && this.currentAsyncOperations < this.maxAsync) {
      this.currentAsyncOperations += 1;
      // take the oldest queued task (FIFO)
      this.asyncOperationsQueue.shift()()
        .then(() => { this.currentAsyncOperations -= 1; this.runAnother() }, () => { this.currentAsyncOperations -= 1; this.runAnother() })
    }
  }

  add(f) { // the argument f is a function of signature () => Promise
    return new Promise((resolve, reject) => {
      this.asyncOperationsQueue.push(
        () => f().then(resolve).catch(reject)
      )
      // enqueue first, then try to run, so a task added to an idle pool actually starts
      this.runAnother();
    })
  }
}
//#######################################################
// TESTS
//#######################################################
function dbCall(id, timeout, fail) {
  return new Promise((resolve, reject) => {
    setTimeout(() => {
      if (fail) {
        reject(`Error for id ${id}`);
      } else {
        resolve(id);
      }
    }, timeout)
  })
}
const dbQuery1 = () => dbCall(1, 5000, false);
const dbQuery2 = () => dbCall(2, 5000, false);
const dbQuery3 = () => dbCall(3, 5000, false);
const dbQuery4 = () => dbCall(4, 5000, true);
const dbQuery5 = () => dbCall(5, 5000, false);
const cappedPool = new Pool(2);
const dbQuery1Res = cappedPool.add(dbQuery1).catch(i => i).then(i => console.log(`Resolved: ${i}`))
const dbQuery2Res = cappedPool.add(dbQuery2).catch(i => i).then(i => console.log(`Resolved: ${i}`))
const dbQuery3Res = cappedPool.add(dbQuery3).catch(i => i).then(i => console.log(`Resolved: ${i}`))
const dbQuery4Res = cappedPool.add(dbQuery4).catch(i => i).then(i => console.log(`Resolved: ${i}`))
const dbQuery5Res = cappedPool.add(dbQuery5).catch(i => i).then(i => console.log(`Resolved: ${i}`))
This approach provides a nice API, similar to thread pools in Scala/Java.
After creating one instance of the pool with const cappedPool = new Pool(2), you provide promises to it with simply cappedPool.add(() => myPromise).
Obviously we must make sure that the promise does not start immediately; that is why we must "provide it lazily" with the help of a function.
Most importantly, notice that the result of the method add is a promise which will be completed/resolved with the value of your original promise! This makes for very intuitive use.
const resultPromise = cappedPool.add( () => dbCall(...))
resultPromise
  .then( actualResult => {
    // Do something with the result from the DB
  })
Answer 13 (score: 0)
Unfortunately there is no way to do this with native Promise.all, so you have to get creative.
This is the quickest, most concise way I could find without using any outside libraries.
It makes use of a newer JavaScript feature called an iterator. The iterator basically keeps track of which items have been processed and which haven't.
To use it in code, you create an array of async functions. Each async function asks the same iterator for the next item that needs to be processed. Each function processes its own item asynchronously, and when done asks the iterator for a new one. Once the iterator runs out of items, all the functions complete.
Thanks to @Endless for the inspiration.
var items = [
  "https://www.stackoverflow.com",
  "https://www.stackoverflow.com",
  "https://www.stackoverflow.com",
  "https://www.stackoverflow.com",
  "https://www.stackoverflow.com",
  "https://www.stackoverflow.com",
  "https://www.stackoverflow.com",
  "https://www.stackoverflow.com",
];

var concurrency = 5

Array(concurrency).fill(items.entries()).map(async (cursor) => {
  for (let [index, url] of cursor) {
    console.log("getting url is ", index, url);

    // run your async task instead of this next line
    var text = await fetch(url).then(res => res.text());
    console.log("text is", text.slice(0, 20));
  }
})
Answer 14 (score: 0)
Here is what I did in my code, using Promise.race:
const identifyTransactions = async function() {
  let promises = []
  let concurrency = 0
  for (let tx of this.transactions) {
    if (concurrency > 4)
      await Promise.race(promises).then(r => { promises = []; concurrency = 0 })
    promises.push(tx.identifyTransaction())
    concurrency++
  }
  if (promises.length > 0)
    await Promise.race(promises) //resolve the rest
}
Answer 15 (score: 0)
So many good solutions. I started out with the elegant solution posted by @Endless and ended up with this little extension method that does not use any external libraries, nor does it run in batches (although it assumes you have features like async/await):
Promise.allWithLimit = async (taskList, limit = 5) => {
  const iterator = taskList.entries();
  let results = new Array(taskList.length);
  let workerThreads = new Array(limit).fill(0).map(() =>
    new Promise(async (resolve, reject) => {
      try {
        let entry = iterator.next();
        while (!entry.done) {
          let [index, promise] = entry.value;
          try {
            results[index] = await promise;
          }
          catch (err) {
            results[index] = err;
          }
          // advance even after a rejection, or this worker would spin forever
          entry = iterator.next();
        }
        // No more work to do
        resolve(true);
      }
      catch (err) {
        // This worker is dead
        reject(err);
      }
    }));

  await Promise.all(workerThreads);
  return results;
};
const demoTasks = new Array(10).fill(0).map((v,i) => new Promise(resolve => {
  let n = (i + 1) * 5;
  setTimeout(() => {
    console.log(`Did nothing for ${n} seconds`);
    resolve(n);
  }, n * 1000);
}));

Promise.allWithLimit(demoTasks).then(results => console.log(results));
Answer 16 (score: 0)
Expanding on the answer posted by @deceleratedcaviar, I created a "batch" utility function that takes as arguments an array of values, a concurrency limit, and a processing function. Yes, I realize that using Promise.all this way is more akin to batch processing than true concurrency, but if the goal is to limit excessive numbers of HTTP calls at one time, I go with this approach due to its simplicity and no need for external libraries.
async function batch(o) {
  let arr = o.arr
  let resp = []
  while (arr.length) {
    let subset = arr.splice(0, o.limit)
    let results = await Promise.all(subset.map(o.process))
    resp.push(results)
  }
  return [].concat.apply([], resp)
}

let arr = []
for (let i = 0; i < 250; i++) { arr.push(i) }

async function calc(val) { return val * 100 }

(async () => {
  let resp = await batch({
    arr: arr,
    limit: 100,
    process: calc
  })
  console.log(resp)
})();
Answer 17 (score: 0)
One more solution, using a custom promise library (CPromise):
import { CPromise } from "c-promise2";
import cpFetch from "cp-fetch";

const promise = CPromise.all(
  function* () {
    const urls = [
      "https://run.mocky.io/v3/7b038025-fc5f-4564-90eb-4373f0721822?mocky-delay=2s&x=1",
      "https://run.mocky.io/v3/7b038025-fc5f-4564-90eb-4373f0721822?mocky-delay=2s&x=2",
      "https://run.mocky.io/v3/7b038025-fc5f-4564-90eb-4373f0721822?mocky-delay=2s&x=3",
      "https://run.mocky.io/v3/7b038025-fc5f-4564-90eb-4373f0721822?mocky-delay=2s&x=4",
      "https://run.mocky.io/v3/7b038025-fc5f-4564-90eb-4373f0721822?mocky-delay=2s&x=5",
      "https://run.mocky.io/v3/7b038025-fc5f-4564-90eb-4373f0721822?mocky-delay=2s&x=6",
      "https://run.mocky.io/v3/7b038025-fc5f-4564-90eb-4373f0721822?mocky-delay=2s&x=7"
    ];

    for (const url of urls) {
      yield cpFetch(url); // add a promise to the pool
      console.log(`Request [${url}] completed`);
    }
  },
  { concurrency: 2 }
).then(
  (v) => console.log(`Done: `, v),
  (e) => console.warn(`Failed: ${e}`)
);
// yeah, we're able to cancel the task and abort pending network requests
// setTimeout(() => promise.cancel(), 4500);
import { CPromise } from "c-promise2";
import cpFetch from "cp-fetch";

const promise = CPromise.all(
  [
    "https://run.mocky.io/v3/7b038025-fc5f-4564-90eb-4373f0721822?mocky-delay=2s&x=1",
    "https://run.mocky.io/v3/7b038025-fc5f-4564-90eb-4373f0721822?mocky-delay=2s&x=2",
    "https://run.mocky.io/v3/7b038025-fc5f-4564-90eb-4373f0721822?mocky-delay=2s&x=3",
    "https://run.mocky.io/v3/7b038025-fc5f-4564-90eb-4373f0721822?mocky-delay=2s&x=4",
    "https://run.mocky.io/v3/7b038025-fc5f-4564-90eb-4373f0721822?mocky-delay=2s&x=5",
    "https://run.mocky.io/v3/7b038025-fc5f-4564-90eb-4373f0721822?mocky-delay=2s&x=6",
    "https://run.mocky.io/v3/7b038025-fc5f-4564-90eb-4373f0721822?mocky-delay=2s&x=7"
  ],
  {
    mapper: (url) => {
      console.log(`Request [${url}]`);
      return cpFetch(url);
    },
    concurrency: 2
  }
).then(
  (v) => console.log(`Done: `, v),
  (e) => console.warn(`Failed: ${e}`)
);
// yeah, we're able to cancel the task and abort pending network requests
// setTimeout(() => promise.cancel(), 4500);
Answer 18 (score: 0)
Warning: this has not been benchmarked for efficiency and does a lot of array copying/creation.
If you want a more functional approach, you could do something like this:
import chunk from 'lodash.chunk';

const maxConcurrency = (max) => (dataArr, promiseFn) =>
  chunk(dataArr, max).reduce(
    async (agg, batch) => [
      ...(await agg),
      ...(await Promise.all(batch.map(promiseFn)))
    ],
    []
  );
Then you could use it like this:
const randomFn = (data) =>
  new Promise((res) => setTimeout(
    () => res(data + 1),
    Math.random() * 1000
  ));

const result = await maxConcurrency(5)(
  [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
  randomFn
);
console.log('result+++', result);
Answer 19 (score: 0)
So I tried to make some of the examples shown work for my code, but since this was only for an import script and not production code, using the npm package batch-promises was surely the easiest path for me:
batch-promises
Easily batch promises
NOTE: Requires runtime to support Promise or to be polyfilled.
Api
batchPromises(int: batchSize, array: Collection, i => Promise: Iteratee)
The Promise: Iteratee will be called after each batch.
Use:
import batchPromises from 'batch-promises';

batchPromises(2, [1,2,3,4,5], i => new Promise((resolve, reject) => {
  // The iteratee will fire after each batch resulting in the following behaviour:
  // @ 100ms resolve items 1 and 2 (first batch of 2)
  // @ 200ms resolve items 3 and 4 (second batch of 2)
  // @ 300ms resolve remaining item 5 (last remaining batch)
  setTimeout(() => {
    resolve(i);
  }, 100);
}))
.then(results => {
  console.log(results); // [1,2,3,4,5]
});
Answer 20 (score: -1)
If your goal is to slow down Promise.all to avoid rate limits or overload:
Here is my implementation:
async function promiseAllGentle(arr, batchSize = 5, sleep = 50) {
  let output = [];
  while (arr.length) {
    const batchResult = await Promise.all(arr.splice(0, batchSize));
    output = [...output, ...batchResult];
    await new Promise((res) => setTimeout(res, sleep));
  }
  return output;
}
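A hypothetical usage (delayedValue stands in for any promise-returning call; note that the promises in arr are already running once created, so the batching here staggers the awaiting rather than the start of the work):

const delayedValue = (v) => new Promise((res) => setTimeout(() => res(v), v));

(async () => {
  const promises = [100, 200, 300, 400, 500].map(delayedValue);
  const output = await promiseAllGentle(promises, 2, 50);
  console.log(output); // [100, 200, 300, 400, 500]
})();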