I'm working on an Ethereum project, and I want to store the values emitted by every `Transfer` event of a specific contract into an Excel/CSV file. I can only fetch events up to a certain number of blocks, not all of them, because I get the error: "Returned error: query returned more than 10000 results". I'm using an Infura account. I also tried an Alchemy account, but I hit the same error, since it likewise caps results at 10K. Can anyone tell me how to solve this, or suggest another provider that supports more than 10,000 results?
Error: Returned error: query returned more than 10000 results
at Object.ErrorResponse (/home/admin1/Desktop/Ethereumevents/node_modules/web3-core-helpers/lib/errors.js:28:19)
at Object.callback (/home/admin1/Desktop/Ethereumevents/node_modules/web3-core-requestmanager/lib/index.js:303:36)
at /home/admin1/Desktop/Ethereumevents/node_modules/web3-providers-ws/lib/index.js:114:45
at Array.forEach (<anonymous>)
at WebsocketProvider._onMessage (/home/admin1/Desktop/Ethereumevents/node_modules/web3-providers-ws/lib/index.js:102:69)
at W3CWebSocket._dispatchEvent [as dispatchEvent] (/home/admin1/Desktop/Ethereumevents/node_modules/yaeti/lib/EventTarget.js:115:12)
at W3CWebSocket.onMessage (/home/admin1/Desktop/Ethereumevents/node_modules/websocket/lib/W3CWebSocket.js:234:14)
at WebSocketConnection.<anonymous> (/home/admin1/Desktop/Ethereumevents/node_modules/websocket/lib/W3CWebSocket.js:205:19)
at WebSocketConnection.emit (events.js:314:20)
at WebSocketConnection.processFrame (/home/admin1/Desktop/Ethereumevents/node_modules/websocket/lib/WebSocketConnection.js:554:26)
at /home/admin1/Desktop/Ethereumevents/node_modules/websocket/lib/WebSocketConnection.js:323:40
at processTicksAndRejections (internal/process/task_queues.js:79:11) {
data: null
}
My code:
// Assumes `fs`, `web3`, `writer` (a CSV write stream), `interface` (the
// contract ABI) and `contractAddress` are defined elsewhere in the file.
async function write_csv(obj) {
  writer.pipe(fs.createWriteStream('out.csv', { flags: 'a' }));
  console.log('written', obj);
  writer.end();
}

const contract = new web3.eth.Contract(interface, contractAddress);

const Transfer = contract.events.Transfer({ fromBlock: 5521592 }, async function (error, event) {
  console.log(event);
})
  .on('data', async function (event) {
    console.log('hi', JSON.stringify(event));
    const res = {
      From: event.returnValues[0],
      To: event.returnValues[1],
      Value: event.returnValues[2],
    };
    write_csv(res);
    // Pass `resolve` itself, not `resolve('done')`, so the 20 s delay
    // actually happens instead of resolving immediately.
    await new Promise(resolve => setTimeout(resolve, 20000));
  })
  .on('changed', function (event) {
    console.log('hi');
  })
  .on('error', console.error);
Answer 0 (score: 0)
You need to paginate and throttle your queries, because JSON-RPC responses have a maximum size limit.
Unfortunately, because of how the Ethereum JSON-RPC API is implemented, there is no way around this limit in a single query. On the Web3.py project there is example query code that can fetch Ethereum events with pagination and throttling, and resume queries in the case of errors. That example also highlights the various obstacles your code needs to overcome.
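In web3.js terms, the pagination idea sketched above amounts to splitting the block range into fixed-size chunks and calling `getPastEvents` once per chunk, with a pause between calls. Here is a minimal sketch; the chunk size, delay, and the `fetchEvents` callback shape are my assumptions, not part of the original answer:

```javascript
// Assumed chunk size; shrink it if a single range still returns >10,000 events.
const CHUNK_SIZE = 2000;

// Split [fromBlock, toBlock] into inclusive sub-ranges of at most `size` blocks.
function blockRanges(fromBlock, toBlock, size = CHUNK_SIZE) {
  const ranges = [];
  for (let start = fromBlock; start <= toBlock; start += size) {
    ranges.push([start, Math.min(start + size - 1, toBlock)]);
  }
  return ranges;
}

// `fetchEvents` is any async (from, to) => events[] function, e.g. a wrapper
// around contract.getPastEvents('Transfer', { fromBlock: from, toBlock: to }).
async function fetchAllEvents(fetchEvents, fromBlock, toBlock) {
  const all = [];
  for (const [from, to] of blockRanges(fromBlock, toBlock)) {
    all.push(...(await fetchEvents(from, to)));
    // Simple throttle so the provider does not rate-limit us.
    await new Promise((resolve) => setTimeout(resolve, 200));
  }
  return all;
}
```

Each chunk is small enough to stay under the provider's 10,000-result cap, and because the loop is sequential you can also record the last completed range and resume from it after an error.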