I am using the snippet below to retrieve data from a server. The data set is really large. I'm looking for a better way to optimize or fix this snippet. Thanks:
URL url = new URL("http://www.example.com" + path);
Map<String, Object> params = new LinkedHashMap<>();
params.put("param_1", value_1);
params.put("param_2", value_2);
StringBuilder postData = new StringBuilder();
for (Map.Entry<String, Object> param : params.entrySet()) {
    if (postData.length() != 0) postData.append('&');
    postData.append(URLEncoder.encode(param.getKey(), "UTF-8"));
    postData.append('=');
    postData.append(URLEncoder.encode(String.valueOf(param.getValue()), "UTF-8"));
}
byte[] postDataBytes = postData.toString().getBytes("UTF-8");
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
conn.setRequestMethod("POST");
conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
conn.setRequestProperty("Content-Length", String.valueOf(postDataBytes.length));
conn.setDoOutput(true);
conn.setConnectTimeout(3000);
conn.getOutputStream().write(postDataBytes);
BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream(), "UTF-8"));
String output = "";
String line = null;
while ((line = in.readLine()) != null) {
    output += (line + "\n"); // this takes forever with a large data set
}
Answer 0 (score: 0)
When creating the BufferedReader, you need to reduce the size of the allocated buffer.
The default BufferedReader constructor allocates a buffer of 8192 characters. Try the BufferedReader(Reader in, int size) constructor with a smaller character count.
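As a sketch of this suggestion, the read loop could be rewritten as below. Note that the method name `readAll` and the demo `StringReader` input are my own for illustration; also, the bigger win here is usually replacing the `output += line` concatenation with a `StringBuilder`, since repeated String concatenation in a loop is quadratic in the total size:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;

public class ReadExample {

    // Reads the whole stream line by line. The BufferedReader here uses a
    // smaller buffer (4096 chars instead of the default 8192), and the
    // output is accumulated in a StringBuilder, which grows in amortized
    // linear time instead of copying the whole string on every append.
    static String readAll(Reader source) throws IOException {
        BufferedReader in = new BufferedReader(source, 4096);
        StringBuilder output = new StringBuilder();
        String line;
        while ((line = in.readLine()) != null) {
            output.append(line).append('\n');
        }
        return output.toString();
    }

    public static void main(String[] args) throws IOException {
        // Demo with an in-memory reader standing in for conn.getInputStream().
        String result = readAll(new StringReader("hello\nworld"));
        System.out.print(result); // prints "hello" and "world" on two lines
    }
}
```

In the original snippet you would pass `new InputStreamReader(conn.getInputStream(), "UTF-8")` as the `source` argument.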