I'm trying to optimize my code for downloading about 129 reports from a server. Each report has a different URL. Here is the code I wrote:
public static void getReport(String eqID, String reportCode, String reportName, String fileName) throws IOException {
    String url = "http://example.com/api?function=getData&eqId=" + eqID;
    URL obj = new URL(url);
    HttpURLConnection con = (HttpURLConnection) obj.openConnection();
    Authenticator.setDefault(new Authenticator() {
        protected PasswordAuthentication getPasswordAuthentication() {
            return new PasswordAuthentication("username", "password".toCharArray());
        }
    });
    // optional, default is GET
    con.setRequestMethod("GET");
    int responseCode = con.getResponseCode();
    if (responseCode == 200) {
        System.out.println("Downloading: " + reportName);
        File file = new File("C:/Users/" + fileName);
        BufferedInputStream bis = new BufferedInputStream(con.getInputStream());
        BufferedOutputStream bos = new BufferedOutputStream(new FileOutputStream(file.getPath()));
        int i = 0;
        while ((i = bis.read()) != -1) {
            bos.write(i);
        }
        bos.flush();
        bis.close();
        bos.close();
    }
    else if (responseCode == 204) {
        System.out.println("\nSending 'GET' request to: " + reportName);
        System.out.println("Response: Successful but report is empty. It will be skipped");
    }
}
The problem is that these downloads take far too long. All 129 reports together are only about 25 MB, and I have a high-speed internet connection. I'm fairly new to downloading files with Java and need some help. I call this method 129 times in total.
It would be great if you could recommend a way to optimize this, or a way to reuse a single HTTP connection instead of opening 129 separate ones.
Thanks in advance!
Answer 0 (score: 3)
The bottleneck is here:
int i = 0;
while ((i = bis.read()) != -1) {
    bos.write(i);
}
You are reading the stream one byte at a time, which can take a long time for larger files. Instead, read the file in chunks, typically 4 KB or 8 KB:
int FILE_CHUNK_SIZE = 1024 * 4; // makes it easy to change to 8 KB
byte[] chunk = new byte[FILE_CHUNK_SIZE];
int bytesRead = 0;
while ((bytesRead = bis.read(chunk)) != -1) {
    bos.write(chunk, 0, bytesRead);
}
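As a self-contained illustration of the same chunked-copy loop (my addition, not part of the original answer), the copy logic can be exercised against in-memory streams with no network involved; the class name, `copy` helper, and test data here are made up for the sketch:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.Arrays;

public class ChunkCopyDemo {
    static final int FILE_CHUNK_SIZE = 1024 * 4; // 4 KB buffer

    // Copies everything from in to out using a fixed-size buffer and
    // returns the total number of bytes copied. This is the same loop
    // as in the answer, just wrapped in a reusable method.
    static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] chunk = new byte[FILE_CHUNK_SIZE];
        long total = 0;
        int bytesRead;
        while ((bytesRead = in.read(chunk)) != -1) {
            out.write(chunk, 0, bytesRead);
            total += bytesRead;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        byte[] data = new byte[10000]; // deliberately larger than one chunk
        Arrays.fill(data, (byte) 7);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        long copied = copy(new ByteArrayInputStream(data), out);
        System.out.println(copied == 10000 && Arrays.equals(data, out.toByteArray()));
        // prints: true
    }
}
```

In the question's code, the `InputStream` and `OutputStream` would be `con.getInputStream()` and the `FileOutputStream` wrapper respectively.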
Another alternative is IOUtils#copy from Apache Commons IO, which already does this for you:
IOUtils.copy(bis, bos);
Answer 1 (score: 0)
If you are using Java 7, this is very simple. To write the file, do the following:
final Path dstFile = Paths.get("C:/users/filename");
if (responseCode == 200) {
    try (
        final InputStream in = con.getInputStream();
    ) {
        // Files.copy(InputStream, Path) fails with FileAlreadyExistsException
        // if the target exists, which is the CREATE_NEW behavior we want here.
        // (StandardOpenOption.CREATE_NEW is not a CopyOption and would not
        // compile; pass StandardCopyOption.REPLACE_EXISTING to overwrite.)
        Files.copy(in, dstFile);
    }
} // etc
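To see Files.copy in action without a server (my addition, not from the answer), you can copy an in-memory stream to a temporary file; the class name, `save` helper, and payload are invented for the sketch:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class FilesCopyDemo {
    // Writes the stream to dstFile, overwriting any existing file,
    // and returns the number of bytes written.
    static long save(InputStream in, Path dstFile) throws IOException {
        return Files.copy(in, dstFile, StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        // createTempFile creates the file, so REPLACE_EXISTING is needed
        Path dstFile = Files.createTempFile("report", ".txt");
        byte[] payload = "report body".getBytes(StandardCharsets.UTF_8);
        try (ByteArrayInputStream in = new ByteArrayInputStream(payload)) {
            long copied = save(in, dstFile);
            String readBack = new String(Files.readAllBytes(dstFile), StandardCharsets.UTF_8);
            System.out.println(copied == payload.length && "report body".equals(readBack));
            // prints: true
        } finally {
            Files.deleteIfExists(dstFile);
        }
    }
}
```

In the answer's setting, `in` would be `con.getInputStream()` and `dstFile` the report's destination path.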