I have a list of links pointing to locations in a server repository. The links represent resources such as image, xml, txt and csv files (each a different size), but the problem I am facing is that when I download them, every downloaded file ends up the same size.
List<String> Links; // list of links, dynamically populated
for (String link : Links) {
    int i = link.lastIndexOf("/");
    String temp = link.substring(0, i);
    String contentname = temp.substring(temp.lastIndexOf("/") + 1);
    String filePath = tempFolderPath + "\\" + contentname;
    URL url = new URL(link);
    URLConnection connection = url.openConnection();
    InputStream is = new DataInputStream(connection.getInputStream());
    FileOutputStream fos = null;
    try {
        fos = new FileOutputStream(new File(filePath));
        int inByte;
        while ((inByte = is.read()) != -1) {
            fos.write(inByte);
        }
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        try {
            is.close();
            if (fos != null) {
                fos.close();
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
where each link points directly at a resource, e.g. "//localhost:8090/documents/11234/13935/abc.txt".
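One thing worth checking in the question's code is the file-name extraction: the first `substring` drops the final path segment, so `contentname` ends up being the parent folder id rather than the file name. A minimal sketch (assuming links shaped like the example above; `questionStyle`/`fixed` are hypothetical helper names):

```java
public class NameExtraction {
    // Reproduces the question's logic: the trailing segment is discarded.
    static String questionStyle(String link) {
        int i = link.lastIndexOf("/");
        String temp = link.substring(0, i);               // drops "abc.txt"
        return temp.substring(temp.lastIndexOf("/") + 1); // yields "13935"
    }

    // Takes the last path segment directly, keeping the real file name.
    static String fixed(String link) {
        return link.substring(link.lastIndexOf("/") + 1); // yields "abc.txt"
    }

    public static void main(String[] args) {
        String link = "http://localhost:8090/documents/11234/13935/abc.txt";
        System.out.println(questionStyle(link)); // prints "13935"
        System.out.println(fixed(link));         // prints "abc.txt"
    }
}
```

If several links share the same parent folder id, they would all be written to the same local file, which could explain identical sizes on disk.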
Answer 0 (score: -1)
Use a BufferedInputStream rather than DataInputStream, which is meant for reading Java primitive data types, not for buffering downloads.
InputStream is = connection.getInputStream();
try {
    Files.copy(is, Paths.get(filePath), StandardCopyOption.REPLACE_EXISTING);
} catch (Exception e) {
    e.printStackTrace();
}
Since the problem persisted:
To download HTML I did one extra thing: I pretended to be a browser.
PrintWriter out = null;
try {
    out = new PrintWriter(new BufferedWriter(new OutputStreamWriter(
            urlConnection.getOutputStream(), StandardCharsets.ISO_8859_1)), true);
    // We use the standard for our own headers: Latin-1 and "\r\n".
    // Set our own headers, gotten from Firefox TamperData plugin.
    out.print("GET " + pageURL.getPath() + " HTTP/1.1\r\n");
    out.print("Host: " + pageURL.getHost() + "\r\n");
    out.print("User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:26.0) Gecko/20100101 Firefox/26.0\r\n");
    out.print("Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\n");
    out.print("Accept-Language: eo,de-de;q=0.8,de;q=0.6,en-us;q=0.4,en;q=0.2\r\n");
    out.print("\r\n");
    out.flush();
} catch (IOException e) {
    e.printStackTrace();
}
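Rather than hand-writing the request line and headers, the same effect can usually be had with `URLConnection.setRequestProperty` before the stream is read. A minimal sketch (the localhost URL is taken from the question; the header values mirror the ones above, and `withBrowserHeaders` is a hypothetical helper name):

```java
import java.net.URL;
import java.net.URLConnection;

public class HeaderDemo {
    // Sets browser-like request headers on a not-yet-connected URLConnection.
    static URLConnection withBrowserHeaders(URLConnection conn) {
        conn.setRequestProperty("User-Agent",
                "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:26.0) Gecko/20100101 Firefox/26.0");
        conn.setRequestProperty("Accept",
                "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8");
        return conn;
    }

    public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost:8090/documents/11234/13935/abc.txt");
        // openConnection() does not open a socket yet, so headers can be
        // staged here and are sent when getInputStream() is first called.
        URLConnection conn = withBrowserHeaders(url.openConnection());
        System.out.println(conn.getRequestProperty("User-Agent"));
    }
}
```

Headers must be set before `connect()`/`getInputStream()`; afterwards `setRequestProperty` throws `IllegalStateException`.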
The most important thing is to inspect the resulting file.
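One quick way to do that inspection: if every "download" opens with an HTML tag, the server most likely returned the same error or login page for each link, which would explain identical file sizes. A minimal sketch (`looksLikeHtml` is a hypothetical helper, and the 64-byte peek window is an arbitrary choice):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class FileCheck {
    // Peeks at the first bytes of a downloaded file and reports whether
    // they look like an HTML document instead of the expected resource.
    static boolean looksLikeHtml(Path file) throws IOException {
        byte[] bytes = Files.readAllBytes(file);
        String head = new String(bytes, 0, Math.min(bytes.length, 64),
                StandardCharsets.UTF_8).trim().toLowerCase();
        return head.startsWith("<!doctype html") || head.startsWith("<html");
    }

    public static void main(String[] args) throws IOException {
        Path p = Files.createTempFile("check", ".txt");
        Files.write(p, "<html><body>403 Forbidden</body></html>".getBytes(StandardCharsets.UTF_8));
        System.out.println(looksLikeHtml(p)); // prints true
    }
}
```

If this check flags the files, the fix lies in the request (authentication, headers, URL), not in the copy loop.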