I can't set the right headers to download a file from a URL. The code below just displays the URL's contents in the browser instead of downloading it.
$file_name = "image1.jpg";
$content_type = "image/jpeg";
$url = "http://ctan.imsc.res.in/macros/latex/contrib/incgraph/example.jpg";
header('Pragma: public'); // required
header('Expires: 0'); // no cache
header('Cache-Control: must-revalidate, post-check=0, pre-check=0');
header('Last-Modified: '.gmdate ('D, d M Y H:i:s', filemtime ($file_name)).' GMT');
header('Cache-Control: private',false);
header("Content-type: ".$content_type);
header("Cache-Control: no-store, no-cache");
header("Content-disposition: attachment; filename=".$file_name);
header("Location: ".$url);
exit(0);
Thanks in advance.
Answer 0 (score: 0)
$originalPath = $_REQUEST['path']; // path including the file name
$fileName = $_REQUEST['fileName']; // name to give the downloaded file
if (file_exists($originalPath)) {
    header('Content-Description: File Transfer');
    header('Content-Type: application/octet-stream');
    header('Content-Disposition: attachment; filename='.basename($fileName));
    header('Content-Transfer-Encoding: binary');
    header('Expires: 0');
    header('Cache-Control: must-revalidate, post-check=0, pre-check=0');
    header('Pragma: public');
    header('Content-Length: ' . filesize($originalPath));
    ob_clean();
    flush();
    readfile($originalPath);
    exit;
} else {
    echo 'FILE_NOT_FOUND';
}
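The part of the answer above that actually forces the download is the `Content-Disposition: attachment` header; the rest mostly disables caching. As a language-neutral sketch (Python here; the helper name and defaults are my own, not from the answer), the same set of response headers can be built like this:

```python
def attachment_headers(filename, size, content_type="application/octet-stream"):
    """Build the response headers that force a browser download,
    mirroring the PHP header() calls in the answer above."""
    return {
        "Content-Description": "File Transfer",
        "Content-Type": content_type,
        "Content-Disposition": f'attachment; filename="{filename}"',
        "Content-Transfer-Encoding": "binary",
        "Expires": "0",
        "Cache-Control": "must-revalidate",
        "Pragma": "public",
        "Content-Length": str(size),
    }
```

Whatever framework serves the response, sending these headers before the file bytes is what makes the browser save the file instead of rendering it.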
Answer 1 (score: 0)
This code can download a file from a URL:
set_time_limit(0);
// file to save the contents to
$fp = fopen('file.type', 'w+');
$url = "http://yoursite.com/file.type";
// the file we are downloading; replace spaces with %20
$ch = curl_init(str_replace(" ", "%20", $url));
curl_setopt($ch, CURLOPT_TIMEOUT, 50);
// give cURL the file pointer so that it can write to it
curl_setopt($ch, CURLOPT_FILE, $fp);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
$data = curl_exec($ch); // run the transfer
curl_close($ch);
fclose($fp); // close the file handle when done
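For comparison, the same stream-a-URL-to-a-file idea in Python (a minimal sketch; the function name and timeout default are my own, chosen to mirror the `CURLOPT_FILE` / `CURLOPT_TIMEOUT` options above):

```python
import shutil
import urllib.request

def download_to_file(url, dest_path, timeout=50):
    """Stream the contents of a URL into a local file,
    analogous to giving cURL a file pointer via CURLOPT_FILE."""
    with urllib.request.urlopen(url, timeout=timeout) as response, \
         open(dest_path, "wb") as fp:
        shutil.copyfileobj(response, fp)
```

`shutil.copyfileobj` copies in chunks, so large files are not loaded into memory all at once.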
Answer 2 (score: 0)
With this code you can save the image on your server; you can then download it with my code above.
file_put_contents("Tmpfile.jpg", fopen("http://ctan.imsc.res.in/macros/latex/contrib/incgraph/example.jpg", 'r'));
Answer 3 (score: 0)
Download files from URLs (given in a list) and save each one under the last segment of its URL, using Python:
import urllib.request as urllib2
import re, os

for url in open('C:\\Users\\SRINU\\Desktop\\Election_pdfs_urls\\33.txt'):
    possible_urls = re.findall(r'(https?://[^\s]+)', url)
    for links in possible_urls:
        response = urllib2.urlopen(links)
        filename_w_ext = os.path.basename(links)
        file = open(filename_w_ext, 'wb')
        file.write(response.read())
        file.close()
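The "save under the URL's last name" step above relies on `os.path.basename` applied to the raw URL, which keeps any trailing query string. A slightly more robust sketch (my own helper, not part of the answer) parses the URL first so only the path's last segment is used:

```python
import os
from urllib.parse import urlparse

def filename_from_url(url):
    """Return the last path segment of a URL as the save name,
    ignoring any query string or fragment."""
    return os.path.basename(urlparse(url).path)
```

For example, a URL ending in `b.pdf?x=1` yields `b.pdf` rather than `b.pdf?x=1`.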