What I am trying to achieve:
Hit an API endpoint, retrieve the XML and subsequently parse the results.
I send a file_get_contents request to do this.
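In essence, the flow is roughly the following (a minimal sketch; the eve-central URL mentioned later in the post is used here only as an example endpoint):

$url = 'http://api.eve-central.com/api/quicklook?typeid=34';
$raw = file_get_contents($url);          // this is the call that times out
if ($raw === false) {
    die('file_get_contents failed');
}
$xml = simplexml_load_string($raw);      // parse the retrieved XML
var_dump($xml);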
The problem:
`file_get_contents` fails with the following error:
Warning: file_get_contents(https://api.twitter.com/1.1/statuses/mentions_timeline.json):
failed to open stream:
A connection attempt failed because the connected party did not properly
respond after a period of time, or established connection failed because
connected host has failed to respond.
Update 17/08
Consolidating the current understanding:
1. PHP fails:
1.a It fails via PHP (timeout)
1.b It fails via the command line (curl -G http://api.eve-central.com/api/quicklook?typeid=34)
1.c file_get_contents
1.d file_get_contents with stream_context_create (see the sketch right after this list)
2. What works:
2.a Pasting the URL into a Chrome tab
2.b Via Postman
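For 1.d, the stream-context variant looks roughly like this (a sketch; the timeout value and User-Agent string are arbitrary choices and did not change the outcome):

// file_get_contents with an explicit stream context (case 1.d above)
$context = stream_context_create(array(
    'http' => array(
        'method'  => 'GET',
        'timeout' => 30,                          // arbitrary; the request still times out
        'header'  => "User-Agent: Mozilla/5.0\r\n",
    ),
));
$raw = file_get_contents('http://api.eve-central.com/api/quicklook?typeid=34', false, $context);
var_dump($raw); // false in the failing environment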
What has been tried:
- Inspected the headers in Postman and tried to replicate them via PHP.
Postman headers sent back by eve-central:
Access-Control-Allow-Origin → *
Connection → Keep-Alive
Content-Encoding → gzip
Content-Type → text/xml; charset=UTF-8
Date → Wed, 17 Aug 2016 10:40:24 GMT
Proxy-Connection → Keep-Alive
Server → nginx
Transfer-Encoding → chunked
Vary → Accept-Encoding
Via → HTTP/1.1 proxy10014
Corresponding code:
$curl = curl_init();
// Note: most of these are response headers returned by eve-central
// (Content-Encoding, Transfer-Encoding, Server, Via), copied here as request headers.
// CURLOPT_HTTPHEADER expects a plain array of "Name: value" strings,
// not the 'method'/'header' keys used by stream contexts.
$headers = array(
    'Connection: Keep-Alive',
    'Content-Encoding: gzip',
    'Content-Type: text/xml',
    'Proxy-Connection: Keep-Alive',
    'Server: nginx',
    'Transfer-Encoding: chunked',
    'Vary: Accept-Encoding',
    'Via: HTTP/1.1 proxy10014'
);
curl_setopt($curl, CURLOPT_HTTPHEADER, $headers);
curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
curl_setopt($curl, CURLOPT_PORT, 8080); // attempt at changing the port in the event it was blocked
curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, false);
curl_setopt($curl, CURLOPT_SSL_VERIFYHOST, 0);
curl_setopt($curl, CURLOPT_POST, false); // plain GET
curl_setopt($curl, CURLOPT_URL, $url);
$resp = curl_exec($curl);
if (curl_error($curl))
{
    echo 'error: ' . curl_error($curl);
}
curl_close($curl);
Previous attempts
What I have tried: various cURL options from other threads, for example:
function curl_get_contents($url) {
    $ch = curl_init();
    if (!$ch)
    {
        die("Couldn't initialize a cURL handle");
    } else
        echo "Curl Handle initialized ";

    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; .NET CLR 1.1.4322)');
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5);
    curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
    curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 0);
    curl_setopt($ch, CURLOPT_TIMEOUT, 5);
    $data = curl_exec($ch);

    // Check if any error occurred
    if (!curl_errno($ch))
    {
        $info = curl_getinfo($ch);
        echo 'Took ', $info['total_time'], ' seconds to send a request to ', $info['url'], "";
        displayData($info); // helper defined elsewhere, not shown in the post
    } else
        echo "Failed Curl, reason: ".curl_error($ch)." ";
    curl_close($ch);
    return $data;
}
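displayData() is a helper that is not shown in the post; a hypothetical stand-in, just to make the snippet runnable, could be:

// Hypothetical stand-in for the displayData() helper referenced above
function displayData(array $info)
{
    print_r($info); // dump the cURL transfer info
}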
Result: nothing, no data returned.
- Checked the php.ini options (a runtime check of these is sketched below, after the summary):
  - allow_url_fopen is On
  - allow_url_include = On
  - the relevant SSL extensions are enabled
- Raised the timeout window
  - both via php.ini
  - and via explicit declarations in the PHP file
- Tried other URLs
  - same error, so it does not really depend on my particular endpoint
  - e.g. twitter / wikipedia / google all return the same error
- Tried:
  - file_get_contents on a local XML file (https://msdn.microsoft.com/en-us/library/ms762271(v=vs.85).aspx) -> works
  - file_get_contents on a remote XML file (http://www.xmlfiles.com/examples/note.xml) -> fails with the same error
- Overall, the following holds so far:
  - cURL fails, times out
  - file_get_contents fails, times out
  - opening the XML file URL in a browser works
  - making a GET request via Postman works
Obviously, in all the cases where file_get_contents via PHP fails, I can easily access the same files through any browser.
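For completeness, a small runtime check of the settings mentioned above (a sketch; it only reports what the running PHP actually sees):

var_dump(ini_get('allow_url_fopen'));        // must be On/1 for file_get_contents on URLs
var_dump(ini_get('default_socket_timeout')); // the timeout file_get_contents runs into
var_dump(extension_loaded('curl'));          // cURL extension loaded?
var_dump(extension_loaded('openssl'));       // needed for https:// streams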
Trying to work around the issue
Attempt 1:
Use nitrous.io, create a LAMP stack, and perform the calls through that platform.
Result: file_get_contents works; however, due to the sheer number of XML files to retrieve, the operation times out.
Tentative solution:
- download the XML files at the source
- zip them up
- download the zipped xml_file
- parse said XML files locally
Later on, write a small PHP script that performs the above when called, sends the data to the local directory, then unzips it and does the rest of the work (a minimal sketch of that step follows below).
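A minimal sketch of that local unzip-and-parse step (the archive and directory names are placeholders):

// Unzip a downloaded archive of XML files and parse each one locally.
// 'dump.zip' and 'xml/' are placeholder names.
$zip = new ZipArchive();
if ($zip->open('dump.zip') === true) {
    $zip->extractTo('xml/');
    $zip->close();
}
foreach (glob('xml/*.xml') as $file) {
    $xml = simplexml_load_file($file); // parse each XML file locally
    // ... do the actual work with $xml here
}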
Another attempt was to use Google Sheets, with a user function that pulls the data into the sheet, and then dump the excel file / values into MySQL.
For my purposes, although a rather crude solution, it does the trick.
Code used to avoid the timeout issues on the shared host:
function downloadUrlToFile2($url, $outFileName)
{
    echo "Passing $url into $outFileName ";

    if (is_file($url))
    {
        copy($url, $outFileName); // local path: a plain copy is enough
        return;
    }

    // remote URL: stream the response straight into the output file
    $fp = fopen($outFileName, "w");
    $ch = curl_init();
    curl_setopt_array($ch, array(
        CURLOPT_URL     => $url,
        CURLOPT_FILE    => $fp,     // curl writes the response body to $fp itself
        CURLOPT_TIMEOUT => 28800,   // set this to 8 hours so we don't time out on big files
    ));
    curl_exec($ch);                 // no fwrite() needed afterwards: with CURLOPT_FILE set,
                                    // curl_exec() returns true/false, not the contents
    curl_close($ch);
    fclose($fp);
}
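Usage then looks something like this (the typeid and output file name are just examples):

downloadUrlToFile2('http://api.eve-central.com/api/quicklook?typeid=34', 'quicklook_34.xml');
$xml = simplexml_load_file('quicklook_34.xml'); // parse the downloaded file locally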
I have also added this to the script:
ignore_user_abort(true);
set_time_limit(0);
ini_set('memory_limit', '2048M');
Answer 0 (score: 3)
I have found some issues with HTTPS URL requests. To fix the problem you have to add the below lines to the cURL request:
function curl_get_contents($url) {
    $ch = curl_init();
    $header[0] = "Accept: text/xml,application/xml,application/xhtml+xml,";
    $header[0] .= "text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5";
    $header[] = "Cache-Control: max-age=0";
    $header[] = "Connection: keep-alive";
    $header[] = "Keep-Alive: 300";
    $header[] = "Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7";
    $header[] = "Accept-Language: en-us,en;q=0.5";
    $header[] = "Pragma: ";

    curl_setopt($ch, CURLOPT_HTTPHEADER, $header);
    curl_setopt($ch, CURLOPT_HEADER, 0);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_URL, $url);
    // I have added below two lines
    curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
    curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 0);

    $data = curl_exec($ch);
    curl_close($ch);
    return $data;
}
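For example (the parsing line is not part of the answer itself, it just ties the call back to the original goal):

$raw = curl_get_contents('http://api.eve-central.com/api/quicklook?typeid=34');
$xml = simplexml_load_string($raw);
var_dump($xml);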