How do I get all the text from a website? I don't just mean Ctrl+A / Ctrl+C. I'd like to be able to extract all of the text from a website (and all of its related pages) and use it to build a word concordance for that site. Any ideas?
Answer 0 (score: 1)
I was interested in this, so I've written the first part of a solution.
The code is in PHP because of the handy strip_tags function. It's also rough and procedural, but I feel it demonstrates the idea.
<?php
$url = "http://www.stackoverflow.com";
//To use this you'll need to get a key for the Readability Parser API http://readability.com/developers/api/parser
$token = "";

//I make an HTTP GET request to the Readability API and then decode the returned JSON
$parserResponse = json_decode(file_get_contents("http://www.readability.com/api/content/v1/parser?url=$url&token=$token"));

//I'm only interested in the content string in the JSON object
$content = $parserResponse->content;

//I strip the HTML tags from the article content
$wordsOnPage = strip_tags($content);

$wordCounter = array();
$wordSplit = explode(" ", $wordsOnPage);

//I then loop through each word in the article, keeping count of how many times I've seen the word
foreach ($wordSplit as $word)
{
    incrementWordCounter($word);
}

//Then I sort the array so the most frequent words are at the end
asort($wordCounter);

//And dump the array
var_dump($wordCounter);

function incrementWordCounter($word)
{
    global $wordCounter;
    if (isset($wordCounter[$word]))
    {
        $wordCounter[$word] = $wordCounter[$word] + 1;
    }
    else
    {
        $wordCounter[$word] = 1;
    }
}
?>
I needed this to configure PHP for SSL, which the Readability API uses.
The next step of the solution would be to search the page for links and call this recursively in an intelligent way, to cover the related-pages part of the requirement.
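That recursive step could be sketched roughly as follows, using PHP's built-in DOMDocument to pull anchors out of the page. This is only a sketch of the crawling idea, not part of the original answer: the same-host filter, the depth limit, and the commented-out countWordsOnPage() hook are all my own assumptions.

```php
<?php
// Collect the hrefs of all <a> tags in $html that point at $baseHost,
// so the crawl stays on the same site.
function extractLinks($html, $baseHost)
{
    $links = array();
    $dom = new DOMDocument();
    // Suppress warnings from malformed real-world HTML
    @$dom->loadHTML($html);
    foreach ($dom->getElementsByTagName('a') as $anchor) {
        $href = $anchor->getAttribute('href');
        if (parse_url($href, PHP_URL_HOST) === $baseHost) {
            $links[] = $href;
        }
    }
    return array_values(array_unique($links));
}

// Visit $url, then recurse into its same-site links up to $depth levels,
// tracking $visited so no page is processed twice.
function crawl($url, &$visited, $depth)
{
    if ($depth <= 0 || isset($visited[$url])) {
        return;
    }
    $visited[$url] = true;
    $html = file_get_contents($url);
    // countWordsOnPage($url); // hypothetical: run the word count from above here
    foreach (extractLinks($html, parse_url($url, PHP_URL_HOST)) as $link) {
        crawl($link, $visited, $depth - 1);
    }
}
```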
Also, the code above just gives the raw data of a word count; you'd probably want to process it further to make it meaningful.
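For example, the raw counts could be normalised by lowercasing, stripping punctuation, and dropping common stop words before building the concordance. This is a sketch of my own; the cleanWordCounts() helper and the stop-word list are illustrative assumptions, not part of the original answer.

```php
<?php
// Take the raw $wordCounter array (word => count) and return a cleaned
// version: lowercased, letters only, stop words removed, sorted with the
// most frequent words first.
function cleanWordCounts($wordCounter)
{
    // A tiny illustrative stop-word list; a real one would be much longer
    $stopWords = array('the', 'a', 'an', 'and', 'or', 'of', 'to', 'in', 'is');
    $cleaned = array();
    foreach ($wordCounter as $word => $count) {
        // Normalise: lowercase, then keep only letters
        $word = preg_replace('/[^a-z]/', '', strtolower($word));
        if ($word === '' || in_array($word, $stopWords)) {
            continue;
        }
        // Merge counts of words that normalise to the same form
        $cleaned[$word] = isset($cleaned[$word]) ? $cleaned[$word] + $count : $count;
    }
    arsort($cleaned);
    return $cleaned;
}
```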