I'm writing a script to build sitemaps for an e-commerce site with roughly 700k products. The e-commerce package has a built-in sitemap generator, but it doesn't produce the format needed for a large product catalog (an index file that links out to gzipped 50k-URL sitemap files).
I'm running into memory-management problems while running the script; it seems like the script outruns PHP's garbage collection. Here's what I have:
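For reference, the output I'm aiming for is a standard sitemaps.org index file that just points at the gzipped 50k-URL sitemap files, roughly like this (the file names here are only placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>http://www.example.com/sitemap-products-1.xml.gz</loc>
  </sitemap>
  <sitemap>
    <loc>http://www.example.com/sitemap-products-2.xml.gz</loc>
  </sitemap>
  <!-- ...one entry per 50k-url gzip file... -->
</sitemapindex>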
function buildSitemapFile()
{
    // prep sitemap index string
    $content = "sitemap header info";

    $max_entries_per_sitemap = 50000;
    $current_batch = 0;

    while (buildSitemapZip($current_batch, $content, $max_entries_per_sitemap))
    {
        $current_batch++;
    }

    // finish sitemap index string, open/create file, write to it, close file
}

function buildSitemapZip($batch, &$main_content, $max_urls = 50000)
{
    $file_name = *create file path string*

    // add this file to the index file's content string
    $main_content .= *this sitemap info*

    // open and prep this batch's sitemap file
    $handle = fopen($file_name, "w");
    $content = *sitemap header info*

    require_once('path_to_Product.class.php');
    $product = new Product();

    $group_size = 1000;

    // assume another batch will be needed until the end of the product list is reached
    $return_val = true;

    // on the first call, $group_start is 0 (0 * 50000), the loop runs until it reaches
    // 50000 ((0 + 1) * 50000), and $group_start increments by $group_size on each pass
    for ($group_start = $batch * $max_urls; $group_start < ($batch + 1) * $max_urls; $group_start += $group_size)
    {
        // request the next group of products; the query ends with "order by id limit $group_start, $group_size"
        $prods = $product->query_db_for_products($group_start, $group_size);

        // loop over the product set, adding urls to the content string
        foreach ($prods as $key => $prod)
        {
            $content .= *get info from product object to fill out the sitemap content string*
        }

        if (count($prods) != $group_size)
        {
            // end of list reached, tell the caller to stop looping
            $return_val = false;
            break;
        }
    }

    // complete sitemap string, write it to file, close file
    // create and open gzip file, write to it, close it, erase temp sitemap file

    return $return_val;
}
When I run the script with a smaller $group_size value, memory doesn't seem to be a problem, but it's very slow and starts bogging down the database (because of the repeated queries).
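I suspect part of the slowness is the OFFSET-style paging itself: "order by id limit $group_start, $group_size" makes the database walk past $group_start rows on every query, which gets worse as the offset grows. A keyset-style query would avoid that; this is only a sketch, and query_db_for_products_after() and build_url_entry() are hypothetical stand-ins for the real methods:

// hypothetical keyset paging: track the last id seen instead of an ever-growing offset
$last_id = 0;
$group_size = 1000;
do
{
    // assumed SQL behind this call: "... where id > $last_id order by id limit $group_size"
    $prods = $product->query_db_for_products_after($last_id, $group_size);
    foreach ($prods as $prod)
    {
        $content .= build_url_entry($prod); // hypothetical helper for one url entry
        $last_id = $prod->id;               // assumes the product object exposes its id
    }
} while (count($prods) == $group_size);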
The ecommerce package doesn't have __construct or __destruct methods on its objects.
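Since there are no destructors to rely on, I assume I'd have to free things by hand between groups; a minimal sketch of what I mean, inside the for loop (gc_collect_cycles() needs PHP 5.3+):

// after each group's urls have been appended to $content
unset($prods);        // drop the product array explicitly
gc_collect_cycles();  // ask PHP to collect any circular references (PHP 5.3+)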
The last time I ran a test, $group_size was set to 10k (it should have been 1k...) and memory usage got out of control. I stopped the script, but memory usage (according to the top shell command) kept climbing and still hadn't been released several hours later.
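To see whether it's the product objects or the ever-growing $content string, something like the following (just a sketch using PHP's built-in memory functions) would at least show how memory grows per group:

// rough instrumentation, printed after each group inside the for loop
printf(
    "batch %d, group %d: current=%s bytes, peak=%s bytes\n",
    $batch,
    $group_start,
    number_format(memory_get_usage(true)),
    number_format(memory_get_peak_usage(true))
);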
Any ideas about what in the script might be causing the problem?