StreamWriter stops working after the first iteration of a for loop

Asked: 2012-05-11 16:35:59

Tags: c# html-parsing streamwriter

I'm having a problem with StreamWriter while writing a scraper for my current project. The loop I've written is below.

I've debugged all the variables going into the loop and everything is set as it should be. When I pass in a URL and scrape a range based on the ID GET variable in the URL, it fails to write the second sourceCode string.

Could someone kindly tell me whether I'm failing to flush something, or whether something else is at work here?

I've tried to track down the root cause, but it's proving very stubborn.

using System;
using System.IO;
using System.Windows.Forms;

namespace Scraper
{
    public partial class Form1 : Form
    {
        Scraper scraper = new Scraper();
        private StreamWriter sw;

        public Form1()
        {
            InitializeComponent();
        }

        private void button1_Click(object sender, EventArgs e)
        {
            string url = textBox1.Text;
            string[] urlBits = url.Split('.');
            string[] domain = urlBits[2].Split('/');

            // Build an output folder on the desktop from the domain parts of the URL
            string filepath = @"C:\Users\Herbaldinho\Desktop\" + urlBits[1] + "-" + domain[0];
            string parentPath = @"C:\Users\Herbaldinho\Desktop\";
            string newPath = Path.Combine(parentPath, filepath);

            if (!Directory.Exists(newPath))
            {
                Directory.CreateDirectory(newPath);
            }

            // Create a dated sub-folder for today's scrape
            DateTime today = DateTime.Today;
            string curDate = String.Format("{0:ddd-MMM-dd-yyyy}", today);
            string subPath = newPath + "\\" + curDate;
            string newSubPath = Path.Combine(newPath, subPath);

            if (!Directory.Exists(newSubPath))
            {
                Directory.CreateDirectory(newSubPath);
            }

            // Parse the ID range to scrape (the TryParse return values are not checked)
            string lower = textBox2.Text;
            int lowerValue;
            int.TryParse(lower, out lowerValue);

            string upper = textBox3.Text;
            int upperValue;
            int.TryParse(upper, out upperValue);

            for (int i = lowerValue; i < upperValue; i++)
            {
                string filename = newSubPath + "\\Advert-" + i + ".html";
                string adPage = url + i;
                // Only download and save pages whose URL responds to the HEAD check
                bool write = scraper.UrlExists(adPage);
                if (write)
                {
                    string sourceCode = scraper.getSourceCode(adPage);
                    using (sw = new StreamWriter(filename))
                    {
                        sw.Write(sourceCode);
                    }
                }
            }
            MessageBox.Show("Scrape Complete");

        }
    }
}

#### This is the Scraper object
using System.Net;

namespace Scraper
{
    class Scraper
    {
        WebClient w = new WebClient();

        public bool UrlExists(string url)
        {
            try
            {
                // Issue a HEAD request and treat any non-OK status or exception as "does not exist"
                HttpWebRequest request = WebRequest.Create(url) as HttpWebRequest;
                request.Method = "HEAD";
                HttpWebResponse response = request.GetResponse() as HttpWebResponse;
                return (response.StatusCode == HttpStatusCode.OK);
            }
            catch
            {
                return false;
            }
        }

        public string getSourceCode(string url)
        {
            // Download the raw HTML of the page
            string s = w.DownloadString(url);
            return s;
        }
    }
}

1 Answer:

Answer 0 (score: 0):

Found the answer to the problem this morning. For anyone else who runs into a similar issue: the try/catch logic in the UrlExists method needs to close the response (response.Close()). I had understood that it was closed automatically, but that turned out not to be the case. Hope this helps.
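For reference, a minimal sketch of UrlExists with that change applied, using a using block (which disposes, and therefore closes, the response even if an exception is thrown) instead of an explicit response.Close() call:

public bool UrlExists(string url)
{
    try
    {
        HttpWebRequest request = WebRequest.Create(url) as HttpWebRequest;
        request.Method = "HEAD";

        // Dispose the response so the underlying connection is released;
        // otherwise later requests to the same host can block or time out
        // once the per-host connection limit is reached.
        using (HttpWebResponse response = request.GetResponse() as HttpWebResponse)
        {
            return response.StatusCode == HttpStatusCode.OK;
        }
    }
    catch
    {
        return false;
    }
}

Disposing the HttpWebResponse releases the connection back to the pool, which is what the explicit response.Close() in the fix above does as well.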

Many thanks to everyone who responded and helped me work through this.
