I am developing a piece of software in C# that downloads websites, but I am running into trouble copying a folder from the server to a local directory. I am using the following code for this purpose:
public static void CopyFilesRecursively(DirectoryInfo source, DirectoryInfo target)
{
    try
    {
        // Recurse into each subdirectory, mirroring it under the target.
        foreach (DirectoryInfo dir in source.GetDirectories())
            CopyFilesRecursively(dir, target.CreateSubdirectory(dir.Name));

        // Copy every file in the current directory to the target.
        foreach (FileInfo file in source.GetFiles())
            file.CopyTo(Path.Combine(target.FullName, file.Name));
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message, "Form2", MessageBoxButtons.OK, MessageBoxIcon.Error);
    }
}
The function call is:
private void button4_Click(object sender, EventArgs e)
{
    try
    {
        CopyFilesRecursively(new DirectoryInfo(@"https:facebook.com"), new DirectoryInfo(@"G:\Projects\"));
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message, "Form2", MessageBoxButtons.OK, MessageBoxIcon.Error);
    }
}
The message box shows "The given path's format is not supported."
Answer (score: 0)
As we know, websites hosted on the internet expose their files and folders through virtual paths (which are more readable and more secure); the actual files and folders sit on the server behind those virtual paths. DirectoryInfo only understands local or UNC filesystem paths, which is why passing a URL to it throws "The given path's format is not supported." So to copy a file or folder from a remote server, we either need the actual path of the resource on the server, or we have to download it over HTTP.
The snippet below is what I use to download a file from a server I deployed myself (so I obviously know its directory structure):
string filename = "MyPage.xml";
// Server.MapPath resolves the virtual path "~/MyFiles/" to the physical path on this server,
// so this only works inside an ASP.NET application running on that server.
string filesource = Server.MapPath("~/MyFiles/") + filename;
System.IO.FileInfo fi = new System.IO.FileInfo(filesource);
string filedest = System.IO.Path.Combine(System.IO.Path.GetTempPath(), "MyFile.xml");
fi.CopyTo(filedest);
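Server.MapPath is only available server-side, though; for a remote site you do not control (like facebook.com in the question), the content has to be fetched over HTTP. Here is a minimal sketch using HttpClient; the URL and the destination path are placeholders I made up, not values from the question:

using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

class Downloader
{
    static async Task Main()
    {
        using (HttpClient client = new HttpClient())
        {
            // Fetch a single resource over HTTP and write the raw bytes to a local file.
            byte[] data = await client.GetByteArrayAsync("https://example.com/index.html");
            File.WriteAllBytes(@"G:\Projects\index.html", data);
        }
    }
}

HTTP has no notion of a directory listing, so downloading a whole "folder" means crawling the links on each downloaded page; the posts below on archiving an entire site cover that.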
These are other SO posts you can look at:
How to download a file from a URL in C#?
Copy image file from web url to local folder?
how to copy contents of a website using a .net desktop application
how to copy all text from a certain webpage and save it to notepad C#
How do you archive an entire website for offline viewing?