I want to serve roughly 2 TB over NFS and CIFS. I'm looking for a two-server (or more) solution for high availability, with the ability to load balance across the servers if possible. Any recommendations for clustering or high-availability solutions?
This is for business use, with plans to grow to 5-10 TB over the next few years. Our facility runs almost 24 hours a day, six days a week. We can tolerate 15-30 minutes of downtime, but we want to minimize data loss. I want to minimize 3 AM phone calls.
We're currently running one server with ZFS on Solaris, and we're considering AVS for the HA part, but we've hit some minor issues with Solaris (the CIFS implementation doesn't work with Vista, etc.).
We have started looking at alternatives, but essentially we're looking for a "black box" that serves up the data.
We currently snapshot the data in ZFS and send the snapshots over the wire to a remote datacenter for offsite backup.
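(For reference, the shape of that workflow is roughly as follows -- the pool, dataset, snapshot, and host names here are placeholders, not our real ones:)
# take a snapshot, then send it incrementally to the backup pool on the remote host
zfs snapshot tank/export@2009-06-01
zfs send -i tank/export@2009-05-31 tank/export@2009-06-01 | ssh backuphost zfs receive backup/export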
Our original plan was to have a second machine and rsync every 10 to 15 minutes. The problem on a failure is that in-progress production runs would lose 15 minutes of data and be left "in the middle". It would almost be easier for them to start over from the beginning than to figure out where to pick up mid-run. That's what drove us to look at HA solutions.
Answer 0 (score: 6)
I recently deployed hanfs using DRBD as the backend. In my situation I'm running active/standby, but I've also tested it successfully in primary/primary mode with OCFS2. Unfortunately there isn't much documentation on how best to do this, and most of what exists is barely useful. If you do go down the drbd route, I highly recommend joining the drbd mailing list and reading all of the documentation. Here's my ha/drbd setup and the script I wrote to handle failures:
DRBD8 is required -- it's provided by drbd8-utils and drbd8-source. Once those are installed (I believe they're provided by backports), you can use module-assistant to build and install it: m-a a-i drbd8. Then either depmod -a or reboot; if you depmod -a, you'll need to modprobe drbd.
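Roughly, the install steps described above look like this (Debian-style; exact package availability may differ on your release):
apt-get install drbd8-utils drbd8-source module-assistant
m-a a-i drbd8
depmod -a
modprobe drbd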
You'll need a backing partition for drbd. Do not make this partition an LVM volume, or you'll hit all sorts of problems. Do not put LVM on the drbd device either, or you'll hit all sorts of problems.
hanfs1's /etc/drbd.conf:
global {
usage-count no;
}
common {
protocol C;
disk { on-io-error detach; }
}
resource export {
syncer {
rate 125M;
}
on hanfs2 {
address 172.20.1.218:7789;
device /dev/drbd1;
disk /dev/sda3;
meta-disk internal;
}
on hanfs1 {
address 172.20.1.219:7789;
device /dev/drbd1;
disk /dev/sda3;
meta-disk internal;
}
}
hanfs2's /etc/drbd.conf:
global {
usage-count no;
}
common {
protocol C;
disk { on-io-error detach; }
}
resource export {
syncer {
rate 125M;
}
on hanfs2 {
address 172.20.1.218:7789;
device /dev/drbd1;
disk /dev/sda3;
meta-disk internal;
}
on hanfs1 {
address 172.20.1.219:7789;
device /dev/drbd1;
disk /dev/sda3;
meta-disk internal;
}
}
Once configured, we need to bring drbd up.
drbdadm create-md export
drbdadm attach export
drbdadm connect export
We must now perform an initial sync of the data -- obviously, if this is a brand new drbd cluster, it doesn't matter which node you pick.
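On a fresh cluster, the usual way to kick off that initial sync is to force one node to Primary (this is the approach described in the DRBD 8 user guide; check the syntax against your drbd version):
drbdadm -- --overwrite-data-of-peer primary export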
Once that's done, you'll need to run mkfs with your filesystem of choice on the drbd device -- the device in the config above is /dev/drbd1. http://www.drbd.org/users-guide/p-work.html is a useful document to read when working with drbd.
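For example (ext3 is just one choice; pick whatever suits you):
mkfs.ext3 /dev/drbd1      # run on the current primary only
mkdir -p /export          # the mount point must exist on both nodes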
Heartbeat
Install heartbeat2. (Very simple: apt-get install heartbeat2.)
/etc/ha.d/ha.cf on each machine should contain:
hanfs1:
logfacility local0
keepalive 2
warntime 10
deadtime 30
initdead 120
ucast eth1 172.20.1.218
auto_failback no
node hanfs1
node hanfs2
hanfs2:
logfacility local0
keepalive 2
warntime 10
deadtime 30
initdead 120
ucast eth1 172.20.1.219
auto_failback no
node hanfs1
node hanfs2
/etc/ha.d/haresources should be identical on both ha boxes:
hanfs1 IPaddr::172.20.1.230/24/eth1 hanfs1 HeartBeatWrapper
I wrote a wrapper script to handle the peculiarities caused by nfs and drbd in a failover scenario. This script should exist in /etc/ha.d/resource.d/ on each machine.
#!/bin/bash
#heartbeat fails hard.
#so this is a wrapper
#to get around that stupidity
#I'm just wrapping the heartbeat scripts, except for in the case of umount
#as they work, mostly
if [[ -e /tmp/heartbeatwrapper ]]; then
runningpid=$(cat /tmp/heartbeatwrapper)
if [[ -z $(ps --no-heading -p $runningpid) ]]; then
echo "PID found, but process seems dead. Continuing."
else
echo "PID found, process is alive, exiting."
exit 7
fi
fi
echo $$ > /tmp/heartbeatwrapper
if [[ x$1 == "xstop" ]]; then
/etc/init.d/nfs-kernel-server stop #>/dev/null 2>&1
#NFS init script isn't LSB compatible, exit codes are 0 no matter what happens.
#Thanks guys, you really make my day with this bullshit.
#Because of the above, we just have to hope that nfs actually catches the signal
#to exit, and manages to shut down its connections.
#If it doesn't, we'll kill it later, then term any other nfs stuff afterwards.
#I found this to be an interesting insight into just how badly NFS is written.
sleep 1
#we don't want to shutdown nfs first!
#The lock files might go away, which would be bad.
#The above seems to not matter much, the only thing I've determined
#is that if you have anything mounted synchronously, it's going to break
#no matter what I do. Basically, sync == screwed; in NFSv3 terms.
#End result of failing over while a client that's synchronous is that
#the client hangs waiting for its nfs server to come back - thing doesn't
#even bother to time out, or attempt a reconnect.
#async works as expected - it insta-reconnects as soon as a connection seems
#to be unstable, and continues to write data. In all tests, md5sums have
#remained the same with/without failover during transfer.
#So, we first unmount /export - this prevents drbd from having a shit-fit
#when we attempt to turn this node secondary.
#That's a lie too, to some degree. LVM is entirely to blame for why DRBD
#was refusing to unmount. Don't get me wrong, having /export mounted doesn't
#help either, but still.
#fix a usecase where one or other are unmounted already, which causes us to terminate early.
if [[ "$(grep -o /varlibnfs/rpc_pipefs /etc/mtab)" ]]; then
for ((test=1; test <= 10; test++)); do
umount /export/varlibnfs/rpc_pipefs >/dev/null 2>&1
if [[ -z $(grep -o /varlibnfs/rpc_pipefs /etc/mtab) ]]; then
break
fi
if [[ $? -ne 0 ]]; then
#try again, harder this time
umount -l /var/lib/nfs/rpc_pipefs >/dev/null 2>&1
if [[ -z $(grep -o /varlibnfs/rpc_pipefs /etc/mtab) ]]; then
break
fi
fi
done
if [[ $test -gt 10 ]]; then
rm -f /tmp/heartbeatwrapper
echo "Problem unmounting rpc_pipefs"
exit 1
fi
fi
if [[ "$(grep -o /dev/drbd1 /etc/mtab)" ]]; then
for ((test=1; test <= 10; test++)); do
umount /export >/dev/null 2>&1
if [[ -z $(grep -o /dev/drbd1 /etc/mtab) ]]; then
break
fi
if [[ $? -ne 0 ]]; then
#try again, harder this time
umount -l /export >/dev/null 2>&1
if [[ -z $(grep -o /dev/drbd1 /etc/mtab) ]]; then
break
fi
fi
done
if [[ $test -gt 10 ]]; then
rm -f /tmp/heartbeatwrapper
echo "Problem unmounting /export"
exit 1
fi
fi
#now, it's important that we shut down nfs. it can't write to /export anymore, so that's fine.
#if we leave it running at this point, then drbd will screwup when trying to go to secondary.
#See contradictory comment above for why this doesn't matter anymore. These comments are left in
#entirely to remind me of the pain this caused me to resolve. A bit like why churches have Jesus
#nailed onto a cross instead of chilling in a hammock.
pidof nfsd | xargs kill -9 >/dev/null 2>&1
sleep 1
if [[ -n $(ps aux | grep nfs | grep -v grep) ]]; then
echo "nfs still running, trying to kill again"
pidof nfsd | xargs kill -9 >/dev/null 2>&1
fi
sleep 1
/etc/init.d/nfs-kernel-server stop #>/dev/null 2>&1
sleep 1
#next we need to tear down drbd - easy with the heartbeat scripts
#it takes input as resourcename start|stop|status
#First, we'll check to see if it's stopped
/etc/ha.d/resource.d/drbddisk export status >/dev/null 2>&1
if [[ $? -eq 2 ]]; then
echo "resource is already stopped for some reason..."
else
for ((i=1; i <= 10; i++)); do
/etc/ha.d/resource.d/drbddisk export stop >/dev/null 2>&1
if [[ $(egrep -o "st:[A-Za-z/]*" /proc/drbd | cut -d: -f2) == "Secondary/Secondary" ]] || [[ $(egrep -o "st:[A-Za-z/]*" /proc/drbd | cut -d: -f2) == "Secondary/Unknown" ]]; then
echo "Successfully stopped DRBD"
break
else
echo "Failed to stop drbd for some reason"
cat /proc/drbd
if [[ $i -eq 10 ]]; then
exit 50
fi
fi
done
fi
rm -f /tmp/heartbeatwrapper
exit 0
elif [[ x$1 == "xstart" ]]; then
#start up drbd first
/etc/ha.d/resource.d/drbddisk export start >/dev/null 2>&1
if [[ $? -ne 0 ]]; then
echo "Something seems to have broken. Let's check possibilities..."
testvar=$(egrep -o "st:[A-Za-z/]*" /proc/drbd | cut -d: -f2)
if [[ $testvar == "Primary/Unknown" ]] || [[ $testvar == "Primary/Secondary" ]]
then
echo "All is fine, we are already the Primary for some reason"
elif [[ $testvar == "Secondary/Unknown" ]] || [[ $testvar == "Secondary/Secondary" ]]
then
echo "Trying to assume Primary again"
/etc/ha.d/resource.d/drbddisk export start >/dev/null 2>&1
if [[ $? -ne 0 ]]; then
echo "I give up, something's seriously broken here, and I can't help you to fix it."
rm -f /tmp/heartbeatwrapper
exit 127
fi
fi
fi
sleep 1
#now we remount our partitions
for ((test=1; test <= 10; test++)); do
mount /dev/drbd1 /export >/tmp/mountoutput
if [[ -n $(grep -o export /etc/mtab) ]]; then
break
fi
done
if [[ $test -gt 10 ]]; then
rm -f /tmp/heartbeatwrapper
exit 125
fi
#I'm really unsure at this point of the side-effects of not having rpc_pipefs mounted.
#The issue here, is that it cannot be mounted without nfs running, and we don't really want to start
#nfs up at this point, lest it ruin everything.
#For now, I'm leaving mine unmounted, it doesn't seem to cause any problems.
#Now we start up nfs.
/etc/init.d/nfs-kernel-server start >/dev/null 2>&1
if [[ $? -ne 0 ]]; then
echo "There's not really that much that I can do to debug nfs issues."
echo "probably your configuration is broken. I'm terminating here."
rm -f /tmp/heartbeatwrapper
exit 129
fi
#And that's it, done.
rm -f /tmp/heartbeatwrapper
exit 0
elif [[ "x$1" == "xstatus" ]]; then
#Lets check to make sure nothing is broken.
#DRBD first
/etc/ha.d/resource.d/drbddisk export status >/dev/null 2>&1
if [[ $? -ne 0 ]]; then
echo "stopped"
rm -f /tmp/heartbeatwrapper
exit 3
fi
#mounted?
grep -q drbd /etc/mtab >/dev/null 2>&1
if [[ $? -ne 0 ]]; then
echo "stopped"
rm -f /tmp/heartbeatwrapper
exit 3
fi
#nfs running?
/etc/init.d/nfs-kernel-server status >/dev/null 2>&1
if [[ $? -ne 0 ]]; then
echo "stopped"
rm -f /tmp/heartbeatwrapper
exit 3
fi
echo "running"
rm -f /tmp/heartbeatwrapper
exit 0
fi
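One small installation note (assuming you save the script under the name used in haresources above): it needs to be executable on both nodes:
chmod +x /etc/ha.d/resource.d/HeartBeatWrapper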
With all of the above done, you then just need to configure /etc/exports:
/export 172.20.1.0/255.255.255.0(rw,sync,fsid=1,no_root_squash)
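If nfs is already running while you edit /etc/exports, you can re-read it with exportfs; during failover the wrapper's nfs-kernel-server restart takes care of this anyway:
exportfs -ra      # re-export everything in /etc/exports
exportfs -v       # list what is currently exported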
Then it's just a case of starting heartbeat on both machines and issuing hb_takeover on one of them. You can test that it's working by making sure the node you issued the takeover on is Primary -- check /proc/drbd, that the device is mounted correctly, and that you can access nfs.
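A rough checklist for that test, run from the node you took over to (the hb_takeover path below is my best guess -- it lives in heartbeat's own directory, which varies by distribution):
/usr/share/heartbeat/hb_takeover    # or /usr/lib/heartbeat/hb_takeover on some distros
cat /proc/drbd                      # this node should show Primary
grep drbd1 /etc/mtab                # /dev/drbd1 should be mounted on /export
showmount -e 172.20.1.230           # the export should be visible on the floating IP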
-
Good luck. Setting this up from scratch was an extremely painful experience for me.
Answer 1 (score: 3)
2TB fits on a single machine these days, so you have options ranging from the simple to the complex. These all assume Linux servers:
There are also plenty of commercial solutions, but 2TB is a bit small for most of them.
You haven't mentioned your application, but if hot failover isn't necessary and all you really want is something that can tolerate losing a disk or two, find a NAS that supports RAID-5 with at least 4 drives and hot swap, and you should be good to go.
Answer 2 (score: 1)
I would recommend NAS storage (Network Attached Storage).
HP has some good options.
http://h18006.www1.hp.com/storage/aiostorage.html
as well as the clustered version:
http://h18006.www1.hp.com/storage/software/clusteredfs/index.html?jumpid=reg_R1002_USEN
Answer 3 (score: 0)
Are you looking for an "enterprise" solution or a "home" solution? It's hard to tell from your question, because 2TB is quite small for an enterprise yet fairly high-end for a home user (especially with two servers). Can you clarify the requirements so we can discuss the trade-offs?
Answer 4 (score: 0)
There are two ways to approach this. The first is to buy a SAN or NAS from Dell or HP and throw money at the problem. Modern storage hardware just makes this easy, saving your expertise for more core problems.
If you want to roll your own, take a look at using Linux with DRBD.
DRBD lets you create a networked block device -- think RAID 1 across two servers instead of across two disks. DRBD deployments are usually done with Heartbeat for failover in case one system dies.
I'm not sure about load balancing, but you could investigate whether LVS can be used to load balance across your DRBD hosts:
http://www.linuxvirtualserver.org/
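If you do experiment with LVS, an ipvsadm service definition looks roughly like this (a sketch only -- the VIP and real-server addresses are made up, and NFS needs more care than shown here: portmapper, lockd/statd ports, UDP, and persistence all matter):
ipvsadm -A -t 192.168.0.100:2049 -s rr                 # virtual NFS service on the VIP, round-robin
ipvsadm -a -t 192.168.0.100:2049 -r 192.168.0.11 -g    # first real server, direct routing
ipvsadm -a -t 192.168.0.100:2049 -r 192.168.0.12 -g    # second real server, direct routing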
Finally, let me reiterate that in the long run you'll probably save yourself a lot of time by just spending the money on a NAS.
Answer 5 (score: 0)
I'm assuming from the body of your question that you're a business/commercial user? I purchased a 6TB RAID 5 unit from Silicon Mechanics, had it attached as NAS, and my engineers mounted it over NFS on our servers. Backups are performed via rsync to another large-capacity NAS.
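Something along these lines, run from cron (the paths and hostname are placeholders for illustration):
rsync -az --delete /mnt/nas/ backup-nas:/backups/nas/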
Answer 6 (score: 0)
Check out Amazon Simple Storage Service (Amazon S3)
-- it may make sense here. Regarding high availability:
Dear AWS Customer:
Many of you have asked us to give you advance notice of features and services that are currently under development so that you can better plan for how that functionality might integrate with your applications. To that end, we are excited to share some early details with you about a new offering we are developing here at AWS -- a content delivery service.
This new service will provide you with a high-performance method of distributing content to end users, giving your customers low latency and high data transfer rates when they access your objects. The initial release will help developers and businesses who need to deliver popular, publicly readable content over HTTP connections. Our goal is to create a content delivery service that:
Is easy for developers and businesses to get started with -- there are no minimum fees and no commitments; you only pay for what you actually use. Is simple and easy to use -- a single, simple API call is all it takes to start delivering your content. Works seamlessly with Amazon S3 -- which gives you durable storage for the original, definitive versions of your files while making the content delivery service easier to use. Has a global presence -- we use a global network of edge locations on three continents to deliver your content from the most appropriate location.
You'll start by storing the original version of your objects in Amazon S3, making sure they are publicly readable. Then, you'll make a simple API call to register your bucket with the new content delivery service. This call will return a new domain name for you to include in your web pages or application. When clients request an object using this domain name, they will be automatically routed to the nearest edge location for high-performance delivery of your content. It's that simple.
We're currently working with a small group of private beta customers, and expect to have this service widely available before the end of the year. If you'd like to be notified when we launch, please let us know by clicking here.
Sincerely,
The Amazon Web Services Team
Answer 7 (score: 0)
Your best bet may be to work with folks who do this for a living. These guys are actually in our office complex... I've had a chance to work with them on a similar project that I led.
Answer 8 (score: 0)
I suggest you visit the F5 site and check out http://www.f5.com/solutions/virtualization/file/
Answer 9 (score: 0)
You might look into a mirror file system. It does file replication at the file system level, so the same files on both the primary and backup systems are live files.