On Thu, 3 May 2001, Roberto Nibali wrote:
>
> rsync --archive --update --hard-links --delete-after --force -e "ssh -p
> $sshport"
>
> This works pretty fast and accurately.
>
> > ssh is slow in the local network:
> > for a 9.1 Mb gzipped file I measure 7 seconds.
>
> Hmm, what hardware are you using (the two nodes, the NICs, the kernel
> and the switch)?
>
I am now transferring my 10 GB, and I am measuring about
1.5 MB/s transfer speed per CPU.
I use rsync with underlying ssh protocol.
The bottleneck is the CPU of my source machine,
an HP J2240 with PA-RISC CPUs at 236 MHz.
Each ssh process takes about 75% of one CPU; since it is a dual-CPU box,
I can run two rsync processes at once.
The target is a dual-CPU 933 MHz Linux box, and there I see only about
25% CPU load per ssh process.
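The two-process setup above can be sketched roughly as follows. This is only a sketch: the paths, target host name and port are hypothetical placeholders, and the commands are echoed rather than executed so it is safe to run as-is.

```shell
#!/bin/sh
# Hypothetical sketch: run two rsync-over-ssh transfers in parallel,
# one per CPU, each covering half of the source tree.
SSHPORT=22                      # assumed port; stands in for $sshport above

sync_half () {
    # echo the command line instead of executing it, so the sketch
    # can run without a real target host
    echo rsync --archive --update --hard-links --delete-after --force \
        -e "ssh -p $SSHPORT" "$1" "target:/data/"
}

sync_half /data/half1 &         # first ssh process, first CPU
sync_half /data/half2 &         # second ssh process, second CPU
wait                            # block until both transfers finish
```

Splitting by directory only balances the two CPUs if the halves are of roughly comparable size.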
On the director in between (which does NAT), I see no CPU load at all.
Almost all the work seems to be the encrypting and decrypting of the ssh
packets.
So rcp would still help, but it is not worth the bother; over the
lunch break all 10 GB of server data will be copied via ssh anyway.
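Before falling back to unencrypted rcp, one knob worth trying is a cheaper ssh cipher: with encryption as the bottleneck, blowfish was usually lighter on the CPU than the 3des default of that era. A minimal sketch, with made-up host, paths and port, and with the command echoed rather than run:

```shell
#!/bin/sh
# Hypothetical sketch: the same rsync call, but asking ssh for a
# cheaper cipher to cut the per-packet encryption cost on the slow CPU.
sshport=22      # assumed value; stands in for the $sshport quoted above

cmd="rsync --archive --update --hard-links --delete-after --force \
 -e \"ssh -c blowfish -p $sshport\" /data/ target:/data/"
echo "$cmd"     # echoed instead of executed, so no real host is needed
```

Which ciphers are actually available depends on the ssh version on both ends.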
Later, for maintenance of the server, rsync will of course have little
work to do, since it is really a nice tool, sending only the diffs in a
really intelligent way. I have loved rsync ever since I discovered it.
NFS is tricky: I can mount external volumes on the director, and I can
mount director volumes on the realservers, but I cannot mount through.
That may be because my external NFS server is an HP-UX box which does
not have the 'nohide' option I have seen on Linux NFS servers, which
should allow re-export of NFS-mounted volumes.
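For reference, a Linux /etc/exports entry using that option might look like the fragment below (host names and paths are invented). 'nohide' makes a filesystem mounted below an exported directory visible to clients without a separate mount; whether re-exporting an NFS-mounted volume actually works still depends on the server implementation.

```
# /etc/exports on a Linux NFS server (hypothetical paths and hosts)
/export          realserver1(rw,sync)
# child mount under /export, made visible to clients of the parent export
/export/mounted  realserver1(rw,sync,nohide)
```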
Alois