Re: [PATCH] IPVS sync endianess fixed

To: "LinuxVirtualServer.org users mailing list." <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
Subject: Re: [PATCH] IPVS sync endianess fixed
From: Wensong Zhang <wensong@xxxxxxxxxxxx>
Date: Sun, 3 Oct 2004 01:51:47 +0800 (CST)

Hi,

Thanks a lot for the patch.

I tidied up your patch a little bit to make the code more readable.
Please check the attached patch and test it.

Thanks,

Wensong


On Thu, 30 Sep 2004, Justin Ossevoort wrote:

> Hello,
> 
>   There was a small bug in the ip_vs_sync.c code that made it impossible
> for two servers of different endianness to sync with each other (e.g. a
> SPARC (big-endian) and an i386 (little-endian) based system). The problem
> was in the message size: all other data is correctly converted to network
> byte order, but this field is not (probably because the size is used from
> the moment the data is gathered until the moment it is sent).
>   This caused "IPVS: bogus message" messages in my dmesg.
> 
>   This patch fixes the problem by converting m->size to network byte
> order at the last moment before sending, and changing it back to host
> order right before the message is processed. The patch is made against
> Linux kernel version 2.6.8.1.
> 
> --[ Patch: Start Snip ]--
> 
> --- linux-2.6.8.1/net/ipv4/ipvs/ip_vs_sync.c    2004-08-14
> 12:54:46.000000000 +0200
> +++ linux-2.6.8.1-fix/net/ipv4/ipvs/ip_vs_sync.c    2004-09-30
> 11:54:53.000000000 +0200
> @@ -16,6 +16,7 @@
>    *    Alexandre Cassen    :    Added master & backup support at a time.
>    *    Alexandre Cassen    :    Added SyncID support for incoming sync
>    *                    messages filtering.
> + *    Justin Ossevoort    :    Fix endian problem on sync message size.
>    */
> 
>   #include <linux/module.h>
> @@ -279,6 +280,9 @@
>       char *p;
>       int i;
> 
> +    /* Convert size back to host byte order */
> +    m->size = ntohs(m->size);
> +
>       if (buflen != m->size) {
>           IP_VS_ERR("bogus message\n");
>           return;
> @@ -569,6 +573,23 @@
>       return len;
>   }
> 
> +static void
> +ip_vs_send_sync_msg(struct socket *sock, struct ip_vs_sync_buff *sb)
> +{
> +    int msize;
> +    struct ip_vs_sync_mesg *m;
> +
> +    m = sb->mesg;
> +    msize = m->size;
> +
> +    /* Put size in network byte order */
> +    m->size = htons(m->size);
> +
> +    if (ip_vs_send_async(sock, (char *)m, msize) != msize)
> +        IP_VS_ERR("ip_vs_send_async error\n");
> +
> +    ip_vs_sync_buff_release(sb);
> +}
> 
>   static int
>   ip_vs_receive(struct socket *sock, char *buffer, const size_t buflen)
> @@ -605,7 +626,6 @@
>   {
>       struct socket *sock;
>       struct ip_vs_sync_buff *sb;
> -    struct ip_vs_sync_mesg *m;
> 
>       /* create the sending multicast socket */
>       sock = make_send_sock();
> @@ -618,20 +638,12 @@
> 
>       for (;;) {
>           while ((sb=sb_dequeue())) {
> -            m = sb->mesg;
> -            if (ip_vs_send_async(sock, (char *)m,
> -                         m->size) != m->size)
> -                IP_VS_ERR("ip_vs_send_async error\n");
> -            ip_vs_sync_buff_release(sb);
> +            ip_vs_send_sync_msg(sock, sb);
>           }
> 
>           /* check if entries stay in curr_sb for 2 seconds */
>           if ((sb = get_curr_sync_buff(2*HZ))) {
> -            m = sb->mesg;
> -            if (ip_vs_send_async(sock, (char *)m,
> -                         m->size) != m->size)
> -                IP_VS_ERR("ip_vs_send_async error\n");
> -            ip_vs_sync_buff_release(sb);
> +            ip_vs_send_sync_msg(sock, sb);
>           }
> 
>           if (stop_master_sync)
> 
> --[ Patch: End Snip ]--
> 
> Special credit goes to Byte Internetdiensten (my current employer) for
> supplying the testbed that triggered this bug and for sponsoring the time
> to fix it.
> 
> ps.
> Congratulations on the great work; the code is good and the
> functionality great :)
> 
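
For readers less familiar with the byte-order issue described above, below is a
minimal, stand-alone user-space sketch. It is not part of either patch; the
value 312 and the variable names are made up for illustration. It shows why a
16-bit size field written in the sender's host byte order is misread by a
receiver of the opposite endianness, and how the htons()/ntohs() pair used in
the patch round-trips the value correctly:

/* Illustrative user-space sketch only; not kernel code and not part of
 * the patch above.  It mimics what happens to a 16-bit "size" field
 * when the host/network byte-order conversion is skipped. */
#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>          /* htons(), ntohs() */

int main(void)
{
    uint16_t size = 312;        /* 0x0138 in the sender's host order */

    /* Without conversion, a little-endian sender emits the bytes
     * 0x38 0x01; a big-endian receiver reads them as 0x3801 = 14337,
     * so its "buflen != m->size" check fails -> "bogus message". */

    uint16_t on_wire   = htons(size);    /* sender: host -> network order   */
    uint16_t at_target = ntohs(on_wire); /* receiver: network -> host order */

    printf("host value    : %u\n",     (unsigned)size);
    printf("on the wire   : 0x%04x\n", (unsigned)on_wire);
    printf("after ntohs() : %u\n",     (unsigned)at_target); /* 312 again */
    return 0;
}

In other words, an i386 (little-endian) master syncing to a SPARC (big-endian)
backup would announce a size of 312 but the backup would read 14337, which is
exactly the mismatch that produced the "IPVS: bogus message" lines in dmesg.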

Attachment: linux-2.6-ipvs_sync-endian.diff
Description: Text document
