Most of the time I am testing from one client server only, and while testing from one server I get around 155 TPS on the cluster.
When I test from two client servers, the TPS gets split: roughly 75 TPS from one server and 80 TPS from the other.
In the actual setup, JBoss (port 8080) is running on the real servers, and the web services are deployed on those JBoss instances. The HTTP connector is configured as follows:
<Connector port="8080" address="${jboss.bind.address}"
           maxThreads="250" maxHttpHeaderSize="8192"
           emptySessionPath="true" protocol="HTTP/1.1"
           enableLookups="false" redirectPort="8443" acceptCount="100"
           connectionTimeout="20000" disableUploadTimeout="true" />
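For context, the LVS side of the setup is not shown above; a minimal sketch of what the Director configuration might look like is below. The forwarding method (-g, LVS-DR), the wlc scheduler and the second real server's address are assumptions, and the VIP is simply the address the Director name resolves to in the ab output further down.

# Hypothetical Director configuration (sketch only, not taken from this thread)
ipvsadm -A -t 192.168.16.183:8080 -s wlc                            # virtual service on the VIP
ipvsadm -a -t 192.168.16.183:8080 -r 192.168.16.176:8080 -g -w 1    # real1
ipvsadm -a -t 192.168.16.183:8080 -r <real2-ip>:8080 -g -w 1        # second real server (placeholder address)
ipvsadm -L -n                                                       # list the table and check connection counters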
When I run the ab test against one real server directly, I get a TPS of about 150.
Below is the ab test result for the real server directly.
[root@om-01 tmp]$ ab -n1000 -c100 -p i.xml 'http://real1:8080/chargingManager/services/ChargingService'
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Finished 1000 requests
Server Software: Apache-Coyote/1.1
Server Hostname: 192.168.16.176
Server Port: 8080
Document Path: /chargingManager/services/ChargingService
Document Length: 511 bytes
Concurrency Level: 100
Time taken for tests: 6.613239 seconds
Complete requests: 1000
Failed requests: 0
Write errors: 0
Total transferred: 894000 bytes
Total POSTed: 825825
HTML transferred: 511000 bytes
Requests per second: 151.21 [#/sec] (mean)
Time per request: 661.324 [ms] (mean)
Time per request: 6.613 [ms] (mean, across all concurrent requests)
Transfer rate:       132.01 [Kbytes/sec] received
                     121.95 kb/s sent
                     253.96 kb/s total
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0    2.4      0     13
Processing:   222  642  302.7    551   1769
Waiting:      221  641  302.7    550   1769
Total:        222  643  304.4    551   1771
Percentage of the requests served within a certain time (ms)
50% 551
66% 596
75% 627
80% 657
90% 1017
95% 1597
98% 1618
99% 1691
100% 1771 (longest request)
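As a rough sanity check on this result, the throughput is essentially the concurrency divided by the mean time per request, so ~661 ms per request at 100 concurrent connections works out to roughly the 151 req/s reported above:

# Throughput ≈ concurrency / mean time per request
echo 'scale=1; 100 / 0.661' | bc    # ≈ 151 req/s, matching the ab figure above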
Below is the ab test result for the Director.
[root@om-01 tmp]$ ab -n1000 -c300 -p i.xml 'http://Director:8080/chargingManager/services/ChargingService'
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Finished 1000 requests
Server Software: Apache-Coyote/1.1
Server Hostname: 192.168.16.183
Server Port: 8080
Document Path: /chargingManager/services/ChargingService
Document Length: 511 bytes
Concurrency Level: 300
Time taken for tests: 6.46081 seconds
Complete requests: 1000
Failed requests: 0
Write errors: 0
Total transferred: 894892 bytes
Total POSTed: 826650
HTML transferred: 511509 bytes
Requests per second: 165.40 [#/sec] (mean)
Time per request: 1813.824 [ms] (mean)
Time per request: 6.046 [ms] (mean, across all concurrent requests)
Transfer rate:       144.39 [Kbytes/sec] received
                     133.52 kb/s sent
                     278.06 kb/s total
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0   11   95.7      1   2999
Processing:   359 1706  731.1   1579   3464
Waiting:      359 1705  731.1   1578   3463
Total:        360 1718  746.9   1579   4806
Percentage of the requests served within a certain time (ms)
50% 1579
66% 2086
75% 2302
80% 2388
90% 2616
95% 2896
98% 3439
99% 3441
100% 4806 (longest request)
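The two-client test looks roughly like this; the exact commands used are not shown above, the second client hostname below is only a placeholder (only om-01 appears in this thread), and both runs need to start at roughly the same time so they overlap fully:

# On om-01:
ab -n10000 -c100 -p i.xml 'http://Director:8080/chargingManager/services/ChargingService' | tee ab-om-01.txt
# On the second client machine (placeholder name):
ab -n10000 -c100 -p i.xml 'http://Director:8080/chargingManager/services/ChargingService' | tee ab-om-02.txt
# Afterwards, add the two "Requests per second" figures to get the aggregate TPS:
grep 'Requests per second' ab-om-01.txt ab-om-02.txt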
On Fri, Oct 29, 2010 at 5:34 PM, Henrique Fernandes <sf.rique@xxxxxxxxx> wrote:
> Can you post ab results as well ?
>
> And how about Apache, worker or prefork? How are ServerLimit,
> StartServers and MaxClients set?
>
> []'s f.rique
>
>
> On Fri, Oct 29, 2010 at 9:59 AM, Henrique Fernandes <sf.rique@xxxxxxxxx> wrote:
>
> > But why are you testing with two servers?
> >
> > Why not just one ab?
> >
> >
> > []'s f.rique
> >
> >
> > On Fri, Oct 29, 2010 at 9:35 AM, Anil Pillai <rcamphor@xxxxxxxxx> wrote:
> >
> >> I have tried with
> >>
> >> ab -n10000 -c100
> >> ab -n10000 -c200
> >> ..
> >> ..
> >> ab -n10000 -c600
> >> I even tried the -k option, but the results are still the same.
> >>
> >>
> >>
> >> On Fri, Oct 29, 2010 at 4:51 PM, Graeme Fowler <graeme@xxxxxxxxxxx> wrote:
> >>
> >> > On Fri, 2010-10-29 at 16:38 +0530, Anil Pillai wrote:
> >> > > One observation.
> >> >
> >> > One more observation - ApacheBench needs to have parameters tweaked to
> >> > increase the concurrency. The default is one request at a time.
> >> >
> >> > If you push the concurrency up, you should see a corresponding increase
> >> > in requests/sec.
> >> >
> >> > ab -c100 -n10000 http://target_url/
> >> >
> >> > For more fun, try
> >> >
> >> > ab -c100 -n10000 -k http://target_url/
> >> >
> >> > Graeme
> >> >
> >> >
> >
> >
_______________________________________________
Please read the documentation before posting - it's available at:
http://www.linuxvirtualserver.org/
LinuxVirtualServer.org mailing list - lvs-users@xxxxxxxxxxxxxxxxxxxxxx
Send requests to lvs-users-request@xxxxxxxxxxxxxxxxxxxxxx
or go to http://lists.graemef.net/mailman/listinfo/lvs-users