Thumper local port to switch port mappings and trunking tests

Local port to switch port mappings
These mappings are needed for correctly configuring the switch.

Host Name   local interface   switch port
t3fs01      0                 10
            1                 9
            2                 12
            3                 11
t3fs02      0                 13
            1                 14
            2                 15
            3                 16
t3fs03      0                 8
            1                 6
            2                 7
            3                 5
t3fs04      0                 1
            1                 4
            2                 2
            3                 3
t3fs05      0                 23
            1                 21
            2                 24
            3                 22
t3fs06      0                 17
            1                 20
            2                 19
            3                 18

For incoming connections, the 4 bonded interfaces of each X4500 must be known to the switch, so that these connections are balanced over the interfaces.
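For illustration, the switch side of such a bond is typically an EtherChannel grouping the four ports listed above. A minimal sketch in Cisco IOS syntax for t3fs01 (ports 9-12); the interface naming and the channel-group number are assumptions, not the recorded switch configuration:

! Hypothetical sketch (Cisco IOS), not the recorded configuration.
! Group switch ports 9-12 (t3fs01, local interfaces 0-3) into one EtherChannel.
configure terminal
interface range GigabitEthernet0/9 - 12
 channel-group 1 mode on     ! static aggregation to match the host-side trunk
 end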


Testing of the X4500's interface bonding

 The tests used my IperfLoadTest test suite.

Test setup:

  • 3 X4500 servers (Thumpers) with 4*1Gb/s aggregated (bonded) interfaces (File Servers); the host-side aggregation is sketched after this list
 
  • 7 X4150 servers with 1*1Gb/s interface (Worker Nodes)
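A minimal sketch of what the host-side aggregation looks like on a Thumper running Solaris; the e1000g interface names and the aggregation key are assumptions (the recorded setup may differ), while the address is one of the fileserver IPs from the measurements below:

# Hypothetical sketch (Solaris dladm); interface names and key 1 are assumptions.
dladm create-aggr -d e1000g0 -d e1000g1 -d e1000g2 -d e1000g3 1
dladm show-aggr                     # verify member ports and state
ifconfig aggr1 plumb 192.33.123.41 netmask 255.255.255.0 up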
Tests for worker nodes sending data to fileservers

This is testing the switch and the hash function it uses to balance connections between the target interfaces on every Thumper. The Cisco switches we use are regrettably only able to do IP- or MAC-based hashing.
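For reference, the hash method is a global setting on the switch; a sketch of the relevant Cisco IOS command with the address-based modes such switches offer (the available keywords vary by platform and IOS version):

! Hypothetical sketch (Cisco IOS): select the EtherChannel balancing hash.
! Only address-based modes are available here; no L4-port-based hashing.
configure terminal
port-channel load-balance src-dst-ip
! alternatives: src-ip, dst-ip, src-mac, dst-mac, src-dst-mac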
Throughput measurements summary table:

Sending processes   Sending nodes         Receiving servers       Rate (MBit/s)
                    (each with 1GBit/s)   (each with 4*1GBit/s)
 7                   7                     1                       3593
 7                   7                     3                       4281
14                   7                     3                       6034
21                   7                     3                       6450
28                   7                     3                       6432

Measurements for worker nodes sending
Every measurement was run for 60s with client processes on the worker nodes sending TCP streams to receiving processes on the fileservers.
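The IperfLoadTest suite drives iperf underneath; the outputs below are consistent with invocations along the following lines (a sketch only, the suite's exact command lines are not reproduced here; port 8001 and the 60s duration come from the logs):

# Hypothetical sketch of the underlying iperf calls.
# On each fileserver (receiver):
iperf -s -p 8001
# On each worker node, one sending process per TCP stream:
iperf -c 192.33.123.41 -p 8001 -t 60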

7 processes on 7 worker nodes sending to 1 fileserver:

Config: mode=tcp time=60


Clients:
--------------------------------------------
     Client             Server       Total   Rate
                                     MBytes  Mbits/s
192.33.123.87:32866  192.33.123.41:8001 3990.00  571.00
192.33.123.81:32881  192.33.123.41:8001 5950.00  851.00
192.33.123.84:32865  192.33.123.41:8001 6000.00  859.00
192.33.123.86:32865  192.33.123.41:8001 1410.00  201.00
192.33.123.83:32861  192.33.123.41:8001 2590.00  371.00
192.33.123.82:32862  192.33.123.41:8001 2590.00  370.00
192.33.123.85:32868  192.33.123.41:8001 2590.00  370.00

Servers:
--------------------------------------------
     Server       Total   Rate     clients
                  MBytes  Mbits/s
192.33.123.41:8001 25120.00 3593.00     7

Total:
--------------------------------------------
Rate: 3593 Mbits/sec
Data sent: 25120 MBytes
7 processes on 7 worker nodes sending to 3 fileservers:
 
Config: mode=tcp time=60

Rate: 4281 Mbits/sec
Data sent: 29900 MBytes
 
14 processes on 7 worker nodes sending to 3 fileservers:

Config: mode=tcp time=60

Rate: 6034 Mbits/sec
Data sent: 42150 MBytes
 
21 processes on 7 worker nodes sending to 3 fileservers:

Config: mode=tcp time=60

Rate: 6450 Mbits/sec
Data sent: 45070 MBytes
 
28 processes on 7 worker nodes sending to 3 fileservers:

Config: mode=tcp time=60

Rate: 6432 Mbits/sec
Data sent: 44910 MBytes

Tests for worker nodes reading data from fileservers

Measurements for worker nodes reading
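For the reading direction the roles are reversed: the iperf senders run on the fileserver and the receivers on the worker nodes. A sketch under the same assumptions as above:

# Hypothetical sketch: fileserver sends, worker nodes receive.
# On each worker node:
iperf -s -p 8001
# On the fileserver, one process per target worker node:
iperf -c 192.33.123.85 -p 8001 -t 60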

4 processes on 1 fileserver sending to 4 worker nodes:

Config: mode=tcp time=60


Clients:
--------------------------------------------
     Client             Server       Total   Rate
                                     MBytes  Mbits/s
192.33.123.41:33536  192.33.123.85:8001 3300.00  473.00
192.33.123.41:33539  192.33.123.83:8001 3220.00  461.00
192.33.123.41:33537  192.33.123.82:8001 6390.00  915.00
192.33.123.41:33538  192.33.123.81:8001 6300.00  902.00

Servers:
--------------------------------------------
     Server       Total   Rate     clients
                  MBytes  Mbits/s
192.33.123.85:8001 3300.00  473.00     1
192.33.123.81:8001 6300.00  902.00     1
192.33.123.83:8001 3220.00  461.00     1
192.33.123.82:8001 6390.00  915.00     1

Total:
--------------------------------------------
Rate: 2751 Mbits/sec
Data sent: 19210 MBytes
<--/twistyPlugin-->

The lower-than-expected throughput of 2751 Mbits/s may be related to the non-negligible processor load of the 4 iperf processes on the fileserver:
last pid: 10517;  load avg:  1.68,  0.68,  0.39;       up 13+18:24:39                                              11:49:32
63 processes: 58 sleeping, 1 running, 4 on cpu
CPU states:  0.0% idle, 42.8% user, 57.2% kernel,  0.0% iowait,  0.0% swap
Memory: 16G phys mem, 8965M free mem, 2000M swap, 2000M free swap

   PID USERNAME LWP PRI NICE  SIZE   RES STATE    TIME    CPU COMMAND
 10517 root       3  10    0 3840K 2284K cpu      0:33   108% iperf_sun
 10516 root       3  31    0 3840K 2284K cpu      0:31   104% iperf_sun
 10514 root       3  20    0 3840K 2284K cpu      0:31   102% iperf_sun
 10515 root       3  10    0 3840K 2284K run      0:26 78.66% iperf_sun

2 processes on 1 fileserver sending to 2 worker nodes:

Config: mode=tcp time=60

Clients:
--------------------------------------------
     Client             Server       Total   Rate
                                     MBytes  Mbits/s
192.33.123.41:33553  192.33.123.86:8001 6600.00  945.00
192.33.123.41:33554  192.33.123.82:8001 6600.00  945.00

Servers:
--------------------------------------------
     Server       Total   Rate     clients
                  MBytes  Mbits/s
192.33.123.82:8001 6600.00  945.00     1
192.33.123.86:8001 6600.00  945.00     1

Total:
--------------------------------------------
Rate: 1890 Mbits/sec
Data sent: 13200 MBytes

3 processes on 1 fileserver sending to 3 worker nodes:

Config: mode=tcp time=60

Clients:
--------------------------------------------
     Client             Server       Total   Rate
                                     MBytes  Mbits/s
192.33.123.41:33553  192.33.123.86:8001 6460.00  925.00
192.33.123.41:33554  192.33.123.82:8001 3250.00  466.00
192.33.123.41:33557  192.33.123.85:8001 3250.00  466.00

Servers:
--------------------------------------------
     Server       Total   Rate     clients
                  MBytes  Mbits/s
192.33.123.85:8001 3250.00  466.00     1
192.33.123.82:8001 3250.00  466.00     1
192.33.123.86:8001 6460.00  925.00     1

Total:
--------------------------------------------
Rate: 1857 Mbits/sec
Data sent: 12960 MBytes

Processor load:

last pid: 10557;  load avg:  1.02,  0.74,  0.57;       up 13+18:38:34                                              12:03:27
61 processes: 57 sleeping, 4 on cpu
CPU states: 15.7% idle, 43.0% user, 41.3% kernel,  0.0% iowait,  0.0% swap
Memory: 16G phys mem, 8816M free mem, 2000M swap, 2000M free swap

   PID USERNAME LWP PRI NICE  SIZE   RES STATE    TIME    CPU COMMAND
 10555 root       3   0    0 3840K 2284K cpu      0:26   115% iperf_sun
 10557 root       3  10    0 3840K 2284K cpu      0:24   107% iperf_sun
 10556 root       3   0    0 3840K 2284K cpu      0:24   107% iperf_sun
 

-- DerekFeichtinger - 08 Sep 2008
