http://www.openflowhub.org/display/floodlightcontroller/Cbench+(New)
$ sudo apt-get install autoconf automake libtool libsnmp-dev libpcap-dev
$ git clone git://gitosis.stanford.edu/oflops.git
$ cd oflops; git submodule init && git submodule update
$ git clone git://gitosis.stanford.edu/openflow.git
$ cd openflow; git checkout -b release/1.0.0 remotes/origin/release/1.0.0
$ wget http://hyperrealm.com/libconfig/libconfig-1.4.9.tar.gz
$ tar -xvzf libconfig-1.4.9.tar.gz
$ cd libconfig-1.4.9
$ ./configure
$ sudo make && sudo make install
$ cd ../../netfpga-packet-generator-c-library/
$ sudo ./autogen.sh && sudo ./configure && sudo make
$ cd ..
$ sh ./boot.sh ; ./configure --with-openflow-src-dir=<absolute path to openflow branch>; make
$ sudo make install
$ cd cbench
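After the build, a quick smoke test can confirm the binary works (a minimal sketch; it assumes some controller is already listening on localhost:6633, and the small -m/-l/-s values are chosen only to keep the run short):
$ ./cbench -c localhost -p 6633 -m 1000 -l 2 -s 2 -M 100 -t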
Interaction:
Cbench generates Packet-In events and sends them to the OpenFlow controller.
Cbench emulates a set of switches connected to NOX, sends Packet-In messages, waits for NOX to push down flow-mods/packet-outs, and records how many flow-mods/packet-outs it receives.
When NOX receives a Packet-In, it replies to Cbench with an ofp_flow_mod.
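To watch this exchange on the wire, one option is to capture the OpenFlow session with tcpdump (illustrative only; it assumes cbench and the controller are talking over loopback on the default port 6633):
$ sudo tcpdump -i lo -nn -X 'tcp port 6633'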
Parameter configuration:
Cbench's options are:
-c/--controller <str> hostname of controller to connect to ("localhost")
-l/--loops <int> loops per test (16)
-M/--mac-addresses <int> unique source MAC addresses per switch (100000)
-m/--ms-per-test <int> test length in ms (1000)
-p/--port <int> controller port (6633)
-s/--switches <int> fake $n switches (16)
-t/--throughput test throughput instead of latency
cbench -c 192.168.249.1 -p 6633 -m 10000 -l 10 -s 5 -M 10 -t
This emulates 5 switches, each generating Packet-Ins from 10 unique source MAC addresses, and runs 10 loops of 10,000 ms each in throughput mode.
The relevant NOX options:
usage: nox_core [OPTIONS] [APP[=ARG[,ARG]...]] [APP[=ARG[,ARG]...]]...
-i ptcp:[IP]:[PORT] listen to TCP PORT on interface specified by IP (default: 0.0.0.0:6633)
-t, --threads=COUNT set the number of threads
./nox_core -i ptcp:6633 switch -t 1
Test results:
Throughput mode: measures how many transactions per second a controller application can handle.
32 switches: flows/sec: 91970 93653 87260 87260 87260 87260 85208 87260 87260 91970 87223 82707 68415 62022 55629 49236 42843 36450 30057 23664 17271 10878 4485 2362 1 0 0 0 0 0 0 0 total = 145.960342 per ms
Total: how many Packet-Ins NOX handled per ms, i.e., how many flows. The 32 numbers above are the per-switch flow counts; they sum to 1,459,604, which divided by the 10,000 ms test length gives the reported ~145.96 per ms.
NOX receives a Packet-In and answers Cbench with an ofp_flow_mod (an OpenFlow flow-modify message); each ofp_flow_mod Cbench receives increments the flows/sec count by 1.
Latency mode: for each OpenFlow session, send a single Packet-In message and measure its RTT.
32 switches: flows/sec: 235 235 235 235 235 235 253 199 98 98 98 98 98 98 199 199 199 199 199 199 199 199 199 199 199 199 199 199 199 199 199 199 total = 0.600203 per ms
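To reproduce a latency-mode run, simply drop the -t flag from the throughput invocation; cbench then keeps one outstanding Packet-In per switch and waits for the reply before sending the next (the parameters below mirror the earlier example and can be adjusted):
$ cbench -c 192.168.249.1 -p 6633 -m 10000 -l 10 -s 32 -M 10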
From the official site:
Test Setup
· CPU: 1 x Intel Core i7 930 @ 3.33 GHz, 4 physical cores, 8 threads
· RAM: 9GB
· OS: Ubuntu 10.04.1 LTS x86_64
o Kernel: 2.6.32-24-generic #43-Ubuntu SMP Thu Sep 16 14:58:24 UTC 2010 x86_64 GNU/Linux
o Boost Library: v1.42 (libboost-all-dev)
o malloc: Google's Thread-Caching Malloc version 0.98-1
o Java: Sun Java 1.6.0_25
· Controller configuration:
NOX
Configured with ../configure --enable-ndebug --with-python=no
tcmalloc loaded before launch: export LD_PRELOAD=/usr/lib/libtcmalloc_minimal.so.0
Launched with taskset -c 0 ./nox_core -i ptcp:6633 switch -t 1
Launched with taskset -c 0-1 ./nox_core -i ptcp:6633 switch -t 2
Launched with taskset -c 0-2 ./nox_core -i ptcp:6633 switch -t 3
Launched with taskset -c 0-3 ./nox_core -i ptcp:6633 switch -t 4
· Test methodology
o cbench is run locally via loopback; the 4th thread's performance is slightly impacted
o cbench emulates 32 switches, sending packet-ins from 1 million source MACs per switch
o 10 loops of 10 seconds each are run 3 times and averaged per thread/switch combination
o tcmalloc loaded first: export LD_PRELOAD=/usr/lib/libtcmalloc_minimal.so.0
o Launched with taskset -c 7 ./cbench -c localhost -p 6633 -m 10000 -l 10 -s 32 -M 1000000 -t
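A small wrapper along these lines could automate the "run 3 times and average" methodology (a sketch, not the original authors' script; the taskset pinning and cbench flags are copied from the line above, and the awk pattern assumes the "total = N per ms" output format shown earlier):
#!/bin/sh
# Repeat the pinned cbench run 3 times, then average all per-loop totals.
for run in 1 2 3; do
  taskset -c 7 ./cbench -c localhost -p 6633 -m 10000 -l 10 -s 32 -M 1000000 -t
done 2>&1 | awk '/total = /{ sum += $(NF-2); n++ } END { if (n) printf "average = %f per ms over %d loops\n", sum/n, n }'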
Measured results (single thread, varying the number of emulated switches):
Switches | 4 | 8 | 16 | 32 | 64 | 128 | 256 |
Throughput (responses/s) | 195804.86 | 198291.97 | 192140.99 | 200087.88 | 45302.48 | 42891.07 | 18133.83 |
Latency | 37502.19 | 37096.19 | 38013.41 | 33653.44 | 34855.14 | 33680.94 | 31746.66 |
Varying the number of threads:
Threads | 1 | 3 | 4 | 8 |
Throughput | 312.11 | 326.5 | 324.99 | 292.67 |
Latency | 2774.29 | 2774.72 | 2770.51 | 2765.71 |