
nGrinder Controller Configuration Guide - Mastering Load Testing with nGrinder

南宫保臣
2023-12-01

This chapter describes the advanced nGrinder controller configuration. If you are not a system administrator, you probably do not need to read this guide. However, if you want to run nGrinder as a PaaS, you should read this chapter.

### Controller Home

#### ${NGRINDER_HOME}

When the nGrinder controller starts, it creates the ${user.home}/.ngrinder directory in the user's home directory. This directory contains the default configuration files and data. The default locations of the .ngrinder directory are:

- Windows: C:\Users\${user.home}\.ngrinder
- Unix/Linux: ${user.home}/.ngrinder

However, if you want to assign another directory as the home, set the environment variable ${NGRINDER_HOME} before running nGrinder, or pass --ngrinder-home HOME_PATH on the command line.

```
java -XX:MaxPermSize=200m -jar ngrinder-controller-X.X.war --ngrinder-home /home/user/ngrinder
```
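
Equivalently, the home can be pointed at a custom directory through the environment variable mentioned above instead of the command-line option. A minimal sketch (the path is only an example):

```
export NGRINDER_HOME=/home/user/ngrinder
java -XX:MaxPermSize=200m -jar ngrinder-controller-X.X.war
```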

If you want to run multiple nGrinder controllers (each handling only one network region) and make them work as one (cluster mode), you should make all controllers share the same ${NGRINDER_HOME}. This is usually done by sharing the controller home directory via NFS. See Cluster Architecture for details.
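
For illustration only, a common way to share ${NGRINDER_HOME} across controllers is an NFS mount; the server name and export path below are hypothetical:

```
# mount a shared NFS export as the common controller home (hypothetical host/path)
mount -t nfs nfs-server:/export/ngrinder /home/user/.ngrinder
```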

#### ${NGRINDER_EX_HOME}

${NGRINDER_EX_HOME} is used for each specific controller in cluster mode. By default, it is set to ~/.ngrinder_ex.

Unlike ${NGRINDER_HOME}, ${NGRINDER_EX_HOME} is not created automatically when nGrinder starts.

- Windows: C:\Users\${user.home}\.ngrinder_ex
- Unix/Linux: ${user.home}/.ngrinder_ex

${NGRINDER_EX_HOME} is not meant to be shared by multiple controllers. Each controller can have its own extended home, and users can add extra system configuration in ${NGRINDER_EX_HOME}/system.conf.

The controller first loads the system configuration from ${NGRINDER_HOME}/system.conf; then it tries to load ${NGRINDER_EX_HOME}/system.conf and overrides the configuration from ${NGRINDER_HOME}/system.conf with it.

For example, the cluster.region configuration can be set in each cluster member's ${NGRINDER_EX_HOME}/system.conf file. When the ${NGRINDER_EX_HOME} directory exists and the controller starts in cluster mode, the controller writes its log to the ${NGRINDER_EX_HOME}/logs/ngrinder_{region_name}.log file.
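
As an illustration, a per-controller ${NGRINDER_EX_HOME}/system.conf for one cluster member might look like the following (the region name and addresses are hypothetical; the keys are described in the tables later in this guide):

```
cluster.region=NORTH
cluster.host=192.168.1.10
cluster.port=40003
```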

### Command Line Options

#### Basic

If you run the controller without a separate WAS, you can provide several options on the CLI.

|Name|Example|Overriding Property|Description|
|----|-------|-------------------|-----------|
|-p / --port|-p 80| |HTTP port of the server. The default port is 8080.|
|-c / --context-path|-c ngrinder| |Web context path of the controller. The default context path is "". For example, if you provide "ngrinder" here, the access URL becomes "http://localhost:8080/ngrinder".|
|-cm / --cluster-mode|-cm easy|cluster.mode|Cluster mode. Three options are available (none/easy/advanced). The default is none.|
|-nh / --ngrinder-home|-nh ~/ngrinder|ngrinder.home|Home path. Default: ~/.ngrinder|
|-exh / --exhome|-exh ~/ngrinder_ex|ngrinder.exhome|Extended home path. Default: ~/.ngrinder_ex|
|-h / --help / -?|-h| |Show the help message.|
|-D|-Ddatabase=cubrid&database_url=blar|can override all|Dynamic properties. This option can override all configurations in database.conf and system.conf.|
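
For example, a sketch that combines several of the options above (the port, context path, and home path are placeholders):

```
java -XX:MaxPermSize=200m -jar ngrinder-controller-X.X.war -p 8888 -c ngrinder -nh /data/ngrinder
```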

#### Single Mode

If you run nGrinder in non-cluster mode (meaning you do not provide the -cm option at all), the following additional option is available.

|Name|Example|Overriding Property|Description|
|----|-------|-------------------|-----------|
|-cp / --controller-port|-cp 9000|controller.port|The controller port to which agents connect.|
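
For instance, to run in single mode while moving the agent connection port (the value is only illustrative):

```
java -XX:MaxPermSize=200m -jar ngrinder-controller-X.X.war -cp 9001
```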

#### Easy Cluster Mode

Some companies use multiple IDCs and need the cluster feature (multi-region support in a single nGrinder instance). However, before version 3.3, nGrinder required a network file system to share ${NGRINDER_HOME} and Cubrid so that multiple controllers could use the same DB. These restrictions were removed by allowing multiple controllers to be installed on one machine and by allowing H2 TCP server connections. To understand easy cluster, we strongly recommend reading the Easy Cluster Guide.

You can easily run a controller in cluster mode with the following command.

```
java -XX:MaxPermSize=200m -jar ngrinder-controller-X.X.war -cm easy
```

The following options are required.

|Name|Example|Overriding Property|Description|
|----|-------|-------------------|-----------|
|-clh / --cluster-host|-clh 200.1.22.3|cluster.host|The controller IP or host name to which the agents in the current region connect.|
|-clp / --cluster-port|-clp 10222|cluster.port|Cluster communication port of this cluster member. Each cluster member should run with a unique cluster port.|
|-cp / --controller-port|-cp 9000|controller.port|Controller port to which agents connect. Each cluster member should run with a unique controller port.|
|-r / --region|-r NORTH|cluster.region|Region name. Each cluster member should run with a unique region name.|
|-dt / --database-type|-dt h2|database.type|Database type. h2 and cubrid are available.|
|-dh / --database-host|-dh localhost|database.host|Database host name. The default value is localhost.|
|-dp / --database-port|-dp 9092|database.port|Database port number. When cubrid is selected, the default is 33000; when h2 is selected, the default is 9092.|
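
As a sketch, two easy-cluster members on the same machine could be started as follows. Every port, the region names, and the IP are hypothetical; the point is that each member gets a unique web port, cluster port, controller port, and region name while sharing the same database:

```
java -XX:MaxPermSize=200m -jar ngrinder-controller-X.X.war -p 8080 -cm easy -clh 200.1.22.3 -clp 10222 -cp 9001 -r NORTH -dt h2 -dh localhost -dp 9092
java -XX:MaxPermSize=200m -jar ngrinder-controller-X.X.war -p 8081 -cm easy -clh 200.1.22.3 -clp 10223 -cp 9002 -r SOUTH -dt h2 -dh localhost -dp 9092
```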

#### Advanced Cluster Mode

```
java -XX:MaxPermSize=200m -jar ngrinder-controller-X.X.war -cm advanced
```

The advanced cluster mode has no additional options. It just activates cluster mode; the cluster configuration is then read from the ${NGRINDER_HOME}/system.conf or ${NGRINDER_EX_HOME}/system.conf file.
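
For instance, the shared ${NGRINDER_HOME}/system.conf for an advanced cluster might contain entries like the following sketch (the IPs are placeholders; the keys are described in the cluster tables below):

```
cluster.enabled=true
cluster.members=192.168.1.1;192.168.2.2;192.168.3.3
cluster.port=40003
```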

### Configurations

When the controller starts, it copies the default configurations into ${NGRINDER_HOME}. You can modify them to configure the controller.

#### ${NGRINDER_HOME}/database.conf

- This contains the database configurations. You can modify this file when you need to use Cubrid. By default, nGrinder uses H2 as the database.

```
database=H2
database_username=admin
database_password=admin
```

If you only set the above options, H2 creates the DB at ${NGRINDER_HOME}/db/h2.db and runs in embedded mode. In this case, no other process can access this database while the controller is running.

If you run H2 in server mode instead of the embedded mode, you should also provide the database connection URL.

```
database_url=tcp://{your_h2_server_host_ip_or_name}:{the_h2_server_port}/db/ngrinder
```
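
Putting it together, a database.conf for H2 in server mode might look like this sketch (the host name is a placeholder):

```
database=H2
database_url=tcp://db.example.com:9092/db/ngrinder
database_username=admin
database_password=admin
```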

If you prefer to use Cubrid, you need the following configurations.

```
database=cubrid
database_url={your_cubrid_host_ip_or_name}:{cubrid_port_maybe_33000}:{dbname}
database_username=admin
database_password=admin
```

Note: If you want to use the Cubrid DB high-availability feature, please follow the guide to enable HA in Cubrid and add the alternative DB address in database.conf.

```
database_url_option=&althosts={you_cubrid_secondary_host_ip_or_name}:{cubrid_port_maybe_33000}
```

#### ${NGRINDER_HOME}/system.conf

##### Generic
- This contains controller configurations.
- You can modify these settings to calibrate the controller’s behavior.

|Key|Default|Compatible Keys (for ~nGrinder 3.2.X)|Description|
|---|-------|-------------------------------------|-----------|
|controller.verbose|false|verbose|Set true to see more detailed logs.|
|controller.dev_mode|false|testmode|Set true to run the controller in dev mode. In dev mode, the log goes to the default output (such as catalina.log in Tomcat) instead of ${NGRINDER_HOME}/logs/, and the security mode and cluster config verification are disabled. In addition, "agent force update" and "agent auto approval" are enabled. Finally, the script console is activated as well.|
|controller.demo_mode|false|demo|Set true to run the controller in demo mode. In demo mode, each user is not allowed to change the user password.|
|controller.security|false|security|Set true if security mode should be enabled. In security mode, the nGrinder SecurityManager is activated and limits each test's access to underlying resources/network in the agent. Please refer to [Script Security](script-security).|
|controller.user_password_sha256|false|ngrinder.security.sha256|By default, nGrinder uses sha1 to encode passwords. If you would like to use sha256, set this true. However, you need to delete all databases completely to apply this configuration.|
|controller.usage_report|true|usage.report|Set false if you don't want to report ngrinder usage to Google Analytics.|
|controller.plugin_support|true|pluginsupport|Set false if plugins should be deactivated. This option is not applied on the fly; you need to restart the controller.|
|controller.user_security|false|user.security|Set true if you want to make some of the user profile fields (email, mobile phone) mandatory.|
|controller.allow_sign_up|false| |Set true if users should be able to sign up by themselves. See [TBD](tbd)|
|controller.max_agent_per_test|10|agent.max.size|The maximum number of agents which can be attached per test. This option is useful when you want nGrinder to be shared by many users. This configuration makes each test use only a limited number of agents. For example, if you have 15 agents in total and you set 5 here, you can guarantee 3 users can run performance tests concurrently.|
|controller.max_vuser_per_agent|3000|agent.max.vuser|The maximum number of vusers which can be used per agent. In nGrinder, the vuser count means the total thread count. This should be carefully selected depending on the agent memory size. If you have an 8G RAM, 4-core agent, more than 10,000 vusers may be executable. See TBD for our benchmark result.|
|controller.max_run_count|10000|agent.max.runcount|The maximum test run count for one thread. If you set this to 10,000 and run 100 threads per agent, the test can be executed 10,000 * 100 times at maximum.|
|controller.max_run_hour|8|agent.max.runhour|The maximum running hours for one test.|
|controller.max_concurrent_test|10|ngrinder.max.concurrenttest|The maximum number of allowed concurrent tests. If more tests than specified here are started, some of them will be waiting in the run queue.|
|controller.monitor_port|13243|monitor.listen.port|The monitor connecting port. The default value is 13243. When a perftest starts, the controller tries to connect to the monitor on the specified target hosts for system statistics.|
|controller.url|automatically selected|ngrinder.http.url, http.url|Controller URL (such as http://ngrinder.mycompany.com). This is used to construct the host name part of URLs in the controller (such as SVN links). If not set, the controller analyzes the user request to represent the URL text in the web page.|
|controller.controller_port|16001|ngrinder.agent.control.port|The port number to which each agent connects in the connection phase.|
|controller.console_port_base|12000|ngrinder.console.portbase|The base port number to which agents in each test connect in the testing phase. If you allowed 10 concurrent tests via the controller.max_concurrent_test=10 option, the ports from 12000 to 12009 are used for agents to connect to the controller in the testing phase. You need to restart nGrinder to apply this configuration.|
|controller.ip|all available IPs|ngrinder.controller.ipaddress, ngrinder.controller.ip|By default, the empty controller.ip configuration makes the controller bind to all available IPs on the current machine, so agents can connect to any of the controller's IPs. Generally, this causes no problem. However, in specialized environments (such as EC2), more than 2 IPs (one for inbound and the other for outbound) are assigned. If you want to allow only one IP to be connected by the agent, you should put it here.|
|controller.validation_timeout|100|ngrinder.validation.timeout|Script validation timeout in seconds. Increase this when you have a script which takes more than 100 seconds for a single validation.|
|controller.enable_script_console|false| |true if the script console should be activated. The script console provides a way to directly access nGrinder internals.|
|controller.enable_agent_auto_approval|true| |false if agents should be approved before being used. This option is useful when nGrinder is provided as PaaS.|
|controller.front_page_enabled|true| |Set false if the controller doesn't have internet access. This disables the periodic RSS feed access to developer resources and QnAs.|
|controller.front_page_resources_rss|…|ngrinder.frontpage.rss|RSS URL for the "Developer Resources" panel on the front page.|
|controller.front_page_resources_more_url|…| |"More" URL for the "Developer Resources" panel on the front page.|
|controller.front_page_qna_rss|…|ngrinder.qna.rss|RSS URL for the "QnA" panel on the front page.|
|controller.front_page_qna_more_url|…| |"More" URL for the "QnA" panel on the front page.|
|controller.front_page_ask_question_url|…|ngrinder.ask.question.url|"Ask a question" URL for the "QnA" panel on the front page.|
|controller.help_url|…|ngrinder.help.url|The topmost HELP link URL.|
|controller.default_lang|en|ngrinder.langauge.default|The default language if the user didn't specify a language during login. This option is useful when you install a custom SSO plugin.|
|controller.admin_password_reset|false| |true if the admin password should be reset to "admin" while booting up. This option is useful when the admin has lost the password. You should set it back to false after setting the admin password.|
|controller.agent_force_update|false| |true if agents should always be updated when the update message is sent. If enabled, the update is performed even when the agent has a later version than the controller.|
|controller.update_chunk_size|1048576| |The byte size of the agent update message. By default, it is set to 1048576 (1MB). Agent update messages contain fragments of the agent package and are sent to agents multiple times. If it is bigger, fewer messages are needed, so the update is faster.|
|controller.safe_dist|false|ngrinder.dist.safe.region|true if you want to always enable safe script transmission. If it is turned on, each perf test's starting speed will be slower, but files are guaranteed to be transmitted without errors.|
|controller.safe_dist_threshold|1000000|ngrinder.dist.safe.threashhold|If the files are bigger, the possibility of transmission errors also increases. nGrinder automatically enables safe script transmission based on the file size. If you want to disable this, set it to 100000000.|
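
To make the table concrete, a hypothetical ${NGRINDER_HOME}/system.conf that tunes a shared controller could look like this (all values are only illustrative):

```
controller.max_agent_per_test=5
controller.max_concurrent_test=3
controller.max_run_hour=4
controller.enable_agent_auto_approval=false
```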

##### Cluster-Related Configurations
This file can also contain several cluster-mode related options. Because ${NGRINDER_HOME}/system.conf should be shared by multiple controllers via NFS,
some cluster-related configurations which apply to all controllers in the cluster can be located here for easy administration.  
The following options are available.

|Key|Default|Compatible Keys (for ~nGrinder 3.2.X)|Description|
|---|-------|-------------------------------------|-----------|
|cluster.enabled|false|ngrinder.cluster.mode|true if the cluster mode should be activated.|
|cluster.mode|none| |easy if you want to run multiple controllers on a single machine.|
|cluster.members|-|ngrinder.cluster.uris|Comma or semicolon separated list of all cluster members' IPs. For example: 192.168.1.1;192.168.2.2;192.168.3.3|
|cluster.port|40003|ngrinder.cluster.listener.port|Cluster communication port. In easy mode, each controller in a cluster should have a unique cluster port. In advanced mode, however, all cluster members should have the same cluster port.|

#### ${NGRINDER_EX_HOME}/system.conf
As described above, ${NGRINDER_EX_HOME} is used to provide specialized configuration for each controller in cluster mode. You can add an additional system.conf file here as well and define several per-controller configurations.

If you run the controller in single mode or easy cluster mode, you don't need to create this file at all. However, if you run the controller in advanced mode, you may need the following configurations in the ${NGRINDER_EX_HOME}/system.conf file.

|Key|Default|Compatible Keys (for ~nGrinder 3.2.X)|Description|
|---|-------|-------------------------------------|-----------|
|cluster.port|40003|ngrinder.cluster.listener.port|Cluster communication port. In easy mode, each controller in a cluster should have a unique cluster port. In advanced mode, however, all cluster members should have the same cluster port.|
|cluster.host|-|cluster.ip|Console binding IP of this region. If not set, the console will be bound to all available IPs.|
|cluster.region|NONE|ngrinder.cluster.region|The region name of this cluster member.|
|cluster.hidden_region|false|ngrinder.cluster.region.hide|true if you want to make this controller invisible to other cluster members. This is useful when you want to run a private controller for administration.|
|cluster.safe_dist|false|-|true if the file transmission in this region should be done in a safer way.|

#### ${NGRINDER_HOME}/process_and_thread_policy.js
This file defines the logic to determine the appropriate combination of processes and threads for a given count of vusers.  
It provides a flexible way to configure the appropriate processes and threads. Users usually don't know which process and thread combination results in the best performance, so nGrinder lets a user simply input the expected vusers per agent and configures the process and thread counts automatically.  
The default logic is as follows.
```javascript
function getProcessCount(total) {
    if (total < 2) {
        return 1;
    }

    var processCount = 2;

    if (total > 80) {
        processCount = parseInt(total / 40) + 1;
    }

    if (processCount > 10) {
        processCount = 10;
    }
    return processCount;
}

function getThreadCount(total) {
    var processCount = getProcessCount(total);
    return parseInt(total / processCount);
}
```

By default, no more than 10 processes are allowed, and when more than 80 vusers are used, each process is assigned no more than about 40 threads.

You can freely override these functions (getProcessCount() / getThreadCount()) to meet your needs.
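
For example, a minimal override (purely illustrative, not the shipped policy) that pins the process count at 4 and spreads the vusers evenly across threads could look like this:

```javascript
// illustrative only: always use 4 processes (1 for tiny tests) and split vusers evenly
function getProcessCount(total) {
    return total < 4 ? 1 : 4;
}

function getThreadCount(total) {
    var processCount = getProcessCount(total);
    // at least one thread per process
    return Math.max(1, parseInt(total / processCount));
}
```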

#### ${NGRINDER_HOME}/grinder.properties

This file defines the default Grinder behavior. Some of these properties are overridden by nGrinder at runtime, and some are not. In most cases, administrators do not need to change this file.
See http://grinder.sourceforge.net/g3/properties.html for details.
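
For reference, a few representative Grinder properties from that page look like the following sketch (the values are only illustrative, and several of them are overwritten by nGrinder when a test starts):

```
grinder.processes=1
grinder.threads=1
grinder.runs=0
grinder.logDirectory=log
```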

#### Plugins

This folder contains plugins. If you want to install a new plugin (a .jar file) or upgrade an existing one, just put it into this folder; it will be automatically scanned and activated. If you want to remove a plugin, just delete the plugin file from there. You can find the available plugins at TBD.

### Home Folder Structure

In ${NGRINDER_HOME}, there are several folders which store the data used by nGrinder. They are described below.

|Folder Name|Description|
|-----------|-----------|
|logs|Stores the nGrinder logs. nGrinder intercepts the Tomcat logs and saves them to the ngrinder.log file. This log contains only controller-related entries. You can also monitor the content of this file through the admin menu.|
|perftest|Stores the data related to each performance test.|
|download|Contains downloadable files such as the agent and monitor packages. For example, if you want to recreate the agent and monitor packages, you can delete everything in this folder.|
|plugins|Contains plugins. If you want to install a new plugin (a .jar file) or upgrade an existing one, just put it into this folder; it will be automatically scanned and activated.|
|repos|Contains each user's svn repository.|
|script|Contains the resources related to script validation. It is used only when validation is executed.|
|db|Contains the H2 database data.|
|subversion|Contains the default configuration of the underlying svnkit.|
|webapp|Contains the controller's web application files when the controller runs on the embedded web server.|


Reposted from: https://my.oschina.net/u/1404949/blog/3039452
