
Getting Started with HUE

洪成济
2023-12-01

       HUE is an open-source Apache Hadoop UI system, originally developed by Cloudera and later contributed to the open-source community. It is built on Django, a Python web framework. With Hue we can operate a Hadoop cluster from a browser: for example, put and get files, run MapReduce jobs, and so on.
 Learning resources:
    http://gethue.com
    https://github.com/cloudera/hue
    http://archive.cloudera.com/cdh5/cdh/5/hue-3.7.0-cdh5.3.6/manual.html
Building HUE

# Required dependency packages
gcc
gcc-c++
ant
asciidoc
cyrus-sasl-devel
cyrus-sasl-gssapi
krb5-devel
libtidy (for unit tests only)
libxml2-devel
libxslt-devel
mvn (from maven package)
mysql
mysql-devel
openssl-devel (for version 7+)
openldap-devel
python-devel
sqlite-devel

# Run the following commands:
make apps
build/env/bin/hue runserver
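
Note that the Django development server binds to 127.0.0.1:8000 by default, so to reach it from another machine you can pass an explicit address, e.g.:

# bind the development server to all interfaces instead of localhost only
build/env/bin/hue runserver 0.0.0.0:8000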

Edit the configuration file /opt/hue-3.7.0-cdh5.3.6/desktop/conf/hue.ini

  secret_key=jFE93j;2[290-eiw.KEiwN2s3['d;/.q[eIW^y#e=+Iei*@Mn<qW5o # example value copied from the official docs; any long random string works
  # Webserver listens on this address and port
  http_host=hadoop-senior01.zhangbk.com
  http_port=8888
  # Time zone name
  time_zone=Asia/Shanghai


Starting HUE
  build/env/bin/supervisor
Log in at: http://192.168.159.21:8888
   On first login, create a user: admin/admin

Problems encountered:
  KeyError: "Couldn't get user id for user hue"
Solution:
  This happens when Hue was installed as root and build/env/bin/supervisor was then also run as root.
  First create a regular user and set its password, then transfer ownership of the install directory,
  e.g. chown -R hadoop-senior01 hue-3.7.0-cdh5.3.6, and start Hue as that regular user.
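
A minimal sketch of those steps (the username "hue" below is only an example; substitute your own):

# as root: create a regular user and set its password
useradd hue
passwd hue
# hand the Hue install directory over to that user
chown -R hue:hue /opt/hue-3.7.0-cdh5.3.6
# start the supervisor as the new user
su - hue -c "/opt/hue-3.7.0-cdh5.3.6/build/env/bin/supervisor"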

Integrating HUE with HDFS

https://www.cnblogs.com/xupccc/p/9583656.html
Hue can connect to Hadoop, i.e. access HDFS files, in one of two ways (compared in the sketch below):
  WebHDFS
    Provides high-speed data transfer; the client talks to the DataNodes directly.
  HttpFS (the only option in HA mode)
    A proxy service that makes it easy for systems outside the cluster to integrate.
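
Both expose the same REST API under /webhdfs/v1 and differ only in which daemon answers. A quick comparison (hostnames are placeholders; 50070 is the Hadoop 2.x NameNode web port, 14000 the HttpFS default):

# WebHDFS: the NameNode answers and redirects data traffic to the DataNodes
curl -i "http://namenode-host:50070/webhdfs/v1/user?op=LISTSTATUS&user.name=hue"
# HttpFS: everything goes through the proxy service
curl -i "http://httpfs-host:14000/webhdfs/v1/user?op=LISTSTATUS&user.name=hue"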
1. Add this to etc/hadoop/hdfs-site.xml.

<property>
 <name>dfs.webhdfs.enabled</name>
 <value>true</value>
</property>

2. You also need to add this to core-site.xml.

<property>
  <name>hadoop.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hue.groups</name>
  <value>*</value>
</property>

3. Also add this to httpfs-site.xml, which might be in /etc/hadoop-httpfs/conf.

<property>
  <name>httpfs.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
<property>
  <name>httpfs.proxyuser.hue.groups</name>
  <value>*</value>
</property>

Sync the configuration to the other nodes.
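
For example (a sketch; hadoop-senior02 is an assumed hostname following this post's naming scheme):

# copy the changed config files to each remaining node
for host in hadoop-senior02.zhangbk.com hadoop-senior03.zhangbk.com; do
  scp etc/hadoop/hdfs-site.xml etc/hadoop/core-site.xml \
      etc/hadoop/httpfs-site.xml $host:/opt/hadoop-2.5.0-cdh5.3.6/etc/hadoop/
done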

Start the HttpFS process:

sbin/httpfs.sh start
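
To confirm HttpFS is answering before wiring up Hue (user.name=hue matches the proxy user configured above):

# should return a JSON FileStatus object for the HDFS root
curl "http://hadoop-senior01.zhangbk.com:14000/webhdfs/v1/?op=GETFILESTATUS&user.name=hue"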

Configure hue.ini:

  # ------------------------------------------------------------------------
  [[hdfs_clusters]]
    # HA support by using HttpFs

    [[[default]]]
      # Enter the filesystem uri
      fs_defaultfs=hdfs://ns1

      # NameNode logical name.
      logical_name=ns1

      # Use WebHdfs/HttpFs as the communication mechanism.
      # Domain should be the NameNode or HttpFs host.
      # Default port is 14000 for HttpFs.
      webhdfs_url=http://hadoop-senior01.zhangbk.com:14000/webhdfs/v1

      # Change this if your HDFS cluster is Kerberos-secured
      ## security_enabled=false

      # Default umask for file and directory creation, specified in an octal value.
      ## umask=022

      # Directory of the Hadoop configuration
      hadoop_conf_dir=/opt/hadoop-2.5.0-cdh5.3.6/etc/hadoop

Problems encountered:

1. Cannot perform this operation. Note: you are a Hue administrator, but not the HDFS superuser (i.e. "root").
AccessControlException: Permission denied: user=admin, access=WRITE, inode="/":root:supergroup:drwxr-xr-x (error 500)

Solution:
The user lacks write permission on the target path; loosen the permissions: hdfs dfs -chmod -R 777 /user

Or edit hdfs-site.xml to disable permission checking entirely (only advisable on test clusters):

<property>
  <name>dfs.permissions.enabled</name>
  <value>false</value>
</property>

2. hadoop.hdfs_clusters.default.webhdfs_url    Current value: http://hadoop-senior01.zhangbk.com:14000/webhdfs/v1
The filesystem root directory "/" should be owned by "hdfs".

Solution:

  In desktop/libs/hadoop/src/hadoop/fs/webhdfs.py, change DEFAULT_HDFS_SUPERUSER = 'hdfs' to your own user, or create an hdfs user in HUE.
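
A one-line way to make that edit (a sketch; here the superuser becomes root, matching the inode owner shown in the first error):

# swap Hue's default HDFS superuser from 'hdfs' to 'root'
sed -i "s/DEFAULT_HDFS_SUPERUSER = 'hdfs'/DEFAULT_HDFS_SUPERUSER = 'root'/" \
    desktop/libs/hadoop/src/hadoop/fs/webhdfs.py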

Integrating HUE with YARN

  # Configuration for YARN (MR2)
  # ------------------------------------------------------------------------
  [[yarn_clusters]]

    [[[default]]]
      # Enter the host on which you are running the ResourceManager
      resourcemanager_host=hadoop-senior03.zhangbk.com

      # The port where the ResourceManager IPC listens on
      resourcemanager_port=8032

      # Whether to submit jobs to this cluster
      submit_to=True

      # Resource Manager logical name (required for HA)
      ## logical_name=

      # Change this if your YARN cluster is Kerberos-secured
      ## security_enabled=false

      # URL of the ResourceManager API
      resourcemanager_api_url=http://hadoop-senior03.zhangbk.com:8088

      # URL of the ProxyServer API
      proxy_api_url=http://hadoop-senior03.zhangbk.com:8088

      # URL of the HistoryServer API
      history_server_api_url=http://hadoop-senior01.zhangbk.com:19888

      # In secure mode (HTTPS), if SSL certificates from Resource Manager's
      # Rest Server have to be verified against certificate authority
      ## ssl_cert_ca_verify=False
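
A quick way to check that the ResourceManager REST API Hue depends on is reachable (ws/v1/cluster/info is the standard YARN endpoint):

# returns cluster metadata as JSON when the RM web service is up
curl http://hadoop-senior03.zhangbk.com:8088/ws/v1/cluster/info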


Integrating HUE with Hive
Configure hiveserver2 and the metastore in hive-site.xml:

<property>
  <name>hive.server2.thrift.port</name>
  <value>10000</value>
</property>
<property>
  <name>hive.server2.thrift.bind.host</name>
  <value>hadoop-senior01.zhangbk.com</value>
</property>

<property>
  <name>hive.metastore.uris</name>
  <value>thrift://hadoop-senior01.zhangbk.com:9083</value>
</property>

Start the hiveserver2 and metastore services:

bin/hiveserver2
bin/hive --service metastore -p 9083 # the port can be omitted to use the default
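
Since both services must stay up, in practice it helps to run them in the background and then verify the Thrift port with beeline (the log file paths below are arbitrary choices):

# keep both services running after the shell exits
nohup bin/hiveserver2 > /tmp/hiveserver2.log 2>&1 &
nohup bin/hive --service metastore -p 9083 > /tmp/metastore.log 2>&1 &
# verify HiveServer2 accepts connections
bin/beeline -u jdbc:hive2://hadoop-senior01.zhangbk.com:10000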

Edit hue.ini:

###########################################################################
# Settings to configure Beeswax with Hive
###########################################################################

[beeswax]

  # Host where HiveServer2 is running.
  # If Kerberos security is enabled, use fully-qualified domain name (FQDN).
  hive_server_host=hadoop-senior01.zhangbk.com

  # Port where HiveServer2 Thrift server runs on.
  hive_server_port=10000

  # Hive configuration directory, where hive-site.xml is located
  hive_conf_dir=/opt/hive-0.13.1-cdh5.3.6/conf

  # Timeout in seconds for thrift calls to Hive service
  server_conn_timeout=120

  # Choose whether Hue uses the GetLog() thrift call to retrieve Hive logs.
  # If false, Hue will use the FetchResults() thrift call instead.
  ## use_get_log_api=true

Integrating HUE with RDBMS

###########################################################################
# Settings for the RDBMS application
###########################################################################

[librdbms]
  # The RDBMS app can have any number of databases configured in the databases
  # section. A database is known by its section name
  # (IE sqlite, mysql, psql, and oracle in the list below).

  [[databases]]
    # sqlite configuration.
     [[[sqlite]]]
      # Name to show in the UI.
      nice_name=SQLite

      # For SQLite, name defines the path to the database.
      name=/opt/hue-3.7.0-cdh5.3.6/desktop/desktop.db

      # Database backend to use.
      engine=sqlite

      # Database options to send to the server when connecting.
      # https://docs.djangoproject.com/en/1.4/ref/databases/
      ## options={}

    # mysql, oracle, or postgresql configuration.
     [[[mysql]]]
      # Name to show in the UI.
      nice_name=MySql

      # For MySQL and PostgreSQL, name is the name of the database.
      # For Oracle, Name is instance of the Oracle server. For express edition
      # this is 'xe' by default.
      name=test

      # Database backend to use. This can be:
      # 1. mysql
      # 2. postgresql
      # 3. oracle
      engine=mysql

      # IP or hostname of the database to connect to.
      host=hadoop-senior01.zhangbk.com

      # Port the database server is listening to. Defaults are:
      # 1. MySQL: 3306
      # 2. PostgreSQL: 5432
      # 3. Oracle Express Edition: 1521
      port=3306

      # Username to authenticate with when connecting to the database.
      user=root

      # Password matching the username to authenticate with when
      # connecting to the database.
      password=password01

      # Database options to send to the server when connecting.
      # https://docs.djangoproject.com/en/1.4/ref/databases/
      options= {"init_command":"SET NAMES 'utf8'"}

Problems encountered:

1. Server error: Error loading MySQLdb module: libmysqlclient.so.18: cannot open shared object file:
  No such file or directory
 This error means libmysqlclient.so.18 cannot be found. The usual root cause: MySQL is typically reinstalled by hand rather than taken from the system packages, and removing the original system MySQL also removes this file. On 32-bit systems it lives in /usr/lib/mysql/, on 64-bit systems in /usr/lib64/mysql/.
 Solution:
  1. Find a server that still has the stock MySQL installed, copy libmysqlclient.so.18.0.0 from that directory into the same directory on this server, and create a symlink to it named libmysqlclient.so.18.
  2. Edit /etc/ld.so.conf and append one line: /usr/lib64/mysql (the directory containing libmysqlclient.so.18 and libmysqlclient.so.18.0.0).
  3. Run ldconfig to make it take effect.
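
To verify the fix took effect (a sketch):

# the library should now be resolvable by the dynamic linker...
ldconfig -p | grep libmysqlclient
# ...and importable from Hue's Python environment
build/env/bin/python -c "import MySQLdb"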

 2. Server error: 'utf8' codec can't decode byte 0xfc in position 1: invalid start byte

 Solution:
 This is a character-encoding (garbled Chinese) issue; add the following to hue.ini:

options= {"init_command":"SET NAMES 'utf8'"}

Integrating HUE with Oozie

###########################################################################
# Settings to configure liboozie
###########################################################################

[liboozie]
  # The URL where the Oozie service runs on. This is required in order for
  # users to submit jobs. Empty value disables the config check.
  oozie_url=http://hadoop-senior01.zhangbk.com:11000/oozie

  # Requires FQDN in oozie_url if enabled
  ## security_enabled=false

  # Location on HDFS where the workflows/coordinator are deployed when submitted.
  remote_deployement_dir=/user/oozie-apps


###########################################################################
# Settings to configure the Oozie app
###########################################################################

[oozie]
  # Location on local FS where the examples are stored.
  local_data_dir=/opt/oozie-4.0.0-cdh5.3.6/oozie-apps

  # Location on local FS where the data for the examples is stored.
  sample_data_dir=/opt/oozie-4.0.0-cdh5.3.6/oozie-apps/input-data

  # Location on HDFS where the oozie examples and workflows are stored.
  remote_data_dir=/user/oozie-apps

  # Maximum number of Oozie workflows or coordinators to retrieve in one API call.
  oozie_jobs_count=100

  # Use Cron format for defining the frequency of a Coordinator instead of the old frequency number/unit.
  ## enable_cron_scheduling=true

Problems encountered:

/user/oozie/share/lib    The Oozie Share Lib could not be installed at the default location.

Solution:

The directory specified when the sharelib was installed does not match Hue's default directory.
Edit oozie-site.xml:

    <property>
        <name>oozie.service.WorkflowAppService.system.libpath</name>
        <value>/user/oozie/share/lib</value>
    </property>

Re-run:

 bin/oozie-setup.sh sharelib create -fs hdfs://hadoop-senior01.zhangbk.com:8020 -locallib oozie-sharelib-4.0.0-cdh5.3.6-yarn.tar.gz
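
The result can then be checked directly in HDFS; the jars should appear under the libpath configured above (assuming the setup command ran as the matching user):

# list the installed sharelib
hdfs dfs -ls /user/oozie/share/lib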

 
