It is recommended to run Phoenix on an Apache release of HBase; CDH builds may cause problems.
See this Stack Overflow thread describing errors with a non-Apache HBase build:
https://stackoverflow.com/questions/31849454/using-phoenix-with-cloudera-hbase-installed-from-repo
Download address (Tsinghua Apache mirror):
https://mirrors.tuna.tsinghua.edu.cn/apache/phoenix/
1. Upload the Phoenix tarball to a node and extract it:
tar -xzvf apache-phoenix-4.10.0-HBase-1.2-bin.tar.gz -C /opt/
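If the node has outbound network access, the tarball can also be fetched straight from the mirror instead of uploaded by hand. A sketch; the exact directory layout under the mirror is an assumption, and older releases may only remain on archive.apache.org:
# assumed mirror path for the 4.10.0 / HBase-1.2 release; verify against the mirror's listing
wget https://mirrors.tuna.tsinghua.edu.cn/apache/phoenix/apache-phoenix-4.10.0-HBase-1.2/bin/apache-phoenix-4.10.0-HBase-1.2-bin.tar.gz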
2. Copy the Phoenix server jar into the HBase lib directory on every node:
cd /opt/apache-phoenix-4.10.0-HBase-1.2-bin
cp phoenix-4.10.0-HBase-1.2-server.jar /opt/hbase/lib
scp phoenix-4.10.0-HBase-1.2-server.jar test102:/opt/hbase/lib
scp phoenix-4.10.0-HBase-1.2-server.jar test103:/opt/hbase/lib
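With more nodes, a small loop saves repetition. A sketch; the host list test102/test103 matches this cluster and should be adjusted to your own:
# copy the server jar to each remaining HBase node
for host in test102 test103; do
  scp phoenix-4.10.0-HBase-1.2-server.jar ${host}:/opt/hbase/lib/
done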
3. Restart HBase:
stop-hbase.sh
start-hbase.sh
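Before connecting, it can help to confirm that the server jar actually landed on the HBase classpath (a quick check, assuming the hbase launcher is on PATH):
# list classpath entries one per line and look for the Phoenix jar
hbase classpath | tr ':' '\n' | grep phoenix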
4. Connect to Phoenix via sqlline:
cd /opt/apache-phoenix-4.10.0-HBase-1.2-bin/bin
./sqlline.py localhost
Output like the following indicates a successful installation:
[hadoop@test101 bin]$ ./sqlline.py localhost
Setting property: [incremental, false]
Setting property: [isolation, TRANSACTION_READ_COMMITTED]
issuing: !connect jdbc:phoenix:localhost none none org.apache.phoenix.jdbc.PhoenixDriver
Connecting to jdbc:phoenix:localhost
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/phoenix-4.10.0-HBase-1.2-bin/phoenix-4.10.0-HBase-1.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
17/08/14 17:02:25 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/08/14 17:02:26 WARN shortcircuit.DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
Connected to: Phoenix (version 4.10)
Driver: PhoenixEmbeddedDriver (version 4.10)
Autocommit status: true
Transaction isolation: TRANSACTION_READ_COMMITTED
Building list of tables and columns for tab-completion (set fastconnect to true to skip)...
91/91 (100%) Done
Done
sqlline version 1.2.0
0: jdbc:phoenix:localhost>
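A short smoke test at the prompt verifies reads and writes end to end. The table name SMOKE_TEST is arbitrary, and sqlline autocommits, as the banner above shows:
0: jdbc:phoenix:localhost> CREATE TABLE IF NOT EXISTS SMOKE_TEST (ID BIGINT NOT NULL PRIMARY KEY, NAME VARCHAR);
0: jdbc:phoenix:localhost> UPSERT INTO SMOKE_TEST VALUES (1, 'hello');
0: jdbc:phoenix:localhost> SELECT * FROM SMOKE_TEST;
0: jdbc:phoenix:localhost> DROP TABLE SMOKE_TEST;
0: jdbc:phoenix:localhost> !quit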
If starting Phoenix throws the following exception:
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException): org.apache.hadoop.hbase.DoNotRetryIOException: Class org.apache.phoenix.coprocessor.MetaDataEndpointImpl cannot be loaded Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
Solution:
Add the following property to HBase's configuration file hbase-site.xml:
<property>
  <name>hbase.table.sanity.checks</name>
  <value>false</value>
</property>
Distribute the updated configuration file to all nodes and restart HBase.
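For example, assuming HBase lives under /opt/hbase on every node (hostnames as above):
# push the edited config to the other nodes, then do a rolling restart
for host in test102 test103; do
  scp /opt/hbase/conf/hbase-site.xml ${host}:/opt/hbase/conf/
done
stop-hbase.sh
start-hbase.sh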
hbase.table.sanity.checks controls HBase's sanity checks on table descriptors. When it is set to true, the following checks run (a shell sketch after this list illustrates check 1 and the per-table bypass):
1. Check max file size (hbase.hregion.max.filesize): at least 2MB.
2. Check flush size (hbase.hregion.memstore.flush.size): at least 1MB.
3. Check that coprocessors and other specified plugin classes can be loaded.
4. Check that compression can be loaded.
5. Check that encryption can be loaded.
6. Verify the compaction policy.
7. Check that there is at least one column family.
8. Check blockSize.
9. Check versions.
10. Check minVersions <= maxVersions.
11. Check replication scope.
12. Check the data replication factor; it can be 0 (the default) when the user has not set it explicitly.
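As the exception text notes, the checks can also be bypassed per table instead of cluster-wide. A sketch run from the shell (table and family names are illustrative): the first create trips check 1 because MAX_FILESIZE is below the 2MB floor, while the second disables the sanity checks for that one table through its descriptor:
# fails the max-file-size check: 1048576 < 2097152
echo "create 'demo', {NAME => 'f'}, MAX_FILESIZE => '1048576'" | hbase shell
# same descriptor, but with sanity checks switched off for this table only
echo "create 'demo', {NAME => 'f'}, MAX_FILESIZE => '1048576', CONFIGURATION => {'hbase.table.sanity.checks' => 'false'}" | hbase shell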