The following combines notes excerpted from the web with problems I ran into myself.
- When restarting the HBase service you may see an error like the following:
INFO ipc.HbaseRPC: Server at /227.23.56.100:60020 could not be reached after 1 tries, giving up.
There can be many causes for this error, for example inconsistent addresses across the various configuration files. One cause I ran into is that the port is occupied by some other process, or is still held by a stale HBase process even though the service (that is, the port) is no longer usable. The fix is to find the process bound to that port, kill it, and then restart the HBase service; the error above disappears and the service starts normally. The concrete steps are:
1. lsof -i:60020
2. kill -9 PID (the process ID)
3. Restart the HBase service.
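Put together, the sequence looks roughly like the following. This is only a sketch: the stop/start scripts are the standard ones shipped under the HBase bin directory, $HBASE_HOME is assumed to point at your HBase install, and 60020 is the default RegionServer port, so adjust as needed.
lsof -i:60020                      # find the PID holding the port
kill -9 <PID>                      # kill the stale process, substituting the PID reported above
$HBASE_HOME/bin/stop-hbase.sh      # make sure HBase is fully stopped
$HBASE_HOME/bin/start-hbase.sh     # then start it again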
- Error 2
The error is:
localhost: starting zookeeper, logging to /usr/local/hadoop/hbase/bin/../logs/hbase-hadoop-zookeeper-user-HP-dx2480-MT-NA125PA.out
localhost: java.net.BindException: Address already in use
localhost: at sun.nio.ch.Net.bind(Native Method)
localhost: at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:137)
localhost: at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:77)
localhost: at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:70)
localhost: at org.apache.zookeeper.server.NIOServerCnxn$Factory.<init>(NIOServerCnxn.java:122)
localhost: at org.apache.zookeeper.server.ZooKeeperServerMain.runFromConfig(ZooKeeperServerMain.java:106)
localhost: at org.apache.hadoop.hbase.zookeeper.HQuorumPeer.runZKServer(HQuorumPeer.java:85)
localhost: at org.apache.hadoop.hbase.zookeeper.HQuorumPeer.main(HQuorumPeer.java:70)
Solution:
It looks like your zookeeper process is already running. Or (much less likely), something else is already listening on port 2181 or 2888. Did you install a separate zookeeper package and start it?
If so, you’ll need to tell HBase not to start zookeeper automatically. Edit the conf/hbase-env.sh file and add the line:
export HBASE_MANAGES_ZK=false
On the other hand, if you want HBase to start zookeeper automatically, then change that setting to “true”, make sure any current zookeeper process is stopped and try again.
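A quick way to check and apply this is something like the following. It is only a sketch, assuming the install path shown in the log above (/usr/local/hadoop/hbase) and the default ZooKeeper ports 2181 (client) and 2888 (peer); adjust both to your setup.
lsof -i:2181        # is something already listening on the ZooKeeper client port?
lsof -i:2888        # ...or on the quorum peer port?
echo 'export HBASE_MANAGES_ZK=false' >> /usr/local/hadoop/hbase/conf/hbase-env.sh   # only if you really run an external ZooKeeper
If you would rather have HBase manage ZooKeeper itself, stop the external ZooKeeper first (for a standalone install, via its own zkServer.sh stop script) and leave HBASE_MANAGES_ZK at true.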
- After I had installed and configured Hadoop and HBase (see here for the specifics), I started the HBase interactive shell, but entering a command produced the following error:
ERROR: org.apache.hadoop.hbase.MasterNotRunningException: Retried 7 times
Why is that? Why did the Master not start? Looking at the Master log under the logs directory turns up the following:
2012-02-01 14:41:52,867 FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown.
org.apache.hadoop.ipc.RPC$VersionMismatch: Protocol org.apache.hadoop.hdfs.protocol.ClientProtocol version mismatch. (client = 42, server = 41)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:364)
…………..
Solution: delete the hadoop-core jar under hbase/lib, copy hadoop-0.20.2-core.jar from the hadoop directory into hbase/lib, and then restart HBase.
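In commands this is roughly the following; a minimal sketch, assuming Hadoop lives at /usr/local/hadoop and HBase at /usr/local/hadoop/hbase (the exact name of the bundled hadoop-core jar varies by HBase release, hence the wildcard).
rm /usr/local/hadoop/hbase/lib/hadoop*core*.jar                            # drop the hadoop-core jar HBase ships with
cp /usr/local/hadoop/hadoop-0.20.2-core.jar /usr/local/hadoop/hbase/lib/   # replace it with the jar of the running Hadoop
/usr/local/hadoop/hbase/bin/stop-hbase.sh
/usr/local/hadoop/hbase/bin/start-hbase.sh
The point is simply that the client-side Hadoop jar inside hbase/lib must match the version of the HDFS it talks to, which is what the "client = 42, server = 41" mismatch is complaining about.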
- Another issue concerns IP address mapping and DNS resolution. We use IPv4, but if the machine's IPv6 stack has not been disabled, hostnames may end up resolving to IPv6 addresses, so it is best to turn IPv6 off. To disable IPv6:
Append the following to the end of /etc/modprobe.d/dist.conf:
alias net-pf-10 off
alias ipv6 off
You can do this with an editor such as vi, or with a command:
cat <<EOF>>/etc/modprobe.d/dist.conf
alias net-pf-10 off
alias ipv6 off
EOF
After making the change, reboot the machine.
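To confirm the change took effect, a quick check after the reboot (just one way to verify; the exact output depends on the distribution):
lsmod | grep ipv6        # should print nothing once the ipv6 module is no longer loaded
ifconfig | grep inet6    # interfaces should show no inet6 addresses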
Finally, the clocks across the cluster must be kept in sync; NTP can be used for time synchronization. The detailed configuration is easy to find online, so I won't go into it here.
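For completeness, a one-off sync can be done with ntpdate; this is only a sketch, with pool.ntp.org standing in for whatever NTP server your cluster actually uses, and a proper setup would run ntpd (or similar) against an internal time server on every node.
ntpdate pool.ntp.org                           # one-off synchronization against an example server
# for periodic resync, a crontab entry such as:
# */30 * * * * /usr/sbin/ntpdate pool.ntp.org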