Adding a new node
IP: 192.168.0.113
1. Update /etc/hosts: on every node in the cluster, add a mapping for the new node:
echo "192.168.0.113 hadoop.slave2">>/etc/hosts
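Re-running the echo above would append a duplicate line each time. A hedged sketch of an idempotent variant (the file path is a parameter here purely so the helper can be exercised safely; on a real node it would be /etc/hosts):

```shell
# Append a host mapping only if the exact line is not already present.
# Usage: add_host_entry FILE "IP HOSTNAME"
add_host_entry() {
  entry_file="$1"
  entry="$2"
  grep -qxF "$entry" "$entry_file" || echo "$entry" >> "$entry_file"
}

# On each cluster node (as root) one would run:
#   add_host_entry /etc/hosts "192.168.0.113 hadoop.slave2"
```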
2. Configure SSH equivalence (passwordless login). On the master node, append the new node's public key (this assumes hadoop.slave2 already has an RSA key pair) to the master's authorized_keys:
ssh hadoop.slave2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
Then push the merged file out to the slave nodes:
scp ~/.ssh/authorized_keys hadoop.slave1:~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys hadoop.slave2:~/.ssh/authorized_keys
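The `>>` append in step 2 is not idempotent either: repeating it leaves duplicate keys in authorized_keys. A minimal duplicate-safe merge sketch (the file path is parameterized only so it can be tested away from the real cluster; the key string in the test is a shortened placeholder):

```shell
# Merge one public-key line into an authorized_keys file, skipping
# the append when that exact key line is already present.
merge_key() {
  auth_file="$1"
  key_line="$2"
  touch "$auth_file"
  grep -qxF "$key_line" "$auth_file" || echo "$key_line" >> "$auth_file"
}
```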
3. On the namenode, edit the slaves file under /usr/local/hadoop-1.2.1/conf and add the new node's IP or hostname:
echo "192.168.0.113">>/usr/local/hadoop-1.2.1/conf/slaves
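The slaves file is only read by the start-up scripts on the master, so a typo in it silently skips the node. A small cross-check sketch, assuming one hostname or IP per line (file paths are parameters here only so the helper can be tested without touching the real cluster files):

```shell
# Print every entry in a slaves file that has no mapping in a hosts file.
# Usage: missing_slaves SLAVES_FILE HOSTS_FILE
missing_slaves() {
  slaves_file="$1"
  hosts_file="$2"
  while read -r node; do
    [ -z "$node" ] && continue
    grep -qwF "$node" "$hosts_file" || echo "$node"
  done < "$slaves_file"
}

# On the namenode one would run:
#   missing_slaves /usr/local/hadoop-1.2.1/conf/slaves /etc/hosts
```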
4. Set up the Java and Hadoop environments on the new node. From the master, copy both installations over:
scp -r /usr/local/hadoop-1.2.1/ hadoop.slave2:/usr/local/
scp -r /usr/local/jdk1.7.0_51/ hadoop.slave2:/usr/local/
Then, on hadoop.slave2, fix ownership and set the environment variables (e.g. in ~/.bashrc):
chown -R hadoop:hadoop /usr/local/hadoop-1.2.1/
chown -R hadoop:hadoop /usr/local/jdk1.7.0_51/
export JAVA_HOME=/usr/local/jdk1.7.0_51
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH:$HOME/bin
export HADOOP_HOME=/usr/local/hadoop-1.2.1
export PATH=$PATH:$HADOOP_HOME/bin
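Re-sourcing a profile that blindly prepends to $PATH piles up duplicate entries over time. A hedged sketch of a helper that prepends a directory only once:

```shell
# Prepend a directory to PATH unless it is already listed.
path_prepend() {
  case ":$PATH:" in
    *":$1:"*) ;;               # already on PATH, nothing to do
    *) PATH="$1:$PATH" ;;
  esac
}

# e.g. path_prepend /usr/local/jdk1.7.0_51/bin
```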
5. Start the services on the new node:
hadoop-daemon.sh start datanode
hadoop-daemon.sh start tasktracker
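After starting the daemons, `jps` on the new node should list DataNode and TaskTracker. A tiny sketch of that check, written to read jps-style output from stdin so it can be tested without a running cluster:

```shell
# Succeeds when the given daemon name appears as a word in jps output.
daemon_running() {
  grep -qw "$1"
}

# On hadoop.slave2 one would run:
#   jps | daemon_running DataNode && jps | daemon_running TaskTracker \
#     && echo "new node daemons are up"
```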
6. Balance disk utilization across the datanodes:
start-balancer.sh
By default the balancer runs until every datanode's usage is within 10% of the cluster average; start-balancer.sh -threshold 5 tightens that to 5%.