Distributed Installation of Hadoop on Multiple Nodes (2)

Published: 2017-05-07 11:43  Source: 毕业论文


$ssh-keygen -t rsa
$cd .ssh
Note: when the command asks where to save the generated files, press Enter to accept the default location.
[hadoop@localhost ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):  (press Enter here)
Created directory '/home/hadoop/.ssh'.
Enter passphrase (empty for no passphrase):                  (press Enter here)
Enter same passphrase again:                               (press Enter here)
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
23:a8:36:c7:c3:89:1a:f9:97:00:88:36:73:d6:5a:0b hadoop@localhost
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|                 |
|o .              |
|++ E +           |
|..= = o S        |
| ..* o . .       |
|o =.*.           |
| = oo.           |
|. ..             |
+-----------------+
2.    Append the public key to the authorized_keys file (use `>>` so an existing authorized_keys is appended to rather than overwritten):
$cat id_rsa.pub >> authorized_keys
 
$chmod 600 /home/hadoop/.ssh/authorized_keys
$chmod 700 /home/hadoop/.ssh
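The two chmod commands above are not optional: with its default StrictModes setting, sshd ignores an authorized_keys file whose own permissions, or whose containing directory's permissions, are too open. A minimal sketch that reproduces the required modes in a throwaway directory (GNU stat assumed), so you can see exactly what sshd expects without touching your real ~/.ssh:

```shell
# Scratch demo of the permission requirements for key-based SSH login.
# Uses a temporary directory so the real ~/.ssh is left untouched.
demo=$(mktemp -d)
touch "$demo/authorized_keys"
chmod 700 "$demo"                  # .ssh directory: owner-only access
chmod 600 "$demo/authorized_keys"  # key file: owner read/write only
stat -c '%a' "$demo" "$demo/authorized_keys"   # prints 700, then 600
```

If passwordless login still fails after copying the keys, these modes (plus the 755 on /home/hadoop set below) are the first thing to re-check.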
3.    Edit the sshd_config file:
Exit the hadoop user: exit
#vim /etc/ssh/sshd_config
RSAAuthentication yes //this was already set to yes, so no change was needed.
 
4.    Copy the passwordless-login setup to all other nodes (the -r flag is needed because .ssh is a directory):
$scp -r /home/hadoop/.ssh hostname:/home/hadoop/
 
#scp -r /etc/ssh/sshd_config hostname:/etc/ssh/sshd_config

5.    Restart the sshd service:
On all other nodes, run:
#service sshd restart
Then switch to the hadoop user and run:
chmod 755 /home/hadoop   (the permissions on /home/hadoop should be 755)

Master node deployment
Disable the firewall:
Switch to the root user and run:
service iptables stop

Recommended directory layout:
On each node, manually create the Hadoop runtime directory: mkdir -p /home/hadoop/deploy
The Hadoop deployment also uses the following directories:
Hadoop data directory: /home/hadoop/sysdata
Hadoop namenode data directory: /home/hadoop/namenode
Hadoop secondary namenode checkpoint directory: /home/hadoop/secondaryname
Hadoop temporary file directory: /home/hadoop/tmp
MapReduce temporary file directory: /home/hadoop/mapred
These directories do not need to be created manually; they are created automatically when the cluster is started.
Place the prepared hadoop-1.2.1.tar.gz package into the deploy directory on the master node.
Extract the Hadoop package: tar -zxvf hadoop-1.2.1.tar.gz
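Only the master's copy is unpacked at this point, but the same tree must eventually exist on every node. A hedged sketch of that fan-out, written as a dry run that only prints the per-node commands (the IPs are the slave addresses used in this article; adjust them to your cluster):

```shell
# Dry run: print the deploy commands for each slave instead of executing them.
# 20.20.20.111 and 20.20.20.112 are the slave IPs used in this article.
SLAVES="20.20.20.111 20.20.20.112"
for node in $SLAVES; do
  echo "scp /home/hadoop/deploy/hadoop-1.2.1.tar.gz hadoop@$node:/home/hadoop/deploy/"
  echo "ssh hadoop@$node 'tar -zxvf /home/hadoop/deploy/hadoop-1.2.1.tar.gz -C /home/hadoop/deploy'"
done
```

Drop the echo wrappers to run the commands for real once the passwordless SSH configured in the previous section is working.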
Modifying the configuration files
cd /home/hadoop/deploy/hadoop-1.2.1/conf
Edit the masters, slaves, core-site.xml, hdfs-site.xml, mapred-site.xml, and hadoop-env.sh files.
masters (list the master node's IP address)
20.20.20.110
slaves (list the IP addresses of all slave nodes)
20.20.20.111
20.20.20.112
core-site.xml
<property>
<name>fs.default.name</name>
<value>hdfs://20.20.20.110:8020</value>
<final>true</final>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hadoop/tmp</value>
<final>true</final>
</property>
hdfs-site.xml
<property>
<name>dfs.name.dir</name>
<value>/home/hadoop/namenode</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>/home/hadoop/sysdata</value>
</property>
<property>
<name>fs.checkpoint.dir</name>
<value>/home/hadoop/secondaryname</value>
</property>