Hadoop Study Notes (1): Installing on Linux
This is the first post in a series of NoSQL database study notes. I plan to keep the series going, and pointers from experts are welcome!
++++++ Remove the old JDK ++++++

[root@roger etc]# java -version
java version "1.4.2"
gij (GNU libgcj) version 4.1.2 20080704 (Red Hat 4.1.2-48)
Copyright (C) 2006 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

[root@roger etc]# rpm -qa | grep gcj
libgcj-devel-4.1.2-48.el5
java-1.4.2-gcj-compat-devel-1.4.2.0-40jpp.115
java-1.4.2-gcj-compat-src-1.4.2.0-40jpp.115
libgcj-4.1.2-48.el5
libgcj-src-4.1.2-48.el5
java-1.4.2-gcj-compat-1.4.2.0-40jpp.115

[root@roger etc]# yum -y remove java-1.4.2-gcj-compat-1.4.2.0-40jpp.115
Loaded plugins: rhnplugin, security
This system is not registered with RHN. RHN support will be disabled.
Setting up Remove Process
Resolving Dependencies
--> Running transaction check
---> Package java-1.4.2-gcj-compat.i386 0:1.4.2.0-40jpp.115 set to be erased
--> Processing Dependency: java-gcj-compat for package: jakarta-commons-codec
--> Processing Dependency: java-gcj-compat for package: antlr
--> Processing Dependency: java-gcj-compat for package: junit
--> Processing Dependency: java-gcj-compat for package: jakarta-commons-logging
--> Processing Dependency: java-gcj-compat >= 1.0.31 for package: tomcat5-jsp-2.0-api
--> Processing Dependency: java-gcj-compat >= 1.0.64 for package: gjdoc
--> Processing Dependency: java-gcj-compat for package: jakarta-commons-httpclient
--> Processing Dependency: java-gcj-compat >= 1.0.31 for package: tomcat5-servlet-2.4-api
--> Processing Dependency: java-gcj-compat for package: bsf
--> Processing Dependency: java-gcj-compat for package: xalan-j2
--> Processing Dependency: java-gcj-compat for package: xmlrpc
--> Processing Dependency: java-gcj-compat for package: bsh
--> Processing Dependency: java-1.4.2-gcj-compat = 1.4.2.0-40jpp.115 for package: java-1.4.2-gcj-compat-src
--> Processing Dependency: java-1.4.2-gcj-compat = 1.4.2.0-40jpp.115 for package: java-1.4.2-gcj-compat-devel
--> Running transaction check
---> Package antlr.i386 0:2.7.6-4jpp.2 set to be erased
---> Package bsf.i386 0:2.3.0-11jpp.1 set to be erased
---> Package bsh.i386 0:1.3.0-9jpp.1 set to be erased
---> Package gjdoc.i386 0:0.7.7-12.el5 set to be erased
---> Package jakarta-commons-codec.i386 0:1.3-7jpp.2 set to be erased
---> Package jakarta-commons-httpclient.i386 1:3.0-7jpp.1 set to be erased
---> Package jakarta-commons-logging.i386 0:1.0.4-6jpp.1 set to be erased
---> Package java-1.4.2-gcj-compat-devel.i386 0:1.4.2.0-40jpp.115 set to be erased
---> Package java-1.4.2-gcj-compat-src.i386 0:1.4.2.0-40jpp.115 set to be erased
---> Package junit.i386 0:3.8.2-3jpp.1 set to be erased
---> Package tomcat5-jsp-2.0-api.i386 0:5.5.23-0jpp.7.el5_3.2 set to be erased
---> Package tomcat5-servlet-2.4-api.i386 0:5.5.23-0jpp.7.el5_3.2 set to be erased
---> Package xalan-j2.i386 0:2.7.0-6jpp.1 set to be erased
---> Package xmlrpc.i386 0:2.0.1-3jpp.1 set to be erased
--> Processing Dependency: /usr/bin/rebuild-gcj-db for package: eclipse-ecj
--> Restarting Dependency Resolution with new changes.
--> Running transaction check
---> Package eclipse-ecj.i386 1:3.2.1-19.el5 set to be erased
--> Finished Dependency Resolution

Dependencies Resolved

====================================================================================
 Package                       Arch    Version                  Repository    Size
====================================================================================
Removing:
 java-1.4.2-gcj-compat         i386    1.4.2.0-40jpp.115        installed     441
Removing for dependencies:
 antlr                         i386    2.7.6-4jpp.2             installed     2.5 M
 bsf                           i386    2.3.0-11jpp.1            installed     812 k
 bsh                           i386    1.3.0-9jpp.1             installed     1.2 M
 eclipse-ecj                   i386    1:3.2.1-19.el5           installed      18 M
 gjdoc                         i386    0.7.7-12.el5             installed     1.7 M
 jakarta-commons-codec         i386    1.3-7jpp.2               installed     207 k
 jakarta-commons-httpclient    i386    1:3.0-7jpp.1             installed     1.3 M
 jakarta-commons-logging       i386    1.0.4-6jpp.1             installed     233 k
 java-1.4.2-gcj-compat-devel   i386    1.4.2.0-40jpp.115        installed      81 k
 java-1.4.2-gcj-compat-src     i386    1.4.2.0-40jpp.115        installed     0.0
 junit                         i386    3.8.2-3jpp.1             installed     602 k
 tomcat5-jsp-2.0-api           i386    5.5.23-0jpp.7.el5_3.2    installed     163 k
 tomcat5-servlet-2.4-api       i386    5.5.23-0jpp.7.el5_3.2    installed     250 k
 xalan-j2                      i386    2.7.0-6jpp.1             installed     5.1 M
 xmlrpc                        i386    2.0.1-3jpp.1             installed     864 k

Transaction Summary
====================================================================================
Remove     16 Package(s)
Reinstall   0 Package(s)
Downgrade   0 Package(s)

Downloading Packages:
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
  Erasing : java-1.4.2-gcj-compat-devel     1/16
  Erasing : bsf                             2/16
  Erasing : antlr                           3/16
  Erasing : tomcat5-servlet-2.4-api         4/16
  Erasing : jakarta-commons-codec           5/16
  Erasing : java-1.4.2-gcj-compat-src       6/16
  Erasing : jakarta-commons-logging         7/16
  Erasing : junit                           8/16
  Erasing : tomcat5-jsp-2.0-api             9/16
  Erasing : xmlrpc                         10/16
  Erasing : java-1.4.2-gcj-compat          11/16
  Erasing : xalan-j2                       12/16
  Erasing : jakarta-commons-httpclient     13/16
  Erasing : bsh                            14/16
  Erasing : gjdoc                          15/16
  Erasing : eclipse-ecj                    16/16

Removed: java-1.4.2-gcj-compat.i386 0:1.4.2.0-40jpp.115

Dependency Removed:
  antlr.i386 0:2.7.6-4jpp.2
  bsf.i386 0:2.3.0-11jpp.1
  bsh.i386 0:1.3.0-9jpp.1
  eclipse-ecj.i386 1:3.2.1-19.el5
  gjdoc.i386 0:0.7.7-12.el5
  jakarta-commons-codec.i386 0:1.3-7jpp.2
  jakarta-commons-httpclient.i386 1:3.0-7jpp.1
  jakarta-commons-logging.i386 0:1.0.4-6jpp.1
  java-1.4.2-gcj-compat-devel.i386 0:1.4.2.0-40jpp.115
  java-1.4.2-gcj-compat-src.i386 0:1.4.2.0-40jpp.115
  junit.i386 0:3.8.2-3jpp.1
  tomcat5-jsp-2.0-api.i386 0:5.5.23-0jpp.7.el5_3.2
  tomcat5-servlet-2.4-api.i386 0:5.5.23-0jpp.7.el5_3.2
  xalan-j2.i386 0:2.7.0-6jpp.1
  xmlrpc.i386 0:2.0.1-3jpp.1

Complete!
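Before installing the new JDK, it is worth confirming the GCJ packages are really gone, since a leftover gij on the PATH can shadow the Sun JDK later. A minimal check (a sketch; exact output depends on what else is installed):

rpm -qa | grep gcj    # should print nothing after the removal above
java -version         # should now fail until the new JDK is installed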
++++++ Install the new JDK 1.6 (available from the Oracle website) ++++++

[root@roger java]# sudo ./jdk-6u27-linux-i586-rpm.bin
Unpacking...
Checksumming...
Extracting...
UnZipSFX 5.50 of 17 February 2002, by Info-ZIP (Zip-Bugs@lists.wku.edu).
  inflating: jdk-6u27-linux-i586.rpm
  inflating: sun-javadb-common-10.6.2-1.1.i386.rpm
  inflating: sun-javadb-core-10.6.2-1.1.i386.rpm
  inflating: sun-javadb-client-10.6.2-1.1.i386.rpm
  inflating: sun-javadb-demo-10.6.2-1.1.i386.rpm
  inflating: sun-javadb-docs-10.6.2-1.1.i386.rpm
  inflating: sun-javadb-javadoc-10.6.2-1.1.i386.rpm
Preparing...                ########################################### [100%]
   1:jdk                    ########################################### [100%]
Unpacking JAR files...
        rt.jar...
        jsse.jar...
        charsets.jar...
        tools.jar...
        localedata.jar...
        plugin.jar...
        javaws.jar...
        deploy.jar...
Installing JavaDB
Preparing...                ########################################### [100%]
   1:sun-javadb-common      ########################################### [ 17%]
   2:sun-javadb-core        ########################################### [ 33%]
   3:sun-javadb-client      ########################################### [ 50%]
   4:sun-javadb-demo        ########################################### [ 67%]
   5:sun-javadb-docs        ########################################### [ 83%]
   6:sun-javadb-javadoc     ########################################### [100%]

Java(TM) SE Development Kit 6 successfully installed.

Product Registration is FREE and includes many benefits:
* Notification of new versions, patches, and updates
* Special offers on Oracle products, services and training
* Access to early releases and documentation

Product and system data will be collected. If your configuration supports
a browser, the JDK Product Registration form will be presented. If you do
not register, none of this information will be saved. You may also register
your JDK later by opening the register.html file (located in the JDK
installation directory) in a browser.

For more information on what data Registration collects and how it is
managed and used, see:
http://java.sun.com/javase/registration/JDKRegistrationPrivacy.html

Press Enter to continue.....

Done.

++++++ Create the hadoop user ++++++

[root@roger java]# groupadd hadoop
[root@roger java]# useradd -g hadoop -G hadoop hadoop
useradd: warning: the home directory already exists.
Not copying any file from skel directory into it.
[root@roger java]# passwd hadoop
Changing password for user hadoop.
New UNIX password:
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password:
passwd: all authentication tokens updated successfully.

====== Add the following to /etc/profile ======

export JAVA_HOME=/usr/java/jdk1.6.0_27
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/rt.jar

====== source /etc/profile (then verify) ======

[root@roger bin]# source /etc/profile
[root@roger bin]# which java
/usr/java/jdk1.6.0_27/bin/java
[root@roger bin]# java -version
java version "1.6.0_27"
Java(TM) SE Runtime Environment (build 1.6.0_27-b07)
Java HotSpot(TM) Client VM (build 20.2-b06, mixed mode, sharing)
[root@roger bin]#

====== Verify the Java environment for the hadoop user ======

[root@roger bin]# su - hadoop
-bash-3.2$ which java
/usr/java/jdk1.6.0_27/bin/java

++++++ Configure the ssh key ++++++

ssh-keygen -t rsa
cat /home/hadoop/.ssh/id_rsa.pub >> /home/hadoop/.ssh/authorized_keys

++++++ Edit the Hadoop conf ++++++

-bash-3.2$ pwd
/home/hadoop/hadoop-0.20.2/conf
-bash-3.2$ cat hadoop-env.sh | grep JAVA_HOME
# The only required environment variable is JAVA_HOME.  All others are
# set JAVA_HOME in this file, so that it is correctly defined on
# export JAVA_HOME=/usr/lib/j2sdk1.5-sun
export JAVA_HOME=/usr/java/jdk1.6.0_27
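One caveat on the ssh key step above: sshd silently ignores authorized_keys when its permissions are too loose, which is the classic reason start-all.sh keeps prompting for a password (as it does in the startup output further down). A quick hardening pass, run as the hadoop user (a sketch, assuming the default paths used above):

chmod 700 /home/hadoop/.ssh
chmod 600 /home/hadoop/.ssh/authorized_keys
ssh localhost date    # should print the date without asking for a password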
====== Modify the following files ======

++++++ Rename the original core-site.xml, create a new file, and add the following ++++++

-bash-3.2$ cat core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/hadooptmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://192.168.2.130:9000</value>
    <description>The name of the default file system. A URI whose scheme
    and authority determine the FileSystem implementation. The uri's
    scheme determines the config property (fs.SCHEME.impl) naming the
    FileSystem implementation class. The uri's authority is used to
    determine the host, port, etc. for a filesystem.</description>
  </property>
</configuration>
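Since hadoop.tmp.dir points at /home/hadoop/hadooptmp, it helps to create that directory up front with the right owner; the namenode puts its dfs/name storage under it when formatting, as the format output later shows. A sketch, assuming the hadoop user and group created earlier:

mkdir -p /home/hadoop/hadooptmp
chown -R hadoop:hadoop /home/hadoop/hadooptmp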
++++++ Rename the original hdfs-site.xml, create a new file, and add the following ++++++

-bash-3.2$ cat hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
    <description>Default block replication. The actual number of
    replications can be specified when the file is created. The default
    is used if replication is not specified in create time.
    </description>
  </property>
</configuration>
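As the description says, dfs.replication=1 only sets the default for new files; the replication factor of an existing file can still be changed with -setrep, which appears in the FsShell usage listing near the end of this post. For example (a sketch; the path is hypothetical):

./hadoop fs -setrep -w 1 /user/hadoop/test.txt    # -w waits until the new replication factor takes effect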
++++++ Rename the original mapred-site.xml, create a new file, and add the following ++++++

-bash-3.2$ cat mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>192.168.2.130:9001</value>
    <description>The host and port that the MapReduce job tracker runs
    at. If "local", then jobs are run in-process as a single map
    and reduce task.
    </description>
  </property>
</configuration>
++++++ Format the namenode ++++++

-bash-3.2$ pwd
/home/hadoop/hadoop-0.20.2/bin
-bash-3.2$ ./hadoop namenode -format
11/10/12 07:01:40 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = hadoopName/192.168.2.130
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 0.20.2
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
11/10/12 07:01:40 INFO namenode.FSNamesystem: fsOwner=hadoop,hadoop
11/10/12 07:01:40 INFO namenode.FSNamesystem: supergroup=supergroup
11/10/12 07:01:40 INFO namenode.FSNamesystem: isPermissionEnabled=true
11/10/12 07:01:40 INFO common.Storage: Image file of size 96 saved in 0 seconds.
11/10/12 07:01:40 INFO common.Storage: Storage directory /home/hadoop/hadooptmp/dfs/name has been successfully formatted.
11/10/12 07:01:40 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoopName/192.168.2.130
************************************************************/

++++++ Start Hadoop ++++++

-bash-3.2$ ./start-all.sh
starting namenode, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-namenode-hadoopName.out
hadoop@localhost's password:
localhost: starting datanode, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-datanode-hadoopName.out
hadoop@localhost's password:
localhost: starting secondarynamenode, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-secondarynamenode-hadoopName.out
starting jobtracker, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-jobtracker-hadoopName.out
hadoop@localhost's password:
localhost: starting tasktracker, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-tasktracker-hadoopName.out
localhost: [Fatal Error] mapred-site.xml:15:18: The markup in the document following the root element must be well-formed.
-bash-3.2$ jps
11597 Jps
-bash-3.2$ hadoop fs -ls
-bash: hadoop: command not found
-bash-3.2$ ./hadoop fs -ls
[Fatal Error] mapred-site.xml:15:18: The markup in the document following the root element must be well-formed.
11/10/12 07:06:35 FATAL conf.Configuration: error parsing conf file: org.xml.sax.SAXParseException: The markup in the document following the root element must be well-formed.
Exception in thread "main" java.lang.RuntimeException: org.xml.sax.SAXParseException: The markup in the document following the root element must be well-formed.
        at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1168)
        at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:1030)
        at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:980)
        at org.apache.hadoop.conf.Configuration.get(Configuration.java:382)
        at org.apache.hadoop.conf.Configuration.getInt(Configuration.java:451)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:182)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:170)
        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
        at org.apache.hadoop.fs.FsShell.init(FsShell.java:82)
        at org.apache.hadoop.fs.FsShell.run(FsShell.java:1731)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
        at org.apache.hadoop.fs.FsShell.main(FsShell.java:1880)
Caused by: org.xml.sax.SAXParseException: The markup in the document following the root element must be well-formed.
        at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:249)
        at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:284)
        at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:180)
        at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1079)
        ... 17 more

++++++ Now let's see what the error actually is ++++++

-bash-3.2$ cat hadoop-hadoop-tasktracker-hadoopName.out
[Fatal Error] mapred-site.xml:15:18: The markup in the document following the root element must be well-formed.
-bash-3.2$ cat hadoop-hadoop-tasktracker-hadoopName.log
2011-10-12 07:03:08,906 INFO org.apache.hadoop.mapred.TaskTracker: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting TaskTracker
STARTUP_MSG:   host = hadoopName/192.168.2.130
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.2
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
2011-10-12 07:03:09,053 FATAL org.apache.hadoop.conf.Configuration: error parsing conf file: org.xml.sax.SAXParseException: The markup in the document following the root element must be well-formed.
2011-10-12 07:03:09,054 ERROR org.apache.hadoop.mapred.TaskTracker: Can not start task tracker because java.lang.RuntimeException: org.xml.sax.SAXParseException: The markup in the document following the root element must be well-formed.
        at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1168)
        at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:1030)
        at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:980)
        at org.apache.hadoop.conf.Configuration.get(Configuration.java:382)
        at org.apache.hadoop.mapred.JobConf.checkAndWarnDeprecation(JobConf.java:1662)
        at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:165)
        at org.apache.hadoop.mapred.TaskTracker.main(TaskTracker.java:2829)
Caused by: org.xml.sax.SAXParseException: The markup in the document following the root element must be well-formed.
        at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:249)
        at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:284)
        at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:180)
        at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1079)
        ... 6 more
2011-10-12 07:03:09,055 INFO org.apache.hadoop.mapred.TaskTracker: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down TaskTracker at hadoopName/192.168.2.130
************************************************************/
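A SAXParseException like this always points at malformed XML in the named conf file. If xmllint (part of libxml2, usually present on RHEL) is available, the file can be checked directly without bouncing any daemons (a sketch):

xmllint --noout /home/hadoop/hadoop-0.20.2/conf/mapred-site.xml
# silent on success; on a malformed file it reports the offending
# line and column, matching the 15:18 position in the error above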
++++++ The error means there was stray content after the closing </configuration> tag in mapred-site.xml: "markup in the document following the root element" is exactly that. Rewrite the file so that nothing follows </configuration>, then start again; this time it is fine. ++++++

-bash-3.2$ cat mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>192.168.2.130:9001</value>
  </property>
</configuration>
-bash-3.2$ ./start-all.sh
starting namenode, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-namenode-hadoopName.out
hadoop@192.168.2.130's password:
192.168.2.130: starting datanode, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-datanode-hadoopName.out
hadoop@192.168.2.130's password:
192.168.2.130: starting secondarynamenode, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-secondarynamenode-hadoopName.out
starting jobtracker, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-jobtracker-hadoopName.out
hadoop@192.168.2.130's password:
192.168.2.130: starting tasktracker, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-tasktracker-hadoopName.out
-bash-3.2$ jps
17189 SecondaryNameNode
17461 Jps
17094 DataNode
-bash-3.2$ ./hadoop dfs
Usage: java FsShell
           [-ls <path>]
           [-lsr <path>]
           [-du <path>]
           [-dus <path>]
           [-count[-q] <path>]
           [-mv <src> <dst>]
           [-cp <src> <dst>]
           [-rm [-skipTrash] <path>]
           [-rmr [-skipTrash] <path>]
           [-expunge]
           [-put <localsrc> ... <dst>]
           [-copyFromLocal <localsrc> ... <dst>]
           [-moveFromLocal <localsrc> ... <dst>]
           [-get [-ignoreCrc] [-crc] <src> <localdst>]
           [-getmerge <src> <localdst> [addnl]]
           [-cat <src>]
           [-text <src>]
           [-copyToLocal [-ignoreCrc] [-crc] <src> <localdst>]
           [-moveToLocal [-crc] <src> <localdst>]
           [-mkdir <path>]
           [-setrep [-R] [-w] <rep> <path/file>]
           [-touchz <path>]
           [-test -[ezd] <path>]
           [-stat [format] <path>]
           [-tail [-f] <file>]
           [-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
           [-chown [-R] [OWNER][:[GROUP]] PATH...]
           [-chgrp [-R] GROUP PATH...]
           [-help [cmd]]

Generic options supported are
-conf <configuration file>     specify an application configuration file
-D <property=value>            use value for given property
-fs <local|namenode:port>      specify a namenode
-jt <local|jobtracker:port>    specify a job tracker
-files <comma separated list of files>    specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars>    specify comma separated jar files to include in the classpath.
-archives <comma separated list of archives>    specify comma separated archives to be unarchived on the compute machines.

The general command line syntax is
bin/hadoop command [genericOptions] [commandOptions]

-bash-3.2$ ls
hadoop            hadoop-daemon.sh   rcc        start-balancer.sh  start-mapred.sh  stop-balancer.sh  stop-mapred.sh
hadoop-config.sh  hadoop-daemons.sh  slaves.sh  start-all.sh       start-dfs.sh     stop-all.sh       stop-dfs.sh
-bash-3.2$ ./stop-all.sh
no jobtracker to stop
hadoop@192.168.2.130's password:
192.168.2.130: no tasktracker to stop
no namenode to stop
hadoop@192.168.2.130's password:
192.168.2.130: stopping datanode
hadoop@192.168.2.130's password:
192.168.2.130: stopping secondarynamenode
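Note that jps above lists only DataNode and SecondaryNameNode, and stop-all.sh later reports "no jobtracker to stop" and "no namenode to stop", so the remaining daemons deserve a check before calling the cluster healthy. Besides jps and the logs directory, the 0.20.x web UIs are a quick way to verify (a sketch, assuming the default ports):

jps                                 # expect NameNode, DataNode, SecondaryNameNode, JobTracker, TaskTracker
curl http://192.168.2.130:50070/    # NameNode web UI (HDFS status)
curl http://192.168.2.130:50030/    # JobTracker web UI (MapReduce status)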