CDH3u2 was released on 2011/10/21, so I tried upgrading to it.
CDH3 Installation Guide - Cloudera Support
Upgrading to CDH3 - Cloudera Support

 

  $ hadoop version
  Hadoop 0.20.2-cdh3u1
  Subversion file:///tmp/nightly_2011-07-18_07-57-52_3/hadoop-0.20-0.20.2+923.97-1~maverick -r bdafb1dbffd0d5f2fbc6ee022e1c8df6500fd638
  Compiled by root on Mon Jul 18 09:40:07 PDT 2011
  From source with checksum 3127e3d410455d2bacbff7673bf3284c

 

CDH3u1 is currently installed.

 

  $ for x in /etc/init.d/hadoop-* ; do sudo $x stop ; done
  [sudo] password for h-akanuma:
  Stopping Hadoop datanode daemon: no datanode to stop
  hadoop-0.20-datanode.
  Stopping Hadoop jobtracker daemon: no jobtracker to stop
  hadoop-0.20-jobtracker.
  Stopping Hadoop namenode daemon: no namenode to stop
  hadoop-0.20-namenode.
  Stopping Hadoop secondarynamenode daemon: no secondarynamenode to stop
  hadoop-0.20-secondarynamenode.
  Stopping Hadoop tasktracker daemon: no tasktracker to stop
  hadoop-0.20-tasktracker.
  Stopping Hadoop HBase master daemon: no master to stop because kill -0 of pid 2271 failed with status 1
  hbase-master.
  Stopping Hadoop HBase regionserver daemon: stopping regionserver........
  hbase-regionserver.
  JMX enabled by default
  Using config: /etc/zookeeper/zoo.cfg
  Stopping zookeeper ... STOPPED
  $
  $ jps
  9534 Jps
  $
  $ ps aux | grep hadoop
  1000 9544 0.0 0.0 5164 788 pts/0 S+ 21:56 0:00 grep --color=auto hadoop

 

Stop the Hadoop-related processes.
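With everything stopped, this is also a good point to back up the NameNode metadata, as the upgrade guide recommends. A minimal sketch of my own, not part of the original run; the dfs.name.dir path below is an assumption based on CDH's pseudo-distributed defaults, so check hdfs-site.xml for the actual location:

  # Back up HDFS metadata before upgrading (path is an assumption;
  # verify dfs.name.dir in /etc/hadoop-0.20/conf/hdfs-site.xml first)
  $ sudo tar czf ~/namenode-backup-$(date +%Y%m%d).tar.gz \
      /var/lib/hadoop-0.20/cache/hadoop/dfs/name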

 

  $ sudo dpkg -i ダウンロード/cdh3-repository_1.0_all.deb
  Selecting previously unselected package cdh3-repository.
  (Reading database ... 262400 files and directories currently installed.)
  Unpacking cdh3-repository (from .../cdh3-repository_1.0_all.deb) ...
  Setting up cdh3-repository (1.0) ...
  gpg: keyring `/etc/apt/secring.gpg' created
  gpg: keyring `/etc/apt/trusted.gpg.d/cloudera-cdh3.gpg' created
  gpg: key 02A818DD: public key "Cloudera Apt Repository" imported
  gpg: Total number processed: 1
  gpg: imported: 1

 

Install the downloaded package.
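To confirm that the repository and its signing key were registered, a quick sanity check works (my addition, not part of the original run; the exact .list filename may differ):

  $ ls /etc/apt/sources.list.d/          # the package should have dropped a cloudera .list file here
  $ apt-key list | grep -B1 -i cloudera  # the 02A818DD key from the gpg output above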

 

  $ sudo apt-get update
  ...

 

Update the APT package index.

 

  $ apt-cache search hadoop
  ubuntu-orchestra-modules-hadoop - Modules mainly used by orchestra-management-server
  flume - reliable, scalable, and manageable distributed data collection application
  hadoop-0.20 - A software platform for processing vast amounts of data
  hadoop-0.20-conf-pseudo - Pseudo-distributed Hadoop configuration
  hadoop-0.20-datanode - Data Node for Hadoop
  hadoop-0.20-doc - Documentation for Hadoop
  hadoop-0.20-fuse - HDFS exposed over a Filesystem in Userspace
  hadoop-0.20-jobtracker - Job Tracker for Hadoop
  hadoop-0.20-namenode - Name Node for Hadoop
  hadoop-0.20-native - Native libraries for Hadoop (e.g., compression)
  hadoop-0.20-pipes - Interface to author Hadoop MapReduce jobs in C++
  hadoop-0.20-sbin - Server-side binaries necessary for secured Hadoop clusters
  hadoop-0.20-secondarynamenode - Secondary Name Node for Hadoop
  hadoop-0.20-source - Source code for Hadoop
  hadoop-0.20-tasktracker - Task Tracker for Hadoop
  hadoop-hbase - HBase is the Hadoop database
  hadoop-hbase-doc - Documentation for HBase
  hadoop-hbase-master - HMaster is the "master server" for a HBase
  hadoop-hbase-regionserver - HRegionServer makes a set of HRegions available to clients
  hadoop-hbase-thrift - Provides an HBase Thrift service
  hadoop-hive - A data warehouse infrastructure built on top of Hadoop
  hadoop-hive-metastore - Shared metadata repository for Hive
  hadoop-hive-server - Provides a Hive Thrift service
  hadoop-pig - A platform for analyzing large data sets using Hadoop
  hadoop-zookeeper - A high-performance coordination service for distributed applications.
  hadoop-zookeeper-server - This runs the zookeeper server on startup.
  hue-common - A browser-based desktop interface for Hadoop
  hue-filebrowser - A UI for the Hadoop Distributed File System (HDFS)
  hue-jobbrowser - A UI for viewing Hadoop map-reduce jobs
  hue-jobsub - A UI for designing and submitting map-reduce jobs to Hadoop
  hue-plugins - Plug-ins for Hadoop to enable integration with Hue
  hue-shell - A shell for console based Hadoop applications
  libhdfs0 - JNI Bindings to access Hadoop HDFS from C
  libhdfs0-dev - Development support for libhdfs0
  mahout - A set of Java libraries for scalable machine learning.
  oozie - A workflow and coordinator sytem for Hadoop jobs.
  sqoop - Tool for easy imports and exports of data sets between databases and HDFS
  cdh3-repository - Cloudera's Distribution including Apache Hadoop

 

Search for the Hadoop packages.

 

  $ sudo apt-get install hadoop-0.20
  ...
  $ hadoop version
  Hadoop 0.20.2-cdh3u2
  Subversion file:///tmp/nightly_2011-10-13_20-02-02_3/hadoop-0.20-0.20.2+923.142-1~maverick -r 95a824e4005b2a94fe1c11f1ef9db4c672ba43cb
  Compiled by root on Thu Oct 13 21:52:18 PDT 2011
  From source with checksum 644e5db6c59d45bca96cec7f220dda51

 

Install the Hadoop core package.
CDH3u2 is now installed.
The Hadoop daemon packages were updated along with it.
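To see at a glance whether all the related packages moved to the u2 build, listing their versions with dpkg works (a quick check I'm adding here, not from the original run):

  # Print package name and version for every installed CDH component
  $ dpkg -l | grep -E 'hadoop|hbase|zookeeper' | awk '{print $2, $3}'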

 

  $ sudo apt-get install hadoop-hbase-master
  ...
  $ sudo apt-get install hadoop-zookeeper-server
  ...
  $ hbase shell
  11/10/26 22:36:54 WARN conf.Configuration: DEPRECATED: hadoop-site.xml found in the classpath. Usage of hadoop-site.xml is deprecated. Instead use core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of core-default.xml, mapred-default.xml and hdfs-default.xml respectively
  HBase Shell; enter 'help<RETURN>' for list of supported commands.
  Type "exit<RETURN>" to leave the HBase Shell
  Version 0.90.4-cdh3u2, r, Thu Oct 13 20:32:26 PDT 2011

  hbase(main):001:0>

 

HBase and ZooKeeper were also updated, to the CDH3u2 builds.
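ZooKeeper's built-in four-letter-word commands give a quick liveness check over the client port (assuming the default port 2181 from zoo.cfg):

  $ echo ruok | nc localhost 2181             # a healthy server replies "imok"
  $ echo stat | nc localhost 2181 | head -1   # first line includes the ZooKeeper version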

 

  $ sudo /etc/init.d/hadoop-0.20-namenode start
  Starting Hadoop namenode daemon: starting namenode, logging to /usr/lib/hadoop-0.20/logs/hadoop-hadoop-namenode-h-akanuma-CF-W4.out
  hadoop-0.20-namenode.
  $
  $ sudo /etc/init.d/hadoop-0.20-datanode start
  Starting Hadoop datanode daemon: starting datanode, logging to /usr/lib/hadoop-0.20/logs/hadoop-hadoop-datanode-h-akanuma-CF-W4.out
  hadoop-0.20-datanode.
  $
  $ sudo /etc/init.d/hadoop-0.20-secondarynamenode start
  Starting Hadoop secondarynamenode daemon: starting secondarynamenode, logging to /usr/lib/hadoop-0.20/logs/hadoop-hadoop-secondarynamenode-h-akanuma-CF-W4.out
  hadoop-0.20-secondarynamenode.
  $
  $ sudo /etc/init.d/hadoop-0.20-jobtracker start
  Starting Hadoop jobtracker daemon: starting jobtracker, logging to /usr/lib/hadoop-0.20/logs/hadoop-hadoop-jobtracker-h-akanuma-CF-W4.out
  hadoop-0.20-jobtracker.
  $
  $ sudo /etc/init.d/hadoop-0.20-tasktracker start
  Starting Hadoop tasktracker daemon: starting tasktracker, logging to /usr/lib/hadoop-0.20/logs/hadoop-hadoop-tasktracker-h-akanuma-CF-W4.out
  hadoop-0.20-tasktracker.
  $
  $ sudo jps
  12799 SecondaryNameNode
  12672 DataNode
  12552 NameNode
  12895 JobTracker
  13029 Jps
  11574 QuorumPeerMain
  12996 TaskTracker

 

Start each of the Hadoop daemons.
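The five start commands can also be written as a loop, mirroring the stop loop used at the beginning (the hadoop-0.20-* glob deliberately skips the HBase and ZooKeeper init scripts):

  $ for x in /etc/init.d/hadoop-0.20-* ; do sudo $x start ; done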

 

  $ hadoop jar /usr/lib/hadoop-0.20/hadoop-0.20.2-cdh3u2-*examples.jar pi 10 10000
  Number of Maps = 10
  Samples per Map = 10000
  Wrote input for Map #0
  Wrote input for Map #1
  Wrote input for Map #2
  Wrote input for Map #3
  Wrote input for Map #4
  Wrote input for Map #5
  Wrote input for Map #6
  Wrote input for Map #7
  Wrote input for Map #8
  Wrote input for Map #9
  Starting Job
  11/10/26 23:09:21 INFO mapred.FileInputFormat: Total input paths to process : 10
  11/10/26 23:09:22 INFO mapred.JobClient: Running job: job_201110262307_0001
  11/10/26 23:09:23 INFO mapred.JobClient: map 0% reduce 0%
  11/10/26 23:09:42 INFO mapred.JobClient: map 20% reduce 0%
  11/10/26 23:09:57 INFO mapred.JobClient: map 40% reduce 0%
  11/10/26 23:10:12 INFO mapred.JobClient: map 60% reduce 0%
  11/10/26 23:10:14 INFO mapred.JobClient: map 60% reduce 13%
  11/10/26 23:10:20 INFO mapred.JobClient: map 80% reduce 20%
  11/10/26 23:10:26 INFO mapred.JobClient: map 100% reduce 20%
  11/10/26 23:10:29 INFO mapred.JobClient: map 100% reduce 33%
  11/10/26 23:10:32 INFO mapred.JobClient: map 100% reduce 100%
  11/10/26 23:10:34 INFO mapred.JobClient: Job complete: job_201110262307_0001
  11/10/26 23:10:35 INFO mapred.JobClient: Counters: 23
  11/10/26 23:10:35 INFO mapred.JobClient: Job Counters
  11/10/26 23:10:35 INFO mapred.JobClient: Launched reduce tasks=1
  11/10/26 23:10:35 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=113667
  11/10/26 23:10:35 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
  11/10/26 23:10:35 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
  11/10/26 23:10:35 INFO mapred.JobClient: Launched map tasks=10
  11/10/26 23:10:35 INFO mapred.JobClient: Data-local map tasks=10
  11/10/26 23:10:35 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=49553
  11/10/26 23:10:35 INFO mapred.JobClient: FileSystemCounters
  11/10/26 23:10:35 INFO mapred.JobClient: FILE_BYTES_READ=226
  11/10/26 23:10:35 INFO mapred.JobClient: HDFS_BYTES_READ=2420
  11/10/26 23:10:35 INFO mapred.JobClient: FILE_BYTES_WRITTEN=609632
  11/10/26 23:10:35 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=215
  11/10/26 23:10:35 INFO mapred.JobClient: Map-Reduce Framework
  11/10/26 23:10:35 INFO mapred.JobClient: Reduce input groups=2
  11/10/26 23:10:35 INFO mapred.JobClient: Combine output records=0
  11/10/26 23:10:35 INFO mapred.JobClient: Map input records=10
  11/10/26 23:10:35 INFO mapred.JobClient: Reduce shuffle bytes=280
  11/10/26 23:10:35 INFO mapred.JobClient: Reduce output records=0
  11/10/26 23:10:35 INFO mapred.JobClient: Spilled Records=40
  11/10/26 23:10:35 INFO mapred.JobClient: Map output bytes=180
  11/10/26 23:10:35 INFO mapred.JobClient: Map input bytes=240
  11/10/26 23:10:35 INFO mapred.JobClient: Combine input records=0
  11/10/26 23:10:35 INFO mapred.JobClient: Map output records=20
  11/10/26 23:10:35 INFO mapred.JobClient: SPLIT_RAW_BYTES=1240
  11/10/26 23:10:35 INFO mapred.JobClient: Reduce input records=20
  Job Finished in 74.586 seconds
  Estimated value of Pi is 3.14120000000000000000

 

Run a test Hadoop job.
It completed successfully.
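Beyond the pi job, `hadoop fsck` and `hadoop dfsadmin -report` confirm that HDFS itself came back healthy after the upgrade; these checks are an addition of mine, not part of the original run:

  $ hadoop fsck / | tail -1             # should end with "The filesystem under path '/' is HEALTHY"
  $ hadoop dfsadmin -report | head -6   # capacity and live-datanode summary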

 

  $ sudo /etc/init.d/hadoop-hbase-master start
  Starting Hadoop HBase master daemon: starting master, logging to /usr/lib/hbase/logs/hbase-hbase-master-h-akanuma-CF-W4.out
  hbase-master.
  $
  $ sudo /etc/init.d/hadoop-hbase-regionserver start
  Starting Hadoop HBase regionserver daemon: starting regionserver, logging to /usr/lib/hbase/logs/hbase-hbase-regionserver-h-akanuma-CF-W4.out
  hbase-regionserver.
  $
  $ sudo jps
  14202 Jps
  12799 SecondaryNameNode
  12672 DataNode
  14134 HRegionServer
  13996 HMaster
  12552 NameNode
  12895 JobTracker
  11574 QuorumPeerMain
  12996 TaskTracker

 

Start the HBase daemons as well.
Since this is pseudo-distributed mode, I don't have HBase start its own ZooKeeper; the standalone ZooKeeper server is already running, as QuorumPeerMain in the jps output shows.
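The HBase shell's `status` command reports the number of live region servers, and it can be run non-interactively by piping it in (another quick check I'm adding, not in the original post):

  $ echo "status" | hbase shell   # expect 1 live server in pseudo-distributed mode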

 

  $ hbase shell
  HBase Shell; enter 'help<RETURN>' for list of supported commands.
  Type "exit<RETURN>" to leave the HBase Shell
  Version 0.90.4-cdh3u2, r, Thu Oct 13 20:32:26 PDT 2011

  hbase(main):001:0>
  hbase(main):002:0* list
  TABLE
  courses
  scores
  2 row(s) in 2.0210 seconds

  hbase(main):003:0>

 

Checked operation with the `list` command in the hbase shell.
This worked too.
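For a slightly stronger smoke test than `list`, a throwaway table exercises create, put, get, and drop in one pass; the table and column-family names here are arbitrary ones I made up:

  hbase(main):001:0> create 'upgrade_test', 'cf'
  hbase(main):002:0> put 'upgrade_test', 'row1', 'cf:check', 'ok'
  hbase(main):003:0> get 'upgrade_test', 'row1'
  hbase(main):004:0> disable 'upgrade_test'
  hbase(main):005:0> drop 'upgrade_test'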

posted by akanuma on Wed 26 Oct 2011 at 22:43
