Hadoop 2.2.0 DataNode log (master/127.0.0.1): startup, loss of the NameNode connection at 06:47, SIGTERM shutdown, and restart

2014-01-31 05:09:48,525 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG: host = master/127.0.0.1
STARTUP_MSG: args = []
STARTUP_MSG: version = 2.2.0
STARTUP_MSG: classpath = /usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-math-2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/common/lib/jets3t-0.6.1.jar:/usr/local/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-io-2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/common/lib/stax-api-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-collections-3.2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-httpclient-3.1.jar:/usr/local/hadoop/share/hadoop/commo
n/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-xc-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/zookeeper-3.4.5.jar:/usr/local/hadoop/share/hadoop/common/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-logging-1.1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jsch-0.1.42.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-annotations-2.2.0.jar:/usr/local/hadoop/share/hadoop/common/lib/junit-4.8.2.jar:/usr/local/hadoop/share/hadoop/common/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-lang-2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-el-1.0.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-auth-2.2.0.jar:/usr/local/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/common/hadoop-nfs-2.2.0.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.2.0.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.2.0-tests.jar:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/loc
al/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-io-2.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-lang-2.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-el-1.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.2.0-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.2.0.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-2.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/yarn/lib/hamcrest-core-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-io-2.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/aopalliance-1.0
.jar:/usr/local/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/hadoop-annotations-2.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/junit-4.10.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-site-2.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-serve
r-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-io-2.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/junit-4.10.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0-tests.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.2.0.jar:/contrib/capacity-scheduler/*.jar:/contrib/capacity-scheduler/*.jar:/contrib/capacity-scheduler/*.jar
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/... -r 1529768; compiled by 'hortonmu' on 2013-10-07T06:28Z
STARTUP_MSG: java = 1.7.0_51
************************************************************/
2014-01-31 05:09:48,532 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal handlers for [TERM, HUP, INT]
2014-01-31 05:09:50,544 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2014-01-31 05:09:50,820 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2014-01-31 05:09:50,821 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2014-01-31 05:09:50,826 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is master
2014-01-31 05:09:50,937 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at /0.0.0.0:50010
2014-01-31 05:09:50,961 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s
2014-01-31 05:09:56,110 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2014-01-31 05:09:56,270 INFO org.apache.hadoop.http.HttpServer: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2014-01-31 05:09:56,273 INFO org.apache.hadoop.http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
2014-01-31 05:09:56,273 INFO org.apache.hadoop.http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2014-01-31 05:09:56,273 INFO org.apache.hadoop.http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2014-01-31 05:09:56,279 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened info server at 0.0.0.0:50075
2014-01-31 05:09:56,295 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dfs.webhdfs.enabled = false
2014-01-31 05:09:56,296 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50075
2014-01-31 05:09:56,296 INFO org.mortbay.log: jetty-6.1.26
2014-01-31 05:09:57,087 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50075
2014-01-31 05:10:02,588 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 50020
2014-01-31 05:10:02,637 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at /0.0.0.0:50020
2014-01-31 05:10:02,655 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request received for nameservices: null
2014-01-31 05:10:02,731 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting BPOfferServices for nameservices:
2014-01-31 05:10:02,787 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool (storage id unknown) service to master/127.0.0.1:54310 starting to offer service
2014-01-31 05:10:02,893 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2014-01-31 05:10:02,896 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
2014-01-31 05:10:04,081 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /app/hadoop/tmp/dfs/data/in_use.lock acquired by nodename 3842@master
2014-01-31 05:10:04,553 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
2014-01-31 05:10:04,641 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Setting up storage: nsid=1306092092;bpid=BP-1879086674-127.0.1.1-1390819325075;lv=-47;nsInfo=lv=-47;cid=CID-e43f22c6-ac82-44f2-bfac-a4fded352309;nsid=1306092092;c=0;bpid=BP-1879086674-127.0.1.1-1390819325075
2014-01-31 05:10:04,709 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added volume - /app/hadoop/tmp/dfs/data/current
2014-01-31 05:10:04,727 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Registered FSDatasetState MBean
2014-01-31 05:10:04,783 INFO org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: Periodic Directory Tree Verification scan starting at 1391165388783 with interval 21600000
2014-01-31 05:10:04,788 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding block pool BP-1879086674-127.0.1.1-1390819325075
2014-01-31 05:10:04,791 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning block pool BP-1879086674-127.0.1.1-1390819325075 on volume /app/hadoop/tmp/dfs/data/current...
2014-01-31 05:10:04,876 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time taken to scan block pool BP-1879086674-127.0.1.1-1390819325075 on /app/hadoop/tmp/dfs/data/current: 80ms
2014-01-31 05:10:04,884 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to scan all replicas for block pool BP-1879086674-127.0.1.1-1390819325075: 96ms
2014-01-31 05:10:04,884 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding replicas to map for block pool BP-1879086674-127.0.1.1-1390819325075 on volume /app/hadoop/tmp/dfs/data/current...
2014-01-31 05:10:04,887 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to add replicas to map for block pool BP-1879086674-127.0.1.1-1390819325075 on volume /app/hadoop/tmp/dfs/data/current: 3ms
2014-01-31 05:10:04,887 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to add all replicas to map: 3ms
2014-01-31 05:10:04,898 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool BP-1879086674-127.0.1.1-1390819325075 (storage id DS-1845738474-127.0.1.1-50010-1390820536599) service to master/127.0.0.1:54310 beginning handshake with NN
2014-01-31 05:10:05,050 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool Block pool BP-1879086674-127.0.1.1-1390819325075 (storage id DS-1845738474-127.0.1.1-50010-1390820536599) service to master/127.0.0.1:54310 successfully registered with NN
2014-01-31 05:10:05,051 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: For namenode master/127.0.0.1:54310 using DELETEREPORT_INTERVAL of 300000 msec BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec; heartBeatInterval=3000
2014-01-31 05:10:05,231 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Namenode Block pool BP-1879086674-127.0.1.1-1390819325075 (storage id DS-1845738474-127.0.1.1-50010-1390820536599) service to master/127.0.0.1:54310 trying to claim ACTIVE state with txid=1021
2014-01-31 05:10:05,231 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Acknowledging ACTIVE Namenode Block pool BP-1879086674-127.0.1.1-1390819325075 (storage id DS-1845738474-127.0.1.1-50010-1390820536599) service to master/127.0.0.1:54310
2014-01-31 05:10:05,394 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 4 blocks took 7 msec to generate and 155 msecs for RPC and NN processing
2014-01-31 05:10:05,394 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: sent block report, processed command:org.apache.hadoop.hdfs.server.protocol.FinalizeCommand@d278af
2014-01-31 05:10:05,412 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlockMap
2014-01-31 05:10:05,412 INFO org.apache.hadoop.util.GSet: VM type = 32-bit
2014-01-31 05:10:05,422 INFO org.apache.hadoop.util.GSet: 0.5% max memory = 966.7 MB
2014-01-31 05:10:05,422 INFO org.apache.hadoop.util.GSet: capacity = 2^20 = 1048576 entries
2014-01-31 05:10:05,505 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Periodic Block Verification Scanner initialized with interval 504 hours for block pool BP-1879086674-127.0.1.1-1390819325075
2014-01-31 05:10:05,535 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Added bpid=BP-1879086674-127.0.1.1-1390819325075 to blockPoolScannerMap, new size=1
2014-01-31 06:05:57,376 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeCommand action: DNA_REGISTER
2014-01-31 06:05:57,408 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool BP-1879086674-127.0.1.1-1390819325075 (storage id DS-1845738474-127.0.1.1-50010-1390820536599) service to master/127.0.0.1:54310 beginning handshake with NN
2014-01-31 06:05:57,412 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool Block pool BP-1879086674-127.0.1.1-1390819325075 (storage id DS-1845738474-127.0.1.1-50010-1390820536599) service to master/127.0.0.1:54310 successfully registered with NN
2014-01-31 06:05:57,416 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 4 blocks took 0 msec to generate and 4 msecs for RPC and NN processing
2014-01-31 06:05:57,416 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: sent block report, processed command:org.apache.hadoop.hdfs.server.protocol.FinalizeCommand@16a279c
2014-01-31 06:20:52,790 INFO org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: BlockPool BP-1879086674-127.0.1.1-1390819325075 Total blocks: 4, missing metadata files:0, missing block files:0, missing blocks in memory:0, mismatched blocks:0
2014-01-31 06:47:58,475 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: IOException in offerService
java.io.IOException: Failed on local exception: java.io.EOFException; Host Details : local host is: "master/127.0.0.1"; destination host is: "master":54310;
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)
at org.apache.hadoop.ipc.Client.call(Client.java:1351)
at org.apache.hadoop.ipc.Client.call(Client.java:1300)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy9.sendHeartbeat(Unknown Source)
at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy9.sendHeartbeat(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.sendHeartbeat(DatanodeProtocolClientSideTranslatorPB.java:167)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:445)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:525)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:676)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:995)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:891)
2014-01-31 06:48:02,338 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/127.0.0.1:54310. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-01-31 06:48:03,340 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/127.0.0.1:54310. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-01-31 06:48:04,344 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/127.0.0.1:54310. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-01-31 06:48:05,345 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/127.0.0.1:54310. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-01-31 06:48:06,350 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/127.0.0.1:54310. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-01-31 06:48:07,354 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/127.0.0.1:54310. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-01-31 06:48:08,358 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/127.0.0.1:54310. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-01-31 06:48:08,993 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: RECEIVED SIGNAL 15: SIGTERM
2014-01-31 06:48:09,007 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at master/127.0.0.1
************************************************************/
2014-01-31 06:50:53,675 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG: host = master/127.0.0.1
STARTUP_MSG: args = []
STARTUP_MSG: version = 2.2.0
STARTUP_MSG: classpath = [identical to the classpath in the first STARTUP_MSG above; omitted]
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/... -r 1529768; compiled by 'hortonmu' on 2013-10-07T06:28Z
STARTUP_MSG: java = 1.7.0_51
************************************************************/
2014-01-31 06:50:53,682 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal handlers for [TERM, HUP, INT]
2014-01-31 06:50:55,396 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2014-01-31 06:50:55,609 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2014-01-31 06:50:55,609 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2014-01-31 06:50:55,664 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is master
2014-01-31 06:50:55,750 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at /0.0.0.0:50010
2014-01-31 06:50:55,765 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwidth is 1048576 bytes/s
2014-01-31 06:51:00,860 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2014-01-31 06:51:00,994 INFO org.apache.hadoop.http.HttpServer: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2014-01-31 06:51:00,997 INFO org.apache.hadoop.http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
2014-01-31 06:51:00,997 INFO org.apache.hadoop.http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2014-01-31 06:51:00,997 INFO org.apache.hadoop.http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2014-01-31 06:51:01,004 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened info server at 0.0.0.0:50075
2014-01-31 06:51:01,007 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dfs.webhdfs.enabled = false
2014-01-31 06:51:01,007 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50075
2014-01-31 06:51:01,008 INFO org.mortbay.log: jetty-6.1.26
2014-01-31 06:51:01,625 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50075
2014-01-31 06:51:07,177 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 50020
2014-01-31 06:51:07,212 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at /0.0.0.0:50020
2014-01-31 06:51:07,232 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request received for nameservices: null
2014-01-31 06:51:07,278 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting BPOfferServices for nameservices:
2014-01-31 06:51:07,331 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool (storage id unknown) service to master/127.0.0.1:54310 starting to offer service
2014-01-31 06:51:07,364 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2014-01-31 06:51:07,379 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
2014-01-31 06:51:08,162 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /app/hadoop/tmp/dfs/data/in_use.lock acquired by nodename 6741@master
2014-01-31 06:51:08,528 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
2014-01-31 06:51:08,564 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Setting up storage: nsid=1306092092;bpid=BP-1879086674-127.0.1.1-1390819325075;lv=-47;nsInfo=lv=-47;cid=CID-e43f22c6-ac82-44f2-bfac-a4fded352309;nsid=1306092092;c=0;bpid=BP-1879086674-127.0.1.1-1390819325075
2014-01-31 06:51:08,626 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added volume - /app/hadoop/tmp/dfs/data/current
2014-01-31 06:51:08,648 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Registered FSDatasetState MBean
2014-01-31 06:51:08,670 INFO org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: Periodic Directory Tree Verification scan starting at 1391175404670 with interval 21600000
2014-01-31 06:51:08,680 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding block pool BP-1879086674-127.0.1.1-1390819325075
2014-01-31 06:51:08,681 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning block pool BP-1879086674-127.0.1.1-1390819325075 on volume /app/hadoop/tmp/dfs/data/current...
2014-01-31 06:51:08,741 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time taken to scan block pool BP-1879086674-127.0.1.1-1390819325075 on /app/hadoop/tmp/dfs/data/current: 60ms
2014-01-31 06:51:08,743 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to scan all replicas for block pool BP-1879086674-127.0.1.1-1390819325075: 63ms
2014-01-31 06:51:08,743 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding replicas to map for block pool BP-1879086674-127.0.1.1-1390819325075 on volume /app/hadoop/tmp/dfs/data/current...
2014-01-31 06:51:08,746 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to add replicas to map for block pool BP-1879086674-127.0.1.1-1390819325075 on volume /app/hadoop/tmp/dfs/data/current: 3ms
2014-01-31 06:51:08,746 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to add all replicas to map: 3ms
2014-01-31 06:51:08,755 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool BP-1879086674-127.0.1.1-1390819325075 (storage id DS-1845738474-127.0.1.1-50010-1390820536599) service to master/127.0.0.1:54310 beginning handshake with NN
2014-01-31 06:51:08,885 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool BP-1879086674-127.0.1.1-1390819325075 (storage id DS-1845738474-127.0.1.1-50010-1390820536599) service to master/127.0.0.1:54310 successfully registered with NN
2014-01-31 06:51:08,886 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: For namenode master/127.0.0.1:54310 using DELETEREPORT_INTERVAL of 300000 msec BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec; heartBeatInterval=3000
2014-01-31 06:51:09,105 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Namenode Block pool BP-1879086674-127.0.1.1-1390819325075 (storage id DS-1845738474-127.0.1.1-50010-1390820536599) service to master/127.0.0.1:54310 trying to claim ACTIVE state with txid=1026
2014-01-31 06:51:09,106 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Acknowledging ACTIVE Namenode Block pool BP-1879086674-127.0.1.1-1390819325075 (storage id DS-1845738474-127.0.1.1-50010-1390820536599) service to master/127.0.0.1:54310
2014-01-31 06:51:09,448 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 4 blocks took 9 msec to generate and 333 msecs for RPC and NN processing
2014-01-31 06:51:09,448 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: sent block report, processed command:org.apache.hadoop.hdfs.server.protocol.FinalizeCommand@b9b548
2014-01-31 06:51:09,473 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlockMap
2014-01-31 06:51:09,473 INFO org.apache.hadoop.util.GSet: VM type = 32-bit
2014-01-31 06:51:09,487 INFO org.apache.hadoop.util.GSet: 0.5% max memory = 966.7 MB
2014-01-31 06:51:09,487 INFO org.apache.hadoop.util.GSet: capacity = 2^20 = 1048576 entries
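The GSet lines above record how the DataNode sizes its BlockMap: 0.5% of the 966.7 MB max heap is budgeted for the map's reference array, and the capacity is rounded down to a power of two, giving 2^20 entries. A minimal sketch of that computation, assuming (per Hadoop's LightWeightGSet, with 4-byte references on the 32-bit VM reported above) that capacity is the largest power of two whose reference array fits in the budget:

```python
def gset_capacity(max_memory_mb, percent, ref_bytes=4):
    """Largest power-of-two entry count whose reference array fits in
    `percent` percent of `max_memory_mb` megabytes of heap."""
    budget = max_memory_mb * 1024 * 1024 * percent / 100.0
    exp = 0
    while (1 << (exp + 1)) * ref_bytes <= budget:
        exp += 1
    return 1 << exp

# Reproduces the logged values: 0.5% of 966.7 MB -> 2^20 = 1048576 entries
print(gset_capacity(966.7, 0.5))  # 1048576
```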
2014-01-31 06:51:09,542 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Periodic Block Verification Scanner initialized with interval 504 hours for block pool BP-1879086674-127.0.1.1-1390819325075
2014-01-31 06:51:09,549 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Added bpid=BP-1879086674-127.0.1.1-1390819325075 to blockPoolScannerMap, new size=1
2014-01-31 07:09:11,305 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-1879086674-127.0.1.1-1390819325075:blk_1073741882_1058 src: /127.0.0.1:42785 dest: /127.0.0.1:50010
2014-01-31 07:09:12,351 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:42785, dest: /127.0.0.1:50010, bytes: 78, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-1359912828_1, offset: 0, srvID: DS-1845738474-127.0.1.1-50010-1390820536599, blockid: BP-1879086674-127.0.1.1-1390819325075:blk_1073741882_1058, duration: 297141899
2014-01-31 07:09:12,363 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1879086674-127.0.1.1-1390819325075:blk_1073741882_1058, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2014-01-31 07:09:17,265 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-1879086674-127.0.1.1-1390819325075:blk_1073741882_1058
2014-01-31 07:09:18,560 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-1879086674-127.0.1.1-1390819325075:blk_1073741883_1059 src: /127.0.0.1:42786 dest: /127.0.0.1:50010
2014-01-31 07:09:18,595 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:42786, dest: /127.0.0.1:50010, bytes: 47, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-1359912828_1, offset: 0, srvID: DS-1845738474-127.0.1.1-50010-1390820536599, blockid: BP-1879086674-127.0.1.1-1390819325075:blk_1073741883_1059, duration: 3970095
2014-01-31 07:09:18,603 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1879086674-127.0.1.1-1390819325075:blk_1073741883_1059, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2014-01-31 07:09:18,788 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-1879086674-127.0.1.1-1390819325075:blk_1073741884_1060 src: /127.0.0.1:42787 dest: /127.0.0.1:50010
2014-01-31 07:09:18,863 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:42787, dest: /127.0.0.1:50010, bytes: 23, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-1359912828_1, offset: 0, srvID: DS-1845738474-127.0.1.1-50010-1390820536599, blockid: BP-1879086674-127.0.1.1-1390819325075:blk_1073741884_1060, duration: 3210026
2014-01-31 07:09:18,882 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1879086674-127.0.1.1-1390819325075:blk_1073741884_1060, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2014-01-31 07:09:22,456 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-1879086674-127.0.1.1-1390819325075:blk_1073741883_1059
2014-01-31 07:09:22,460 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-1879086674-127.0.1.1-1390819325075:blk_1073741884_1060
2014-01-31 07:12:32,697 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-1879086674-127.0.1.1-1390819325075:blk_1073741885_1061 src: /127.0.0.1:42793 dest: /127.0.0.1:50010
2014-01-31 07:12:32,810 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:42793, dest: /127.0.0.1:50010, bytes: 78, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-1820165816_1, offset: 0, srvID: DS-1845738474-127.0.1.1-50010-1390820536599, blockid: BP-1879086674-127.0.1.1-1390819325075:blk_1073741885_1061, duration: 69216642
2014-01-31 07:12:32,814 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1879086674-127.0.1.1-1390819325075:blk_1073741885_1061, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2014-01-31 07:12:32,984 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-1879086674-127.0.1.1-1390819325075:blk_1073741886_1062 src: /127.0.0.1:42794 dest: /127.0.0.1:50010
2014-01-31 07:12:33,013 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:42794, dest: /127.0.0.1:50010, bytes: 47, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-1820165816_1, offset: 0, srvID: DS-1845738474-127.0.1.1-50010-1390820536599, blockid: BP-1879086674-127.0.1.1-1390819325075:blk_1073741886_1062, duration: 4872631
2014-01-31 07:12:33,023 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1879086674-127.0.1.1-1390819325075:blk_1073741886_1062, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2014-01-31 07:12:33,096 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-1879086674-127.0.1.1-1390819325075:blk_1073741887_1063 src: /127.0.0.1:42795 dest: /127.0.0.1:50010
2014-01-31 07:12:33,111 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:42795, dest: /127.0.0.1:50010, bytes: 23, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-1820165816_1, offset: 0, srvID: DS-1845738474-127.0.1.1-50010-1390820536599, blockid: BP-1879086674-127.0.1.1-1390819325075:blk_1073741887_1063, duration: 2858720
2014-01-31 07:12:33,133 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1879086674-127.0.1.1-1390819325075:blk_1073741887_1063, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2014-01-31 07:12:37,581 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-1879086674-127.0.1.1-1390819325075:blk_1073741886_1062
2014-01-31 07:12:37,586 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-1879086674-127.0.1.1-1390819325075:blk_1073741885_1061
2014-01-31 07:12:37,604 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-1879086674-127.0.1.1-1390819325075:blk_1073741887_1063
2014-01-31 07:46:33,540 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 10 blocks took 0 msec to generate and 54 msecs for RPC and NN processing
2014-01-31 07:46:33,540 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: sent block report, processed command:org.apache.hadoop.hdfs.server.protocol.FinalizeCommand@9ac0c6
2014-01-31 07:58:21,965 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741881_1057 file /app/hadoop/tmp/dfs/data/current/BP-1879086674-127.0.1.1-1390819325075/current/finalized/blk_1073741881 for deletion
2014-01-31 07:58:22,180 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-1879086674-127.0.1.1-1390819325075 blk_1073741881_1057 file /app/hadoop/tmp/dfs/data/current/BP-1879086674-127.0.1.1-1390819325075/current/finalized/blk_1073741881
2014-01-31 08:00:01,749 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:42848, bytes: 82, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_788458961_1, offset: 0, srvID: DS-1845738474-127.0.1.1-50010-1390820536599, blockid: BP-1879086674-127.0.1.1-1390819325075:blk_1073741832_1008, duration: 534827280
2014-01-31 08:00:06,547 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:42849, bytes: 51, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_788458961_1, offset: 0, srvID: DS-1845738474-127.0.1.1-50010-1390820536599, blockid: BP-1879086674-127.0.1.1-1390819325075:blk_1073741833_1009, duration: 213179231
2014-01-31 08:00:10,106 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-1879086674-127.0.1.1-1390819325075:blk_1073741888_1064 src: /127.0.0.1:42850 dest: /127.0.0.1:50010
2014-01-31 08:00:11,454 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:42850, dest: /127.0.0.1:50010, bytes: 128, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_788458961_1, offset: 0, srvID: DS-1845738474-127.0.1.1-50010-1390820536599, blockid: BP-1879086674-127.0.1.1-1390819325075:blk_1073741888_1064, duration: 1007854497
2014-01-31 08:00:11,461 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1879086674-127.0.1.1-1390819325075:blk_1073741888_1064, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2014-01-31 08:00:24,481 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:42852, bytes: 132, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_-1271975239_1, offset: 0, srvID: DS-1845738474-127.0.1.1-50010-1390820536599, blockid: BP-1879086674-127.0.1.1-1390819325075:blk_1073741888_1064, duration: 378329
2014-01-31 08:06:09,964 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741888_1064 file /app/hadoop/tmp/dfs/data/current/BP-1879086674-127.0.1.1-1390819325075/current/finalized/blk_1073741888 for deletion
2014-01-31 08:06:09,966 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-1879086674-127.0.1.1-1390819325075 blk_1073741888_1064 file /app/hadoop/tmp/dfs/data/current/BP-1879086674-127.0.1.1-1390819325075/current/finalized/blk_1073741888
2014-01-31 08:06:20,530 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:42861, bytes: 82, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_38997627_1, offset: 0, srvID: DS-1845738474-127.0.1.1-50010-1390820536599, blockid: BP-1879086674-127.0.1.1-1390819325075:blk_1073741832_1008, duration: 2123900
2014-01-31 08:06:22,022 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:42861, bytes: 51, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_38997627_1, offset: 0, srvID: DS-1845738474-127.0.1.1-50010-1390820536599, blockid: BP-1879086674-127.0.1.1-1390819325075:blk_1073741833_1009, duration: 1527669
2014-01-31 08:06:23,414 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-1879086674-127.0.1.1-1390819325075:blk_1073741889_1065 src: /127.0.0.1:42862 dest: /127.0.0.1:50010
2014-01-31 08:06:23,680 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:42862, dest: /127.0.0.1:50010, bytes: 335, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_38997627_1, offset: 0, srvID: DS-1845738474-127.0.1.1-50010-1390820536599, blockid: BP-1879086674-127.0.1.1-1390819325075:blk_1073741889_1065, duration: 136741823
2014-01-31 08:06:23,686 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1879086674-127.0.1.1-1390819325075:blk_1073741889_1065, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2014-01-31 08:06:38,414 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:42864, bytes: 339, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_-1732539601_1, offset: 0, srvID: DS-1845738474-127.0.1.1-50010-1390820536599, blockid: BP-1879086674-127.0.1.1-1390819325075:blk_1073741889_1065, duration: 389621
2014-01-31 08:11:25,076 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741889_1065 file /app/hadoop/tmp/dfs/data/current/BP-1879086674-127.0.1.1-1390819325075/current/finalized/blk_1073741889 for deletion
2014-01-31 08:11:25,079 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-1879086674-127.0.1.1-1390819325075 blk_1073741889_1065 file /app/hadoop/tmp/dfs/data/current/BP-1879086674-127.0.1.1-1390819325075/current/finalized/blk_1073741889
2014-01-31 08:11:34,096 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:42872, bytes: 82, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_562890134_1, offset: 0, srvID: DS-1845738474-127.0.1.1-50010-1390820536599, blockid: BP-1879086674-127.0.1.1-1390819325075:blk_1073741832_1008, duration: 9988631
2014-01-31 08:11:34,804 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:42872, bytes: 51, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_562890134_1, offset: 0, srvID: DS-1845738474-127.0.1.1-50010-1390820536599, blockid: BP-1879086674-127.0.1.1-1390819325075:blk_1073741833_1009, duration: 2190897
2014-01-31 08:11:36,480 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-1879086674-127.0.1.1-1390819325075:blk_1073741890_1066 src: /127.0.0.1:42873 dest: /127.0.0.1:50010
2014-01-31 08:11:36,615 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:42873, dest: /127.0.0.1:50010, bytes: 335, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_562890134_1, offset: 0, srvID: DS-1845738474-127.0.1.1-50010-1390820536599, blockid: BP-1879086674-127.0.1.1-1390819325075:blk_1073741890_1066, duration: 99277504
2014-01-31 08:11:36,616 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1879086674-127.0.1.1-1390819325075:blk_1073741890_1066, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2014-01-31 08:11:48,525 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:42876, bytes: 339, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_-534214971_1, offset: 0, srvID: DS-1845738474-127.0.1.1-50010-1390820536599, blockid: BP-1879086674-127.0.1.1-1390819325075:blk_1073741890_1066, duration: 401561
2014-01-31 08:36:44,968 INFO org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: BlockPool BP-1879086674-127.0.1.1-1390819325075 Total blocks: 10, missing metadata files:0, missing block files:0, missing blocks in memory:0, mismatched blocks:0
2014-02-03 01:22:55,477 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 10 blocks took 0 msec to generate and 53 msecs for RPC and NN processing
2014-02-03 01:22:55,477 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: sent block report, processed command:org.apache.hadoop.hdfs.server.protocol.FinalizeCommand@16f50d7
2014-02-03 01:30:54,027 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-1879086674-127.0.1.1-1390819325075:blk_1073741891_1067 src: /127.0.0.1:42941 dest: /127.0.0.1:50010
2014-02-03 01:30:54,178 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:42941, dest: /127.0.0.1:50010, bytes: 113079, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_844906033_1, offset: 0, srvID: DS-1845738474-127.0.1.1-50010-1390820536599, blockid: BP-1879086674-127.0.1.1-1390819325075:blk_1073741891_1067, duration: 102110276
2014-02-03 01:30:54,182 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1879086674-127.0.1.1-1390819325075:blk_1073741891_1067, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2014-02-03 01:30:54,392 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-1879086674-127.0.1.1-1390819325075:blk_1073741892_1068 src: /127.0.0.1:42942 dest: /127.0.0.1:50010
2014-02-03 01:30:54,453 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:42942, dest: /127.0.0.1:50010, bytes: 78, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_844906033_1, offset: 0, srvID: DS-1845738474-127.0.1.1-50010-1390820536599, blockid: BP-1879086674-127.0.1.1-1390819325075:blk_1073741892_1068, duration: 8114460
2014-02-03 01:30:54,459 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1879086674-127.0.1.1-1390819325075:blk_1073741892_1068, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2014-02-03 01:30:54,516 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-1879086674-127.0.1.1-1390819325075:blk_1073741893_1069 src: /127.0.0.1:42943 dest: /127.0.0.1:50010
2014-02-03 01:30:54,543 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:42943, dest: /127.0.0.1:50010, bytes: 47, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_844906033_1, offset: 0, srvID: DS-1845738474-127.0.1.1-50010-1390820536599, blockid: BP-1879086674-127.0.1.1-1390819325075:blk_1073741893_1069, duration: 5792147
2014-02-03 01:30:54,552 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1879086674-127.0.1.1-1390819325075:blk_1073741893_1069, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2014-02-03 01:30:54,666 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-1879086674-127.0.1.1-1390819325075:blk_1073741894_1070 src: /127.0.0.1:42944 dest: /127.0.0.1:50010
2014-02-03 01:30:54,705 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:42944, dest: /127.0.0.1:50010, bytes: 23, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_844906033_1, offset: 0, srvID: DS-1845738474-127.0.1.1-50010-1390820536599, blockid: BP-1879086674-127.0.1.1-1390819325075:blk_1073741894_1070, duration: 7447427
2014-02-03 01:30:54,715 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1879086674-127.0.1.1-1390819325075:blk_1073741894_1070, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2014-02-03 01:30:54,723 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-1879086674-127.0.1.1-1390819325075:blk_1073741890_1066
2014-02-03 01:30:54,773 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-1879086674-127.0.1.1-1390819325075:blk_1073741891_1067
2014-02-03 01:30:54,776 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-1879086674-127.0.1.1-1390819325075:blk_1073741894_1070
2014-02-03 01:30:54,777 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-1879086674-127.0.1.1-1390819325075:blk_1073741893_1069
2014-02-03 01:30:54,780 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-1879086674-127.0.1.1-1390819325075:blk_1073741892_1068
2014-02-03 01:33:07,854 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741891_1067 file /app/hadoop/tmp/dfs/data/current/BP-1879086674-127.0.1.1-1390819325075/current/finalized/blk_1073741891 for deletion
2014-02-03 01:33:07,857 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741892_1068 file /app/hadoop/tmp/dfs/data/current/BP-1879086674-127.0.1.1-1390819325075/current/finalized/blk_1073741892 for deletion
2014-02-03 01:33:07,860 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-1879086674-127.0.1.1-1390819325075 blk_1073741891_1067 file /app/hadoop/tmp/dfs/data/current/BP-1879086674-127.0.1.1-1390819325075/current/finalized/blk_1073741891
2014-02-03 01:33:07,889 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741893_1069 file /app/hadoop/tmp/dfs/data/current/BP-1879086674-127.0.1.1-1390819325075/current/finalized/blk_1073741893 for deletion
2014-02-03 01:33:07,889 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741894_1070 file /app/hadoop/tmp/dfs/data/current/BP-1879086674-127.0.1.1-1390819325075/current/finalized/blk_1073741894 for deletion
2014-02-03 01:33:07,889 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741832_1008 file /app/hadoop/tmp/dfs/data/current/BP-1879086674-127.0.1.1-1390819325075/current/finalized/blk_1073741832 for deletion
2014-02-03 01:33:07,889 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741833_1009 file /app/hadoop/tmp/dfs/data/current/BP-1879086674-127.0.1.1-1390819325075/current/finalized/blk_1073741833 for deletion
2014-02-03 01:33:07,892 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-1879086674-127.0.1.1-1390819325075 blk_1073741892_1068 file /app/hadoop/tmp/dfs/data/current/BP-1879086674-127.0.1.1-1390819325075/current/finalized/blk_1073741892
2014-02-03 01:33:07,898 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-1879086674-127.0.1.1-1390819325075 blk_1073741893_1069 file /app/hadoop/tmp/dfs/data/current/BP-1879086674-127.0.1.1-1390819325075/current/finalized/blk_1073741893
2014-02-03 01:33:07,900 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-1879086674-127.0.1.1-1390819325075 blk_1073741894_1070 file /app/hadoop/tmp/dfs/data/current/BP-1879086674-127.0.1.1-1390819325075/current/finalized/blk_1073741894
2014-02-03 01:33:07,900 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-1879086674-127.0.1.1-1390819325075 blk_1073741832_1008 file /app/hadoop/tmp/dfs/data/current/BP-1879086674-127.0.1.1-1390819325075/current/finalized/blk_1073741832
2014-02-03 01:33:07,901 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-1879086674-127.0.1.1-1390819325075 blk_1073741833_1009 file /app/hadoop/tmp/dfs/data/current/BP-1879086674-127.0.1.1-1390819325075/current/finalized/blk_1073741833
2014-02-03 01:46:35,481 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 8 blocks took 0 msec to generate and 36 msecs for RPC and NN processing
2014-02-03 01:46:35,482 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: sent block report, processed command:org.apache.hadoop.hdfs.server.protocol.FinalizeCommand@1b7cd2e
2014-02-03 02:11:06,492 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741882_1058 file /app/hadoop/tmp/dfs/data/current/BP-1879086674-127.0.1.1-1390819325075/current/finalized/blk_1073741882 for deletion
2014-02-03 02:11:06,493 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741883_1059 file /app/hadoop/tmp/dfs/data/current/BP-1879086674-127.0.1.1-1390819325075/current/finalized/blk_1073741883 for deletion
2014-02-03 02:11:06,493 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741884_1060 file /app/hadoop/tmp/dfs/data/current/BP-1879086674-127.0.1.1-1390819325075/current/finalized/blk_1073741884 for deletion
2014-02-03 02:11:06,494 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741885_1061 file /app/hadoop/tmp/dfs/data/current/BP-1879086674-127.0.1.1-1390819325075/current/finalized/blk_1073741885 for deletion
2014-02-03 02:11:06,494 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741886_1062 file /app/hadoop/tmp/dfs/data/current/BP-1879086674-127.0.1.1-1390819325075/current/finalized/blk_1073741886 for deletion
2014-02-03 02:11:06,494 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741887_1063 file /app/hadoop/tmp/dfs/data/current/BP-1879086674-127.0.1.1-1390819325075/current/finalized/blk_1073741887 for deletion
2014-02-03 02:11:06,495 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-1879086674-127.0.1.1-1390819325075 blk_1073741882_1058 file /app/hadoop/tmp/dfs/data/current/BP-1879086674-127.0.1.1-1390819325075/current/finalized/blk_1073741882
2014-02-03 02:11:06,496 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-1879086674-127.0.1.1-1390819325075 blk_1073741883_1059 file /app/hadoop/tmp/dfs/data/current/BP-1879086674-127.0.1.1-1390819325075/current/finalized/blk_1073741883
2014-02-03 02:11:06,496 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-1879086674-127.0.1.1-1390819325075 blk_1073741884_1060 file /app/hadoop/tmp/dfs/data/current/BP-1879086674-127.0.1.1-1390819325075/current/finalized/blk_1073741884
2014-02-03 02:11:06,500 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-1879086674-127.0.1.1-1390819325075 blk_1073741885_1061 file /app/hadoop/tmp/dfs/data/current/BP-1879086674-127.0.1.1-1390819325075/current/finalized/blk_1073741885
2014-02-03 02:11:06,500 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-1879086674-127.0.1.1-1390819325075 blk_1073741886_1062 file /app/hadoop/tmp/dfs/data/current/BP-1879086674-127.0.1.1-1390819325075/current/finalized/blk_1073741886
2014-02-03 02:11:06,500 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-1879086674-127.0.1.1-1390819325075 blk_1073741887_1063 file /app/hadoop/tmp/dfs/data/current/BP-1879086674-127.0.1.1-1390819325075/current/finalized/blk_1073741887
2014-02-03 02:12:25,827 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-1879086674-127.0.1.1-1390819325075:blk_1073741895_1071 src: /127.0.0.1:42998 dest: /127.0.0.1:50010
2014-02-03 02:12:25,928 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:42998, dest: /127.0.0.1:50010, bytes: 113079, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-1472719726_1, offset: 0, srvID: DS-1845738474-127.0.1.1-50010-1390820536599, blockid: BP-1879086674-127.0.1.1-1390819325075:blk_1073741895_1071, duration: 73039352
2014-02-03 02:12:25,934 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1879086674-127.0.1.1-1390819325075:blk_1073741895_1071, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2014-02-03 02:12:26,119 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-1879086674-127.0.1.1-1390819325075:blk_1073741896_1072 src: /127.0.0.1:42999 dest: /127.0.0.1:50010
2014-02-03 02:12:26,148 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:42999, dest: /127.0.0.1:50010, bytes: 78, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-1472719726_1, offset: 0, srvID: DS-1845738474-127.0.1.1-50010-1390820536599, blockid: BP-1879086674-127.0.1.1-1390819325075:blk_1073741896_1072, duration: 11592647
2014-02-03 02:12:26,153 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1879086674-127.0.1.1-1390819325075:blk_1073741896_1072, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2014-02-03 02:12:26,307 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-1879086674-127.0.1.1-1390819325075:blk_1073741897_1073 src: /127.0.0.1:43000 dest: /127.0.0.1:50010
2014-02-03 02:12:26,328 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-1879086674-127.0.1.1-1390819325075:blk_1073741896_1072
2014-02-03 02:12:26,376 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:43000, dest: /127.0.0.1:50010, bytes: 47, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-1472719726_1, offset: 0, srvID: DS-1845738474-127.0.1.1-50010-1390820536599, blockid: BP-1879086674-127.0.1.1-1390819325075:blk_1073741897_1073, duration: 29515001
2014-02-03 02:12:26,384 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1879086674-127.0.1.1-1390819325075:blk_1073741897_1073, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2014-02-03 02:12:26,420 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-1879086674-127.0.1.1-1390819325075:blk_1073741895_1071
2014-02-03 02:12:26,431 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-1879086674-127.0.1.1-1390819325075:blk_1073741897_1073
2014-02-03 02:12:26,502 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-1879086674-127.0.1.1-1390819325075:blk_1073741898_1074 src: /127.0.0.1:43001 dest: /127.0.0.1:50010
2014-02-03 02:12:26,533 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:43001, dest: /127.0.0.1:50010, bytes: 23, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-1472719726_1, offset: 0, srvID: DS-1845738474-127.0.1.1-50010-1390820536599, blockid: BP-1879086674-127.0.1.1-1390819325075:blk_1073741898_1074, duration: 13436129
2014-02-03 02:12:26,544 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1879086674-127.0.1.1-1390819325075:blk_1073741898_1074, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2014-02-03 02:13:42,576 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741890_1066 file /app/hadoop/tmp/dfs/data/current/BP-1879086674-127.0.1.1-1390819325075/current/finalized/blk_1073741890 for deletion
2014-02-03 02:13:42,579 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-1879086674-127.0.1.1-1390819325075 blk_1073741890_1066 file /app/hadoop/tmp/dfs/data/current/BP-1879086674-127.0.1.1-1390819325075/current/finalized/blk_1073741890
2014-02-03 02:14:06,937 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:43006, bytes: 113963, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_-172469209_1, offset: 0, srvID: DS-1845738474-127.0.1.1-50010-1390820536599, blockid: BP-1879086674-127.0.1.1-1390819325075:blk_1073741895_1071, duration: 26854439
2014-02-03 02:14:09,190 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:43007, bytes: 82, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_-172469209_1, offset: 0, srvID: DS-1845738474-127.0.1.1-50010-1390820536599, blockid: BP-1879086674-127.0.1.1-1390819325075:blk_1073741896_1072, duration: 6648606
2014-02-03 02:14:09,849 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:43007, bytes: 51, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_-172469209_1, offset: 0, srvID: DS-1845738474-127.0.1.1-50010-1390820536599, blockid: BP-1879086674-127.0.1.1-1390819325075:blk_1073741897_1073, duration: 11851135
2014-02-03 02:14:10,241 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:43007, bytes: 27, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_-172469209_1, offset: 0, srvID: DS-1845738474-127.0.1.1-50010-1390820536599, blockid: BP-1879086674-127.0.1.1-1390819325075:blk_1073741898_1074, duration: 8638825
2014-02-03 02:14:12,373 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-1879086674-127.0.1.1-1390819325075:blk_1073741899_1075 src: /127.0.0.1:43008 dest: /127.0.0.1:50010
2014-02-03 02:14:12,630 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:43008, dest: /127.0.0.1:50010, bytes: 77796, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-172469209_1, offset: 0, srvID: DS-1845738474-127.0.1.1-50010-1390820536599, blockid: BP-1879086674-127.0.1.1-1390819325075:blk_1073741899_1075, duration: 121662147
2014-02-03 02:14:12,640 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1879086674-127.0.1.1-1390819325075:blk_1073741899_1075, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2014-02-03 02:14:16,554 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-1879086674-127.0.1.1-1390819325075:blk_1073741899_1075
2014-02-03 02:14:16,556 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-1879086674-127.0.1.1-1390819325075:blk_1073741898_1074
2014-02-03 02:14:29,895 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:43011, bytes: 78404, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_-473539357_1, offset: 0, srvID: DS-1845738474-127.0.1.1-50010-1390820536599, blockid: BP-1879086674-127.0.1.1-1390819325075:blk_1073741899_1075, duration: 3781190
2014-02-03 02:34:19,393 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741899_1075 file /app/hadoop/tmp/dfs/data/current/BP-1879086674-127.0.1.1-1390819325075/current/finalized/blk_1073741899 for deletion
2014-02-03 02:34:19,404 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-1879086674-127.0.1.1-1390819325075 blk_1073741899_1075 file /app/hadoop/tmp/dfs/data/current/BP-1879086674-127.0.1.1-1390819325075/current/finalized/blk_1073741899
2014-02-03 02:38:47,048 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:43047, bytes: 113963, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_-348162522_1, offset: 0, srvID: DS-1845738474-127.0.1.1-50010-1390820536599, blockid: BP-1879086674-127.0.1.1-1390819325075:blk_1073741895_1071, duration: 19127574
2014-02-03 02:38:50,047 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:43048, bytes: 82, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_-348162522_1, offset: 0, srvID: DS-1845738474-127.0.1.1-50010-1390820536599, blockid: BP-1879086674-127.0.1.1-1390819325075:blk_1073741896_1072, duration: 9950129
2014-02-03 02:38:50,718 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:43048, bytes: 51, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_-348162522_1, offset: 0, srvID: DS-1845738474-127.0.1.1-50010-1390820536599, blockid: BP-1879086674-127.0.1.1-1390819325075:blk_1073741897_1073, duration: 5030366
2014-02-03 02:38:51,100 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:43048, bytes: 27, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_-348162522_1, offset: 0, srvID: DS-1845738474-127.0.1.1-50010-1390820536599, blockid: BP-1879086674-127.0.1.1-1390819325075:blk_1073741898_1074, duration: 2184586
2014-02-03 02:38:53,818 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-1879086674-127.0.1.1-1390819325075:blk_1073741900_1076 src: /127.0.0.1:43049 dest: /127.0.0.1:50010
2014-02-03 02:38:54,038 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:43049, dest: /127.0.0.1:50010, bytes: 77796, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-348162522_1, offset: 0, srvID: DS-1845738474-127.0.1.1-50010-1390820536599, blockid: BP-1879086674-127.0.1.1-1390819325075:blk_1073741900_1076, duration: 117302706
2014-02-03 02:38:54,049 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1879086674-127.0.1.1-1390819325075:blk_1073741900_1076, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2014-02-03 02:38:57,466 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-1879086674-127.0.1.1-1390819325075:blk_1073741900_1076
2014-02-03 02:39:11,453 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:43051, bytes: 78404, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_840208082_1, offset: 0, srvID: DS-1845738474-127.0.1.1-50010-1390820536599, blockid: BP-1879086674-127.0.1.1-1390819325075:blk_1073741900_1076, duration: 3495148
2014-02-03 06:00:48,879 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: IOException in offerService
java.io.IOException: Failed on local exception: java.io.EOFException; Host Details : local host is: "master/127.0.0.1"; destination host is: "master":54310;
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)
at org.apache.hadoop.ipc.Client.call(Client.java:1351)
at org.apache.hadoop.ipc.Client.call(Client.java:1300)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy9.sendHeartbeat(Unknown Source)
at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy9.sendHeartbeat(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.sendHeartbeat(DatanodeProtocolClientSideTranslatorPB.java:167)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:445)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:525)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:676)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:995)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:891)
2014-02-03 06:00:52,362 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/127.0.0.1:54310. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-02-03 06:00:53,368 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/127.0.0.1:54310. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-02-03 06:00:54,382 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/127.0.0.1:54310. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-02-03 06:00:55,391 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/127.0.0.1:54310. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-02-03 06:00:56,108 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: RECEIVED SIGNAL 15: SIGTERM
2014-02-03 06:00:56,414 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/127.0.0.1:54310. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-02-03 06:00:56,469 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at master/127.0.0.1
************************************************************/
2014-02-03 06:31:50,340 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG: host = master/127.0.0.1
STARTUP_MSG: args = []
STARTUP_MSG: version = 2.2.0
STARTUP_MSG: classpath = /usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-math-2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/common/lib/jets3t-0.6.1.jar:/usr/local/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-io-2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/common/lib/stax-api-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-collections-3.2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-httpclient-3.1.jar:/usr/local/hadoop/share/hadoop/commo
n/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-xc-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/zookeeper-3.4.5.jar:/usr/local/hadoop/share/hadoop/common/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-logging-1.1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jsch-0.1.42.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-annotations-2.2.0.jar:/usr/local/hadoop/share/hadoop/common/lib/junit-4.8.2.jar:/usr/local/hadoop/share/hadoop/common/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-lang-2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-el-1.0.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-auth-2.2.0.jar:/usr/local/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/common/hadoop-nfs-2.2.0.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.2.0.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.2.0-tests.jar:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/loc
al/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-io-2.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-lang-2.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-el-1.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.2.0-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.2.0.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-2.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/yarn/lib/hamcrest-core-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-io-2.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/aopalliance-1.0
.jar:/usr/local/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/hadoop-annotations-2.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/junit-4.10.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-site-2.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-serve
r-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-io-2.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/junit-4.10.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0-tests.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.2.0.jar:/contrib/capacity-scheduler/*.jar:/contrib/capacity-scheduler/*.jar:/contrib/capacity-scheduler/*.jar
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/... -r 1529768; compiled by 'hortonmu' on 2013-10-07T06:28Z
STARTUP_MSG: java = 1.7.0_51
************************************************************/
2014-02-03 06:31:50,464 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal handlers for [TERM, HUP, INT]
2014-02-03 06:32:02,431 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2014-02-03 06:32:03,001 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2014-02-03 06:32:03,001 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2014-02-03 06:32:03,027 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is master
2014-02-03 06:32:03,272 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at /0.0.0.0:50010
2014-02-03 06:32:03,317 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s
2014-02-03 06:32:09,162 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2014-02-03 06:32:09,986 INFO org.apache.hadoop.http.HttpServer: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2014-02-03 06:32:10,033 INFO org.apache.hadoop.http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
2014-02-03 06:32:10,034 INFO org.apache.hadoop.http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2014-02-03 06:32:10,041 INFO org.apache.hadoop.http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2014-02-03 06:32:10,096 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened info server at 0.0.0.0:50075
2014-02-03 06:32:10,166 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dfs.webhdfs.enabled = false
2014-02-03 06:32:10,172 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50075
2014-02-03 06:32:10,172 INFO org.mortbay.log: jetty-6.1.26
2014-02-03 06:32:13,011 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50075
2014-02-03 06:32:20,365 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 50020
2014-02-03 06:32:20,762 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at /0.0.0.0:50020
2014-02-03 06:32:20,899 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request received for nameservices: null
2014-02-03 06:32:21,265 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting BPOfferServices for nameservices:
2014-02-03 06:32:21,374 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool (storage id unknown) service to master/127.0.0.1:54310 starting to offer service
2014-02-03 06:32:21,459 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2014-02-03 06:32:21,496 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
2014-02-03 06:32:24,867 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /app/hadoop/tmp/dfs/data/in_use.lock acquired by nodename 13110@master
2014-02-03 06:32:24,884 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /app/hadoop/tmp/dfs/data is not formatted
2014-02-03 06:32:24,884 INFO org.apache.hadoop.hdfs.server.common.Storage: Formatting ...
2014-02-03 06:32:27,568 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
2014-02-03 06:32:27,614 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /app/hadoop/tmp/dfs/data/current/BP-1549795159-127.0.0.1-1391426667789 is not formatted.
2014-02-03 06:32:27,614 INFO org.apache.hadoop.hdfs.server.common.Storage: Formatting ...
2014-02-03 06:32:27,614 INFO org.apache.hadoop.hdfs.server.common.Storage: Formatting block pool BP-1549795159-127.0.0.1-1391426667789 directory /app/hadoop/tmp/dfs/data/current/BP-1549795159-127.0.0.1-1391426667789/current
2014-02-03 06:32:27,746 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Setting up storage: nsid=1474272420;bpid=BP-1549795159-127.0.0.1-1391426667789;lv=-47;nsInfo=lv=-47;cid=CID-a37b3083-bc2d-4b39-a49f-73e893358dcc;nsid=1474272420;c=0;bpid=BP-1549795159-127.0.0.1-1391426667789
2014-02-03 06:32:28,408 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added volume - /app/hadoop/tmp/dfs/data/current
2014-02-03 06:32:28,808 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Registered FSDatasetState MBean
2014-02-03 06:32:29,228 INFO org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: Periodic Directory Tree Verification scan starting at 1391434999228 with interval 21600000
2014-02-03 06:32:29,303 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding block pool BP-1549795159-127.0.0.1-1391426667789
2014-02-03 06:32:29,333 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning block pool BP-1549795159-127.0.0.1-1391426667789 on volume /app/hadoop/tmp/dfs/data/current...
2014-02-03 06:32:29,856 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time taken to scan block pool BP-1549795159-127.0.0.1-1391426667789 on /app/hadoop/tmp/dfs/data/current: 510ms
2014-02-03 06:32:29,868 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to scan all replicas for block pool BP-1549795159-127.0.0.1-1391426667789: 565ms
2014-02-03 06:32:29,869 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding replicas to map for block pool BP-1549795159-127.0.0.1-1391426667789 on volume /app/hadoop/tmp/dfs/data/current...
2014-02-03 06:32:29,880 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to add replicas to map for block pool BP-1549795159-127.0.0.1-1391426667789 on volume /app/hadoop/tmp/dfs/data/current: 11ms
2014-02-03 06:32:29,881 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to add all replicas to map: 12ms
2014-02-03 06:32:29,967 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool BP-1549795159-127.0.0.1-1391426667789 (storage id DS-13322827-127.0.0.1-50010-1391427144953) service to master/127.0.0.1:54310 beginning handshake with NN
2014-02-03 06:32:31,089 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool Block pool BP-1549795159-127.0.0.1-1391426667789 (storage id DS-13322827-127.0.0.1-50010-1391427144953) service to master/127.0.0.1:54310 successfully registered with NN
2014-02-03 06:32:31,090 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: For namenode master/127.0.0.1:54310 using DELETEREPORT_INTERVAL of 300000 msec BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec; heartBeatInterval=3000
2014-02-03 06:32:32,301 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Namenode Block pool BP-1549795159-127.0.0.1-1391426667789 (storage id DS-13322827-127.0.0.1-50010-1391427144953) service to master/127.0.0.1:54310 trying to claim ACTIVE state with txid=1
2014-02-03 06:32:32,301 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Acknowledging ACTIVE Namenode Block pool BP-1549795159-127.0.0.1-1391426667789 (storage id DS-13322827-127.0.0.1-50010-1391427144953) service to master/127.0.0.1:54310
2014-02-03 06:32:33,189 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 blocks took 27 msec to generate and 860 msecs for RPC and NN processing
2014-02-03 06:32:33,190 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: sent block report, processed command:org.apache.hadoop.hdfs.server.protocol.FinalizeCommand@9a6b5a
2014-02-03 06:32:33,344 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlockMap
2014-02-03 06:32:33,344 INFO org.apache.hadoop.util.GSet: VM type = 32-bit
2014-02-03 06:32:33,368 INFO org.apache.hadoop.util.GSet: 0.5% max memory = 966.7 MB
2014-02-03 06:32:33,368 INFO org.apache.hadoop.util.GSet: capacity = 2^20 = 1048576 entries
2014-02-03 06:32:33,680 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Periodic Block Verification Scanner initialized with interval 504 hours for block pool BP-1549795159-127.0.0.1-1391426667789
2014-02-03 06:32:33,828 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Added bpid=BP-1549795159-127.0.0.1-1391426667789 to blockPoolScannerMap, new size=1
2014-02-03 06:52:41,575 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: IOException in offerService
java.io.IOException: Failed on local exception: java.io.EOFException; Host Details : local host is: "master/127.0.0.1"; destination host is: "master":54310;
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)
at org.apache.hadoop.ipc.Client.call(Client.java:1351)
at org.apache.hadoop.ipc.Client.call(Client.java:1300)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy9.sendHeartbeat(Unknown Source)
at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy9.sendHeartbeat(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.sendHeartbeat(DatanodeProtocolClientSideTranslatorPB.java:167)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:445)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:525)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:676)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:995)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:891)
2014-02-03 06:52:44,809 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/127.0.0.1:54310. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-02-03 06:52:45,811 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/127.0.0.1:54310. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-02-03 06:52:46,816 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/127.0.0.1:54310. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-02-03 06:52:47,821 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/127.0.0.1:54310. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-02-03 06:52:48,834 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/127.0.0.1:54310. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-02-03 06:52:49,839 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/127.0.0.1:54310. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-02-03 06:52:50,846 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/127.0.0.1:54310. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-02-03 06:52:51,173 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: RECEIVED SIGNAL 15: SIGTERM
2014-02-03 06:52:51,348 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at master/127.0.0.1
************************************************************/
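The sequence above is the key pattern in this log: the heartbeat RPC to master/127.0.0.1:54310 fails with an EOFException (the NameNode closed the connection), the IPC client retries the connect, and then the DataNode itself receives SIGTERM, which normally means an external stop (e.g. a stop script or the OS), not a crash. A quick way to confirm this pattern mechanically is a couple of greps; the sketch below inlines sample lines copied from the log, since the real log path varies by install:

```shell
#!/bin/sh
# Sketch: summarize the failure pattern in a DataNode log excerpt.
# The sample lines are copied verbatim from the log above; a real run
# would grep the actual DataNode log file instead (path is an assumption).
cat > /tmp/dn_excerpt.log <<'EOF'
2014-02-03 06:52:41,575 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: IOException in offerService
2014-02-03 06:52:44,809 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/127.0.0.1:54310. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-02-03 06:52:45,811 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/127.0.0.1:54310. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-02-03 06:52:51,173 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: RECEIVED SIGNAL 15: SIGTERM
EOF

# How many heartbeat reconnect attempts before the daemon went down?
grep -c 'Retrying connect to server' /tmp/dn_excerpt.log

# Was the shutdown externally triggered (SIGTERM) rather than a JVM crash?
grep -c 'RECEIVED SIGNAL 15' /tmp/dn_excerpt.log
```

On a live node the same greps would be run against the real DataNode log, alongside a `jps` check for a running NameNode process; those checks depend on the environment and are deliberately not part of the sketch.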
2014-02-03 06:55:14,816 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG: host = master/127.0.0.1
STARTUP_MSG: args = []
STARTUP_MSG: version = 2.2.0
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/... -r 1529768; compiled by 'hortonmu' on 2013-10-07T06:28Z
STARTUP_MSG: java = 1.7.0_51
************************************************************/
2014-02-03 06:55:14,938 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal handlers for [TERM, HUP, INT]
2014-02-03 06:55:28,426 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2014-02-03 06:55:29,126 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2014-02-03 06:55:29,127 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2014-02-03 06:55:29,186 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is master
2014-02-03 06:55:29,350 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at /0.0.0.0:50010
2014-02-03 06:55:29,389 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s
2014-02-03 06:55:34,993 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2014-02-03 06:55:36,184 INFO org.apache.hadoop.http.HttpServer: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2014-02-03 06:55:36,207 INFO org.apache.hadoop.http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
2014-02-03 06:55:36,226 INFO org.apache.hadoop.http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2014-02-03 06:55:36,226 INFO org.apache.hadoop.http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2014-02-03 06:55:36,292 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened info server at 0.0.0.0:50075
2014-02-03 06:55:36,345 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dfs.webhdfs.enabled = false
2014-02-03 06:55:36,346 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50075
2014-02-03 06:55:36,346 INFO org.mortbay.log: jetty-6.1.26
2014-02-03 06:55:39,871 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50075
2014-02-03 06:55:48,572 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 50020
2014-02-03 06:55:49,376 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at /0.0.0.0:50020
2014-02-03 06:55:49,522 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request received for nameservices: null
2014-02-03 06:55:49,907 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting BPOfferServices for nameservices:
2014-02-03 06:55:50,150 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool (storage id unknown) service to master/127.0.0.1:54310 starting to offer service
2014-02-03 06:55:50,290 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2014-02-03 06:55:50,311 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
2014-02-03 06:55:54,461 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /app/hadoop/tmp/dfs/data/in_use.lock acquired by nodename 15357@master
2014-02-03 06:55:56,481 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
2014-02-03 06:55:56,628 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Setting up storage: nsid=1474272420;bpid=BP-1549795159-127.0.0.1-1391426667789;lv=-47;nsInfo=lv=-47;cid=CID-a37b3083-bc2d-4b39-a49f-73e893358dcc;nsid=1474272420;c=0;bpid=BP-1549795159-127.0.0.1-1391426667789
2014-02-03 06:55:57,280 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added volume - /app/hadoop/tmp/dfs/data/current
2014-02-03 06:55:57,580 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Registered FSDatasetState MBean
2014-02-03 06:55:57,872 INFO org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: Periodic Directory Tree Verification scan starting at 1391434518872 with interval 21600000
2014-02-03 06:55:57,930 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding block pool BP-1549795159-127.0.0.1-1391426667789
2014-02-03 06:55:57,985 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning block pool BP-1549795159-127.0.0.1-1391426667789 on volume /app/hadoop/tmp/dfs/data/current...
2014-02-03 06:55:58,920 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time taken to scan block pool BP-1549795159-127.0.0.1-1391426667789 on /app/hadoop/tmp/dfs/data/current: 869ms
2014-02-03 06:55:58,921 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to scan all replicas for block pool BP-1549795159-127.0.0.1-1391426667789: 990ms
2014-02-03 06:55:58,921 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding replicas to map for block pool BP-1549795159-127.0.0.1-1391426667789 on volume /app/hadoop/tmp/dfs/data/current...
2014-02-03 06:55:58,922 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to add replicas to map for block pool BP-1549795159-127.0.0.1-1391426667789 on volume /app/hadoop/tmp/dfs/data/current: 1ms
2014-02-03 06:55:58,922 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to add all replicas to map: 1ms
2014-02-03 06:55:59,061 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool BP-1549795159-127.0.0.1-1391426667789 (storage id DS-13322827-127.0.0.1-50010-1391427144953) service to master/127.0.0.1:54310 beginning handshake with NN
2014-02-03 06:56:00,339 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool Block pool BP-1549795159-127.0.0.1-1391426667789 (storage id DS-13322827-127.0.0.1-50010-1391427144953) service to master/127.0.0.1:54310 successfully registered with NN
2014-02-03 06:56:00,381 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: For namenode master/127.0.0.1:54310 using DELETEREPORT_INTERVAL of 300000 msec BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec; heartBeatInterval=3000
2014-02-03 06:56:02,210 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Namenode Block pool BP-1549795159-127.0.0.1-1391426667789 (storage id DS-13322827-127.0.0.1-50010-1391427144953) service to master/127.0.0.1:54310 trying to claim ACTIVE state with txid=4
2014-02-03 06:56:02,220 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Acknowledging ACTIVE Namenode Block pool BP-1549795159-127.0.0.1-1391426667789 (storage id DS-13322827-127.0.0.1-50010-1391427144953) service to master/127.0.0.1:54310
2014-02-03 06:56:03,170 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 blocks took 12 msec to generate and 937 msecs for RPC and NN processing
2014-02-03 06:56:03,170 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: sent block report, processed command:org.apache.hadoop.hdfs.server.protocol.FinalizeCommand@1db18c
2014-02-03 06:56:03,284 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlockMap
2014-02-03 06:56:03,284 INFO org.apache.hadoop.util.GSet: VM type = 32-bit
2014-02-03 06:56:03,294 INFO org.apache.hadoop.util.GSet: 0.5% max memory = 966.7 MB
2014-02-03 06:56:03,294 INFO org.apache.hadoop.util.GSet: capacity = 2^20 = 1048576 entries
2014-02-03 06:56:03,550 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Periodic Block Verification Scanner initialized with interval 504 hours for block pool BP-1549795159-127.0.0.1-1391426667789
2014-02-03 06:56:03,648 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Added bpid=BP-1549795159-127.0.0.1-1391426667789 to blockPoolScannerMap, new size=1
2014-02-03 07:11:08,285 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: IOException in offerService
java.io.IOException: Failed on local exception: java.io.IOException: Connection reset by peer; Host Details : local host is: "master/127.0.0.1"; destination host is: "master":54310;
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)
at org.apache.hadoop.ipc.Client.call(Client.java:1351)
at org.apache.hadoop.ipc.Client.call(Client.java:1300)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy9.sendHeartbeat(Unknown Source)
at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy9.sendHeartbeat(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.sendHeartbeat(DatanodeProtocolClientSideTranslatorPB.java:167)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:445)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:525)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:676)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:197)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:57)
at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
at java.io.FilterInputStream.read(FilterInputStream.java:133)
at java.io.FilterInputStream.read(FilterInputStream.java:133)
at org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:457)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
at java.io.DataInputStream.readInt(DataInputStream.java:387)
at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:995)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:891)
2014-02-03 07:11:11,227 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/127.0.0.1:54310. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-02-03 07:11:12,238 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/127.0.0.1:54310. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-02-03 07:11:13,270 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/127.0.0.1:54310. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-02-03 07:11:14,298 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/127.0.0.1:54310. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-02-03 07:11:15,306 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/127.0.0.1:54310. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-02-03 07:11:16,325 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/127.0.0.1:54310. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-02-03 07:11:17,333 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/127.0.0.1:54310. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-02-03 07:11:18,337 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/127.0.0.1:54310. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-02-03 07:11:19,110 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: RECEIVED SIGNAL 15: SIGTERM
2014-02-03 07:11:19,258 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at master/127.0.0.1
************************************************************/
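Both failure cycles report `local host is: "master/127.0.0.1"`, i.e. the hostname `master` resolves to the loopback address. That is harmless in a single-node (pseudo-distributed) setup, but in a multi-node cluster it is a classic cause of exactly this connect/reset pattern, because a NameNode reachable only on 127.0.0.1 can never be contacted by remote DataNodes. A hedged `/etc/hosts` sketch for the multi-node case (the address is a placeholder, not taken from this log):

```
# /etc/hosts on every cluster node -- sketch; 192.168.1.10 is an assumed IP
192.168.1.10   master
# avoid also mapping "master" to 127.0.0.1 or 127.0.1.1 on any node
```

With that in place, `fs.defaultFS`/`fs.default.name` in core-site.xml can keep pointing at `master:54310` and the address will resolve consistently cluster-wide.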
2014-02-03 07:14:56,537 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG: host = master/127.0.0.1
STARTUP_MSG: args = []
STARTUP_MSG: version = 2.2.0
r-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-io-2.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/junit-4.10.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0-tests.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.2.0.jar:/contrib/capacity-scheduler/*.jar:/contrib/capacity-scheduler/*.jar:/contrib/capacity-scheduler/*.jar
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/... -r 1529768; compiled by 'hortonmu' on 2013-10-07T06:28Z
STARTUP_MSG: java = 1.7.0_51
************************************************************/
2014-02-03 07:14:56,645 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal handlers for [TERM, HUP, INT]
2014-02-03 07:15:08,767 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2014-02-03 07:15:09,603 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2014-02-03 07:15:09,604 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2014-02-03 07:15:09,648 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is master
2014-02-03 07:15:09,871 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at /0.0.0.0:50010
2014-02-03 07:15:09,898 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s
2014-02-03 07:15:15,707 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2014-02-03 07:15:16,469 INFO org.apache.hadoop.http.HttpServer: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2014-02-03 07:15:16,502 INFO org.apache.hadoop.http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
2014-02-03 07:15:16,506 INFO org.apache.hadoop.http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2014-02-03 07:15:16,507 INFO org.apache.hadoop.http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2014-02-03 07:15:16,570 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened info server at 0.0.0.0:50075
2014-02-03 07:15:16,613 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dfs.webhdfs.enabled = false
2014-02-03 07:15:16,614 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50075
2014-02-03 07:15:16,614 INFO org.mortbay.log: jetty-6.1.26
2014-02-03 07:15:19,268 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50075
2014-02-03 07:15:26,733 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 50020
2014-02-03 07:15:27,054 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at /0.0.0.0:50020
2014-02-03 07:15:27,149 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request received for nameservices: null
2014-02-03 07:15:27,461 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting BPOfferServices for nameservices:
2014-02-03 07:15:27,647 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool (storage id unknown) service to master/127.0.0.1:54310 starting to offer service
2014-02-03 07:15:27,752 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2014-02-03 07:15:27,774 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
2014-02-03 07:15:33,360 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /app/hadoop/tmp/dfs/data/in_use.lock acquired by nodename 18363@master
2014-02-03 07:15:33,382 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /app/hadoop/tmp/dfs/data is not formatted
2014-02-03 07:15:33,383 INFO org.apache.hadoop.hdfs.server.common.Storage: Formatting ...
2014-02-03 07:15:34,702 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
2014-02-03 07:15:34,730 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /app/hadoop/tmp/dfs/data/current/BP-1021727196-127.0.0.1-1391429643772 is not formatted.
2014-02-03 07:15:34,731 INFO org.apache.hadoop.hdfs.server.common.Storage: Formatting ...
2014-02-03 07:15:34,731 INFO org.apache.hadoop.hdfs.server.common.Storage: Formatting block pool BP-1021727196-127.0.0.1-1391429643772 directory /app/hadoop/tmp/dfs/data/current/BP-1021727196-127.0.0.1-1391429643772/current
2014-02-03 07:15:34,858 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Setting up storage: nsid=552390373;bpid=BP-1021727196-127.0.0.1-1391429643772;lv=-47;nsInfo=lv=-47;cid=CID-911dac98-62a6-494c-8432-cb8a9da1ecce;nsid=552390373;c=0;bpid=BP-1021727196-127.0.0.1-1391429643772
2014-02-03 07:15:35,764 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added volume - /app/hadoop/tmp/dfs/data/current
2014-02-03 07:15:36,192 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Registered FSDatasetState MBean
2014-02-03 07:15:36,567 INFO org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: Periodic Directory Tree Verification scan starting at 1391448960567 with interval 21600000
2014-02-03 07:15:36,630 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding block pool BP-1021727196-127.0.0.1-1391429643772
2014-02-03 07:15:36,659 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning block pool BP-1021727196-127.0.0.1-1391429643772 on volume /app/hadoop/tmp/dfs/data/current...
2014-02-03 07:15:37,477 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time taken to scan block pool BP-1021727196-127.0.0.1-1391429643772 on /app/hadoop/tmp/dfs/data/current: 721ms
2014-02-03 07:15:37,500 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to scan all replicas for block pool BP-1021727196-127.0.0.1-1391429643772: 870ms
2014-02-03 07:15:37,501 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding replicas to map for block pool BP-1021727196-127.0.0.1-1391429643772 on volume /app/hadoop/tmp/dfs/data/current...
2014-02-03 07:15:37,502 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to add replicas to map for block pool BP-1021727196-127.0.0.1-1391429643772 on volume /app/hadoop/tmp/dfs/data/current: 1ms
2014-02-03 07:15:37,502 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to add all replicas to map: 1ms
2014-02-03 07:15:37,756 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool BP-1021727196-127.0.0.1-1391429643772 (storage id DS-39958983-127.0.0.1-50010-1391429733493) service to master/127.0.0.1:54310 beginning handshake with NN
2014-02-03 07:15:39,200 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool Block pool BP-1021727196-127.0.0.1-1391429643772 (storage id DS-39958983-127.0.0.1-50010-1391429733493) service to master/127.0.0.1:54310 successfully registered with NN
2014-02-03 07:15:39,201 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: For namenode master/127.0.0.1:54310 using DELETEREPORT_INTERVAL of 300000 msec BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec; heartBeatInterval=3000
2014-02-03 07:15:40,517 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Namenode Block pool BP-1021727196-127.0.0.1-1391429643772 (storage id DS-39958983-127.0.0.1-50010-1391429733493) service to master/127.0.0.1:54310 trying to claim ACTIVE state with txid=1
2014-02-03 07:15:40,518 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Acknowledging ACTIVE Namenode Block pool BP-1021727196-127.0.0.1-1391429643772 (storage id DS-39958983-127.0.0.1-50010-1391429733493) service to master/127.0.0.1:54310
2014-02-03 07:15:41,366 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 blocks took 11 msec to generate and 837 msecs for RPC and NN processing
2014-02-03 07:15:41,366 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: sent block report, processed command:org.apache.hadoop.hdfs.server.protocol.FinalizeCommand@6fc505
2014-02-03 07:15:41,460 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlockMap
2014-02-03 07:15:41,460 INFO org.apache.hadoop.util.GSet: VM type = 32-bit
2014-02-03 07:15:41,474 INFO org.apache.hadoop.util.GSet: 0.5% max memory = 966.7 MB
2014-02-03 07:15:41,474 INFO org.apache.hadoop.util.GSet: capacity = 2^20 = 1048576 entries
2014-02-03 07:15:41,890 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Periodic Block Verification Scanner initialized with interval 504 hours for block pool BP-1021727196-127.0.0.1-1391429643772
2014-02-03 07:15:42,016 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Added bpid=BP-1021727196-127.0.0.1-1391429643772 to blockPoolScannerMap, new size=1