3 Replies Latest reply: Aug 8, 2012 1:25 PM by 459245

    problem with orhc installation

    459245
      Hi All,

      I have a problem installing the Oracle R Hadoop Connector: the install process hangs and never finishes.
      Here are the steps I tried:
      1. download orhc.tgz.zip
      2. unzip the file to orhc.tgz
      3. try to install with the following command
      R CMD INSTALL orhc.tgz
      The output is:
      ----------------
      WARNING: ignoring environment value of R_HOME
      * installing to library '/home/oracle/R/x86_64-redhat-linux-gnu-library/2.13'
      * installing source package 'ORHC' ...
      ** R
      ** inst
      ** preparing package for lazy loading
      ** help
      No man pages found in package 'ORHC'
      *** installing help indices
      ** building package indices ...
      ** testing if installed package can be loaded
      ----------------
      But the install process hangs at that last step and never ends.

      Any suggestions on how to fix the problem?

      Thanks.
        • 1. Re: problem with orhc installation
          vsashika
          This step tries to connect to the Hadoop environment and checks whether
          everything ORCH expects is properly configured. That test is what is hanging:

          ** testing if installed package can be loaded
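
          (As an aside: a minimal way to reproduce that load test outside the
          installer, assuming the hung install already copied the package files
          into your R library, is to trigger the same load directly from a shell.)

          # Re-run the load test that R CMD INSTALL performs; it should hang the same way
          echo 'library(ORHC)' | R --no-save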


          First, try running just the following on its own, since this is what ORCH also does:

          hadoop job -list

          If this command hangs, then the JobTracker is either down or incorrectly configured.

          Please run this command and post your results.
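
          As a quick sanity check, you can also verify which Hadoop daemons are
          actually running on the master node. This is a minimal sketch assuming a
          standard MRv1 deployment, with the JDK's jps tool on the PATH and logs
          in the usual Hadoop 1.x location:

          # A healthy MRv1 master should list NameNode and JobTracker, among others
          jps
          # If JobTracker is missing or wedged, inspect its log for startup errors
          # (log path is typical for Hadoop 1.x; adjust for your install)
          tail -n 50 $HADOOP_HOME/logs/hadoop-*-jobtracker-*.log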
          • 2. Re: problem with orhc installation
            459245
            Yes, the command "hadoop job -list" is hanging.
            • 3. Re: problem with orhc installation
              459245
              Thanks.

              The problem is solved. I checked the JobTracker's log file and found the following error:
              2012-05-04 10:00:29,687 INFO org.apache.hadoop.mapred.JobTracker: problem cleaning system directory: hdfs://host1.example.com:9000/tmp/hadoop-oracle/mapred/system
              org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /tmp/hadoop-oracle/mapred/system. Name node is in safe mode.
              The ratio of reported blocks 0.0021 has not reached the threshold 0.9990. Safe mode will be turned off automatically.
                   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:1851)
                   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:1831)
                   at org.apache.hadoop.hdfs.server.namenode.NameNode.delete(NameNode.java:691)
                   at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
                   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
                   at java.lang.reflect.Method.invoke(Method.java:597)
                   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:523)
                   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1383)
                   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1379)
                   at java.security.AccessController.doPrivileged(Native Method)
                   at javax.security.auth.Subject.doAs(Subject.java:396)
                   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
                   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1377)

                   at org.apache.hadoop.ipc.Client.call(Client.java:1030)
                   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:224)
                   at $Proxy5.delete(Unknown Source)
                   at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
                   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
                   at java.lang.reflect.Method.invoke(Method.java:597)
                   at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
                   at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
                   at $Proxy5.delete(Unknown Source)
                   at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:629)
                   at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:242)
                   at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2392)
                   at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2171)
                   at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:300)
                   at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:291)
                   at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4956)

              Then I forced Hadoop to leave safe mode with the following command:
              hadoop dfsadmin -safemode leave

              Running hadoop job -list again, the command exited normally, and I was then able to install ORCH successfully.
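
              For reference, the low ratio of reported blocks in that log message
              (0.0021 versus the 0.9990 threshold) typically means the DataNodes had
              not yet reported their blocks to the NameNode. Before forcing an exit,
              safe mode can be inspected or simply waited out with the same dfsadmin
              command set:

              # Report whether the NameNode is currently in safe mode
              hadoop dfsadmin -safemode get
              # Block until the NameNode leaves safe mode on its own
              hadoop dfsadmin -safemode wait

              Note that forcing the NameNode out of safe mode while most blocks are
              unreported can leave files unreadable, so it is safest once you have
              confirmed the DataNodes are just slow to start.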