I am trying to resolve the following error:
13/05/05 19:49:04 INFO handler.OpenRegionHandler: Opening of region {NAME => '-ROOT-,,0', STARTKEY => '', ENDKEY => '', ENCODED => 70236052,} failed, marking as FAILED_OPEN in ZK
13/05/05 19:49:04 INFO regionserver.HRegionServer: Received request to open region: -ROOT-,,0.70236052
13/05/05 19:49:04 INFO regionserver.HRegion: Setting up tabledescriptor config ...
13/05/05 19:49:04 ERROR handler.OpenRegionHandler: Failed open of region=-ROOT-,,0.70236052, starting to roll back the global memstore size.
java.lang.IllegalStateException: Could not instantiate a region instance.
    at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:3747)
    at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3927)
    at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:332)
    at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:108)
    at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:175)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
    at java.lang.Thread.run(Thread.java:680)
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.GeneratedConstructorAccessor17.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
    at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:3744)
    ... 7 more
Caused by: java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost
    at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:421)
    ... 11 more
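Note that "Could not initialize class" is not the same as "class not found": the class file is on the classpath, but its static initializer threw on first use, and every later reference gets a NoClassDefFoundError. A minimal, self-contained sketch of that JVM behavior (the `Flaky` class is hypothetical, standing in for RegionCoprocessorHost whose static init trips over an incompatible dependency):

```java
public class InitFailureDemo {
    static class Flaky {
        // Not a compile-time constant, so touching VALUE triggers class
        // initialization; the initializer deliberately fails.
        static final int VALUE = compute();
        static int compute() { throw new RuntimeException("bad dependency"); }
    }

    public static void main(String[] args) {
        try {
            int v = Flaky.VALUE; // first touch: initializer runs and throws
        } catch (ExceptionInInitializerError e) {
            System.out.println("first access: " + e.getCause().getMessage());
        }
        try {
            int v = Flaky.VALUE; // class is now marked as failed
        } catch (NoClassDefFoundError e) {
            // message reads "Could not initialize class InitFailureDemo$Flaky"
            System.out.println("second access: " + e.getMessage());
        }
    }
}
```

So the real question is why RegionCoprocessorHost's static initialization fails, not whether the class is present in the jar.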
I have the following Maven dependencies:
<properties>
  <hadoopCdhMrVersion>2.0.0-mr1-cdh4.2.0</hadoopCdhMrVersion>
  <hadoopCdhVersion>2.0.0-cdh4.2.0</hadoopCdhVersion>
  <hbaseCdhVersion>0.94.2-cdh4.2.0</hbaseCdhVersion>
</properties>
<dependencyManagement>
  <dependencies>
    <!-- Apache -->
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-core</artifactId>
      <version>${hadoopCdhMrVersion}</version>
      <exclusions>
        <exclusion>
          <groupId>tomcat</groupId>
          <artifactId>jasper-compiler</artifactId>
        </exclusion>
        <exclusion>
          <groupId>tomcat</groupId>
          <artifactId>jasper-runtime</artifactId>
        </exclusion>
      </exclusions>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-common</artifactId>
      <version>${hadoopCdhVersion}</version>
      <exclusions>
        <exclusion>
          <groupId>org.mockito</groupId>
          <artifactId>mockito-all</artifactId>
        </exclusion>
        <exclusion>
          <groupId>junit</groupId>
          <artifactId>junit</artifactId>
        </exclusion>
        <exclusion>
          <groupId>log4j</groupId>
          <artifactId>log4j</artifactId>
        </exclusion>
        <exclusion>
          <groupId>tomcat</groupId>
          <artifactId>jasper-compiler</artifactId>
        </exclusion>
        <exclusion>
          <groupId>tomcat</groupId>
          <artifactId>jasper-runtime</artifactId>
        </exclusion>
        <exclusion>
          <groupId>org.mortbay.jetty</groupId>
          <artifactId>jetty</artifactId>
        </exclusion>
        <exclusion>
          <groupId>org.mortbay.jetty</groupId>
          <artifactId>jetty-util</artifactId>
        </exclusion>
      </exclusions>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-hdfs</artifactId>
      <version>${hadoopCdhVersion}</version>
    </dependency>
    <!-- Test -->
    <dependency>
      <groupId>org.apache.hbase</groupId>
      <artifactId>hbase</artifactId>
      <scope>test</scope>
      <classifier>tests</classifier>
      <version>${hbaseCdhVersion}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hbase</groupId>
      <artifactId>hbase</artifactId>
      <scope>provided</scope>
      <version>${hbaseCdhVersion}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-test</artifactId>
      <version>${hadoopCdhMrVersion}</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-minicluster</artifactId>
      <version>${hadoopCdhMrVersion}</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
I am bringing these dependencies from the parent POM into the child POM. The code I am testing against:
// Start a mini cluster to perform the unit test
final Configuration startingConf = HBaseConfiguration.create();
startingConf.setLong("hbase.client.keyvalue.maxsize", 65536);
startingConf.setStrings(HConstants.ZOOKEEPER_QUORUM, "localhost");
startingConf.setStrings("mapreduce.jobtracker.address", "local");
startingConf.setLong(HConstants.HBASE_CLIENT_PAUSE, 50);
startingConf.setInt(HConstants.HBASE_CLIENT_RETRIES_NUMBER, 200);
testUtil = new HBaseTestingUtility(startingConf);
// Point of failure
testUtil.startMiniCluster();
The error occurs after startMiniCluster() has done part of the work of instantiating the environment; it drops out partway through with the error above. Things I have tried:
- If I roll hbaseCdhVersion back from 0.94.2-cdh4.2.0 to a 0.92.1-cdh4.x.x version, it works.
- Removed the .m2 cache and verified that 0.94.2-cdh4.2.0 was re-created.
- Tried other 0.94.2-cdh4.x.x versions.
- Ran mvn clean and mvn install via the command line, not relying on Eclipse magic; also tried eclipse:eclipse.
- Checked the missing class as a type/resource through Eclipse; it points to the correct version in the local repo, and Eclipse can find it.
- Inspected the dependency tree for conflicts.
- Opened the repo jar myself and saw that the class exists.
- Tried creating a new project and writing the POM file from scratch.
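Since Eclipse and the running JVM can disagree about where a class comes from, a small diagnostic that would help here is asking the runtime which jar it actually loaded a class from (a sketch; the `WhichJar` class is illustrative, and for this bug one would pass `org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.class`):

```java
public class WhichJar {
    // Returns the location (jar or directory) a class was loaded from,
    // or a placeholder for bootstrap-classpath classes with no CodeSource.
    public static String locate(Class<?> cls) {
        java.security.CodeSource src = cls.getProtectionDomain().getCodeSource();
        return src != null ? src.getLocation().toString() : "(bootstrap classpath)";
    }

    public static void main(String[] args) {
        System.out.println(locate(WhichJar.class));
    }
}
```

Running this against the failing class, and against commons-configuration classes, shows which copy the mini cluster's JVM is really using regardless of what the IDE displays.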
Any pointers would be appreciated.
The problem was the commons-configuration jar. The parent POM was bringing in version 1.9, which conflicted with the 1.6 version that the hadoop-common jar brings in. The only way I could figure this out was by keeping a minimum set of dependencies in the parent POM and uncommenting dependencies one by one (checking `mvn dependency:tree -Dverbose` along the way) to narrow down the problem. Once the problem was found, I excluded the conflicting commons-configuration so that the version hadoop-common depends on was used. Hope this helps someone. The Hadoop jars should really upgrade; their commons-configuration is five years old at this point. You cannot simply roll from the 1.6 jar to the latest 1.9.
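A minimal sketch of the fix described above, assuming you pin the version in the parent POM's dependencyManagement so it wins over any transitive 1.9 (exact coordinates as published on Maven Central; adapt to wherever your 1.9 copy comes from):

```xml
<dependencyManagement>
  <dependencies>
    <!-- Force the commons-configuration version that hadoop-common was
         built against, so a transitive 1.9 cannot override it. -->
    <dependency>
      <groupId>commons-configuration</groupId>
      <artifactId>commons-configuration</artifactId>
      <version>1.6</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```

Alternatively, add an `<exclusion>` for commons-configuration on whichever parent-POM dependency drags in 1.9; either way, `mvn dependency:tree` should afterwards show a single 1.6 entry.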