
Typical usage and code examples of the Java FsDelegationToken class


This article collects typical usage examples of the Java class org.apache.hadoop.hbase.security.token.FsDelegationToken. If you are wondering what FsDelegationToken does, or how and where to use it, the curated code examples below should help.

The FsDelegationToken class belongs to the org.apache.hadoop.hbase.security.token package. Five representative code examples of the class are shown below, sorted by popularity by default.
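All five examples below follow the same lifecycle: construct an FsDelegationToken from a UserProvider and a renewer name, acquire a delegation token for the target FileSystem before doing privileged filesystem work, and release it in a finally block. A minimal sketch of that pattern, distilled from the examples below (it assumes the HBase client classpath and a configured cluster, so it is illustrative rather than a standalone program):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.security.UserProvider;
import org.apache.hadoop.hbase.security.token.FsDelegationToken;

public class FsDelegationTokenSketch {
  static void runWithToken(Configuration conf, Path dir) throws Exception {
    FileSystem fs = dir.getFileSystem(conf);
    UserProvider userProvider = UserProvider.instantiate(conf);
    FsDelegationToken token = new FsDelegationToken(userProvider, "renewer");
    // Acquire a token for fs; effectively a no-op on insecure clusters.
    token.acquireDelegationToken(fs);
    try {
      // ... do the privileged work here, e.g. pass token.getUserToken()
      //     along with a bulk-load or coprocessor request ...
    } finally {
      // Always release the token, even if the work above throws.
      token.releaseDelegationToken();
    }
  }
}
```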

Example 1: initialize


import org.apache.hadoop.hbase.security.token.FsDelegationToken; // import the required class
private void initialize() throws Exception {
  if (hbAdmin == null) {
    // make a copy, just to be sure we're not overriding someone else's config
    setConf(HBaseConfiguration.create(getConf()));
    Configuration conf = getConf();
    // disable blockcache for tool invocation, see HBASE-10500
    conf.setFloat(HConstants.HFILE_BLOCK_CACHE_SIZE_KEY, 0);
    this.hbAdmin = new HBaseAdmin(conf);
    this.userProvider = UserProvider.instantiate(conf);
    this.fsDelegationToken = new FsDelegationToken(userProvider, "renewer");
    assignSeqIds = conf.getBoolean(ASSIGN_SEQ_IDS, true);
    maxFilesPerRegionPerFamily = conf.getInt(MAX_FILES_PER_REGION_PER_FAMILY, 32);
  }
}
 

Developer: fengchen8086 · Project: ditb · Lines: 15 · Source: LoadIncrementalHFiles.java

Example 2: LoadIncrementalHFiles


import org.apache.hadoop.hbase.security.token.FsDelegationToken; // import the required class
public LoadIncrementalHFiles(Configuration conf) throws Exception {
  super(conf);
  // make a copy, just to be sure we're not overriding someone else's config
  setConf(HBaseConfiguration.create(getConf()));
  // disable blockcache for tool invocation, see HBASE-10500
  getConf().setFloat(HConstants.HFILE_BLOCK_CACHE_SIZE_KEY, 0);
  this.hbAdmin = new HBaseAdmin(conf);
  this.userProvider = UserProvider.instantiate(conf);
  this.fsDelegationToken = new FsDelegationToken(userProvider, "renewer");
  assignSeqIds = conf.getBoolean(ASSIGN_SEQ_IDS, true);
  maxFilesPerRegionPerFamily = conf.getInt(MAX_FILES_PER_REGION_PER_FAMILY, 32);
}
 

Developer: tenggyut · Project: HIndex · Lines: 13 · Source: LoadIncrementalHFiles.java

Example 3: run


import org.apache.hadoop.hbase.security.token.FsDelegationToken; // import the required class
public static Map<byte[], Response> run(final Configuration conf, TableName tableName, Scan scan, Path dir) throws Throwable {
  FileSystem fs = dir.getFileSystem(conf);
  UserProvider userProvider = UserProvider.instantiate(conf);
  checkDir(fs, dir);
  FsDelegationToken fsDelegationToken = new FsDelegationToken(userProvider, "renewer");
  fsDelegationToken.acquireDelegationToken(fs);
  try {
    final ExportProtos.ExportRequest request = getConfiguredRequest(conf, dir,
      scan, fsDelegationToken.getUserToken());
    try (Connection con = ConnectionFactory.createConnection(conf);
            Table table = con.getTable(tableName)) {
      Map<byte[], Response> result = new TreeMap<>(Bytes.BYTES_COMPARATOR);
      table.coprocessorService(ExportProtos.ExportService.class,
        scan.getStartRow(),
        scan.getStopRow(),
        (ExportProtos.ExportService service) -> {
          ServerRpcController controller = new ServerRpcController();
          Map<byte[], ExportProtos.ExportResponse> rval = new TreeMap<>(Bytes.BYTES_COMPARATOR);
          CoprocessorRpcUtils.BlockingRpcCallback<ExportProtos.ExportResponse>
            rpcCallback = new CoprocessorRpcUtils.BlockingRpcCallback<>();
          service.export(controller, request, rpcCallback);
          if (controller.failedOnException()) {
            throw controller.getFailedOn();
          }
          return rpcCallback.get();
        }).forEach((k, v) -> result.put(k, new Response(v)));
      return result;
    } catch (Throwable e) {
      fs.delete(dir, true);
      throw e;
    }
  } finally {
    fsDelegationToken.releaseDelegationToken();
  }
}
 

Developer: apache · Project: hbase · Lines: 36 · Source: Export.java

Example 4: HFileReplicator


import org.apache.hadoop.hbase.security.token.FsDelegationToken; // import the required class
public HFileReplicator(Configuration sourceClusterConf,
    String sourceBaseNamespaceDirPath, String sourceHFileArchiveDirPath,
    Map<String, List<Pair<byte[], List<String>>>> tableQueueMap, Configuration conf,
    Connection connection) throws IOException {
  this.sourceClusterConf = sourceClusterConf;
  this.sourceBaseNamespaceDirPath = sourceBaseNamespaceDirPath;
  this.sourceHFileArchiveDirPath = sourceHFileArchiveDirPath;
  this.bulkLoadHFileMap = tableQueueMap;
  this.conf = conf;
  this.connection = connection;

  userProvider = UserProvider.instantiate(conf);
  fsDelegationToken = new FsDelegationToken(userProvider, "renewer");
  this.hbaseStagingDir = new Path(FSUtils.getRootDir(conf), HConstants.BULKLOAD_STAGING_DIR_NAME);
  this.maxCopyThreads =
      this.conf.getInt(REPLICATION_BULKLOAD_COPY_MAXTHREADS_KEY,
        REPLICATION_BULKLOAD_COPY_MAXTHREADS_DEFAULT);
  ThreadFactoryBuilder builder = new ThreadFactoryBuilder();
  builder.setNameFormat("HFileReplicationCallable-%1$d");
  this.exec =
      new ThreadPoolExecutor(maxCopyThreads, maxCopyThreads, 60, TimeUnit.SECONDS,
          new LinkedBlockingQueue<>(), builder.build());
  this.exec.allowCoreThreadTimeOut(true);
  this.copiesPerThread =
      conf.getInt(REPLICATION_BULKLOAD_COPY_HFILES_PERTHREAD_KEY,
        REPLICATION_BULKLOAD_COPY_HFILES_PERTHREAD_DEFAULT);

  sinkFs = FileSystem.get(conf);
}
 

Developer: apache · Project: hbase · Lines: 30 · Source: HFileReplicator.java

Example 5: LoadIncrementalHFiles


import org.apache.hadoop.hbase.security.token.FsDelegationToken; // import the required class
public LoadIncrementalHFiles(Configuration conf) {
  // make a copy, just to be sure we're not overriding someone else's config
  super(HBaseConfiguration.create(conf));
  conf = getConf();
  // disable blockcache for tool invocation, see HBASE-10500
  conf.setFloat(HConstants.HFILE_BLOCK_CACHE_SIZE_KEY, 0);
  userProvider = UserProvider.instantiate(conf);
  fsDelegationToken = new FsDelegationToken(userProvider, "renewer");
  assignSeqIds = conf.getBoolean(ASSIGN_SEQ_IDS, true);
  maxFilesPerRegionPerFamily = conf.getInt(MAX_FILES_PER_REGION_PER_FAMILY, 32);
  nrThreads = conf.getInt("hbase.loadincremental.threads.max",
    Runtime.getRuntime().availableProcessors());
  rpcControllerFactory = new RpcControllerFactory(conf);
}
 

Developer: apache · Project: hbase · Lines: 15 · Source: LoadIncrementalHFiles.java


Copyright notice: This article is reproduced from the web and shared for knowledge-sharing purposes only. If it infringes your rights, please contact the administrator for removal.