
Java ByteStringer Class: Typical Usage and Code Examples


This article collects typical usage and code examples of the Java class org.apache.hadoop.hbase.util.ByteStringer. If you have been wondering what ByteStringer is for and how it is used in practice, the curated examples below should help.

The ByteStringer class belongs to the org.apache.hadoop.hbase.util package. 29 code examples are shown below, sorted by popularity by default. You can like the examples you find useful; your ratings help the system recommend better Java code examples.
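
Before diving into the examples, here is a minimal standalone sketch (not taken from any project below; the class name is made up) of what ByteStringer is for: it wraps a byte[], or a slice of one, as a protobuf ByteString, avoiding a defensive copy where HBase's zero-copy ByteString subclass is available.

import com.google.protobuf.ByteString;
import org.apache.hadoop.hbase.util.ByteStringer;

public class ByteStringerSketch {
  public static void main(String[] args) {
    byte[] row = "row-1".getBytes();
    // Wrap the whole array. Where HBase's zero-copy ByteString subclass
    // is available on the classpath, this avoids copying the backing array.
    ByteString whole = ByteStringer.wrap(row);
    // Wrap a slice of a larger buffer via the offset/length overload,
    // as the Cell-based examples below do.
    ByteString slice = ByteStringer.wrap(row, 0, 3);
    System.out.println(whole.size() + " bytes wrapped, slice of " + slice.size());
  }
}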

Example 1: write

Likes: 3

import org.apache.hadoop.hbase.util.ByteStringer; // import the required package/class
@Override
public void write(Cell cell) throws IOException {
  checkFlushed();
  CellProtos.Cell.Builder builder = CellProtos.Cell.newBuilder();
  // This copies bytes from Cell to ByteString. I don't see any way around the copy.
  // ByteString is final.
  builder.setRow(ByteStringer.wrap(cell.getRowArray(), cell.getRowOffset(),
      cell.getRowLength()));
  builder.setFamily(ByteStringer.wrap(cell.getFamilyArray(), cell.getFamilyOffset(),
      cell.getFamilyLength()));
  builder.setQualifier(ByteStringer.wrap(cell.getQualifierArray(),
      cell.getQualifierOffset(), cell.getQualifierLength()));
  builder.setTimestamp(cell.getTimestamp());
  builder.setCellType(CellProtos.CellType.valueOf(cell.getTypeByte()));
  builder.setValue(ByteStringer.wrap(cell.getValueArray(), cell.getValueOffset(),
      cell.getValueLength()));
  CellProtos.Cell pbcell = builder.build();
  pbcell.writeDelimitedTo(this.out);
}
 

Author: fengchen8086 | Project: ditb | Lines: 20 | Source: MessageCodec.java
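
As a read-side counterpart to Example 1 (an assumed sketch, not part of MessageCodec.java): cells written with writeDelimitedTo() can be read back with the generated parseDelimitedFrom(), which returns null once the stream is exhausted.

import java.io.IOException;
import java.io.InputStream;
import org.apache.hadoop.hbase.protobuf.generated.CellProtos;

public class CellReaderSketch {
  // Reads back one cell written by write(Cell) above; returns null once
  // the delimited stream is exhausted.
  static CellProtos.Cell readNext(InputStream in) throws IOException {
    return CellProtos.Cell.parseDelimitedFrom(in);
  }
}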

Example 2: serializeAsPB

Likes: 3

import org.apache.hadoop.hbase.util.ByteStringer; // import the required package/class
/**
 * Write trailer data as protobuf
 * @param output
 * @throws IOException
 */
void serializeAsPB(DataOutputStream output) throws IOException {
  ByteArrayOutputStream baos = new ByteArrayOutputStream();
  HFileProtos.FileTrailerProto.Builder builder = HFileProtos.FileTrailerProto.newBuilder()
    .setFileInfoOffset(fileInfoOffset)
    .setLoadOnOpenDataOffset(loadOnOpenDataOffset)
    .setUncompressedDataIndexSize(uncompressedDataIndexSize)
    .setTotalUncompressedBytes(totalUncompressedBytes)
    .setDataIndexCount(dataIndexCount)
    .setMetaIndexCount(metaIndexCount)
    .setEntryCount(entryCount)
    .setNumDataIndexLevels(numDataIndexLevels)
    .setFirstDataBlockOffset(firstDataBlockOffset)
    .setLastDataBlockOffset(lastDataBlockOffset)
    // TODO this is a classname encoded into an HFile's trailer. We are going to need to have
    // some compat code here.
    .setComparatorClassName(comparatorClassName)
    .setCompressionCodec(compressionCodec.ordinal());
  if (encryptionKey != null) {
    builder.setEncryptionKey(ByteStringer.wrap(encryptionKey));
  }
  // We need this extra copy unfortunately to determine the final size of the
  // delimited output, see use of baos.size() below.
  builder.build().writeDelimitedTo(baos);
  baos.writeTo(output);
  // Pad to make up the difference between variable PB encoding length and the
  // length when encoded as writable under earlier V2 formats. Failure to pad
  // properly or if the PB encoding is too big would mean the trailer won't be read
  // in properly by HFile.
  int padding = getTrailerSize() - NOT_PB_SIZE - baos.size();
  if (padding < 0) {
    throw new IOException("Pbuf encoding size exceeded fixed trailer size limit");
  }
  for (int i = 0; i < padding; i++) {
    output.write(0);
  }
}
 

Author: fengchen8086 | Project: ditb | Lines: 42 | Source: FixedFileTrailer.java
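
The padding loop above is what keeps the trailer readable at a fixed offset from the end of the file. Here is a standalone sketch of the same invariant; fixedSize and notPbSize are hypothetical stand-ins for getTrailerSize() and NOT_PB_SIZE:

import java.io.DataOutputStream;
import java.io.IOException;

public class FixedSlotSketch {
  // Writes a variable-length payload into a fixed-size slot by zero
  // padding, so a reader can always seek to fileSize - fixedSize.
  static void writePadded(byte[] payload, int fixedSize, int notPbSize,
      DataOutputStream out) throws IOException {
    int padding = fixedSize - notPbSize - payload.length;
    if (padding < 0) {
      throw new IOException("payload exceeds fixed slot size");
    }
    out.write(payload);
    for (int i = 0; i < padding; i++) {
      out.write(0);
    }
  }
}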

Example 3: sum

Likes: 3

import org.apache.hadoop.hbase.util.ByteStringer; // import the required package/class
private static Map<byte [], Long> sum(final Table table, final byte [] family,
  final byte [] qualifier, final byte [] start, final byte [] end)
    throws ServiceException, Throwable {
  return table.coprocessorService(ColumnAggregationProtos.ColumnAggregationService.class,
    start, end,
  new Batch.Call<ColumnAggregationProtos.ColumnAggregationService, Long>() {
    @Override
    public Long call(ColumnAggregationProtos.ColumnAggregationService instance)
    throws IOException {
      BlockingRpcCallback<ColumnAggregationProtos.SumResponse> rpcCallback =
          new BlockingRpcCallback<ColumnAggregationProtos.SumResponse>();
      ColumnAggregationProtos.SumRequest.Builder builder =
        ColumnAggregationProtos.SumRequest.newBuilder();
      builder.setFamily(ByteStringer.wrap(family));
      if (qualifier != null && qualifier.length > 0) {
        builder.setQualifier(ByteStringer.wrap(qualifier));
      }
      instance.sum(null, builder.build(), rpcCallback);
      return rpcCallback.get().getSum();
    }
  });
}
 

Author: fengchen8086 | Project: ditb | Lines: 23 | Source: TestCoprocessorTableEndpoint.java
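
A hypothetical call site for the helper above (the family and qualifier names are made up): passing null start and end rows covers every region, and the per-region partial sums are folded into one total.

import java.util.Map;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class SumCallerSketch {
  // Assumes an open Table handle and that the sum(...) helper above is in scope.
  static long totalOf(Table table) throws Throwable {
    Map<byte[], Long> partials =
        sum(table, Bytes.toBytes("f1"), Bytes.toBytes("q1"), null, null);
    long total = 0;
    for (long v : partials.values()) {
      total += v;  // fold the per-region partial sums
    }
    return total;
  }
}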

Example 4: sum

Likes: 3

import org.apache.hadoop.hbase.util.ByteStringer; // import the required package/class
private Map<byte [], Long> sum(final Table table, final byte [] family,
    final byte [] qualifier, final byte [] start, final byte [] end)
throws ServiceException, Throwable {
  return table.coprocessorService(ColumnAggregationProtos.ColumnAggregationService.class,
      start, end,
    new Batch.Call<ColumnAggregationProtos.ColumnAggregationService, Long>() {
      @Override
      public Long call(ColumnAggregationProtos.ColumnAggregationService instance)
      throws IOException {
        BlockingRpcCallback<ColumnAggregationProtos.SumResponse> rpcCallback =
            new BlockingRpcCallback<ColumnAggregationProtos.SumResponse>();
        ColumnAggregationProtos.SumRequest.Builder builder =
          ColumnAggregationProtos.SumRequest.newBuilder();
        builder.setFamily(ByteStringer.wrap(family));
        if (qualifier != null && qualifier.length > 0) {
          builder.setQualifier(ByteStringer.wrap(qualifier));
        }
        instance.sum(null, builder.build(), rpcCallback);
        return rpcCallback.get().getSum();
      }
    });
}
 

Author: fengchen8086 | Project: ditb | Lines: 23 | Source: TestCoprocessorEndpoint.java

Example 5: buildGetRowOrBeforeRequest

Likes: 3

import org.apache.hadoop.hbase.util.ByteStringer; // import the required package/class
/**
 * Create a new protocol buffer GetRequest to get a row, all columns in a family.
 * If there is no such row, return the closest row before it.
 *
 * @param regionName the name of the region to get
 * @param row the row to get
 * @param family the column family to get
 * @return a protocol buffer GetRequest
 */
public static GetRequest buildGetRowOrBeforeRequest(
    final byte[] regionName, final byte[] row, final byte[] family) {
  GetRequest.Builder builder = GetRequest.newBuilder();
  RegionSpecifier region = buildRegionSpecifier(
    RegionSpecifierType.REGION_NAME, regionName);
  builder.setRegion(region);

  Column.Builder columnBuilder = Column.newBuilder();
  columnBuilder.setFamily(ByteStringer.wrap(family));
  ClientProtos.Get.Builder getBuilder =
    ClientProtos.Get.newBuilder();
  getBuilder.setRow(ByteStringer.wrap(row));
  getBuilder.addColumn(columnBuilder.build());
  getBuilder.setClosestRowBefore(true);
  builder.setGet(getBuilder.build());
  return builder.build();
}
 

Author: fengchen8086 | Project: ditb | Lines: 28 | Source: RequestConverter.java

Example 6: buildBulkLoadHFileRequest

Likes: 3

import org.apache.hadoop.hbase.util.ByteStringer; // import the required package/class
/**
 * Create a protocol buffer bulk load request
 *
 * @param familyPaths
 * @param regionName
 * @param assignSeqNum
 * @return a bulk load request
 */
public static BulkLoadHFileRequest buildBulkLoadHFileRequest(
    final List<Pair<byte[], String>> familyPaths,
    final byte[] regionName, boolean assignSeqNum) {
  BulkLoadHFileRequest.Builder builder = BulkLoadHFileRequest.newBuilder();
  RegionSpecifier region = buildRegionSpecifier(
    RegionSpecifierType.REGION_NAME, regionName);
  builder.setRegion(region);
  FamilyPath.Builder familyPathBuilder = FamilyPath.newBuilder();
  for (Pair<byte[], String> familyPath: familyPaths) {
    familyPathBuilder.setFamily(ByteStringer.wrap(familyPath.getFirst()));
    familyPathBuilder.setPath(familyPath.getSecond());
    builder.addFamilyPath(familyPathBuilder.build());
  }
  builder.setAssignSeqNum(assignSeqNum);
  return builder.build();
}
 

Author: fengchen8086 | Project: ditb | Lines: 25 | Source: RequestConverter.java

Example 7: buildCreateTableRequest

Likes: 3

import org.apache.hadoop.hbase.util.ByteStringer; // import the required package/class
/**
 * Creates a protocol buffer CreateTableRequest
 *
 * @param hTableDesc
 * @param splitKeys
 * @return a CreateTableRequest
 */
public static CreateTableRequest buildCreateTableRequest(
    final HTableDescriptor hTableDesc,
    final byte [][] splitKeys,
    final long nonceGroup,
    final long nonce) {
  CreateTableRequest.Builder builder = CreateTableRequest.newBuilder();
  builder.setTableSchema(hTableDesc.convert());
  if (splitKeys != null) {
    for (byte [] splitKey : splitKeys) {
      builder.addSplitKeys(ByteStringer.wrap(splitKey));
    }
  }
  builder.setNonceGroup(nonceGroup);
  builder.setNonce(nonce);
  return builder.build();
}
 

Author: fengchen8086 | Project: ditb | Lines: 24 | Source: RequestConverter.java
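
A hypothetical caller of the helper above (the table name, split keys, and zero nonce values are placeholders): three split keys pre-split the new table into four regions.

import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.CreateTableRequest;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateTableSketch {
  // Assumes the buildCreateTableRequest(...) helper above is in scope.
  static CreateTableRequest exampleRequest() {
    HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("t1"));
    // Three split keys yield four regions: (-inf,g), [g,n), [n,t), [t,+inf).
    byte[][] splits = {
        Bytes.toBytes("g"), Bytes.toBytes("n"), Bytes.toBytes("t")
    };
    return buildCreateTableRequest(desc, splits, 0L, 0L);
  }
}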

Example 8: getMutationBuilderAndSetCommonFields

Likes: 3

import org.apache.hadoop.hbase.util.ByteStringer; // import the required package/class
/**
 * Code shared by {@link #toMutation(MutationType, Mutation)} and
 * {@link #toMutationNoData(MutationType, Mutation)}
 * @param type
 * @param mutation
 * @return A partly-filled out protobuf'd Mutation.
 */
private static MutationProto.Builder getMutationBuilderAndSetCommonFields(final MutationType type,
    final Mutation mutation, MutationProto.Builder builder) {
  builder.setRow(ByteStringer.wrap(mutation.getRow()));
  builder.setMutateType(type);
  builder.setDurability(toDurability(mutation.getDurability()));
  builder.setTimestamp(mutation.getTimeStamp());
  Map<String, byte[]> attributes = mutation.getAttributesMap();
  if (!attributes.isEmpty()) {
    NameBytesPair.Builder attributeBuilder = NameBytesPair.newBuilder();
    for (Map.Entry<String, byte[]> attribute: attributes.entrySet()) {
      attributeBuilder.setName(attribute.getKey());
      attributeBuilder.setValue(ByteStringer.wrap(attribute.getValue()));
      builder.addAttribute(attributeBuilder.build());
    }
  }
  return builder;
}
 

Author: fengchen8086 | Project: ditb | Lines: 25 | Source: ProtobufUtil.java
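
The javadoc above names toMutation(MutationType, Mutation) as the public entry point that shares this helper. A sketch of a caller converting a Put (row, family, qualifier, and value bytes are made up):

import java.io.IOException;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.protobuf.ProtobufUtil;
import org.apache.hadoop.hbase.protobuf.generated.ClientProtos.MutationProto;
import org.apache.hadoop.hbase.util.Bytes;

public class ToMutationSketch {
  static MutationProto convertPut() throws IOException {
    Put put = new Put(Bytes.toBytes("row-1"));
    put.addColumn(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("v"));
    // Row, mutate type, durability, timestamp and attributes are filled in
    // by the shared helper above.
    return ProtobufUtil.toMutation(MutationProto.MutationType.PUT, put);
  }
}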

Example 9: getUserPermissions

Likes: 3

import org.apache.hadoop.hbase.util.ByteStringer; // import the required package/class
/**
 * A utility used to get permissions for selected namespace.
 * <p>
 * It's also called by the shell, in case you want to find references.
 *
 * @param protocol the AccessControlService protocol proxy
 * @param namespace name of the namespace
 * @throws ServiceException
 */
public static List<UserPermission> getUserPermissions(RpcController controller,
    AccessControlService.BlockingInterface protocol,
    byte[] namespace) throws ServiceException {
  AccessControlProtos.GetUserPermissionsRequest.Builder builder =
    AccessControlProtos.GetUserPermissionsRequest.newBuilder();
  if (namespace != null) {
    builder.setNamespaceName(ByteStringer.wrap(namespace));
  }
  builder.setType(AccessControlProtos.Permission.Type.Namespace);
  AccessControlProtos.GetUserPermissionsRequest request = builder.build();
  AccessControlProtos.GetUserPermissionsResponse response =
    protocol.getUserPermissions(controller, request);
  List<UserPermission> perms = new ArrayList<UserPermission>(response.getUserPermissionCount());
  for (AccessControlProtos.UserPermission perm: response.getUserPermissionList()) {
    perms.add(ProtobufUtil.toUserPermission(perm));
  }
  return perms;
}
 

Author: fengchen8086 | Project: ditb | Lines: 28 | Source: ProtobufUtil.java

Example 10: toCell

Likes: 3

import org.apache.hadoop.hbase.util.ByteStringer; // import the required package/class
public static CellProtos.Cell toCell(final Cell kv) {
  // Doing this is going to kill us if we do it for all data passed.
  // St.Ack 20121205
  CellProtos.Cell.Builder kvbuilder = CellProtos.Cell.newBuilder();
  kvbuilder.setRow(ByteStringer.wrap(kv.getRowArray(), kv.getRowOffset(),
      kv.getRowLength()));
  kvbuilder.setFamily(ByteStringer.wrap(kv.getFamilyArray(),
      kv.getFamilyOffset(), kv.getFamilyLength()));
  kvbuilder.setQualifier(ByteStringer.wrap(kv.getQualifierArray(),
      kv.getQualifierOffset(), kv.getQualifierLength()));
  kvbuilder.setCellType(CellProtos.CellType.valueOf(kv.getTypeByte()));
  kvbuilder.setTimestamp(kv.getTimestamp());
  kvbuilder.setValue(ByteStringer.wrap(kv.getValueArray(), kv.getValueOffset(),
      kv.getValueLength()));
  return kvbuilder.build();
}
 

Author: fengchen8086 | Project: ditb | Lines: 17 | Source: ProtobufUtil.java

Example 11: toCompactionDescriptor

Likes: 3

import org.apache.hadoop.hbase.util.ByteStringer; // import the required package/class
@SuppressWarnings("deprecation")
public static CompactionDescriptor toCompactionDescriptor(HRegionInfo info, byte[] regionName,
    byte[] family, List<Path> inputPaths, List<Path> outputPaths, Path storeDir) {
  // compaction descriptor contains relative paths.
  // input / output paths are relative to the store dir
  // store dir is relative to region dir
  CompactionDescriptor.Builder builder = CompactionDescriptor.newBuilder()
      .setTableName(ByteStringer.wrap(info.getTableName()))
      .setEncodedRegionName(ByteStringer.wrap(
        regionName == null ? info.getEncodedNameAsBytes() : regionName))
      .setFamilyName(ByteStringer.wrap(family))
      .setStoreHomeDir(storeDir.getName()); //make relative
  for (Path inputPath : inputPaths) {
    builder.addCompactionInput(inputPath.getName()); //relative path
  }
  for (Path outputPath : outputPaths) {
    builder.addCompactionOutput(outputPath.getName());
  }
  builder.setRegionName(ByteStringer.wrap(info.getRegionName()));
  return builder.build();
}
 

Author: fengchen8086 | Project: ditb | Lines: 22 | Source: ProtobufUtil.java

Example 12: toFlushDescriptor

Likes: 3

import org.apache.hadoop.hbase.util.ByteStringer; // import the required package/class
public static FlushDescriptor toFlushDescriptor(FlushAction action, HRegionInfo hri,
    long flushSeqId, Map<byte[], List<Path>> committedFiles) {
  FlushDescriptor.Builder desc = FlushDescriptor.newBuilder()
      .setAction(action)
      .setEncodedRegionName(ByteStringer.wrap(hri.getEncodedNameAsBytes()))
      .setRegionName(ByteStringer.wrap(hri.getRegionName()))
      .setFlushSequenceNumber(flushSeqId)
      .setTableName(ByteStringer.wrap(hri.getTable().getName()));

  for (Map.Entry<byte[], List<Path>> entry : committedFiles.entrySet()) {
    WALProtos.FlushDescriptor.StoreFlushDescriptor.Builder builder =
        WALProtos.FlushDescriptor.StoreFlushDescriptor.newBuilder()
        .setFamilyName(ByteStringer.wrap(entry.getKey()))
        .setStoreHomeDir(Bytes.toString(entry.getKey())); //relative to region
    if (entry.getValue() != null) {
      for (Path path : entry.getValue()) {
        builder.addFlushOutput(path.getName());
      }
    }
    desc.addStoreFlushes(builder);
  }
  return desc.build();
}
 

Author: fengchen8086 | Project: ditb | Lines: 24 | Source: ProtobufUtil.java

Example 13: toBulkLoadDescriptor

Likes: 3

import org.apache.hadoop.hbase.util.ByteStringer; // import the required package/class
/**
 * Generates a marker for the WAL so that we propagate the notion of a bulk region load
 * throughout the WAL.
 *
 * @param tableName         The table into which the bulk load is being imported.
 * @param encodedRegionName Encoded region name of the region which is being bulk loaded.
 * @param storeFiles        The set of store files of a column family that are bulk loaded.
 * @param bulkloadSeqId     sequence ID (by a force flush) used to create bulk load hfile
 *                          name
 * @return The WAL log marker for bulk loads.
 */
public static WALProtos.BulkLoadDescriptor toBulkLoadDescriptor(TableName tableName,
    ByteString encodedRegionName, Map<byte[], List<Path>> storeFiles, long bulkloadSeqId) {
  BulkLoadDescriptor.Builder desc = BulkLoadDescriptor.newBuilder()
      .setTableName(ProtobufUtil.toProtoTableName(tableName))
      .setEncodedRegionName(encodedRegionName).setBulkloadSeqNum(bulkloadSeqId);

  for (Map.Entry<byte[], List<Path>> entry : storeFiles.entrySet()) {
    WALProtos.StoreDescriptor.Builder builder = StoreDescriptor.newBuilder()
        .setFamilyName(ByteStringer.wrap(entry.getKey()))
        .setStoreHomeDir(Bytes.toString(entry.getKey())); // relative to region
    for (Path path : entry.getValue()) {
      builder.addStoreFile(path.getName());
    }
    desc.addStores(builder);
  }

  return desc.build();
}
 

Author: fengchen8086 | Project: ditb | Lines: 30 | Source: ProtobufUtil.java

Example 14: regionSequenceIdsToByteArray

Likes: 3

import org.apache.hadoop.hbase.util.ByteStringer; // import the required package/class
/**
 * @param regionLastFlushedSequenceId the flushed sequence id of a region which is the min of its
 *          store max seq ids
 * @param storeSequenceIds column family to sequence Id map
 * @return Serialized protobuf of <code>RegionSequenceIds</code> with pb magic prefix prepended
 *         suitable for use to filter wal edits in distributedLogReplay mode
 */
public static byte[] regionSequenceIdsToByteArray(final Long regionLastFlushedSequenceId,
    final Map<byte[], Long> storeSequenceIds) {
  ClusterStatusProtos.RegionStoreSequenceIds.Builder regionSequenceIdsBuilder =
      ClusterStatusProtos.RegionStoreSequenceIds.newBuilder();
  ClusterStatusProtos.StoreSequenceId.Builder storeSequenceIdBuilder =
      ClusterStatusProtos.StoreSequenceId.newBuilder();
  if (storeSequenceIds != null) {
    for (Map.Entry<byte[], Long> e : storeSequenceIds.entrySet()){
      byte[] columnFamilyName = e.getKey();
      Long curSeqId = e.getValue();
      storeSequenceIdBuilder.setFamilyName(ByteStringer.wrap(columnFamilyName));
      storeSequenceIdBuilder.setSequenceId(curSeqId);
      regionSequenceIdsBuilder.addStoreSequenceId(storeSequenceIdBuilder.build());
      storeSequenceIdBuilder.clear();
    }
  }
  regionSequenceIdsBuilder.setLastFlushedSequenceId(regionLastFlushedSequenceId);
  byte[] result = regionSequenceIdsBuilder.build().toByteArray();
  return ProtobufUtil.prependPBMagic(result);
}
 

Author: fengchen8086 | Project: ditb | Lines: 28 | Source: ZKUtil.java
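
A read-side sketch for the bytes produced above (an assumption; it is not the project's own parser): skip the pb magic prefix, then parse the remainder as RegionStoreSequenceIds.

import com.google.protobuf.InvalidProtocolBufferException;
import org.apache.hadoop.hbase.protobuf.ProtobufUtil;
import org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos;

public class SequenceIdParseSketch {
  // Skips the pb magic prefix prepended above, then parses the rest.
  static ClusterStatusProtos.RegionStoreSequenceIds parse(byte[] data)
      throws InvalidProtocolBufferException {
    int pblen = ProtobufUtil.lengthOfPBMagic();
    return ClusterStatusProtos.RegionStoreSequenceIds.newBuilder()
        .mergeFrom(data, pblen, data.length - pblen).build();
  }
}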

Example 15: convert

Likes: 3

import org.apache.hadoop.hbase.util.ByteStringer; // import the required package/class
/**
 * Convert a HRegionInfo to a RegionInfo
 *
 * @param info the HRegionInfo to convert
 * @return the converted RegionInfo
 */
public static RegionInfo convert(final HRegionInfo info) {
  if (info == null) return null;
  RegionInfo.Builder builder = RegionInfo.newBuilder();
  builder.setTableName(ProtobufUtil.toProtoTableName(info.getTable()));
  builder.setRegionId(info.getRegionId());
  if (info.getStartKey() != null) {
    builder.setStartKey(ByteStringer.wrap(info.getStartKey()));
  }
  if (info.getEndKey() != null) {
    builder.setEndKey(ByteStringer.wrap(info.getEndKey()));
  }
  builder.setOffline(info.isOffline());
  builder.setSplit(info.isSplit());
  builder.setReplicaId(info.getReplicaId());
  return builder.build();
}
 

Author: fengchen8086 | Project: ditb | Lines: 23 | Source: HRegionInfo.java

Example 16: toByteArray

Likes: 3

import org.apache.hadoop.hbase.util.ByteStringer; // import the required package/class
/**
 * @return The filter serialized using pb
 */
public byte[] toByteArray() {
  FilterProtos.MultiRowRangeFilter.Builder builder = FilterProtos.MultiRowRangeFilter
      .newBuilder();
  for (RowRange range : rangeList) {
    if (range != null) {
      FilterProtos.RowRange.Builder rangebuilder = FilterProtos.RowRange.newBuilder();
      if (range.startRow != null)
        rangebuilder.setStartRow(ByteStringer.wrap(range.startRow));
      rangebuilder.setStartRowInclusive(range.startRowInclusive);
      if (range.stopRow != null)
        rangebuilder.setStopRow(ByteStringer.wrap(range.stopRow));
      rangebuilder.setStopRowInclusive(range.stopRowInclusive);
      range.isScan = Bytes.equals(range.startRow, range.stopRow) ? 1 : 0;
      builder.addRowRangeList(rangebuilder.build());
    }
  }
  return builder.build().toByteArray();
}
 

Author: fengchen8086 | Project: ditb | Lines: 22 | Source: MultiRowRangeFilter.java
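
The generated parseFrom reverses this serialization. A deserialization sketch, assuming pbBytes came from toByteArray() above:

import com.google.protobuf.InvalidProtocolBufferException;
import org.apache.hadoop.hbase.protobuf.generated.FilterProtos;

public class RangeParseSketch {
  static int countRanges(byte[] pbBytes) throws InvalidProtocolBufferException {
    FilterProtos.MultiRowRangeFilter proto =
        FilterProtos.MultiRowRangeFilter.parseFrom(pbBytes);
    // Each element carries start/stop rows plus their inclusivity flags.
    return proto.getRowRangeListCount();
  }
}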

Example 17: convert

Likes: 3

import org.apache.hadoop.hbase.util.ByteStringer; // import the required package/class
FilterProtos.SingleColumnValueFilter convert() {
  FilterProtos.SingleColumnValueFilter.Builder builder =
    FilterProtos.SingleColumnValueFilter.newBuilder();
  if (this.columnFamily != null) {
    builder.setColumnFamily(ByteStringer.wrap(this.columnFamily));
  }
  if (this.columnQualifier != null) {
    builder.setColumnQualifier(ByteStringer.wrap(this.columnQualifier));
  }
  HBaseProtos.CompareType compareOp = CompareType.valueOf(this.compareOp.name());
  builder.setCompareOp(compareOp);
  builder.setComparator(ProtobufUtil.toComparator(this.comparator));
  builder.setFilterIfMissing(this.filterIfMissing);
  builder.setLatestVersionOnly(this.latestVersionOnly);

  return builder.build();
}
 

Author: fengchen8086 | Project: ditb | Lines: 18 | Source: SingleColumnValueFilter.java

Example 18: createProtobufOutput

Likes: 3

import org.apache.hadoop.hbase.util.ByteStringer; // import the required package/class
@Override
public byte[] createProtobufOutput() {
  CellSet.Builder builder = CellSet.newBuilder();
  for (RowModel row: getRows()) {
    CellSet.Row.Builder rowBuilder = CellSet.Row.newBuilder();
    rowBuilder.setKey(ByteStringer.wrap(row.getKey()));
    for (CellModel cell: row.getCells()) {
      Cell.Builder cellBuilder = Cell.newBuilder();
      cellBuilder.setColumn(ByteStringer.wrap(cell.getColumn()));
      cellBuilder.setData(ByteStringer.wrap(cell.getValue()));
      if (cell.hasUserTimestamp()) {
        cellBuilder.setTimestamp(cell.getTimestamp());
      }
      rowBuilder.addValues(cellBuilder);
    }
    builder.addRows(rowBuilder);
  }
  return builder.build().toByteArray();
}
 

Author: fengchen8086 | Project: ditb | Lines: 20 | Source: CellSetModel.java
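
A matching read-side sketch; the location of the generated class (CellSetMessage.CellSet in the HBase REST module) is an assumption here:

import com.google.protobuf.InvalidProtocolBufferException;
import org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet;

public class CellSetParseSketch {
  static CellSet parse(byte[] bytes) throws InvalidProtocolBufferException {
    // Reverses createProtobufOutput() above.
    return CellSet.parseFrom(bytes);
  }
}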

Example 19: serialize

Likes: 3

import org.apache.hadoop.hbase.util.ByteStringer; // import the required package/class
/**
 * Serializes each {@link HRow} to byte[] through protobuf.
 * @param topic
 * @param row
 * @return
 */
@Override
public byte[] serialize(String topic, HRow row) {
    final HRowProtos.Row.Builder rowBuilder = HRowProtos.Row.newBuilder();
    row.getColumns().stream().forEach(column -> {
        HColumnProtos.HColumn.Builder cellBuilder = HColumnProtos.HColumn.newBuilder();
        cellBuilder.setFamily(ByteStringer.wrap(column.getFamily()));
        cellBuilder.setQualifier(ByteStringer.wrap(column.getQualifier()));
        cellBuilder.setValue(ByteStringer.wrap(column.getValue()));
        cellBuilder.setTimestamp(column.getTimestamp());
        rowBuilder.addColumn(cellBuilder.build());
    });
    rowBuilder.setOp(HRowProtos.RowOp.valueOf(row.getRowOp().name()));
    rowBuilder.setRow(ByteStringer.wrap(row.getRowKey()));
    return rowBuilder.build().toByteArray();
}
 

Author: mravi | Project: hbase-connect-kafka | Lines: 22 | Source: HRowProtobufSerde.java

Example 20: getUserPermissions

Likes: 3

import org.apache.hadoop.hbase.util.ByteStringer; // import the required package/class
/**
 * A utility used to get permissions for selected namespace.
 * <p>
 * It's also called by the shell, in case you want to find references.
 *
 * @param protocol the AccessControlService protocol proxy
 * @param namespace name of the namespace
 * @throws ServiceException
 */
public static List<UserPermission> getUserPermissions(
    AccessControlService.BlockingInterface protocol,
    byte[] namespace) throws ServiceException {
  AccessControlProtos.GetUserPermissionsRequest.Builder builder =
    AccessControlProtos.GetUserPermissionsRequest.newBuilder();
  if (namespace != null) {
    builder.setNamespaceName(ByteStringer.wrap(namespace));
  }
  builder.setType(AccessControlProtos.Permission.Type.Namespace);
  AccessControlProtos.GetUserPermissionsRequest request = builder.build();
  AccessControlProtos.GetUserPermissionsResponse response =
    protocol.getUserPermissions(null, request);
  List<UserPermission> perms = new ArrayList<UserPermission>(response.getUserPermissionCount());
  for (AccessControlProtos.UserPermission perm: response.getUserPermissionList()) {
    perms.add(ProtobufUtil.toUserPermission(perm));
  }
  return perms;
}
 

Author: grokcoder | Project: pbase | Lines: 28 | Source: ProtobufUtil.java

Example 21: toCompactionDescriptor

Likes: 3

import org.apache.hadoop.hbase.util.ByteStringer; // import the required package/class
public static CompactionDescriptor toCompactionDescriptor(HRegionInfo info, byte[] family,
    List<Path> inputPaths, List<Path> outputPaths, Path storeDir) {
  // compaction descriptor contains relative paths.
  // input / output paths are relative to the store dir
  // store dir is relative to region dir
  CompactionDescriptor.Builder builder = CompactionDescriptor.newBuilder()
      .setTableName(ByteStringer.wrap(info.getTableName()))
      .setEncodedRegionName(ByteStringer.wrap(info.getEncodedNameAsBytes()))
      .setFamilyName(ByteStringer.wrap(family))
      .setStoreHomeDir(storeDir.getName()); //make relative
  for (Path inputPath : inputPaths) {
    builder.addCompactionInput(inputPath.getName()); //relative path
  }
  for (Path outputPath : outputPaths) {
    builder.addCompactionOutput(outputPath.getName());
  }
  builder.setRegionName(ByteStringer.wrap(info.getRegionName()));
  return builder.build();
}
 

Author: grokcoder | Project: pbase | Lines: 20 | Source: ProtobufUtil.java

Example 22: toFlushDescriptor

Likes: 3

import org.apache.hadoop.hbase.util.ByteStringer; // import the required package/class
public static FlushDescriptor toFlushDescriptor(FlushAction action, HRegionInfo hri,
    long flushSeqId, Map<byte[], List<Path>> committedFiles) {
  FlushDescriptor.Builder desc = FlushDescriptor.newBuilder()
      .setAction(action)
      .setEncodedRegionName(ByteStringer.wrap(hri.getEncodedNameAsBytes()))
      .setFlushSequenceNumber(flushSeqId)
      .setTableName(ByteStringer.wrap(hri.getTable().getName()));

  for (Map.Entry<byte[], List<Path>> entry : committedFiles.entrySet()) {
    WALProtos.FlushDescriptor.StoreFlushDescriptor.Builder builder =
        WALProtos.FlushDescriptor.StoreFlushDescriptor.newBuilder()
        .setFamilyName(ByteStringer.wrap(entry.getKey()))
        .setStoreHomeDir(Bytes.toString(entry.getKey())); //relative to region
    if (entry.getValue() != null) {
      for (Path path : entry.getValue()) {
        builder.addFlushOutput(path.getName());
      }
    }
    desc.addStoreFlushes(builder);
  }
  return desc.build();
}
 

Author: grokcoder | Project: pbase | Lines: 23 | Source: ProtobufUtil.java

Example 23: toRegionEventDescriptor

Likes: 3

import org.apache.hadoop.hbase.util.ByteStringer; // import the required package/class
public static RegionEventDescriptor toRegionEventDescriptor(
    EventType eventType, HRegionInfo hri, long seqId, ServerName server,
    Map<byte[], List<Path>> storeFiles) {
  RegionEventDescriptor.Builder desc = RegionEventDescriptor.newBuilder()
      .setEventType(eventType)
      .setTableName(ByteStringer.wrap(hri.getTable().getName()))
      .setEncodedRegionName(ByteStringer.wrap(hri.getEncodedNameAsBytes()))
      .setLogSequenceNumber(seqId)
      .setServer(toServerName(server));

  for (Map.Entry<byte[], List<Path>> entry : storeFiles.entrySet()) {
    StoreDescriptor.Builder builder = StoreDescriptor.newBuilder()
        .setFamilyName(ByteStringer.wrap(entry.getKey()))
        .setStoreHomeDir(Bytes.toString(entry.getKey()));
    for (Path path : entry.getValue()) {
      builder.addStoreFile(path.getName());
    }

    desc.addStores(builder);
  }
  return desc.build();
}
 

Author: grokcoder | Project: pbase | Lines: 23 | Source: ProtobufUtil.java

Example 24: regionSequenceIdsToByteArray

Likes: 3

import org.apache.hadoop.hbase.util.ByteStringer; // import the required package/class
/**
 * @param regionLastFlushedSequenceId the flushed sequence id of a region which is the min of its
 *          store max seq ids
 * @param storeSequenceIds column family to sequence Id map
 * @return Serialized protobuf of <code>RegionSequenceIds</code> with pb magic prefix prepended
 *         suitable for use to filter wal edits in distributedLogReplay mode
 */
public static byte[] regionSequenceIdsToByteArray(final Long regionLastFlushedSequenceId,
    final Map<byte[], Long> storeSequenceIds) {
  ZooKeeperProtos.RegionStoreSequenceIds.Builder regionSequenceIdsBuilder =
      ZooKeeperProtos.RegionStoreSequenceIds.newBuilder();
  ZooKeeperProtos.StoreSequenceId.Builder storeSequenceIdBuilder =
      ZooKeeperProtos.StoreSequenceId.newBuilder();
  if (storeSequenceIds != null) {
    for (Map.Entry<byte[], Long> e : storeSequenceIds.entrySet()){
      byte[] columnFamilyName = e.getKey();
      Long curSeqId = e.getValue();
      storeSequenceIdBuilder.setFamilyName(ByteStringer.wrap(columnFamilyName));
      storeSequenceIdBuilder.setSequenceId(curSeqId);
      regionSequenceIdsBuilder.addStoreSequenceId(storeSequenceIdBuilder.build());
      storeSequenceIdBuilder.clear();
    }
  }
  regionSequenceIdsBuilder.setLastFlushedSequenceId(regionLastFlushedSequenceId);
  byte[] result = regionSequenceIdsBuilder.build().toByteArray();
  return ProtobufUtil.prependPBMagic(result);
}
 

Author: grokcoder | Project: pbase | Lines: 28 | Source: ZKUtil.java

Example 25: callExecService

Likes: 3

import org.apache.hadoop.hbase.util.ByteStringer; // import the required package/class
@Override
protected Message callExecService(Descriptors.MethodDescriptor method, Message request,
    Message responsePrototype) throws IOException {
  if (LOG.isTraceEnabled()) {
    LOG.trace("Call: " + method.getName() + ", " + request.toString());
  }
  final ClientProtos.CoprocessorServiceCall call =
      ClientProtos.CoprocessorServiceCall.newBuilder()
          .setRow(ByteStringer.wrap(HConstants.EMPTY_BYTE_ARRAY))
          .setServiceName(method.getService().getFullName()).setMethodName(method.getName())
          .setRequest(request.toByteString()).build();
  CoprocessorServiceResponse result =
      ProtobufUtil.execRegionServerService(connection.getClient(serverName), call);
  Message response = null;
  if (result.getValue().hasValue()) {
    response =
        responsePrototype.newBuilderForType().mergeFrom(result.getValue().getValue()).build();
  } else {
    response = responsePrototype.getDefaultInstanceForType();
  }
  if (LOG.isTraceEnabled()) {
    LOG.trace("Result is value=" + response);
  }
  return response;
}
 

Author: grokcoder | Project: pbase | Lines: 26 | Source: RegionServerCoprocessorRpcChannel.java

Example 26: sum

Likes: 3

import org.apache.hadoop.hbase.util.ByteStringer; // import the required package/class
private Map<byte [], Long> sum(final Table table, final byte [] family,
    final byte [] qualifier, final byte [] start, final byte [] end)
throws ServiceException, Throwable {
  return table.coprocessorService(ColumnAggregationProtos.ColumnAggregationService.class,
      start, end,
    new Batch.Call<ColumnAggregationProtos.ColumnAggregationService, Long>() {
      @Override
      public Long call(ColumnAggregationProtos.ColumnAggregationService instance)
      throws IOException {
        CoprocessorRpcUtils.BlockingRpcCallback<ColumnAggregationProtos.SumResponse> rpcCallback =
            new CoprocessorRpcUtils.BlockingRpcCallback<>();
        ColumnAggregationProtos.SumRequest.Builder builder =
          ColumnAggregationProtos.SumRequest.newBuilder();
        builder.setFamily(ByteStringer.wrap(family));
        if (qualifier != null && qualifier.length > 0) {
          builder.setQualifier(ByteStringer.wrap(qualifier));
        }
        instance.sum(null, builder.build(), rpcCallback);
        return rpcCallback.get().getSum();
      }
    });
}
 

Author: apache | Project: hbase | Lines: 23 | Source: TestCoprocessorEndpoint.java

Example 27: getUserPermissions

Likes: 3

import org.apache.hadoop.hbase.util.ByteStringer; // import the required package/class
/**
 * A utility used to get permissions for selected namespace.
 * <p>
 * It's also called by the shell, in case you want to find references.
 *
 * @param protocol the AccessControlService protocol proxy
 * @param namespace name of the namespace
 * @throws ServiceException
 */
public static List<UserPermission> getUserPermissions(RpcController controller,
    AccessControlService.BlockingInterface protocol,
    byte[] namespace) throws ServiceException {
  AccessControlProtos.GetUserPermissionsRequest.Builder builder =
      AccessControlProtos.GetUserPermissionsRequest.newBuilder();
  if (namespace != null) {
    builder.setNamespaceName(ByteStringer.wrap(namespace));
  }
  builder.setType(AccessControlProtos.Permission.Type.Namespace);
  AccessControlProtos.GetUserPermissionsRequest request = builder.build();
  AccessControlProtos.GetUserPermissionsResponse response =
      protocol.getUserPermissions(controller, request);
  List<UserPermission> perms = new ArrayList<>(response.getUserPermissionCount());
  for (AccessControlProtos.UserPermission perm: response.getUserPermissionList()) {
    perms.add(toUserPermission(perm));
  }
  return perms;
}
 

Author: apache | Project: hbase | Lines: 28 | Source: AccessControlUtil.java

Example 28: getDataToWriteToZooKeeper

Likes: 2

import org.apache.hadoop.hbase.util.ByteStringer; // import the required package/class
/**
 * Creates the labels data to be written to zookeeper.
 * @param existingLabels
 * @return Bytes form of labels and their ordinal details to be written to zookeeper.
 */
public static byte[] getDataToWriteToZooKeeper(Map<String, Integer> existingLabels) {
  VisibilityLabelsRequest.Builder visReqBuilder = VisibilityLabelsRequest.newBuilder();
  for (Entry<String, Integer> entry : existingLabels.entrySet()) {
    VisibilityLabel.Builder visLabBuilder = VisibilityLabel.newBuilder();
    visLabBuilder.setLabel(ByteStringer.wrap(Bytes.toBytes(entry.getKey())));
    visLabBuilder.setOrdinal(entry.getValue());
    visReqBuilder.addVisLabel(visLabBuilder.build());
  }
  return ProtobufUtil.prependPBMagic(visReqBuilder.build().toByteArray());
}
 

Author: fengchen8086 | Project: ditb | Lines: 16 | Source: VisibilityUtils.java

Example 29: getUserAuthsDataToWriteToZooKeeper

Likes: 2

import org.apache.hadoop.hbase.util.ByteStringer; // import the required package/class
/**
 * Creates the user auth data to be written to zookeeper.
 * @param userAuths
 * @return Bytes form of user auths details to be written to zookeeper.
 */
public static byte[] getUserAuthsDataToWriteToZooKeeper(Map<String, List<Integer>> userAuths) {
  MultiUserAuthorizations.Builder builder = MultiUserAuthorizations.newBuilder();
  for (Entry<String, List<Integer>> entry : userAuths.entrySet()) {
    UserAuthorizations.Builder userAuthsBuilder = UserAuthorizations.newBuilder();
    userAuthsBuilder.setUser(ByteStringer.wrap(Bytes.toBytes(entry.getKey())));
    for (Integer label : entry.getValue()) {
      userAuthsBuilder.addAuth(label);
    }
    builder.addUserAuths(userAuthsBuilder.build());
  }
  return ProtobufUtil.prependPBMagic(builder.build().toByteArray());
}
 

Author: fengchen8086 | Project: ditb | Lines: 18 | Source: VisibilityUtils.java

