
Typical usage and code examples of the Java ChunkIterator class


This article collects typical usage examples of the Java class org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.ChunkIterator. If you are wondering what ChunkIterator is for, or how to use it in practice, the selected example below may help.

ChunkIterator is an inner class of org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader. One code example of the class is shown below.

Example 1: merge


import org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.ChunkIterator; // import the required class
@Override
public int merge(MergeState mergeState) throws IOException {
  int docCount = 0;
  int idx = 0;

  for (AtomicReader reader : mergeState.readers) {
    final SegmentReader matchingSegmentReader = mergeState.matchingSegmentReaders[idx++];
    CompressingStoredFieldsReader matchingFieldsReader = null;
    if (matchingSegmentReader != null) {
      final StoredFieldsReader fieldsReader = matchingSegmentReader.getFieldsReader();
      // we can only bulk-copy if the matching reader is also a CompressingStoredFieldsReader
      if (fieldsReader instanceof CompressingStoredFieldsReader) { // instanceof already handles null
        matchingFieldsReader = (CompressingStoredFieldsReader) fieldsReader;
      }
    }

    final int maxDoc = reader.maxDoc();
    final Bits liveDocs = reader.getLiveDocs();

    if (matchingFieldsReader == null
        || matchingFieldsReader.getVersion() != VERSION_CURRENT // means reader version is not the same as the writer version
        || matchingFieldsReader.getCompressionMode() != compressionMode
        || matchingFieldsReader.getChunkSize() != chunkSize) { // the way data is decompressed depends on the chunk size
      // naive merge...
      for (int i = nextLiveDoc(0, liveDocs, maxDoc); i < maxDoc; i = nextLiveDoc(i + 1, liveDocs, maxDoc)) {
        Document doc = reader.document(i);
        addDocument(doc, mergeState.fieldInfos);
        ++docCount;
        mergeState.checkAbort.work(300);
      }
    } else {
      int docID = nextLiveDoc(0, liveDocs, maxDoc);
      if (docID < maxDoc) {
        // not all docs were deleted
        final ChunkIterator it = matchingFieldsReader.chunkIterator(docID);
        int[] startOffsets = new int[0];
        do {
          // go to the next chunk that contains docID
          it.next(docID);
          // transform lengths into offsets
          if (startOffsets.length < it.chunkDocs) {
            startOffsets = new int[ArrayUtil.oversize(it.chunkDocs, 4)];
          }
          for (int i = 1; i < it.chunkDocs; ++i) {
            startOffsets[i] = startOffsets[i - 1] + it.lengths[i - 1];
          }

          // decompress
          it.decompress();
          if (startOffsets[it.chunkDocs - 1] + it.lengths[it.chunkDocs - 1] != it.bytes.length) {
            throw new CorruptIndexException("Corrupted: expected chunk size=" + (startOffsets[it.chunkDocs - 1] + it.lengths[it.chunkDocs - 1]) + ", got " + it.bytes.length);
          }
          // copy non-deleted docs
          for (; docID < it.docBase + it.chunkDocs; docID = nextLiveDoc(docID + 1, liveDocs, maxDoc)) {
            final int diff = docID - it.docBase;
            startDocument();
            bufferedDocs.writeBytes(it.bytes.bytes, it.bytes.offset + startOffsets[diff], it.lengths[diff]);
            numStoredFieldsInDoc = it.numStoredFields[diff];
            finishDocument();
            ++docCount;
            mergeState.checkAbort.work(300);
          }
        } while (docID < maxDoc);

        it.checkIntegrity();
      }
    }
  }
  finish(mergeState.fieldInfos, docCount);
  return docCount;
}
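Two pieces of the example can be sketched in isolation: the nextLiveDoc helper, which is referenced above but not shown, and the lengths-to-offsets loop, which is an exclusive prefix sum over per-document byte lengths. The sketch below is illustrative, not Lucene code: LiveBits is a hypothetical stand-in for org.apache.lucene.util.Bits, and the helper names are made up for this demo.

```java
import java.util.Arrays;

// Standalone sketch of two helpers the merge loop relies on.
// LiveBits is a hypothetical stand-in for Lucene's org.apache.lucene.util.Bits.
public class MergeHelpersDemo {
    interface LiveBits { boolean get(int index); }

    // Mirrors what the merge code assumes of nextLiveDoc: return the first
    // live (non-deleted) doc id >= doc, or maxDoc if none remain.
    static int nextLiveDoc(int doc, LiveBits liveDocs, int maxDoc) {
        if (liveDocs == null) {
            return doc; // no deletions in this segment: every doc is live
        }
        while (doc < maxDoc && !liveDocs.get(doc)) {
            ++doc;
        }
        return doc;
    }

    // The "transform lengths into offsets" loop is an exclusive prefix sum:
    // doc i's bytes occupy [startOffsets[i], startOffsets[i] + lengths[i]).
    static int[] toStartOffsets(int[] lengths) {
        int[] startOffsets = new int[lengths.length];
        for (int i = 1; i < lengths.length; ++i) {
            startOffsets[i] = startOffsets[i - 1] + lengths[i - 1];
        }
        return startOffsets;
    }

    public static void main(String[] args) {
        // Docs 0..4, with docs 1 and 3 deleted.
        LiveBits live = i -> i != 1 && i != 3;
        System.out.println(nextLiveDoc(0, live, 5));  // 0 is live
        System.out.println(nextLiveDoc(1, live, 5));  // skips 1, lands on 2
        System.out.println(nextLiveDoc(3, live, 5));  // skips 3, lands on 4

        // Three docs of 3, 5 and 2 bytes start at offsets 0, 3 and 8.
        System.out.println(Arrays.toString(toStartOffsets(new int[] {3, 5, 2})));
    }
}
```

The final consistency check in the example follows directly from this layout: the last doc's start offset plus its length must equal the total decompressed chunk size.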
 

Developer: lamsfoundation
Project: lams
Lines of code: 72
Source file: CompressingStoredFieldsWriter.java


Copyright notice: this article is reposted from the web for knowledge-sharing purposes only; if it infringes your rights, please contact the administrator for removal.