I have a table containing 1MB blobs. Cassandra throws an OutOfMemoryError during a range query.
CREATE TABLE blobs_1 (
    key text,
    version bigint,
    chunk int,
    object_blob blob,
    object_size int,
    PRIMARY KEY (key, version, chunk)
)
Each LOB is spread across roughly 100 chunks. The following query causes the OutOfMemoryError:
SELECT object_size FROM blobs_1 WHERE key = 'key1' AND version = 1;
Here is the error:
java.lang.OutOfMemoryError: Java heap space
    at org.apache.cassandra.io.util.RandomAccessReader.readBytes(RandomAccessReader.java:344)
    at org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:392)
    at org.apache.cassandra.utils.ByteBufferUtil.readWithLength(ByteBufferUtil.java:355)
    at org.apache.cassandra.db.ColumnSerializer.deserializeColumnBody(ColumnSerializer.java:124)
    at org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:85)
    at org.apache.cassandra.db.Column$1.computeNext(Column.java:75)
    at org.apache.cassandra.db.Column$1.computeNext(Column.java:64)
    at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
    at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
    at org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:88)
    at org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:37)
    at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
    at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
    at org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:82)
    at org.apache.cassandra.db.columniterator.LazyColumnIterator.computeNext(LazyColumnIterator.java:82)
    at org.apache.cassandra.db.columniterator.LazyColumnIterator.computeNext(LazyColumnIterator.java:59)
    at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
    at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
    at org.apache.cassandra.db.filter.QueryFilter$2.getNext(QueryFilter.java:157)
    at org.apache.cassandra.db.filter.QueryFilter$2.hasNext(QueryFilter.java:140)
    at org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:144)
    at org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:123)
    at org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:97)
    at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
    at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
    at org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:185)
    at org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:122)
    at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
    at org.apache.cassandra.db.RowIteratorFactory$2.getReduced(RowIteratorFactory.java:101)
    at org.apache.cassandra.db.RowIteratorFactory$2.getReduced(RowIteratorFactory.java:75)
    at org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:115)
    at org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:98)
This happens on 2.0.2. It is frustrating that a single query can crash the server so easily. – user3025533
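The numbers in the question hint at why the heap blows up: the stack trace shows the read path deserializing full column bodies (RandomAccessReader.readBytes via ColumnSerializer.deserializeColumnBody), so the blobs are read even though only object_size is selected. A minimal back-of-envelope sketch, assuming each chunk's object_blob is roughly 1 MB (the question gives "1MB blobs" and "~100 chunks" but not the exact per-chunk size):

```python
# Rough estimate of the data one (key, version) slice pulls into the
# Cassandra heap. ASSUMPTION: ~1 MB of blob per chunk row; the question
# only says the table holds 1MB blobs spread over ~100 chunks.
CHUNK_BLOB_BYTES = 1 * 1024 * 1024   # ~1 MB per chunk (assumption)
CHUNKS_PER_OBJECT = 100              # "each LOB is spread across ~100 chunks"

def slice_bytes(chunks=CHUNKS_PER_OBJECT, chunk_bytes=CHUNK_BLOB_BYTES):
    """Bytes deserialized for one (key, version) slice.

    The blob column is deserialized for every row in the slice, so the
    SELECT object_size query still materializes all the blob data.
    """
    return chunks * chunk_bytes

print(slice_bytes() / (1024 * 1024))  # prints 100.0 (i.e. ~100 MB per query)
```

Under those assumptions, a single query materializes on the order of 100 MB, which can exhaust a modestly sized heap. One possible workaround is to restrict the query to a single clustering key at a time (e.g. adding `AND chunk = 0`), which bounds the data read per request to one chunk row.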