I don't quite get the concept of block compression in Hadoop. Let's say I have 1 GB of data that I want to write as a block-compressed SequenceFile, with the default HDFS block size of 128 MB.
Does that mean my data gets split into 8 compressed blocks on HDFS, and that each of these blocks can later be decompressed independently?
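To be concrete, this is roughly how I would write such a file. It's just a minimal sketch of what I mean by "block-compressed SequenceFile": the path, key/value types, and codec are placeholders, not my actual setup.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.io.compress.DefaultCodec;

    public class BlockCompressedWriter {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Path path = new Path("/data/example.seq"); // placeholder path

            SequenceFile.Writer writer = null;
            try {
                writer = SequenceFile.createWriter(conf,
                        SequenceFile.Writer.file(path),
                        SequenceFile.Writer.keyClass(LongWritable.class),
                        SequenceFile.Writer.valueClass(Text.class),
                        // BLOCK compression: records are buffered and compressed in groups
                        SequenceFile.Writer.compression(
                                SequenceFile.CompressionType.BLOCK, new DefaultCodec()));

                // ... append ~1 GB worth of key/value records ...
                writer.append(new LongWritable(1L), new Text("example record"));
            } finally {
                IOUtils.closeStream(writer);
            }
        }
    }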