SPARK-1916
The changes could be ported back to 0.9 as well.
Changing in.read to in.readFully to read the whole input stream rather than the first 1020 bytes.
This should be OK considering that Flume caps the body size to 32K by default.
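
For context (not part of the commit itself), here is a minimal sketch of why the two calls behave differently: ObjectInput.read(byte[]) may return after filling only part of the buffer, whereas readFully keeps reading until the buffer is completely filled (or throws EOFException). The ChunkedStream class below is a hypothetical stand-in that serves at most 1020 bytes per read call, mirroring the truncation described above; it is not Spark or Flume code.

import java.io.{ByteArrayInputStream, DataInputStream, InputStream}

// Hypothetical stream that hands back at most `chunk` bytes per read call,
// imitating the behavior that made in.read(bodyBuff) truncate large bodies.
class ChunkedStream(data: Array[Byte], chunk: Int = 1020) extends InputStream {
  private val underlying = new ByteArrayInputStream(data)
  override def read(): Int = underlying.read()
  override def read(b: Array[Byte], off: Int, len: Int): Int =
    underlying.read(b, off, math.min(len, chunk))
}

object ReadVsReadFully {
  def main(args: Array[String]): Unit = {
    val body = Array.fill[Byte](4000)(42)

    // read() returns after the first partial read: only 1020 bytes land here.
    val partial = new Array[Byte](body.length)
    val n = new DataInputStream(new ChunkedStream(body)).read(partial)
    println(s"in.read filled $n of ${body.length} bytes")

    // readFully() loops until the buffer is full, so the whole 4000-byte
    // body arrives even though the stream serves 1020 bytes at a time.
    val full = new Array[Byte](body.length)
    new DataInputStream(new ChunkedStream(body)).readFully(full)
    println(s"in.readFully filled ${full.length} of ${body.length} bytes")
  }
}

Because readFully blocks until the requested number of bytes has been read, event bodies larger than a single underlying read still deserialize intact, which is why the one-line change fixes events bigger than 1020 bytes.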

Author: David Lemieux <[email protected]>

Closes apache#865 from lemieud/SPARK-1916 and squashes the following commits:

a265673 [David Lemieux] Updated SparkFlumeEvent to read the whole stream rather than the first X bytes.
(cherry picked from commit 0b769b7)

Signed-off-by: Patrick Wendell <[email protected]>
David Lemieux authored and conviva-zz committed Sep 4, 2014
1 parent 6ad47a5 commit b048d7c
Showing 1 changed file with 1 addition and 1 deletion.
@@ -63,7 +63,7 @@ class SparkFlumeEvent() extends Externalizable {
  def readExternal(in: ObjectInput) {
    val bodyLength = in.readInt()
    val bodyBuff = new Array[Byte](bodyLength)
-   in.read(bodyBuff)
+   in.readFully(bodyBuff)

    val numHeaders = in.readInt()
    val headers = new java.util.HashMap[CharSequence, CharSequence]
