
I have the following code that I'm trying to improve:

BigDecimal total = entity.getAssociate().stream().map(Associates::getPropertyA)
    .reduce(BigDecimal.ZERO, BigDecimal::add);
total = entity.getAssociate().stream().map(Associates::getPropertyB)
    .reduce(total, BigDecimal::add);
total = entity.getAssociate().stream().map(Associates::getPropertyC)
    .reduce(total, BigDecimal::add);
total = entity.getAssociate().stream().map(Associates::getPropertyD)
    .reduce(total, BigDecimal::add);

It works, but it really feels like there is a better way of doing this. Can anybody enlighten me on the matter?

Henkes

5 Answers


If all these properties are of the same type (it seems they are all BigDecimal), you can use flatMap to create a single Stream of them all and then reduce it to the total sum:

BigDecimal total = 
    entity.getAssociate()
          .stream()
          .flatMap(a -> Stream.of(a.getPropertyA(), a.getPropertyB(), a.getPropertyC(), a.getPropertyD()))
          .reduce(BigDecimal.ZERO, BigDecimal::add);
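
For readers who want to try this end to end, here is a minimal self-contained sketch; the Associates class and the values below are hypothetical stand-ins for the OP's entity, not code from the question:

import java.math.BigDecimal;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Stream;

public class FlatMapSumDemo {

    // Hypothetical stand-in for the OP's Associates type, just for this demo
    static class Associates {
        private final BigDecimal a, b, c, d;
        Associates(String a, String b, String c, String d) {
            this.a = new BigDecimal(a);
            this.b = new BigDecimal(b);
            this.c = new BigDecimal(c);
            this.d = new BigDecimal(d);
        }
        BigDecimal getPropertyA() { return a; }
        BigDecimal getPropertyB() { return b; }
        BigDecimal getPropertyC() { return c; }
        BigDecimal getPropertyD() { return d; }
    }

    public static void main(String[] args) {
        List<Associates> associates = Arrays.asList(
                new Associates("1.10", "2.20", "3.30", "4.40"),
                new Associates("0.01", "0.02", "0.03", "0.04"));

        // flatten the four properties of every associate into one stream, then sum
        BigDecimal total = associates.stream()
                .flatMap(x -> Stream.of(x.getPropertyA(), x.getPropertyB(),
                                        x.getPropertyC(), x.getPropertyD()))
                .reduce(BigDecimal.ZERO, BigDecimal::add);

        System.out.println(total); // prints 11.10
    }
}
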
Eran

You can simply chain-add all properties inside map:

BigDecimal total = entity.getAssociate().stream()
            .map(a -> a.getPropertyA()
                    .add(a.getPropertyB())
                    .add(a.getPropertyC())
                    .add(a.getPropertyD()))
            .reduce(BigDecimal.ZERO, BigDecimal::add);

Mind that this changes the order in which the numbers are added.

Manos Nikolaidis
  • Mind you, changing the order *shouldn't* matter in the case given. Addition should be commutative on *most* types (eg, 1 + 2 = 2 + 1). But naturally it's up to the author to determine if addition is commutative on their type (eg, string concatenation is not commutative: "a" + "b" != "b" + "a"). Notably floating point can have some weirdness with order even though addition on numbers is commutative in theory. – Kat Jul 10 '17 at 19:08
  • @Kat `BigDecimal` in Java is an arbitrary precision floating point number and the weirdness you mention is exactly why I made the comment. – Manos Nikolaidis Jul 10 '17 at 19:36
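
To make the comment thread above concrete: with plain BigDecimal::add, as used in all the answers here, the sum is exact and the order cannot change the result. Order starts to matter once a limited-precision MathContext is involved. A small illustrative sketch with made-up numbers:

import java.math.BigDecimal;
import java.math.MathContext;

public class AdditionOrderDemo {
    public static void main(String[] args) {
        MathContext mc = new MathContext(3);        // keep only 3 significant digits
        BigDecimal big   = new BigDecimal("1000");
        BigDecimal small = new BigDecimal("4");

        // big value first: each small contribution is rounded away immediately
        BigDecimal bigFirst = big.add(small, mc).add(small, mc);

        // small values first: they accumulate before meeting the big value
        BigDecimal smallsFirst = small.add(small, mc).add(big, mc);

        System.out.println(bigFirst);    // 1.00E+3
        System.out.println(smallsFirst); // 1.01E+3
    }
}

Here the small values vanish when they meet the large one individually, but survive when they are accumulated first.
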

If you could add the following method to the Associates class:

public BigDecimal getSubtotal() {
    return propertyA.add(propertyB).add(propertyC).add(propertyD);
}

Then, doing the task would be easy:

BigDecimal total = entity.getAssociate().stream()
    .map(Associates::getSubtotal)
    .reduce(BigDecimal::add)
    .orElse(BigDecimal.ZERO);
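
A small variation worth noting: since BigDecimal.ZERO is a true identity for add, the reduce/orElse pair can also be collapsed into the identity-taking overload of reduce, which avoids the Optional:

BigDecimal total = entity.getAssociate().stream()
    .map(Associates::getSubtotal)
    .reduce(BigDecimal.ZERO, BigDecimal::add);
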
fps

The words "better" or "best" should refer to some metric. Performance? Readability? Elegance?

The answer by Eran shows one approach, namely creating small streams containing the property values A, B, C and D for each associate, and flat-mapping these values into a larger stream. The order of summation in this approach is

A0 + B0 + C0 + D0  +  A1 + B1 + C1 + D1  + ... +  An + Bn + Cn + Dn

Another option would be to create individual streams of the properties A, B, C and D, and to concatenate these streams before applying the reduction. This could be done with nested Stream#concat calls, but more elegantly and flexibly using flatMap with an identity function:

Stream<BigDecimal> stream = Stream.of(
    entity.getAssociate().stream().map(Associates::getPropertyA),
    entity.getAssociate().stream().map(Associates::getPropertyB),
    entity.getAssociate().stream().map(Associates::getPropertyC),
    entity.getAssociate().stream().map(Associates::getPropertyD))
    .flatMap(Function.identity());

BigDecimal total = stream.reduce(BigDecimal.ZERO, BigDecimal::add);
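
For comparison, the nested Stream#concat version mentioned above could look roughly like this; it yields the same property-by-property ordering:

Stream<BigDecimal> concatenated = Stream.concat(
    Stream.concat(
        entity.getAssociate().stream().map(Associates::getPropertyA),
        entity.getAssociate().stream().map(Associates::getPropertyB)),
    Stream.concat(
        entity.getAssociate().stream().map(Associates::getPropertyC),
        entity.getAssociate().stream().map(Associates::getPropertyD)));

BigDecimal total = concatenated.reduce(BigDecimal.ZERO, BigDecimal::add);
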

The key point is that in this case, the summation order is

A0 + A1 + ... + An + B0 + B1 + ... + Bn + C0 + C1 + ... + Cn + D0 + D1 + ... + Dn

(It may not make a large difference in practice. But it is conceptually different from the approaches proposed so far in terms of the summation order, and thus may be worth mentioning as one option. Additionally, it is the closest to the approach you currently use, but without the broken identity value in the reduction.)

Marco13
  • The metric I needed was actually elegance, which translates to readability in my case. I forgot to mention that the order of addition was not important, so I went with the shortest solution. – Henkes Jul 07 '17 at 17:56

Or simply a forEach:

BigDecimal[] total = new BigDecimal[] { BigDecimal.ZERO };
entity.getAssociate().stream().forEach(a -> {
    total[0] = total[0].add(a.getPropertyA());
    // ... and so on for all others
});

As a side note, your current implementation is subtly wrong, since reusing the running total as the identity value violates the identity requirement of the reduction.
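
To see why the identity matters: reduce(identity, op) requires that op(identity, t) equals t for every element t, and when the stream is split the identity is applied once per chunk. A tiny illustrative sketch with made-up values:

List<BigDecimal> values = Arrays.asList(
    BigDecimal.ONE, BigDecimal.ONE, BigDecimal.ONE, BigDecimal.ONE);
BigDecimal seed = BigDecimal.TEN; // not an identity for add

BigDecimal sequential = values.stream().reduce(seed, BigDecimal::add);         // 14
BigDecimal parallel   = values.parallelStream().reduce(seed, BigDecimal::add); // typically larger, e.g. 44

With a parallel stream the seed is used as the starting value of each chunk and therefore gets added several times; only a true identity such as BigDecimal.ZERO is safe there.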

Eugene
  • Well, in case I do deserve a down vote here, a comment would also be appreciated. – Eugene Jul 05 '17 at 12:36
  • It wasn't me who downvoted. I presume putting a `BigDecimal` into an array as a workaround for it not being (effectively) final isn't very popular with people? The downvoter could enlighten us with his/her reasons. – Manos Nikolaidis Jul 05 '17 at 13:17
  • Maybe whoever downvoted it just doesn't like this approach. :) But it did solve the OP's problem; you have my vote. – holi-java Jul 05 '17 at 13:37
  • This is what you deserve, because it really solves the OP's problem. If it didn't, you wouldn't have my vote. :) – holi-java Jul 05 '17 at 13:40
  • I don't like this approach because it works by means of side-effects, which I tend to avoid when working with streams. Despite this, I haven't downvoted. Besides, I think you don't need the `stream()` call, since `getAssociate()` seems to return a list or a collection, over which you might call `forEach` directly. – fps Jul 05 '17 at 16:38
  • @FedericoPeraltaSchaffner you're right about side-effects here, obviously; still, there are comments like this: https://stackoverflow.com/questions/33557251/java-lambda-expression-mapping-and-then-modifying-a-list/33557322#comment54894478_33557322 that sort of make this a *mainly*, not *exclusively*, rule. – Eugene Jul 05 '17 at 18:00
  • @FedericoPeraltaSchaffner also, from my point of view this is a safe side-effect (taking sequential consistency into account) - all I care about is that *all* additions happen on the variable, I don't care about the *order* in which they are done. This slightly reminds me of relaxed atomics... – Eugene Jul 05 '17 at 18:02
  • And `BigDecimal` is immutable, so no worries, you are correct. I have upvoted because this is a correct answer, even though I don't like its approach very much ;) A `HoldingConsumer` that makes the sum and keeps the subtotal in a field would be better, or prettier, I think... – fps Jul 05 '17 at 18:20
  • @FedericoPeraltaSchaffner note that `forEach()` can only be useful by means of side-effects. But I would avoid such a solution too. – Didier L Jul 05 '17 at 19:48
  • I downvoted. The question asks to do something in a certain way. To me, this means "I know what I have at my disposal (reduce), I don't know how to use it". So while I agree this answer effectively tackles the question, it's not answering the question as intended. Secondly this answer introduces a burden that is handling a reference (the array in this occurrence, but that could be anything, even an `AtomicReference`). Speaking of which, an `AtomicReference` would have been much better to handle proper parallelism, should it have been needed. All in all, it's a very poor stream usage. – Olivier Grégoire Jul 06 '17 at 07:17
  • @OlivierGrégoire thank you, this is my favorite down vote to date, because of the proper reasoning. – Eugene Jul 06 '17 at 07:19
  • @OlivierGrégoire generally, reduce would be much worse than the reference handling: reduce would need to create immutable objects on each of the accumulator and combiner phases. – Eugene Jul 06 '17 at 07:22
  • @Eugene If only you had used a custom collector instead of `forEach`... (a sketch of a collector-based and an `AtomicReference`-based variant follows after this comment thread) – fps Jul 06 '17 at 11:59
  • @FedericoPeraltaSchaffner :) you seem to care more about this than I do... – Eugene Jul 06 '17 at 12:01
  • @Eugene This is long for a comment, but I'll do my best... Your code is correct. It works both for sequential and for parallel streams. It has a side effect, of course. Actually, side effects is what makes it work. My concern with this approach has nothing to do with your particular solution for this question. The problem, as I see it, has more to do with the language and its verbosity and lack of expressiveness for common tasks such as summing `BigDecimal`s. As the language has these weaknesses, different tricks and hacks arise, in order to solve common programming problems easily... – fps Jul 06 '17 at 14:05
  • @Eugene ... Java has shown this pattern historically. Even today there are many questions in job interviews that rely on some of these tricks and hacks. And the interviewer will be expecting that you know these tricks, otherwise you won't get the job. Now, you are a very clever and experienced programmer, and you know exactly what you are doing. You know when a side effect is harmless and when it must not be used. I've seen many of your answers, and I've found them inspiring both as a curiosity and for my day-to-day work. And I've been programming for more than 25 years... – fps Jul 06 '17 at 14:10
  • @Eugene Now consider what happens when a novice or a junior or even some semi senior developer comes to SO and sees your answers. They say *Aha! This is the trick I was looking for!* And they copy your answer verbatim and adapt it to their code. And they say to themselves *Today I've learned something new. This guy Eugene from Eastern Europe is using this trick, so why would I not use it as well?* But they don't have all your knowledge and experience. They can't discern whether side effects are harmless for their solution or not. They don't know anything about interference... – fps Jul 06 '17 at 14:16
  • @Eugene ... What is more, they then parallelize the stream because it's more efficient, they say, and nasty and unexpected bugs occur. They even say *But I'm smart and know what I'm doing, so I'll use an `AtomicReference` instead of an array.* And they don't have the least idea about the crappy code they have developed. Because they don't have the experience to know that a good parallel solution requires low thread contention. They don't even know what contention actually means. But, as you, Eugene, have used this trick here on SO, they will take it for good for all possible scenarios... – fps Jul 06 '17 at 14:27
  • @Eugene Now, I know that you're not responsible for the crappy code of other developers out there. It's just that you are surely going to be taken as an expert, and your code will be used all over the world as an expert's code, but will actually introduce hard-to-detect bugs. My point is that we should try to save less experienced developers a headache. I've also shown tricks and hacks here, and I've also used other people's hacks. And I've introduced bugs and learned the hard way... So that's why I now try to respond without tricks, though I don't always succeed... – fps Jul 06 '17 at 14:43
  • Let us [continue this discussion in chat](http://chat.stackoverflow.com/rooms/148525/discussion-between-eugene-and-federico-peralta-schaffner). – Eugene Jul 06 '17 at 16:17
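
Since the comments above bring up both an AtomicReference and a collector as alternatives, here is a rough sketch of what each could look like; this is only an illustration of those suggestions, not code from any of the answers, and it assumes imports of java.util.stream.Collectors and java.util.concurrent.atomic.AtomicReference:

// Collector-based version, using the built-in Collectors.reducing:
// map each associate to its own subtotal, then add the subtotals.
BigDecimal totalViaCollector = entity.getAssociate().stream()
    .collect(Collectors.reducing(
        BigDecimal.ZERO,
        a -> a.getPropertyA().add(a.getPropertyB())
              .add(a.getPropertyC()).add(a.getPropertyD()),
        BigDecimal::add));

// AtomicReference-based version: accumulateAndGet retries atomically,
// so the accumulation stays correct even if the stream were parallel.
AtomicReference<BigDecimal> acc = new AtomicReference<>(BigDecimal.ZERO);
entity.getAssociate().forEach(a ->
    acc.accumulateAndGet(
        a.getPropertyA().add(a.getPropertyB())
         .add(a.getPropertyC()).add(a.getPropertyD()),
        BigDecimal::add));
BigDecimal totalViaAtomic = acc.get();
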