From: andrew cooke <andrew@...>

Date: Fri, 22 Mar 2013 19:00:05 -0300

[This is partly esprit de l'escalier and partly notes to try again on Monday,
because I had this conversation yesterday and completely failed to make my
points.]

So, yesterday, I was talking to someone (let's call him S) about testing.  And
why S's employees didn't do it (or various other "best practices").

S argued that he was making progress, but that it was difficult because adding
extra time for tests, CI, etc. made prices higher, when the client was already
very demanding.

At this point maybe it would have been good to ask S why he thought I wrote tests.
I promise it's not because I find them a good way to spend extra time.  In
fact, I think they help me do my work faster.

Which perhaps sheds some light on S's problem.

If good (excuse my lack of modesty here; I'll return to this below) engineers
use tests to do their work faster then of course the client is going to be
unhappy.  Why are they being charged more for something that means less work?

Now S isn't dumb.  There's a reason he needs more time for tests.  The reason
is that currently his engineers don't use them.  So this time is, effectively,
on-the-job training.  That's the positive take.  The negative take is that the
tests are checklist items that his engineers will do without conviction or
understanding - rote work with little gain.

Which is putting the cart before the horse.  You can't go to a client and say
"my engineers suck, so I am going to charge you extra."  Instead, S needs to
improve his engineers.  And fast.

So what to do?

One thing you can do is ask the engineers why they don't test.  One answer I
got was that there was never enough time.  And I can understand this.  If
you've never used tests it can seem counter-intuitive that they save time.  So
maybe what that engineer needs is the encouragement and freedom to explore a
little: give him some extra time so he can learn to save time in the future.

Now, back to my modesty.  In my previous job I was just an average programmer.
What I was praised for was my moderation - that I knew when to compromise, and
how to find a balance between "best practices" and "getting shit done".

Which is ironic, because yesterday I found myself shouting at S, in the role
of the extremist developer, foaming at the mouth about lack of standards.

On Monday I hope I have the energy to try again.

Andrew

PS. One more statement I forgot to pick up on: "I write tests, but only use
them to solve the problem at the time".  Which is doing 99% of the work for
only 50% of the gain.  So I really need to explain how tests help refactoring.
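To make the refactoring point concrete, here's a minimal sketch (the function and its name are hypothetical, not from any real project): once a test pins down behaviour, you can rewrite the implementation freely and let the test tell you whether you broke anything.

```python
# Hypothetical example: a test that pins behaviour, so the
# implementation can later be refactored with confidence.

def word_counts(text):
    # naive first version: count words with a plain dict
    counts = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return counts

def test_word_counts():
    assert word_counts("the cat the") == {"the": 2, "cat": 1}
    assert word_counts("") == {}

test_word_counts()
```

The point is that the test, not the original code, now defines "correct" - so replacing the body with, say, `collections.Counter` is a safe, seconds-long change rather than a leap of faith.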

### Re: Arguing About Tests, Still

From: Michiel Buddingh <michiel@...>

Date: Fri, 22 Mar 2013 23:40:34 +0100

Experiences may also greatly differ depending on the kind of
programming you do.  Tests are almost indispensable when writing
mathematical code or complex business logic.  They are increasingly
hard to write, for diminishing gains, as the number of side effects
increases.  (Here I define a 'side effect' as a result of the code that
is not, or not consistently, measurable from other code.)

Unit tests aren't going to find a race condition, and they're not
going to pick up browser incompatibilities, or the fact that the UI
suddenly feels 'sluggish' after the latest bugfix.  And there are
many software engineers who spend far more time fixing these 'side
effects' than they spend on traditional bugs.

--
Michiel

### Do What Makes Sense

From: andrew cooke <andrew@...>

Date: Sat, 23 Mar 2013 08:33:52 -0300

Sure, the idea is to do what makes sense.  I said above that simply writing
tests to tick a checkbox is not the answer.

But to do that you need to have some experience.  And to do that you need to
invest some time to learn.  And to get that to happen... well, that's the hard
part, because it involves changing the working culture.

As for where tests help - I don't think there are hard and fast rules there,
either.  You iterate; at the end of a job you can look back and ask what went
wrong and how it could be done better.

On a recent project I was supplying one small part.  It was a cache for a
client-server system (as described at http://www.acooke.org/portfolio/cache/).
I wrote a pile of integration tests that included a script that set up and
pulled down servers.  It worked really well and was a huge help.  But
returning to the project a year later, everything is broken because the rest
of the project has moved on.
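The set-up/pull-down pattern itself is simple enough to sketch.  Here's a hedged illustration (not the project's actual code - a stdlib HTTP server stands in for the real servers): start a live server before the test, run the test against it, and tear it down afterwards no matter what.

```python
# Hypothetical sketch of an integration test harness: bring up a
# real server, exercise it over the wire, then shut it down.
import threading
import urllib.request
from http.server import HTTPServer, BaseHTTPRequestHandler

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"hello")

    def log_message(self, *args):
        pass  # silence per-request logging during tests

def with_server(test):
    # bind to an OS-assigned free port so tests don't collide
    server = HTTPServer(("127.0.0.1", 0), Handler)
    thread = threading.Thread(target=server.serve_forever, daemon=True)
    thread.start()
    try:
        test("http://127.0.0.1:%d/" % server.server_port)
    finally:
        server.shutdown()    # teardown runs even if the test fails
        server.server_close()

def check_get(url):
    assert urllib.request.urlopen(url).read() == b"hello"

with_server(check_get)
```

The fragility I mention above comes from the part this sketch glosses over: when the "server" is the rest of a moving project rather than a stub you control, the harness silently tests yesterday's system.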

So I am asking myself what I could have done better.  End-to-end tests were
really useful, but including other libraries in the tests made them fragile.
Should I have forked the rest of the system?  Used CI to detect changes and
then fix incrementally?  I don't know - there are lots of hard technical
issues (the work is behind a VPN with an uncooperative client).

Related to the above - there's a group benefit to all doing tests.  When
you're the only person in the room using them it's harder (practically and
psychologically) than when you're in a supportive culture.

Andrew