Matt Archer's Blog


15 controversial testing tweets from #CAST2012

Posted by Matt Archer on July 24, 2012

I wasn’t able to attend the Conference of the Association for Software Testing (CAST), but I did follow the event closely on Twitter (#CAST2012). A few times each day during the event, I browsed the #CAST2012 stream and added any tweets that caught my eye to my list of favourites. When the conference ended, I looked back through those favourites and something stood out. To many of the Testers, Test Managers and even Heads of Testing that I meet, a large majority of the tweets would be seen as controversial. Not because they are crazy ideas (far from it, at least from my perspective), but because they go against what many people within testing still believe are our “best practices”.

I happen to agree with all of the tweets that I have listed at the bottom of this post. What do you think? Do the tweets sit well with your testing beliefs or do they leave you feeling uneasy? To put it another way, how many of the statements that I have written immediately below do you (or those around you) believe to be true? I ask this because the tweets that follow place each of these statements in doubt (as a minimum, questioning the universally applicable manner in which these types of statement are often described or suggested).

Pitting each of the statements below against the tweets I’ve selected would consume pages of discussion. That is not my intention here. Instead, I encourage anyone who finds themselves reading one of the statements below and thinking “I assumed every tester believed that; that’s the best practice” to take 15 minutes from their day and begin to research. Research the person who published the tweet and the people they reference. Do this to understand why their views of testing differ from your own, and ultimately to follow Christin Wiedemann’s advice: “Question your approach. Continuously challenge and question methods, techniques, and core beliefs. @c_wiedemann #CAST2012” (tweeted by Claire Moss (@aclairefication) at #CAST2012).

How many of the following statements do you believe to be true?

1. You should estimate the number of tests required for a release.  Once complete, you should report the number of tests passed and failed.

2. Create automated tests to replace your manual tests

3. Learn how to programme (write code) to further your testing career

4. Developers can’t find bugs in their own code

5. The expected behaviour of a system is defined by people we broadly refer to as Analysts

6. Adhering to a known testing standard will improve your chances of achieving your testing goal

7. Test automation is a long term investment

8. Outsourcing your testing will reduce your costs

9. When the build is green the product is of sufficient quality to release

10. Introducing a test tool will help you manage and improve your testing

11. Whenever possible, you should hire Testers with testing certifications

12. Analyse your bugs and give them relative priorities (e.g. P1, P2, etc.)

13. Create a set of templates that can be used by every project in your organisation

14. Write your automated tests using the same programming language that the developers use

15. Always test against sources of information that have been validated by the customer

Now read the tweets below. Do they alter how you think about the statements above?

Note: It is easy to misinterpret tweets, so if you are the author (or subject) of one of the tweets below and believe I have taken your tweet out of context, please feel free to contact me and I will post an update, or if you prefer, remove the tweet entirely.

1. You should estimate the number of tests required for a release.  Once complete, you should report the number of tests passed and failed.

18 Jul 2012

Alex Bantz ‏@alexjbantz

How many tests? is a silly & useless question. Better question is “How should you test this?” @michaelbolton #criticalthinking #CAST2012

2. Create automated tests to replace your manual tests

18 Jul 2012

Claire Moss ‏@aclairefication

You’ve taught your automation to be impatient? to be frustrated? All these things that people are good at doing? Need manual tests #CAST2012

3. Learn how to programme (write code) to further your testing career

18 Jul 2012

Michael Larsen ‏@mkltesthead

Not every tester should code. We end up taking awesome testers and turn some into awful programmers @testobsessed #CAST2012

4. Developers can’t find bugs in their own code

17 Jul 2012

Thomas Vaniotis ‏@tvaniotis

Devs who investigate even if imperfectly ARE testing. Belittling their contributions reduces one’s credibility #cast2012 @briangerhardt

5. The expected behaviour of a system is defined by people we broadly refer to as Analysts

17 Jul 2012

Claire Moss ‏@aclairefication

We are defining requirements by the act of filing a bug: a difference in the observed behavior and the expectation. @testobsessed #CAST2012

6. Adhering to a known testing standard will improve your chances of achieving your testing goal

17 Jul 2012

Geordie Keitt ‏@geordiekeitt

Standards are not incompatible with thinking. Unfortunately they are not incompatible with not thinking. #cast2012

7. Test automation is a long term investment

17 Jul 2012

Phil McNeely ‏@AdvInTesting

Don’t be afraid to have throwaway automation @mkltesthead #CAST2012

8. Outsourcing your testing will reduce your costs

16 Jul 2012

Michael Bolton ‏@michaelbolton

“Outsourcing often *really means* ‘outsourcing your waste’”. -Tripp Babbitt #CAST2012 #testing

9. When the build is green the product is of sufficient quality to release

16 Jul 2012

Tony Bruce ‏@tonybruce77

@testinggeek Paraphrased ‘Take green as information rather than a good pass’ #cast2012

10. Introducing a test tool will help you manage and improve your testing

18 Jul 2012

Anne-Marie Charrett ‏@charrett

TestLink & QTP encourage bad testing @adamgoucher So true #CAST2012

11. Whenever possible, you should hire Testers with testing certifications

16 Jul 2012

Ben Simo ‏@QualityFrog

“Certification doesn’t create zombie testers; but it is like spray-on credibility for them.” – @BenjaminKelly #CAST2012 #Testing

12. Analyse your bugs and give them relative priorities (e.g. P1, P2, etc.)

16 Jul 2012

Markus Gärtner ‏@mgaertne

Anyone with children knows there is no point in prio 1 or 2 bugs. You eventually have to fix both. @benjaminkelly #cast2012

13. Create a set of templates that can be used by every project in your organisation

16 Jul 2012

Wade Wachs ‏@WadeWachs

One flavor of the testing dead – the template weenie @benjaminkelly #cast2012

14. Write your automated tests using the same programming language that the developers use

16 Jul 2012

Anand and Komal ‏@testinggeek

Selecting tool set and language which make sense – don’t use same language as development for the sake of it – @adamgoucher #CAST2012

15. Always test against sources of information that have been validated by the customer

17 Jul 2012

Jason Coutu ‏@jcoutu

From @testobsessed “just cause we’re making things up, doesn’t mean we’re wrong” #CAST2012

5 Responses to “15 controversial testing tweets from #CAST2012”

  1. Great list and some interesting perspectives. I’d like to add a little bit to the “throwaway automation” comment (Since I made it 😉 ). The idea of throwaway automation isn’t just “write automation and discard it”, it’s to write automation that can take you some place to explore, and then once you have done that, don’t be afraid to tweak it and/or make something different to take it somewhere else.

    Consider it to be the difference between a train and a taxi cab (which is the metaphor I used to describe “throwaway automation”). A train can be very powerful and can move a lot of people or payload, but it is limited to running on the rails it travels on. By contrast, a taxi cab can go anywhere (for a price) and can drop you off and let you explore an idea or location, and then you can call up another taxi to take you somewhere else and explore other areas.

    • Thanks Michael, I really like the way you describe throwaway automation as something you can use to explore some aspects of the system and then once you have finished exploring, tweak the automation to help you explore somewhere else. From a personal perspective, I find this type of throwaway automation much easier (and faster) to create when the team is also using a “train”. I think this is because the train has often already put in place the things that make any journey easier, like good testability or a common way of reporting complex objects from the system in a way that makes them easy to manually explore / investigate.

    • There are several aspects of what one can call “throwaway automation”:
      1. I believe that automation should be handled in 2 parallel processes:
      1.1. Organized automation, creating a common infrastructure available to testers as well as developers – done once, used by many, in many test cases, regressions, CI etc.
      1.2. Ad-hoc “throwaway automation” – made either by the one who needs it, or with the assistance of automation developers, but not necessarily based on the common infrastructure – quick & dirty.
      2. I think not enough testers use automation-aided testing – i.e. as suggested above, making use of the existing automation infrastructure to reach a desired point, to validate a complex expected result, or just to handle any tedious task that comes along in a manual tester’s daily work.
      This can be improved quite easily, especially where a KDT-like (keyword-driven / structured automation) infrastructure exists.
      All it requires is supplying the testers with a simple GUI, enabling them to run:
      2.1 A specific function available in the automation infrastructure (by selecting it from a list and adding the required parameters)
      2.2 A list of functions from an Excel/CSV file (for easy manipulation of complex ad-hoc tasks)
      2.3 A batch of such Excel/CSV files
      2.4 Any of the above, executed N times.
      This would bring the power of automation into manual testers’ hands.

      All these methods should co-exist – it’s not that one is better than the other – each has the right to exist on its own.

      Kobi Halperin (@halperinko)
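The keyword-driven setup Kobi describes – a registry of automation functions that a manual tester can invoke by name, one at a time or as rows of a CSV file, optionally repeated N times – could be sketched roughly like this. All function names and messages here are invented for illustration; a real infrastructure would call into the system under test rather than return strings.

```python
import csv
import io

# Registry mapping keyword names to automation functions.
KEYWORDS = {}

def keyword(fn):
    """Register a function so testers can call it by name."""
    KEYWORDS[fn.__name__] = fn
    return fn

@keyword
def create_user(name):
    # Hypothetical action against the system under test.
    return f"created {name}"

@keyword
def check_balance(account, expected):
    # Hypothetical check against the system under test.
    return f"{account} balance checked against {expected}"

def run_csv(text, repeat=1):
    """Run each 'function,arg1,arg2,...' row; repeat N times (item 2.4)."""
    results = []
    for _ in range(repeat):
        for row in csv.reader(io.StringIO(text)):
            if not row:
                continue
            name, *args = row
            results.append(KEYWORDS[name](*args))
    return results

results = run_csv("create_user,alice\ncheck_balance,alice,100\n")
# → ['created alice', 'alice balance checked against 100']
```

A simple GUI (or even a spreadsheet export) in front of `run_csv` would give manual testers items 2.1 to 2.4 without requiring them to write code.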

  2. Søren Harder said

    I have experienced some very heated debates in testing, and I think that we have to accept there is a rift between the ‘best practice’ and the ‘context-driven’ approach. When I read your blog, I just skimmed it and had a very interesting experience. I first read your list of statements and thought they were the ones you called ‘controversial’, not discovering until later that the controversial list came afterwards. Generally, I find the tweets to be much closer to what we as testers should take as our starting point (except possibly 14 and 9), so I was happy when I thought you were calling these repeat-them-enough-times-and-they-will-be-true axioms “controversial” – and then disappointed when I realised you weren’t. I think it is important for us as a testing profession not just to build a rigid ‘best practice’ from scraps of theoretical project management, but to open up to the lessons-learned approach, where collective experience from a multitude of different actual projects is shared.

  3. “Developers can’t find bugs in their own code”

    I believe that good developers code defensively – they test assumptions and function return values inside their code, and provide clear reasons for anticipated failures so that they and others can quickly understand the cause when something has failed. A good developer, however, could never assume that they have considered all possible uses of their code, and it is for this reason that external testers, with their different point of view, can provide so much value. Even the most vigilant developer may make mistakes – having a safety net in the form of a separate test team allows a second chance to discover problems before the customers do.
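A minimal sketch of the defensive style the comment above describes – checking assumptions up front and giving a clear reason for each anticipated failure. The function, inputs, and error messages are invented for the example:

```python
def average_latency(samples):
    """Return the mean of a list of latency measurements, in ms."""
    # Each check names the assumption it guards and explains the
    # likely cause, so a failure is diagnosable at a glance.
    if not isinstance(samples, list):
        raise TypeError(f"expected a list of numbers, got {type(samples).__name__}")
    if not samples:
        raise ValueError("no samples recorded - was the probe ever started?")
    if any(s < 0 for s in samples):
        raise ValueError("negative latency found - clock skew or a bad probe?")
    return sum(samples) / len(samples)

print(average_latency([10, 20, 30]))  # → 20.0
```

The point is not the checks themselves but the messages: a tester (or the developer, months later) sees *why* the call failed, not just a stack trace.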

Leave a comment