
JUnit: test that functions were performed in a specific order

968411 Newbie
Hi,

Given the following class...

class Foobar {
    public static void main(String[] args) { ... }
    public void A() { ... }
    public void B() { ... }
    public void C() { ... }
    public void D() { ... }
}

how do I write a JUnit test for this requirement: "Upon calling 'main', perform the following functions in order: A, B, C, D."

I am defining "in order" as: A starts and completes before B ever starts, B starts and completes before C ever starts, etc.

Right now my test looks like this...

@Test
public void testOrder() {
    Foobar.main(new String[0]);

    final Object[] expectedOrder = ...; // something to indicate [A, B, C, D]
    final Object[] actualOrder = getPerformedOrder();
    assertArrayEquals(expectedOrder, actualOrder);
}

I'm not sure how to implement the "getPerformedOrder()" part. I'm thinking about using AspectJ to catch the beginning/end of method calls, but I'm wondering if Java already has something else that I'm missing.

Is there an easy way to see what date/time a function was called? Or somehow determine what order the methods were called in?


Thank you.
  • 1. Re: JUnit: test that functions were performed in a specific order
    TPD-Opitz-Consulting-com Expert
    965408 wrote:
    Is there an easy way to see what date/time a function was called? Or somehow determine what order the methods were called in?
    Not sure if JMock could help with this but what I'd do is something like this:
    @Test
    public void testOrder() {
        final List<String> actualOrder = new ArrayList<String>();
        Foobar testDummy = new Foobar() {
            @Override
            public void A() { actualOrder.add("A"); }
            @Override
            public void B() { actualOrder.add("B"); }
        };
        testDummy.compoundMethodExtractedFromMain(new String[] {"arguments", "list"});
        assertEquals("methods called in correct order", "[A, B]", actualOrder.toString());
    }
    On the other hand, tests should cover business requirements, and the order of method calls doesn't look like one....
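
    By the way, a mocking library can verify call order directly. Here is a sketch using Mockito (rather than JMock) — it assumes, as above, that main's body has been extracted into an overridable instance method, reusing the hypothetical name compoundMethodExtractedFromMain:

    import static org.mockito.Mockito.inOrder;
    import static org.mockito.Mockito.spy;

    import org.junit.Test;
    import org.mockito.InOrder;

    public class FoobarOrderTest {
        @Test
        public void callsMethodsInOrder() {
            // Spy on a real Foobar: the production logic still runs,
            // but Mockito records every call for later verification.
            Foobar foobar = spy(new Foobar());

            foobar.compoundMethodExtractedFromMain(new String[0]);

            // InOrder fails the test if the recorded calls did not
            // happen in exactly this sequence.
            InOrder order = inOrder(foobar);
            order.verify(foobar).A();
            order.verify(foobar).B();
            order.verify(foobar).C();
            order.verify(foobar).D();
        }
    }

    Note that InOrder only checks the relative order of invocations; for single-threaded code that also means A has completed before B starts.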

    bye
    TPD
  • 2. Re: JUnit: test that functions were performed in a specific order
    968411 Newbie
    Thanks for the quick response TPD.

    Yeah, I guess this would be more of a functional requirement than a business requirement? Maybe even more specific than a functional requirement? Either way it's the customer telling us how to code, which is always fun. :-D

    Yeah, I totally forgot about even just sub-classing it. So at least that gives me an option besides AspectJ.

    But still wondering if there's a more black-boxed approach I can take (i.e. without adding sub-classed code to the code I'm testing).

    I'm still wondering if there's some Java library that lets one see what methods were called and when (without having to add too many third-party jars to my run).

    Thanks again.
  • 3. Re: JUnit: test that functions were performed in a specific order
    TPD-Opitz-Consulting-com Expert
    iAmSoFoobarred wrote:
    Yeah, I totally forgot about even just sub-classing it.
    Sometimes the woods are hard to see among the trees... ;o)
    But still wondering if there's a more black-boxed approach I can take (i.e. without adding sub-classed code to the code I'm testing).
    This is a contradiction!
    If you do a blackbox test you don't know what methods exist.
    How do you prove the correct order of something you don't even know exists?

    Also: this kind of JUnit test is not supposed to be a "blackbox" test. It is only supposed not to depend (in compilation terms) on the implementation of the tested code. (In logical terms, tests always depend on the tested code, since they shall fail if the tested code produces wrong results...)

    bye
    TPD
  • 4. Re: JUnit: test that functions were performed in a specific order
    gimbal2 Guru
    iAmSoFoobarred wrote:
    But still wondering if there's a more black-boxed approach I can take (i.e. without adding sub-classed code to the code I'm testing).
    Does there need to be? The case you have is code which functions in a specific way, and you need to add additional logic to it to be able to properly track and trace it in a unit-testing environment. You have two options:

    a) modify the program logic to keep track of that information even when it is not used in the application code itself
    b) subclass in the unit-test code path so you can hook the methods and add the information that you need.

    I see b) as the only real option, as the extra tracking information is part of the unit tests. And subclassing works because this just happens to be an object-oriented language.
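
    A minimal sketch of option b), again assuming main's body has been extracted into an overridable instance method (the hypothetical compoundMethodExtractedFromMain from earlier in the thread); each hook records its name and then delegates to the real implementation, so the production logic still runs:

    import static org.junit.Assert.assertEquals;

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    import org.junit.Test;

    public class FoobarHookTest {
        @Test
        public void methodsRunInOrder() {
            final List<String> calls = new ArrayList<String>();
            // Test-only subclass: record the call, then run the real logic.
            Foobar foobar = new Foobar() {
                @Override public void A() { calls.add("A"); super.A(); }
                @Override public void B() { calls.add("B"); super.B(); }
                @Override public void C() { calls.add("C"); super.C(); }
                @Override public void D() { calls.add("D"); super.D(); }
            };

            foobar.compoundMethodExtractedFromMain(new String[0]);

            assertEquals(Arrays.asList("A", "B", "C", "D"), calls);
        }
    }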
  • 5. Re: JUnit: test that functions were performed in a specific order
    968411 Newbie
    Thanks for the responses TPD and gimbal2.

    TPD, I agree it's a contradiction. As you figured, I'm just trying to make sure my test isn't running logic (e.g. the sub-classing/hook code) that helps me cheat the test, even if it is just simple print statements or what-not.

    I was looking for something like what gimbal2 mentioned in part (a), BUT that is also part of the already-delivered Java code [so, once again, I avoid running extra test-related code]. (Maybe something in the stack-trace machinery that's always keeping track of all method calls ever made? Maybe I need to ask this in a different thread?)

    Of course, I might just be overkilling this.

    Thanks again.
  • 6. Re: JUnit: test that functions were performed in a specific order
    rp0428 Guru
    >
    how do I write a JUnit test for this requirement: "Upon calling 'main', perform the following functions in order: A, B, C, D."
    >
    You don't - that isn't how tests are done. Tests are implemented to test functionality and business logic that already exists.

    What purpose would it serve to hard-code calls to A, B, C and D in a test if a user of your code could call those methods in any desired order?
    That wouldn't test anything.

    If those four methods are ONLY supposed to be called in order you would make them private and write a new method E that calls the methods in the order required by your business rule.

    Then you would write a JUnit test to test method E.

    Don't embed business rules in your tests. The business rules should be implemented in the code you are testing.
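
    A sketch of that refactoring — the step bodies and the returned log are invented here purely so the test has an observable result to assert:

    // Foobar.java
    class Foobar {
        private final StringBuilder log = new StringBuilder(); // invented observable state

        public static void main(String[] args) {
            new Foobar().E();
        }

        // A..D are now private: callers can no longer invoke them out of order.
        private void A() { log.append("A"); }
        private void B() { log.append("B"); }
        private void C() { log.append("C"); }
        private void D() { log.append("D"); }

        // E embodies the business rule: the steps always run in this order.
        public String E() {
            A();
            B();
            C();
            D();
            return log.toString();
        }
    }

    // FoobarETest.java
    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class FoobarETest {
        @Test
        public void eRunsTheStepsInTheRequiredOrder() {
            assertEquals("ABCD", new Foobar().E());
        }
    }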
  • 7. Re: JUnit: test that functions were performed in a specific order
    TPD-Opitz-Consulting-com Expert
    rp0428 wrote:
    [...]Tests are implemented to test functionality and business logic that already exists.
    No.
    http://en.wikipedia.org/wiki/Test-driven_development
    Don't embed business rules in your tests. The business rules should be implemented in the code you are testing.
    But each test should clearly state which business rule it is testing.

    bye
    TPD
  • 8. Re: JUnit: test that functions were performed in a specific order
    968411 Newbie
    But wouldn't the "ensure methods are called in this order" test have some value in preventing future developers from unknowingly breaking the code (i.e. changing the order of the methods)? They would see that the test suddenly fails and then realize what they've done.

    Or is the better approach to figure out "why" that specific order is necessary and then write a test based on that?

    Thanks again.
  • 9. Re: JUnit: test that functions were performed in a specific order
    TPD-Opitz-Consulting-com Expert
    iAmSoFoobarred wrote:
    But wouldn't the "ensure methods are called in this order" test have some value in preventing future developers from unknowingly breaking the code (i.e. changing the order of the methods)? They would see that the test suddenly fails and then realize what they've done.
    No.
    This is the unwanted testing of an implementation detail. Let's suppose you chose an inappropriate name (which <tt>A()</tt> is, since it does not follow Java naming conventions...) and another coder wants to change it. She must also change your test. (OK, this is not so hard given modern IDEs' refactorings, but I think you get my point...)
    Or is the better approach to figure out "why" that specific order is necessary and then write a test based on that?
    Yes.
    Because you could produce the same result without doing a method call at all (which would be bad programming style but still legal...)
    Calling a method on an object must have an effect, either on the object's own state or on the state of another object used by your test dummy. When unit testing, you usually mock those "third party" objects used by your test dummy for two reasons: to control their behavior as it influences the tested unit, and to check their resulting states. Setting up mock objects is way easier, faster, and more reliable than setting up, let's say, test data in a database...
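
    A sketch of that idea. The Printer collaborator, the constructor injection, and the "... done" messages are all invented for illustration; the hand-rolled fake records what the unit did to it:

    // Printer.java -- hypothetical collaborator interface
    public interface Printer {
        void print(String line);
    }

    // FoobarEffectTest.java
    import static org.junit.Assert.assertEquals;

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    import org.junit.Test;

    public class FoobarEffectTest {

        /** Hand-rolled fake: records every line instead of printing it. */
        static class RecordingPrinter implements Printer {
            final List<String> lines = new ArrayList<String>();
            public void print(String line) { lines.add(line); }
        }

        @Test
        public void stepsLeaveTheExpectedTraceOnTheCollaborator() {
            RecordingPrinter printer = new RecordingPrinter();

            // Assumes a Foobar whose constructor takes the collaborator and
            // whose steps each call printer.print("... done") -- invented here.
            new Foobar(printer).E();

            assertEquals(Arrays.asList("A done", "B done", "C done", "D done"),
                    printer.lines);
        }
    }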

    bye
    TPD
  • 10. Re: JUnit: test that functions were performed in a specific order
    rp0428 Guru
    >
    No.
    http://en.wikipedia.org/wiki/Test-driven_development
    >
    Yes - you cannot test that which does not exist. Test-driven development is not the panacea or 'one-size-fits-all' solution that some people like to believe it is. And in order to try to conform to some of its rules it requires that you contort and distort the very meaning of well-understood terms.

    I assume one of the things you refer to is this statement
    >
    In test-driven development, each new feature begins with writing a test. This test must inevitably fail because it is written before the feature has been implemented
    >
    Well, that statement assumes you have some sort of reference to this 'new feature'. But the 'feature' has to exist in some form in order to be referenced. The very meaning of 'test' is an action or decision made on something that preexists. You can't test air for its oxygen content if there isn't something called air that exists and has attributes.

    Science fiction will let you do that. You can imagine something called air and write an imaginary test that tests the imaginary air. But that is a fiction that, in test-driven development, is conveniently ignored by stating that the 'test must inevitably fail because it is written before the feature has been implemented.' No - the test must fail because there is nothing that exists yet to test.

    Otherwise you would not need to say 'each new feature begins with writing a test'. One test that tests nothing and fails would be all you need for any new feature: a universally failing test would satisfy that condition.

    That article also states this
    >
    In test-driven development a developer creates automated unit tests that define code requirements
    >
    If it does that is certainly the wrong approach. Tests should not dictate the code requirements. It isn't appropriate to write a test that dictates whether a parameter to a method is a primitive or an object or an array of objects. The test should be written to validate that the business rules that apply to that parameter are being followed: the nullability of the parameter, min/max values and so on. But the test shouldn't be specifying the actual datatype.

    And this quote is almost laughable
    >
    This is a differentiating feature of test-driven development versus writing unit tests after the code is written: it makes the developer focus on the requirements before writing the code, a subtle but important difference.
    >
    No it doesn't. In my experience, many people trying to use test-driven development blindly write test cases without having vetted the requirements at all. They write tests that purport to test features that end up not being related to the requirements at all.

    The most important, and still mostly overlooked, step is to define the requirements before you do anything, test writing or code writing.

    If you review the most current literature you will find that we are slowly coming full circle to what we used to do all the time in the mainframe development area: flow-charting and pseudo-coding of the flowchart. Your doc link even includes a flowchart.

    These are the historical steps that were typically followed

    1. Define the requirements (first draft attempt)
    2. Flow-chart the logic flow that meets those requirements
    3. Pseudo-code the flowchart - equivalent to method calls with simple return statements or loops with a NULL statement.
    4. Begin coding based on the pseudo-code
    5. Test after the first round of coding is done.
    6. Go back to step 3 or 4 based on test results.


    Test-driven development, when properly implemented, should basically swap steps 4 and 5

    4. write test cases based on the pseudo-code or actual code implementation
    5. write code based on the pseudo-code
    6. go back to step 4 and retest

    Sometimes you have to go back to step 1 and modify the requirements. Sometimes you have to just go back to step 3 and revise the flowchart (add additional paths or exception handling).

    So even for test-driven development it should be requirements first, flow-chart/logic diagram next, pseudo-code next AND THEN TEST CASES.
    The test cases may not be built on much at first (pseudo-code with empty methods) but they are based on SOMETHING and they do NOT contain business logic; they test business logic.

    If you look at it from that perspective, then the OP's question has four dummy pseudo-coded methods A, B, C, D that each perform some unstated business function. Another business function is to call those methods in order. You would create new pseudo-code for that: method E, which calls methods A, B, C, D in order. Then you would write a test case for method E. And that test case should SUCCEED, not fail. Those mentioned 'mock' objects are basically equivalent to pseudo-code; that is a way to get the 'something' that is needed to base a test on.

    Then as you flesh out the actual code for methods A, B, C, D (which should have their own initial tests) you perform ALL tests and make sure that all tests SUCCEED.

    The goal is that from beginning to end: ALL TESTS SHOULD SUCCEED.

    That whole idea of test-driven development to start out with your tests failing is the biggest bunch of malarkey I've ever seen. As I state above that is a contrivance to make the definition work.
  • 11. Re: JUnit: test that functions were performed in a specific order
    TPD-Opitz-Consulting-com Expert
    rp0428 wrote:
    Yes - you cannot test that which does not exist.
    This depends on what you call a "failing test". In TDD, a test that has compile errors because the unit to be tested does not yet exist is a failing test. So yes: I can write a test that has compile errors to express what interface my unit should expose.
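
    A minimal illustration of that red-to-green step, reusing the ordering requirement from this thread (the method name E follows the suggestion above; the expected "ABCD" result is invented):

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class FoobarTddTest {

        // Red: written first. This does not even compile until Foobar.E()
        // exists -- in TDD terms, that already counts as a failing test.
        @Test
        public void eRunsTheStepsInOrder() {
            assertEquals("ABCD", new Foobar().E());
        }

        // Green: next, write the simplest Foobar.E() that makes this pass,
        // then refactor while keeping the test green.
    }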
    Test-driven development is not the panacea or 'one-size-fits-all' solution that some people like to believe it is.
    And in order to try to conform to some of its rules it requires that you contort and distort the very meaning of well-understood terms.
    It's true that, with TDD, the code you write to confirm the compliance of your production code with the requirements should not be called "tests" anymore, to avoid this fussiness. It should be called "fixtures" instead. But both the framework and the developers have a history, and that makes it hard to change this.
    I assume one of things you refer to is this statement
    >
    In test-driven development, each new feature begins with writing a test. This test must inevitably fail because it is written before the feature has been implemented
    >
    Well, that statement assumes you have some sort of reference to this 'new feature'. But the 'feature' has to exist in some form in order to be referenced. The very meaning of 'test' is an action or decision made on something that preexists. You can't test air for its oxygen content if there isn't something called air that exists and has attributes.
    But I can create a test (or fixture) for the amount of oxygen in the air before the device that mixes more or less oxygen into the air exists. And that's the point: I'm testing (or better: defining a requirement in an executable form for) the mixing device yet to be built.
    Otherwise you would not need to say 'each new feature begins with writing a test'. One test that tests nothing and fails would be all you need for any new feature: a universally failing test would satisfy that condition.
    But such a test would never pass. And that's the next step in TDD: make your production code pass the new test (without breaking others).
    That article also states this
    >
    In test-driven development a developer creates automated unit tests that define code requirements
    >
    If it does that is certainly the wrong approach. Tests should not dictate the code requirements. It isn't appropriate to write a test that dictates whether a parameter to a method is a primitive or an object or an array of objects. The test should be written to validate that the business rules that apply to that parameter are being followed: the nullability of the parameter, min/max values and so on. But the test shouldn't be specifying the actual datatype.
    You're right: in TDD, a "test" is just another expression of a requirement we got from the customer. The special thing about a "test" in TDD is that it is executable and that its execution ends with a decision on whether the very specific aspect of the requirements is correctly implemented.
    "Correctly" means: what the writer of the test thought was meant by the existing requirement documentation.
    And this quote is almost laughable
    >
    This is a differentiating feature of test-driven development versus writing unit tests after the code is written: it makes the developer focus on the requirements before writing the code, a subtle but important difference.
    >
    No it doesn't. In my experience, many people trying to use test-driven development blindly write test cases without having vetted the requirements at all. They write tests that purport to test features that end up not being related to the requirements at all.
    OK, there are a lot of people hitting their thumb rather than the nail. Should we not use hammers at all?

    It's true that every tool works best when used with "sense and sanity". And the average developer is as fallible as any other craftsman on earth.
    The most important, and still mostly overlooked, step is to define the requirements before you do anything, test writing or code writing.
    Full agreement!
    If you review the most current literature you will find that we are slowly coming full circle to what we used to do all the time in the mainframe development area: flow-charting and pseudo-coding of the flowchart. Your doc link even includes a flowchart.

    These are the historical steps that were typically followed

    1. Define the requirements (first draft attempt)
    2. Flow-chart the logic flow that meets those requirements
    3. Pseudo-code the flowchart - equivalent to method calls with simple return statements or loops with a NULL statement.
    4. Begin coding based on the pseudo-code
    5. Test after the first round of coding is done.
    6. Go back to step 3 or 4 based on test results.
    Test-driven development, when properly implemented, should basically swap steps 4 and 5
    Again: full agreement!

    >
    4. write test cases based on the pseudo-code or actual code implementation
    5. write code based on the pseudo-code
    6. go back to step 4 and retest

    Sometimes you have to go back to step 1 and modify the requirements. Sometimes you have to just go back to step 3 and revise the flowchart (add additional paths or exception handling).

    So even for test-driven development it should be requirements first, flow-chart/logic diagram next, pseudo-code next AND THEN TEST CASES.
    The test cases may not be built on much at first (pseudo-code with empty methods) but they are based on SOMETHING and they do NOT contain business logic; they test business logic.
    Yes.
    If you look at it from that perspective, then the OP's question has four dummy pseudo-coded methods A, B, C, D that each perform some unstated business function. Another business function is to call those methods in order. You would create new pseudo-code for that: method E, which calls methods A, B, C, D in order. Then you would write a test case for method E. And that test case should SUCCEED, not fail. Those mentioned 'mock' objects are basically equivalent to pseudo-code; that is a way to get the 'something' that is needed to base a test on.
    I could have written my sample test based only on the prose description of the problem. That test would not compile, but in terms of TDD a non-compilable test means a <i>failed test</i>.
    Then as you flesh out the actual code for methods A, B, C, D (which should have their own initial tests) you perform ALL tests and make sure that all tests SUCCEED.

    The goal is that from beginning to end: ALL TESTS SHOULD SUCCEED.
    Yes, once you've finished implementing the current part of the requirement, before starting the next.
    That whole idea of test-driven development to start out with your tests failing is the biggest bunch of malarkey I've ever seen. As I state above that is a contrivance to make the definition work.
    No.
    It's not meant to be a contrivance to make the definition work; it's meant as its extension.
    How far would we have come if our ancestors hadn't learned to make fire because they saw others burn their fingers?

    Software developers are craftsmen (not artists). We choose the tools that we think will help us with our work.
    TDD is just another tool in our toolbox, nothing more and nothing less.

    Your post shows that you have all the knowledge needed to successfully do TDD.
    Sadly you chose to decline.
    But that's OK, its a free world... ;o)

    For me TDD works because it helps me to focus on the requirements and (e.g.) to avoid overengineering.

    bye
    TPD
  • 12. Re: JUnit: test that functions were performed in a specific order
    gimbal2 Guru
    OK, there are a lot of people hitting their thumb rather than the nail. Should we not use hammers at all?
    He he, I've been using that one for a long time now :)

    The answer: people who hit their thumb constantly without learning from it should not be allowed to hold a hammer.
  • 13. Re: JUnit: test that functions were performed in a specific order
    rp0428 Guru
    >
    Your post shows that you have all the knowledge needed to successfully do TDD.
    Sadly you chose to decline.
    >
    Not at all! We are pretty much in agreement as to the process that should be followed.

    As you have correctly pointed out, my main issue is with some of the terminology used and with the view, held by some, that tests are appropriate and can be constructed without having any actual object to test. In TDD projects that I manage we always review the process to be used and discuss it pretty thoroughly whenever any new hires are brought on board, so all of these 'terminology' issues are pretty well vetted then.

    IMO a 'test' (e.g. a JUnit test) is a separate and distinct entity and it needs a second separate and distinct entity to perform the 'test' on. I do not agree that a test that won't compile is a valid test at all; the test can't even be run if it won't compile. Otherwise you would only need one 'universal' syntax-ridden piece of code that could be used as the initial test for literally anything. That doesn't make any real sense to me.

    So, yes, my perspective is sometimes at odds with the literal 'reading of the rules' of TDD but not with the goal and basic process. At its most basic, for a new project you could use NetBeans to create a new project with one class that has nothing but an empty 'main' method and write the initial test on that. But in the process we use, that test should SUCCEED; it successfully executes the entire application from beginning to end without error.
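
    A sketch of that starting point — the test class name is invented; the point is that the very first test passes by running the whole (still empty) application:

    import org.junit.Test;

    public class SmokeTest {
        // The entire application runs end to end without throwing:
        // the most basic 'succeeding' initial test.
        @Test
        public void mainRunsWithoutError() {
            Foobar.main(new String[0]);
        }
    }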

    And then we go from there expanding both the codebase and the testbase pretty much in lock step as the pseudo-code/mocks are fleshed out.

    I appreciate you taking the time to try to understand what I was really trying to say rather than just seeing the words I used to say it. To me, that's the mark of an experienced professional. That's the most valuable kind of thinking you can have on any project.

    Of course, that is just my opinion! :D
  • 14. Re: JUnit: test that functions were performed in a specific order
    TPD-Opitz-Consulting-com Expert
    rp0428 wrote:
    IMO a 'test' (e.g. a JUnit test) is a separate and distinct entity and it needs a second separate and distinct entity to perform the 'test' on. I do not agree that a test that won't compile is a valid test at all; the test can't even be run if it won't compile.
    This is the point where we differ.
    Otherwise you would only need one 'universal' syntax-ridden piece of code that could be used as the initial test for literally anything. That doesn't make any real sense to me.
    The aim is always to make our production code pass all tests at the end of the day. And how would you make such a 'universal' test pass?
    So, yes, my perspective is sometimes at odds with the literal 'reading of the rules' of TDD but not with the goal and basic process. At its most basic, for a new project you could use NetBeans to create a new project with one class that has nothing but an empty 'main' method and write the initial test on that. But in the process we use, that test should SUCCEED; it successfully executes the entire application from beginning to end without error.
    The point of having a failing test before the implementation of a business rule is: this makes sure I'm really testing the work on that business rule. And I have the test-results history to prove that. How do you prove what your tests test?
    I appreciate you taking the time to try to understand what I was really trying to say rather than just seeing the words I used to say it.
    You made an effort to write that stuff, so it's a question of politeness to take it seriously. And maybe I might learn something this way... ;o)

    bye
    TPD