I believe that only outcomes and "user-set attributes" are displayed on a test script specification.
By "user-set attributes" I mean true base attributes and intermediate attributes that you manually set yourself for the purposes of unit testing etc.
A is true if
...B is true and
...C is true
C is true if
...D is true and
...E is true
A is the outcome. C is the intermediate. B, D and E are base attributes.
If you set B, D and E then A, B, D and E will show up in the test script spec, not C.
This is because only B, D and E affect the outcome of the rules. The value of intermediates is never stored in the test script XML or the XDS export from the debugger, simply because it would override the rule logic in the rulebase and cause some very unexpected results when re-imported after the rules have changed! By "storing" B, D and E in the test case, OPA can always infer C (via forward-chaining) once the script is executed.
C itself has no relevance to the test case specification (since it is pointless to send in C if you know B, D and E!)
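The inference described above can be sketched as a toy forward-chaining loop in Python (illustrative only, not OPA itself; all names here are hypothetical): given only the user-set base attributes B, D and E, the intermediate C and the outcome A are re-derived every time the rules run, which is why storing C would be redundant at best and stale at worst.

```python
# Toy forward-chaining sketch (not OPA): intermediates are re-derived
# from user-set base attributes each time the rules run.

def infer(base, rules):
    """Repeatedly apply rules until no new attribute values can be derived."""
    facts = dict(base)
    changed = True
    while changed:
        changed = False
        for attr, condition in rules.items():
            if attr not in facts:
                try:
                    value = condition(facts)
                except KeyError:
                    continue  # premises not yet known; try again next pass
                facts[attr] = value
                changed = True
    return facts

# A is true if B and C; C is true if D and E.
rules = {
    "A": lambda f: f["B"] and f["C"],
    "C": lambda f: f["D"] and f["E"],
}

# Only the user-set base attributes are "stored" in the test case...
facts = infer({"B": True, "D": True, "E": True}, rules)
# ...and C (and the outcome A) are inferred when the script is executed.
print(facts)  # {'B': True, 'D': True, 'E': True, 'C': True, 'A': True}
```

If the rule for C later changes in the rulebase, re-running the script still yields the correct value of C, which would not be the case if a stale value of C had been stored in the test script XML.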
As you know OPM allows you to manually set the value of C. If you do this (for the purposes of unit testing etc) then C becomes "user set" and will appear on the test script specification, since it WILL decide the outcome of the test case.
If you want C to appear in the test script specification but don't want to manually set it, then just set C as an outcome and it will appear (an outcome is always considered relevant, since it is what you want to calculate).
So I don't think it's a bug. Displaying every intermediate in some rulebases (with 1,000s of attributes) would render the specification useless.
Hope this helps... let me know if anything is not clear.
Just to summarize my query:
When the same attribute is used both as an expected outcome and as a set input in different test cases contained within the same test script file, the Test Script Specification tool omits the input version when displaying the detail for that test script.
When you say "As you know OPM allows you to manually set the value of C. If you do this (for the purposes of unit testing etc) then C becomes "user set" and will appear on the test script specification, since it WILL decide the outcome of the test case"
The attribute C is not displayed on the test script specification as it is an outcome attribute. (This is an outcome for some other test case in the same test script)
I've just had a look at a regression test script in 10.4.2 and here are my comments...
Do you have expected values set for the intermediate attributes you've added as expected outcomes? If not, they will not appear in the Test Script Specification. The same is true for top-level goals: they must have expected values set, otherwise they will not appear in the Test Case Specification.
If the expected outcomes have values set for some test cases, but not others, then those expected outcomes will only appear in the Test Case Specification for the particular cases where they do have values set.
The expected value for any outcome is set only when that attribute needs to be displayed in the regression tester report.
In this scenario, I want the intermediate attribute to act as an input for proving some other top-level goal.
As per your comment "If the expected outcomes have values set for some test cases, but not others, then those expected outcomes will only appear in the Test Case Specification for the particular cases where they do have values set", is there an appropriate way to handle this situation?
Is there any specific reason as to why the intermediate attributes are not displayed for the above scenario?
Let me check that I understand correctly... You have an intermediate attribute and you want to manually set its actual value in the test case, rather than letting it be inferred by setting the required base attributes? And you want that intermediate attribute to appear in the Test Script Specification Report, but not in the Test Report? This is exactly how the various reports display by default based on the quick experiment I just did.
I tried it with two regression test cases in the same test script:
Case 1. Actual values set only for base attributes.
Case 2. No actual values set for base attributes. Instead I set actual values for some intermediate attributes. I did not add these intermediate attributes to the Expected Outcomes tab.
(In both cases the expected values were set for the expected outcome attributes.)
Test Script Specification:
Case 1. Report shows base attributes which had actual values set and expected outcomes which had expected values set.
Case 2. Report shows intermediate attributes which had actual values set and expected outcomes which had expected values set.
(i.e. cases 1 and 2 looked different and reported the actual values manually set, regardless of whether they were base attributes or intermediate attributes.)
Test Report:
Case 1. Shows outcome attributes
Case 2. Shows outcome attributes
(i.e. cases 1 and 2 look the same in the Test Report)
1. the sun is shining if
the sky is clear and
the temperature is warm
2. the sky is clear if
the clouds are absent and
the rains are missing
the sun is shining: Top Level Output Attribute
the sky is clear: Intermediate Attribute
the temperature is warm, the rains are missing, the clouds are absent: Base Level Attributes
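These two rules can be modelled with a rough Python sketch (illustrative only, not OPA): when the intermediate "the sky is clear" is set manually, the base attributes behind it are never consulted.

```python
# Toy model of the two rules above (illustrative, not OPA).

def sky_is_clear(facts):
    # A manually set ("user set") value overrides inference from base attributes.
    if "the sky is clear" in facts:
        return facts["the sky is clear"]
    return facts["the clouds are absent"] and facts["the rains are missing"]

def sun_is_shining(facts):
    return sky_is_clear(facts) and facts["the temperature is warm"]

# Setting the intermediate directly means "the clouds are absent" and
# "the rains are missing" are never needed to prove the top-level goal:
facts = {"the sky is clear": True, "the temperature is warm": True}
print(sun_is_shining(facts))  # True
```

This is the "user set" situation discussed earlier in the thread: the manually set intermediate stands in for its base attributes and decides the outcome.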
I have a test script with, say, n test cases. The test cases have "the sun is shining" and "the sky is clear" added in the Expected Outcomes tab.
A few of the test cases test the attribute "the sun is shining" and the others test "the sky is clear".
Consider Test Case 1: To test attribute "the sun is shining"
For the above test case, I provided values for the attributes "the temperature is warm" and "the sky is clear".
The attribute "the sky is clear" is set manually and not derived using base attributes.
Now, when you 'View the Test Script Specification' for Test Case 1, only "the temperature is warm" is displayed as input; the attribute "the sky is clear" is not displayed at all.
In order to add the attribute "the sky is clear" to the test script specification, one of two techniques can be used:
1. Remove attribute "the sky is clear" from the expected outcomes tab
If this approach is followed, for the other test cases in the test script, attribute "the sky is clear" is not visible in the regression tester report.
2. Set the expected outcome value for attribute "the sky is clear" even if only attribute "the sun is shining" needs to be tested.
If this approach is followed, then attribute "the sky is clear" is visible in the regression tester report even for the test cases where it is not required.
But, what is the approach to be followed if we want both the attributes 'the sun is shining' & 'the sky is clear' as EXPECTED outcomes in a single Test Script file?
I didn't realise you were trying to make the same inferred attribute both an Expected Outcome and a manually set (artificial) base attribute in the same test script. This seems a bit odd, and also counter to good practice. Regression test cases should generally rely on manually setting base attributes, not inferred attributes.
Assuming you have a good reason for manually setting actual values for inferred attributes, then those particular test cases should probably be put into a separate test script anyway. Are you aware that a single Test Report can be generated from multiple test script files? Same is true for the Test Script Specification. So if your desire to have a single test script file is because you want to generate one Test Report and one Test Script Specification, you can do this easily right now:
- For Test Reports: Reports | Run Multiple Test Scripts | Select test script files to be combined into one Test Report
- For Test Script Specification: Reports | View Test Script Specification | Select Combine all test scripts into one report
I don't believe there is any product bug here. I think what you're seeing is expected behaviour based on the usual way to use regression tester.