# Testing Definitions and Standards
Outlined within this document are the definitions and criteria for the various types of testing that we
practice within the CGDS. Note that as a project becomes more public-facing, the degree to which we QA
our code increases.
Coverage Level | Personal | In-House | External | Open Source
--- | --- | --- | --- | ---
Code coverage | NA | Core Components | 60% | 90%
Tests serve two core functions:
1. They help ensure our code does what we want it to do by catching defects.
2. They help protect functionality from being accidentally changed in team settings by informing us
when behavior changes.
A good test should be thought about before the design phase is done and before any coding has started. We want
to catch the greatest number of errors for the least amount of compute resources, developer time, and effort.
A good test will:
* Likely not cover every flow/scenario, but will cover the bounds.
* Cover exceptions, if they are essential and unique to the business logic.
* Ensure complex scenarios are correct, more so than simple logic flows.
## Unit Tests
Unit tests cover the base unit of functional code. They should require minimal setup and run very
fast (< 5 seconds for an entire suite). Unit tests will run on every commit. It is good practice to run
unit tests before pushing code up to our git repository.
A good unit test will:
* Test the essence of a function, not necessarily each component that composes the function
(e.g., for code structure it may make sense to break a function into 5 or 6 sub-functions; testing the core
function should still be the aim rather than writing a test for each sub-function).
* Be independent of other tooling and libraries (otherwise we are testing the integration or the other tooling's code).
* Be very fast.
* Be able to run in parallel.
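As a minimal sketch of what this looks like in practice (assuming pytest; the `coverage_ratio` function is a hypothetical example defined inline so the test is self-contained):

```python
# test_coverage_ratio.py -- hypothetical example; the function under test is defined inline
import pytest

def coverage_ratio(covered_bases: int, total_bases: int) -> float:
    """Return the fraction of bases covered; reject empty regions."""
    if total_bases <= 0:
        raise ValueError("total_bases must be positive")
    return covered_bases / total_bases

def test_coverage_ratio_typical_case():
    assert coverage_ratio(50, 100) == pytest.approx(0.5)

def test_coverage_ratio_bounds():
    # Cover the bounds: nothing covered and fully covered.
    assert coverage_ratio(0, 100) == 0.0
    assert coverage_ratio(100, 100) == 1.0

def test_coverage_ratio_rejects_empty_region():
    # Exceptions essential to the business logic are worth a test of their own.
    with pytest.raises(ValueError):
        coverage_ratio(10, 0)
```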
## Integration Tests
Integration tests are written to provide assurance that multiple modules work together, without needing a
running system to fully verify. Think of integration tests as testing the assembly of functions/units together.
Integration tests usually take longer to run than unit tests due to their complexity, but should still
take minutes to run, as opposed to several dozen minutes or hours. These tests will be run every time
code is pushed to our git repository.
A good integration test will:
* Not depend on external entities being present (i.e., not need to hit a third-party endpoint).
* Verify various state flows.
* Verify pairs of combinations (as opposed to every possible combination).
* Be cohesive with other parts of the code, but flexible enough that it doesn't break when individual
units/functions are swapped out.
* Be able to run in parallel.
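A hedged sketch of an integration test (the `fetch_gene_info` and `annotate_variant` functions are illustrative stand-ins, not real CGDS code) that verifies two units assembled together while mocking the third-party lookup:

```python
# test_annotation_pipeline.py -- illustrative module/function names only
from unittest.mock import patch

def fetch_gene_info(symbol: str) -> dict:
    """Stands in for a call to a third-party REST service in production."""
    raise RuntimeError("network access should be mocked in integration tests")

def annotate_variant(variant: dict) -> dict:
    """Assemble units together: look up the gene, then attach its description."""
    info = fetch_gene_info(variant["gene"])
    return {**variant, "gene_description": info["description"]}

def test_annotate_variant_uses_gene_lookup():
    fake_info = {"description": "tumor protein p53"}
    # Mock the external dependency so the test never hits a third-party endpoint.
    with patch(f"{__name__}.fetch_gene_info", return_value=fake_info) as mock_fetch:
        result = annotate_variant({"gene": "TP53", "position": 7674220})
    mock_fetch.assert_called_once_with("TP53")
    assert result["gene_description"] == "tumor protein p53"
```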
## System Tests
Unlike integration tests, system tests look at functionality from a vantage point external to the code.
As such, system-level tests will likely map to Project Charters, specifically their success criteria and/or
formal requirements. System-level tests do not need to be automated, but for our purposes, they will be automated.
System tests will be run on merge requests within our git repository. It's OK for these to take longer,
upwards of a couple of hours for a full suite. They will run against the target code after it's staged/deployed
to our development cluster.
A good system test will:
* Guard against regressions in the system.
* Ensure project objectives and system requirements are met and retained.
* Require no knowledge of the actual implementation of the code.
* Be easily configurable to run against various deployed environments.
* Reset state upon completion of each test or suite of similar tests.
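A sketch of a system test, assuming the `requests` library plus a hypothetical `CGDS_BASE_URL` environment variable and `/samples` endpoint; the test knows nothing about the implementation and points at whichever environment is configured:

```python
# test_system_sample_intake.py -- sketch only; the base URL, endpoints, and payloads are hypothetical
import os
import uuid

import requests

# Easily configurable to run against various deployed environments (dev, staging, ...).
BASE_URL = os.environ.get("CGDS_BASE_URL", "https://dev.example.org")

def test_sample_intake_round_trip():
    """Exercise a success criterion with no knowledge of the implementation."""
    sample_id = f"system-test-{uuid.uuid4()}"
    try:
        created = requests.post(f"{BASE_URL}/samples", json={"id": sample_id}, timeout=30)
        assert created.status_code == 201

        fetched = requests.get(f"{BASE_URL}/samples/{sample_id}", timeout=30)
        assert fetched.status_code == 200
        assert fetched.json()["id"] == sample_id
    finally:
        # Reset state upon completion, even if an assertion failed above.
        requests.delete(f"{BASE_URL}/samples/{sample_id}", timeout=30)
```

In practice, a pytest fixture would typically own the cleanup so that state is reset even when setup partially fails.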
## Acceptance Tests (UAT)
Acceptance Tests ("User Acceptance Tests") differ from System tests in that, a UAT is required to pass before
a version of code can be signed off on by the users. These tests can be automated, but more likely are manual
tests performed by the users, or inspired/written from the users before a system will be accepted into production.
Think of the System tests as the tests we as developers need to be satisfied our work functions as expected and
that "expected" closely aligns with Users expectations. Think of UAT as tests users need to be convinced our code
does what they want our code to do.
Note, especially within the clinical setting, UAT is also our team's legal defense that our test plan is up to
par with what it needs to be for a clinical process.
A Good UAT will:
* Cover concerns of users
* Ensure the business logic and core functionality works as expected
* Usually will have some overlap with System Tests
* Will have a test log/tangible and auditable process of execution and analysis
## Additional Types of Testing
### API Testing
API testing is a technique for ensuring that developer-facing code is of quality. It can span unit, integration,
and system tests, though integration and system tests likely cover most of it. Unlike the various levels of
coverage highlighted above, API testing inherently covers closer to 100% of the functions and exceptions
that come along with the business logic of an API.
In addition to covering the business logic of a given API, standard protocols should also be tested, such as returning
proper HTTP codes for various inputs/headers/params/etc. As with integration tests, dependencies should be mocked
to avoid calling out to external services.
As with system tests, extended thought should be given to setting up and tearing down tests to avoid state issues.
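A minimal sketch of an API test, assuming Flask and pytest; the `/genes/<symbol>` endpoint and its data are hypothetical stand-ins used to show checking proper HTTP codes:

```python
# test_api_genes.py -- minimal sketch assuming Flask; the /genes endpoint is hypothetical
import pytest
from flask import Flask, jsonify

def create_app():
    app = Flask(__name__)
    genes = {"TP53": {"symbol": "TP53", "name": "tumor protein p53"}}

    @app.route("/genes/<symbol>")
    def get_gene(symbol):
        gene = genes.get(symbol.upper())
        if gene is None:
            return jsonify(error="gene not found"), 404
        return jsonify(gene), 200

    return app

@pytest.fixture
def client():
    # Flask's built-in test client: no server process and no network calls needed.
    return create_app().test_client()

def test_known_gene_returns_200(client):
    response = client.get("/genes/tp53")
    assert response.status_code == 200
    assert response.get_json()["symbol"] == "TP53"

def test_unknown_gene_returns_404(client):
    assert client.get("/genes/NOPE").status_code == 404
```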
### Smoke Tests
A smoke test is a form of system test; it is also a form of end-to-end test. A smoke test is intended to
be a very small test that runs from the beginning to the end of a broad, representative workflow in the system.
It is not intended to catch bugs per se; instead, its goal is to ensure the system was stood up correctly and to give
the green checkmark to proceed with more comprehensive tests (such as running the full system test suite or starting UAT).
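A sketch of a smoke test, assuming `requests`, a `CGDS_BASE_URL` environment variable, and a hypothetical `/health` endpoint on the deployed system:

```python
# test_smoke.py -- sketch; assumes the deployed system exposes a /health endpoint
import os

import requests

BASE_URL = os.environ.get("CGDS_BASE_URL", "https://dev.example.org")

def test_system_is_up():
    """Not meant to catch bugs, only to confirm the system was stood up correctly."""
    response = requests.get(f"{BASE_URL}/health", timeout=10)
    assert response.status_code == 200
```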
### End-To-End Tests
As the name implies, an end-to-end test exercises the system as a whole, from an entry state to a desired end state.
### Equivalence Tests
This is the most common form of testing. It typically involves creating fixtures, that is, known input/output pairs,
and verifying that the executed code produces the expected outputs for the known inputs.
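A small sketch of an equivalence test using pytest's parametrization; the `normalize_chromosome` helper and its fixtures are hypothetical:

```python
# test_equivalence.py -- illustrative fixtures; normalize_chromosome is a hypothetical helper
import pytest

def normalize_chromosome(value: str) -> str:
    """Normalize chromosome labels to a canonical 'chrN' form."""
    value = value.strip().lower()
    return value if value.startswith("chr") else f"chr{value}"

# Known inputs paired with their expected outputs (the fixtures).
FIXTURES = [
    ("1", "chr1"),
    ("chr1", "chr1"),
    (" X ", "chrx"),
    ("MT", "chrmt"),
]

@pytest.mark.parametrize("raw,expected", FIXTURES)
def test_normalize_chromosome(raw, expected):
    assert normalize_chromosome(raw) == expected
```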
### State Transition Testing
This form of testing lives somewhere between integration and system tests. It's an excellent design choice to make
a state diagram showing the various ways state can be reached and traversed. Such a diagram also provides a natural
basis for writing end-to-end tests.
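A sketch of a state transition test, using a hypothetical sample-processing state diagram encoded as a table of legal transitions:

```python
# test_state_transitions.py -- the sample-processing states and transitions are hypothetical
import pytest

TRANSITIONS = {
    # current state -> states we may legally move to (taken from the state diagram)
    "received": {"sequencing"},
    "sequencing": {"analysis", "failed"},
    "analysis": {"reported", "failed"},
    "reported": set(),
    "failed": set(),
}

def advance(state: str, target: str) -> str:
    if target not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {target}")
    return target

def test_happy_path_traverses_diagram():
    state = "received"
    for target in ["sequencing", "analysis", "reported"]:
        state = advance(state, target)
    assert state == "reported"

def test_illegal_transition_is_rejected():
    with pytest.raises(ValueError):
        advance("received", "reported")
```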
### Pairwise Tests
This is a type of testing that looks at combinations of data. Instead of writing tests to cover every combination of
data, a subset of combinations is chosen so that each option is paired with every other option at least once.
An example would be writing only 8 tests for the following scenario: our code has 4 sets of data; data set A has 4
options (A1, A2, A3, A4), data set B has 2 options (B1, B2), data set C has 2 options (C1, C2), and data set D has
2 options (D1, D2).
Instead of writing all 32 combinations (4 × 2 × 2 × 2) as tests:
```
A1:B1:C1:D1    A2:B1:C1:D1    A3:B1:C1:D1    A4:B1:C1:D1
A1:B2:C1:D1    A2:B2:C1:D1    A3:B2:C1:D1    A4:B2:C1:D1
A1:B1:C2:D1    A2:B1:C2:D1    A3:B1:C2:D1    A4:B1:C2:D1
A1:B2:C2:D1    A2:B2:C2:D1    A3:B2:C2:D1    A4:B2:C2:D1
A1:B1:C1:D2    A2:B1:C1:D2    A3:B1:C1:D2    A4:B1:C1:D2
A1:B2:C1:D2    A2:B2:C1:D2    A3:B2:C1:D2    A4:B2:C1:D2
A1:B1:C2:D2    A2:B1:C2:D2    A3:B1:C2:D2    A4:B1:C2:D2
A1:B2:C2:D2    A2:B2:C2:D2    A3:B2:C2:D2    A4:B2:C2:D2
```
We'd write 8 tests that ensure every pair of options appears together at least once:
```
A1:B1:C1:D1
A1:B2:C2:D2
A2:B1:C2:D1
A2:B2:C1:D2
A3:B1:C1:D2
A3:B2:C2:D1
A4:B1:C1:D1
A4:B2:C2:D2
```
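A sketch of how these 8 pairwise cases could be encoded with pytest's parametrization; `run_pipeline` is a hypothetical stand-in for the code under test:

```python
# test_pairwise.py -- the pairwise cases from above; run_pipeline is a placeholder
import pytest

PAIRWISE_CASES = [
    ("A1", "B1", "C1", "D1"),
    ("A1", "B2", "C2", "D2"),
    ("A2", "B1", "C2", "D1"),
    ("A2", "B2", "C1", "D2"),
    ("A3", "B1", "C1", "D2"),
    ("A3", "B2", "C2", "D1"),
    ("A4", "B1", "C1", "D1"),
    ("A4", "B2", "C2", "D2"),
]

def run_pipeline(a, b, c, d):
    # Placeholder for the real code under test.
    return {"a": a, "b": b, "c": c, "d": d, "ok": True}

@pytest.mark.parametrize("a,b,c,d", PAIRWISE_CASES)
def test_pipeline_handles_pairwise_combinations(a, b, c, d):
    assert run_pipeline(a, b, c, d)["ok"]
```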