Automated Tests Use Case with CMC


Working on the CMC Sensor Monitoring project, we found the test utilities in Visual Studio to be very helpful. One of the components of the CMC system, a web service, was used by gateways on the local network. The gateways relayed data from a series of sensors and uploaded that data to the web service. The QA team had a difficult time testing the web service component since there was no UI through which to initiate tests. We considered using Fiddler to perform the tests but quickly realized that it was difficult to script and to achieve adequate coverage with. Furthermore, Fiddler proved time consuming for first-time users to learn. As a result, we opted to use the test framework in Visual Studio.

Unit tests are often associated with the Visual Studio Testing Framework (VSTF). In fact, a variety of tests can be automated using VSTF, including integration, coded UI, ordered, and generic tests. Since we needed to test the web service as experienced by the gateway, we opted to create automated integration tests.

The gateways were expected to interact with a set of endpoints on the web service and change state based on these interactions. We used VSTF to script these interactions, which resulted in a set of ten tests.

These tests were not only run periodically during development by one of the team's developers; they were also run before any deployment to QA or UAT.

Test Details

The tests were focused on covering all the available endpoints. Further, following the ideology of integration testing, we wanted to test the full stack from request submission to response.

Using VSTF, we created a test class along with a test method for each test.
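A minimal sketch of what such a test class looks like in VSTF (MSTest). The class, method, and endpoint names here are illustrative assumptions, not taken from the actual project:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Illustrative helper for building endpoint URLs against a placeholder host.
public static class Endpoints
{
    public const string BaseUrl = "https://example.test/api";
    public static string Heartbeat { get { return BaseUrl + "/heartbeat"; } }
    public static string Readings { get { return BaseUrl + "/readings"; } }
}

[TestClass]
public class GatewayEndpointTests
{
    [TestMethod]
    public void Heartbeat_Endpoint_Responds()
    {
        // ... Arrange, Act, Assert against Endpoints.Heartbeat ...
    }

    [TestMethod]
    public void Readings_Endpoint_Accepts_Upload()
    {
        // ... one test method per endpoint interaction ...
    }
}
```

One test method per endpoint interaction keeps each failure report pointed at a single behavior.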

The tests required test data to run against: test instances of a gateway, sensors, and the full graph of objects supporting them. A script was created to activate a test data set before the tests ran and to deactivate it when the tests were completed.
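The activation and deactivation steps can be wired into the MSTest class lifecycle. A sketch, assuming a TestData helper that stands in for the project's actual data script (which it is not; the real script worked against the database):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Illustrative stand-in for the activation/deactivation script; the real
// project toggled rows in a database rather than an in-memory flag.
public static class TestData
{
    public static bool Active { get; private set; }
    public static void SetActive(bool on) { Active = on; }
}

[TestClass]
public class WebServiceTests
{
    // Runs once before any test in the class: activate the test data set.
    [ClassInitialize]
    public static void ActivateTestData(TestContext context)
    {
        TestData.SetActive(true);
    }

    // Runs once after all tests in the class: deactivate the test data set.
    [ClassCleanup]
    public static void DeactivateTestData()
    {
        TestData.SetActive(false);
    }
}
```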

The individual tests were arranged using the standard Arrange, Act, Assert sequence. Generally speaking, test data was retrieved and objects were created in the Arrange step. Requests were sent and responses were captured in the Act step. The responses were analyzed in the Assert step. If an error code was received, or the application entered an unexpected state, the Assert step triggered a test failure.

Since we were testing a live web service, asynchronous handling was required to send requests and receive responses:

using System.Threading.Tasks;

All the requests were sent asynchronously and the responses were captured using the Task-based await mechanism.
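Put together, a single test looked roughly like the following sketch, which shows both the Arrange/Act/Assert sequence and the awaited request. The URL, payload shape, and method name are assumptions for illustration:

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class SensorUploadTests
{
    [TestMethod]
    public async Task Upload_Reading_Returns_Success()
    {
        // Arrange: create the client and a sample reading (hypothetical payload)
        using (var client = new HttpClient())
        {
            var payload = new StringContent("{\"sensorId\":1,\"value\":42}");

            // Act: send the request asynchronously and await the response
            HttpResponseMessage response =
                await client.PostAsync("https://example.test/api/readings", payload);

            // Assert: any error status code fails the test
            Assert.IsTrue(response.IsSuccessStatusCode,
                "Unexpected status: " + (int)response.StatusCode);
        }
    }
}
```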

In some cases, the tests were scripted to access multiple endpoints and change state in the same way the gateways were expected to. Using this approach, we were able to mimic the process by which a gateway interacts with the web service. For example, there was a routine the gateway had to follow to access the web service: it first sent its ID in encrypted form using a common key, then used the response to retrieve further data. After performing a series of steps against the endpoints, the gateway had all the data it needed to pass authentication. It then sent a message to the web service in encrypted form using a private key. This somewhat complicated interaction was scripted in its entirety in one of the tests. Receiving a ‘pass’ on this test offered a fair amount of confidence that the web service was running well (at least in part).
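The multi-step interaction above can be sketched as a single sequential test. Everything here is an assumption for illustration: the endpoint paths, the key names, and especially the Encrypt helper, which is just a Base64 placeholder and not the real cipher:

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class GatewayHandshakeTests
{
    const string BaseUrl = "https://example.test/api"; // placeholder host
    const string GatewayId = "GW-001";                 // hypothetical test gateway
    const string CommonKey = "common-key";             // stand-in key material
    const string PrivateKey = "private-key";           // stand-in key material

    // Placeholder for the real encryption routine: tags the data with the
    // key and Base64-encodes it, purely so the sketch compiles and runs.
    public static string Encrypt(string data, string key)
    {
        return Convert.ToBase64String(Encoding.UTF8.GetBytes(key + ":" + data));
    }

    [TestMethod]
    public async Task Full_Authentication_Sequence_Succeeds()
    {
        using (var client = new HttpClient())
        {
            // Step 1: send the gateway ID encrypted with the shared (common) key
            var idResponse = await client.PostAsync(BaseUrl + "/identify",
                new StringContent(Encrypt(GatewayId, CommonKey)));
            Assert.IsTrue(idResponse.IsSuccessStatusCode);

            // Step 2: use the response body to retrieve further authentication data
            string challenge = await idResponse.Content.ReadAsStringAsync();

            // Step 3: answer with a message encrypted under the gateway's private key
            var authResponse = await client.PostAsync(BaseUrl + "/authenticate",
                new StringContent(Encrypt(challenge, PrivateKey)));
            Assert.IsTrue(authResponse.IsSuccessStatusCode);
        }
    }
}
```

Because the steps run in order and each asserts on its response, a failure pinpoints which stage of the handshake broke.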


There was a notable development cost associated with creating these tests and the corresponding data scripts. That cost should shrink on future projects as developers gain familiarity with VSTF. It could also be argued that the savings from having these tests in place absorbed the cost of developing them: a considerable amount of QA time was undoubtedly saved, not to mention the time that would otherwise have been spent tracking down mysterious bugs. We were also able to adapt the tests into a portable benchmarking tool to aid in tracking down performance issues in the client’s environment.