We are going to walk through how the unit tests are built and how you can easily guard and test your customization using unit tests.
Topics
- What is a Unit Test?
- What do we use Unit Testing for?
- Creating a Unit Test
- Creating a Unit Test for Tasks
- Bonus Content - Coverage
What is a Unit Test?
Unit testing is a type of testing in which we try to isolate the component or object under test. In this type of test we are testing not the interaction between multiple components, but the most atomic unit of functional code. For example, testing a workflow execution and checking that a request reaches the MES is not a unit test: it is, at minimum, an interaction between several tasks (which are blocks of code) and an external system, like an MES or even a persistence layer.
In this post I want to talk about testing an IoT Task. An IoT Task is mostly self-contained: it may or may not have settings, inputs and outputs, and it may have side effects. Typically, a task behaves mostly like a function with inputs, outputs and some configuration, so it’s a prime target for unit testing.
The main advantages of unit testing are consistency checks, speed of execution and “noise reduction”. A unit test guarantees that given conditions A a task will always produce output B, without being burdened by externalities; if for some reason a change is introduced in the task and A now produces C, the consistency is broken and the test will fail. Unit tests are very fast, as they are just code or function executions; their small scope and focus ensure atomic executions.
As mentioned previously, this allows you to concentrate on testing the component without having to worry about interactions with other systems or components. Of course, unit tests don’t replace end-to-end or integration tests, as those interactions are outside the scope of this type of test.
Another advantage is that they can be run at commit time and not at build time. Being very fast, they can be run using git hook tools like husky. This can be a big advantage: if the tests are going to fail, we want them to fail as soon as possible instead of waiting for a cumbersome build to fail, saving time and resources.
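For illustration, a husky pre-commit hook is essentially a small script that runs before every commit. Here is a minimal sketch, assuming your package defines an npm test script that runs mocha (the exact husky setup depends on the version you use):
#!/bin/sh
# Hypothetical .husky/pre-commit hook: run the unit tests before every commit.
# If any test fails, the commit is aborted.
npm test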
I hope the advantages and disadvantages are clear. For more information about testing, I would check GeeksForGeeks or Atlassian.
What do we use Unit Testing for?
The IoT Tasks scaffolding already gives you everything you need to start testing. It generates a /test/ folder in your taskPackage, and it also creates the package.json with all the devDependencies you need to run the tests.
It mainly uses two tools: mocha and chai.
Mocha is a test framework for JavaScript tests. Tests are grouped in a test suite under describe, and setup steps can run before each test in a beforeEach; this is where the containers for the normal components should be overridden by the test components (more on this further ahead). A test is defined by it, followed by what it should do. For example, it('should return a string with Hello World', ... ).
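Putting those pieces together, a minimal mocha suite looks something like this (a generic sketch, using the chai assertions introduced next):
import * as chai from "chai";

describe("greeter", () => {
    let result: string;

    // Runs before every test; in our IoT tests this is where test components are set up
    beforeEach(() => {
        result = "Hello World";
    });

    // One atomic test case
    it("should return a string with Hello World", () => {
        chai.expect(result).to.equal("Hello World");
    });
});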
The addition of chai further strengthens our assertions to be as word-like as possible. For example, using chai we can write our test in a way that is much clearer. We are not asserting that result === "Hello World", we are expecting result to be equal to Hello World: expect(result).to.equal('Hello World');, which is much more transparent.
it('should return a string', async () => {
(...)
expect(result).to.equal('Hello World');
expect(result).to.be.a('string');
});
There is a lot more variation; I suggest reading the documentation of both these tools to unlock their full potential.
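For a taste of that variation, here are a few more assertion styles from chai’s expect API:
chai.expect([1, 2, 3]).to.have.lengthOf(3);              // array length
chai.expect({ name: "test" }).to.have.property("name");  // object properties
chai.expect(() => { throw new Error("boom"); }).to.throw("boom"); // thrown errors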
Creating a Unit Test
Let’s start with the simplest and most atomic business logic unit of IoT, the converter. I am using an example of a converter that was generated but for which no tests have been created yet, so it’s a good opportunity to use a real use case. The template has generated something like this:
import "reflect-metadata";
import { Task, System, TYPES, DI, Converter } from "@criticalmanufacturing/connect-iot-controller-engine";
import EngineTestSuite from "@criticalmanufacturing/connect-iot-controller-engine/test";
import * as chai from "chai";
import { CustomArrayDeltaCalculationConverter } from "../../../../src/converters/customArrayDeltaCalculation/customArrayDeltaCalculation.converter";
describe("Array Delta Calculation converter", () => {
let converter: Converter.ConverterContainer;
beforeEach(async () => {
converter = await EngineTestSuite.createConverter({
class: CustomArrayDeltaCalculationConverter
});
});
it("should convert", async (done) => {
/* Example int to string*/
let result: string = await converter.execute(123, {
parameter: "something"
});
chai.expect(result).to.equal("123");
done();
});
});
Taking a glance at this, we can see some things that are familiar to us. We have the import of chai, so we know we can use our assertion methods; we are also importing the controller engine and its test package; and finally we have our own custom converter. In the body we can see our describe from mocha, so this is where our test suite begins; we see it’s already creating our converter container, and we already have a dummy test showing how we can invoke our converter.
In order to start building tests, let’s see what our converter does. We can see in the comment that it receives an array, a position and a type, extracts the value at that position and typecasts it. This is common: converter tasks shouldn’t be bloated in terms of functionality, they should be straightforward.
/**
* Extracts a value from a position of an array and type casts it
* @param position position that we want to extract from array
* @param type type of the value to be extracted from the position
*/
transform(value: object, parameters: { [key: string]: any; }): any {
if (!(value instanceof Array)) {
throw new Error("Given input value is not an instance of an Array");
} else {
const result = Utilities.convertValueToType(value[parameters["position"]], parameters["type"], undefined);
return result;
}
}
Before starting to code, let’s add the VS Code launch setting so we can debug our tests. In our case we have a custom task package, generically called utilities. We want to run from the package, so workspaceRoot is where our console will launch; we want to run mocha, and our tests live under test and end in .test.js.
{
"name": "Run tests on controller-engine-custom-utilities-tasks",
"type": "node",
"request": "launch",
"program": "${workspaceRoot}/node_modules/mocha/bin/_mocha",
"stopOnEntry": false,
"args": ["test/**/*.test.js", "test/*.test.js", "--no-timeouts"],
"cwd": "${workspaceRoot}",
"runtimeExecutable": null,
"sourceMaps": true,
"outputCapture": "std"
}
In VS Code we can now go to Run and Debug, select our test configuration, add breakpoints and debug our test. A useful command that comes out of the box from the scaffolding is:
npm run watchTest
This command recompiles the tests whenever a change is detected. There is a watchPackage command for the src folder that behaves the same way, and also a watch command that compiles both and keeps the unit tests running.
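The exact script definitions come from the scaffolding, but in package.json they typically look something like this (a sketch assuming a tsc-based build; check your generated package.json for the real commands):
"scripts": {
    "watchPackage": "tsc --watch --project src",
    "watchTest": "tsc --watch --project test",
    "test": "mocha \"test/**/*.test.js\" --no-timeouts"
}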
Our first test will make sure that if we receive an invalid input, in other words not an array, we get an error. We send an integer value, catch the exception and compare the message to the expected value. This way we ensure that our converter only works when expected.
it("should fail if value is not an array", async () => {
chai.expect(
await converter.execute(123, {
type: Task.TaskValueType.Integer,
position: 0
}).catch(error => error.message)
).to.equal("Given input value is not an instance of an Array");
});
Awesome, very simple. Now let’s do a happy path scenario: we will give it an array with two positions, select the second and cast it to a number.
NOTE: Sometimes it’s useful to change it to it.only in a test; this makes mocha run only that particular test.
it("should succeed in retrieving second array position and converting it to number", async () => {
const result = await converter.execute(["test", 1], {
type: Task.TaskValueType.Integer,
position: 1
});
chai.expect(result).to.be.a('number');
chai.expect(result).to.equal(1);
});
In this test, I split the execution into a result variable to make it more legible. The test validates that the output is a number and that its value is 1. Pretty simple, right! 👏
Creating a Unit Test for Tasks
As you can imagine, tasks have more variability, but the good news is that the gist is more or less the same. So let’s give it a try. Again, a lot already comes with the scaffolding, so let’s take a look at it; in this case this was generated for a task called CustomResolveMetadataFeature.
import "reflect-metadata";
import { Task, System, TYPES, DI } from "@criticalmanufacturing/connect-iot-controller-engine";
import EngineTestSuite from "@criticalmanufacturing/connect-iot-controller-engine/test";
// import { DataStoreMock } from "@criticalmanufacturing/connect-iot-controller-engine/test/mocks/dataStore.mock";
import * as chai from "chai";
import {
CustomResolveMetadataFeatureTask,
CustomResolveMetadataFeatureSettings
} from "../../../../src/tasks/customResolveMetadataFeature/customResolveMetadataFeature.task";
import CustomResolveMetadataFeatureTaskModule from "../../../../src/tasks/customResolveMetadataFeature/index";
describe("CustomResolveMetadataFeature Task tests", () => {
// Optional: See container handling under customResolveMetadataFeatureTestFactory
// let dataStoreMock: DataStoreMock;
beforeEach(() => {
// dataStoreMock = new DataStoreMock();
});
const customResolveMetadataFeatureTestFactory = ( settings: CustomResolveMetadataFeatureSettings | undefined,
trigger: Function,
validate: Function): void => {
const taskDefinition = {
class: CustomResolveMetadataFeatureTaskModule,
id: "0",
settings: settings || <CustomResolveMetadataFeatureSettings>{
message: ""
}
};
EngineTestSuite.createTasks([
taskDefinition,
{
id: "1",
class: Task.Task({
name: "mockTask"
})(class MockTask implements Task.TaskInstance {
[key: string]: any;
_outputs: Map<string, Task.Output<any>> = new Map<string, Task.Output<any>>();
async onBeforeInit(): Promise<void> {
this["activate"] = new Task.Output<any>();
this._outputs.set("activate", this["activate"]);
// Create other custom outputs (for the Mock task) here
}
// Trigger the test
async onInit(): Promise<void> {
trigger(this._outputs);
}
// Validate the results
async onChanges(changes: Task.Changes): Promise<void> {
validate(changes);
}
})
}
], [
{ sourceId: "1", outputName: `activate`, targetId: "0", inputName: "activate", },
{ sourceId: "0", outputName: `success`, targetId: "1", inputName: "success", },
{ sourceId: "0", outputName: `error`, targetId: "1", inputName: "error", },
// Add more links needed here...
],
undefined,
(containerId) => {
// Change what you need in the container
// Example:
// containerId.unbind(TYPES.System.PersistedDataStore);
// containerId.bind(TYPES.System.PersistedDataStore).toConstantValue(dataStoreMock);
});
};
/**
* Instructions about the tests
* It is assumed that there are two tasks:
* 0 - CustomResolveMetadataFeature Task
* 1 - Mockup task
*
* All Outputs of Mock task are connected to the inputs of the CustomResolveMetadataFeature task
* All Outputs of CustomResolveMetadataFeature Task are connected to the Mock task inputs
*
* You, as the tester developer, will trigger the outputs necessary for the CustomResolveMetadataFeature to be activated
* and check the changes to see if the CustomResolveMetadataFeature task sent you the correct values
*
* Note: This is just an example about how to unit test the task. Not mandatory to use this method!
*/
it("should get success when activated", (done) => {
customResolveMetadataFeatureTestFactory(undefined,
(outputs: Map<string, Task.Output<any>>) => {
// Trigger an output
outputs.get("activate").emit(true);
}, (changes: Task.Changes) => {
// Validate the input
chai.expect(changes["success"].currentValue).to.equal(true);
// Report the test as a success
done();
});
});
});
Pretty intimidating, right? But we already see things that are familiar to us. We see a describe for our test suite, a beforeEach, an it for our test run, and also an EngineTestSuite.createTasks, similar to the EngineTestSuite.createConverter we saw and used above. There is a lot of information that is commented out and that can throw you for a loop, but let’s try and see if we can untangle it.
Philosophy: the philosophy behind this is that we want to have our own workflow, but just for this task. We are able to inject it with inputs and then receive, as inputs to the mock, the outputs of our task. There’s a common practice of building a diagram to explain how the test will work:
/*
* +----------------------------------------+ +-----------------------------------------------------------+
* | ==mockTask (0)== | | ==CustomResolveMetadataFeature (1)== |
* | | | |
* (*1) ----> | () barcodeOut barcodeIn () | ----> | () barcodeIn barcodeOut () | ----> (*1)
* (*2) ----> | () entityOut entityIn () | ----> | () entityIn entityOut () | ----> (*2)
* (*3) ----> | () activateOut eventType () | ----> | () eventType activateOut () | ----> (*3)
* (*4) ----> | () notActivateOut inputs () | ----> | notActivateOut () | ----> (*4)
* |----------------------------------------| |-----------------------------------------------------------|
* (*5) ----> | () success activate () | ----> | () activate success () | ----> (*5)
* (*6) ----> | () error | | error () | ----> (*6)
* +----------------------------------------+ +-----------------------------------------------------------+
*/
The diagram makes clear what the inputs and outputs of our task are. I chose a somewhat complex task on purpose, so we could see some more challenges.
Right off the bat we see that our focus, before anything else, should be on createTasks, setting up what will be our task execution mock.
Ok, let’s fill in the task’s settings with some dummy values:
const taskDefinition = {
class: CustomResolveMetadataFeatureTaskModule,
id: "1",
settings: settings || <CustomResolveMetadataFeatureSettings>{
functionalityName: "functionalityName",
enableFunctionalityName: "IsEnabledFunctionalityName",
configurationTable: "TestSmartTable"
}
};
Next, since we are injecting the outputs of our mock task into our actual task, we need to add those outputs in the onBeforeInit of the mock passed to EngineTestSuite.createTasks, to make sure they are instantiated. We can leave the onInit and onChanges as is.
async onBeforeInit(): Promise<void> {
    // barcodeIn
    this['barcodeIn'] = new Task.Output<string>();
    this._outputs.set('barcodeIn', this['barcodeIn']);
    // entityIn
    this['entityIn'] = new Task.Output<EntityType>();
    this._outputs.set('entityIn', this['entityIn']);
    // eventType
    this['eventType'] = new Task.Output<string>();
    this._outputs.set('eventType', this['eventType']);
    // activate
    this['activate'] = new Task.Output<boolean>();
    this._outputs.set('activate', this['activate']);
}
Now let’s add the links; the diagram is really helpful for understanding this step. We want to connect the mock task (0) to our task (1).
NOTE: Outputs and inputs have the same names purely for convenience; you can give them any names you want.
[
// Inputs of task ->
{ sourceId: '0', outputName: `barcodeIn`, targetId: '1', inputName: 'barcodeIn' },
{ sourceId: '0', outputName: `entityIn`, targetId: '1', inputName: 'entityIn' },
{ sourceId: '0', outputName: `eventType`, targetId: '1', inputName: 'eventType' },
{ sourceId: '0', outputName: `activate`, targetId: '1', inputName: 'activate' },
// Outputs of task ->
{ sourceId: '1', outputName: `entityOut`, targetId: '0', inputName: 'entityOut' },
{ sourceId: '1', outputName: `barcodeOut`, targetId: '0', inputName: 'barcodeOut' },
{ sourceId: '1', outputName: `notActivateOut`, targetId: '0', inputName: 'notActivateOut' },
{ sourceId: '1', outputName: `activateOut`, targetId: '0', inputName: 'activateOut' },
{ sourceId: '1', outputName: `success`, targetId: '0', inputName: 'success' },
{ sourceId: '1', outputName: `error`, targetId: '0', inputName: 'error' }
]
We are not overriding the driver proxy, so you can leave it undefined. Now the interesting bit: this task is going to resolve a smart table and also persist data in the Connect IoT persistency. So we need to inject our mock containers for the Datastore and for the System.Proxy.
For the Datastore, because this is very common, the template helps you along. In the beforeEach, uncomment the commented code, and do the same for the import:
import { DataStoreMock } from "@criticalmanufacturing/connect-iot-controller-engine/test/mocks/dataStore.mock";
...
let dataStoreMock: DataStoreMock;
beforeEach(() => {
dataStoreMock = new DataStoreMock();
});
In our EngineTestSuite.createTasks, in the (containerId) => { section, we can bind our mock containers. For our Datastore we just unbind the original and bind ours, declared above; for the System.Proxy we need to declare what we are going to reply when we get the resolve smart table request.
Binding the Datastore (I changed containerId to container):
(container) => {
container.unbind(TYPES.System.PersistedDataStore);
container.bind(TYPES.System.PersistedDataStore).toConstantValue(dataStoreMock);
}
To create our new container to intercept the request, we need to specify that it implements System.SystemProxy, which will require you to implement a set of functions. We will implement them so that they throw an exception if they are called.
(container) => {
(...)
class MockSystemAPI implements System.SystemProxy {
getMetadata(): Promise<Foundation.BusinessObjects.AutomationController> {
throw new Error('Unexpected call');
}
subscribeActionGroup(actionGroup: Foundation.Common.DynamicExecutionEngine.ActionGroup | string,
callback: System.SystemEventCallback): void {
throw new Error('Unexpected call');
}
unsubscribeActionGroup(
actionGroup: Foundation.Common.DynamicExecutionEngine.ActionGroup | string,
callback: System.SystemEventCallback): void {
throw new Error('Unexpected call');
}
executeQuery(queryObject: System.QueryObject.QueryObject, parameterCollection?: System.QueryObject.QueryParameterCollection,
settings?: Utilities.SystemApiUtilsSettings): Promise<System.LBOS.System.Data.DataSet> {
throw new Error('Unexpected call');
}
getObjectById(id: string, type: string, levelsToLoad?: number, typeIsTypeId?: boolean,
settings?: Utilities.SystemApiUtilsSettings): Promise<any> {
throw new Error('Unexpected call');
}
getObjectByName(name: string, type: string, levelsToLoad?: number, typeIsTypeId?: boolean,
settings?: Utilities.SystemApiUtilsSettings): Promise<any> {
throw new Error('Unexpected call');
}
loadAttributes(entity: any, specificAttributes?: string[], settings?: Utilities.SystemApiUtilsSettings): Promise<any> {
throw new Error('Unexpected call');
}
adjustState(entity: any, newState: string, settings?: Utilities.SystemApiUtilsSettings): Promise<any> {
throw new Error('Unexpected call');
}
}
Now we implement our call:
NOTE: There may be imports you need to add (e.g. import Foundation = System.LBOS.Cmf.Foundation;)
(container) => {
(...)
class MockSystemAPI implements System.SystemProxy {
(...)
async call(input: System.LBOS.Cmf.Foundation.BusinessOrchestration.BaseInput): Promise<any> {
// Mock the execution of the LBO call
if (input instanceof System.LBOS.Cmf.Foundation.BusinessOrchestration.TableManagement.InputObjects.ResolveSmartTableInput) {
const output = new System.LBOS.Cmf.Foundation.BusinessOrchestration.TableManagement.OutputObjects.ResolveSmartTableOutput();
// Validate Service Requests
chai.expect(input.SmartTable.Name).to.be.equal("TestSmartTable");
chai.expect(input["Values"].get("Resource")).to.be.equal("Test");
chai.expect(input["Values"].get("Name")).to.be.equal("IsEnabledFunctionalityName");
// Inject reply
output.Result = {
[`T_ST_${input.SmartTable.Name}`]: [
{
"Resource": "Test",
"ResourceType": "",
"Name": "IsEnabledFunctionalityName",
"Value": "functionalityName",
"Area": "TestArea"
}
]
};
return Promise.resolve(output);
} else {
return Promise.reject('Unexpected input type.');
}
}
}
If the request is the one we want to test, we check its inputs and inject our mock output.
Ok, enough setting up, let’s build a test. I like to start with failure scenarios; in this case, if the entityIn input is not an entity, the task will send an error saying “Please provide a valid entity”. Pretty obvious, right? Let’s do it!
it("should get fail due to incorrect entity input", () => {
customResolveMetadataFeatureTestFactory(undefined,
(outputs: Map<string, Task.Output<any>>) => {
// Trigger an entityIn with an invalid entity
outputs.get('entityIn').emit({});
outputs.get('activate').emit(true);
}, (changes: Task.Changes) => {
// Validate the input
chai.expect(changes["error"].currentValue.message).to.equal("Please provide a valid entity");
});
});
We have two phases: the trigger phase and the validate phase. In the trigger phase we emit activate with true and give an invalid value to entityIn. In the validate phase we use chai to assert that we get the specific error message we expect.
Now let’s move on to a happy test scenario.
it("should get success when activated", (done) => {
const entity = new System.LBOS.Cmf.Navigo.BusinessObjects.Resource();
entity.Name = "Test";
const barcode = "testBarcode";
customResolveMetadataFeatureTestFactory(undefined,
(outputs: Map<string, Task.Output<any>>) => {
// Trigger an output
outputs.get('barcodeIn').emit(barcode);
outputs.get('entityIn').emit(entity);
outputs.get('eventType').emit("testEvent");
outputs.get('activate').emit(true);
}, (changes: Task.Changes) => {
// Validate the input
chai.expect(changes["entityOut"].currentValue).to.equal(entity);
chai.expect(changes["barcodeOut"].currentValue).to.equal(barcode);
chai.expect(changes["activateOut"].currentValue).to.equal(true);
chai.expect(changes["notActivateOut"]).to.equal(undefined);
chai.expect(changes["success"].currentValue).to.equal(true);
// Validate the persistency
const persistedValue = Object.values(dataStoreMock["_persistedDataStore"].get('TestSmartTablePersisted'))[0][0]["result"];
chai.expect(persistedValue["Area"]).to.equal("TestArea");
chai.expect(persistedValue["Name"]).to.equal("IsEnabledFunctionalityName");
chai.expect(persistedValue["Resource"]).to.equal("Test");
chai.expect(persistedValue["Value"]).to.equal("functionalityName");
// Report the test as a success
done();
});
});
Wow, that’s a lot more code, but if you take a closer look, it’s all just assertions. Same as before, we have two phases: in the trigger phase we send some values; in the validate phase we have two different validations, checking both the outputs of the task and the persistency layer. Remember that this task not only emits outputs but also persists information. Also, we now create a known entity of type Resource to pass the guard we saw in the previous test.
Bonus Content - Coverage
A very interesting advantage of these kinds of tests is that it’s very easy to extract metrics. The scaffolding already defines the vs:test command in the package.json. To extract coverage you just need to run npm run vs:test.
Statements : 67.89% ( 203/299 )
Branches : 50.21% ( 117/233 )
Functions : 61.82% ( 34/55 )
Lines : 73.73% ( 188/255 )
I saw an interesting post about what these numbers mean; feel free to take a look at softwaretestinghelp. To generate this report we are using cobertura and mocha-junit-reporter.
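As an illustration of how such reporting is usually wired up, assuming nyc (istanbul) drives the instrumentation, a hypothetical .nycrc could look like this (the scaffolding’s actual configuration may differ):
{
    "reporter": ["cobertura", "text-summary"],
    "report-dir": "coverage"
}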
Author
Hello 👏, my name is João Roque ✌️
I’ve been working for some years at Critical Manufacturing. I split my time between the IoT team and the project teams. You can visit me at https://j-roque.com/ or find me on LinkedIn
Skills: Connect IoT / DevOps