# ModelRunner

This service is experimental and may change in the future.
Runs machine learning models.

Only models with a single input tensor and a single output tensor are supported at the moment. Input is provided by the Sensor Aggregator service on the same device. Multiple instances of this service may be present if a device supports more than one model format.
```ts
import { ModelRunner } from "@devicescript/core"

const modelRunner = new ModelRunner()
```
## Commands

### setModel
Opens a pipe for streaming in the model. The size of the model has to be declared upfront. The model is streamed over regular pipe data packets. The format supported by this instance of the service is specified in the `format` register. When the pipe is closed, the model is written in full to flash, and the device running the service may reset.

```ts
modelRunner.setModel(model_size: number): Promise<void>
```
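As a rough usage sketch based on the signature above (the 1024-byte size is an arbitrary placeholder; the model bytes themselves are streamed over the pipe and are not shown here):

```ts
import { ModelRunner } from "@devicescript/core"

const modelRunner = new ModelRunner()
// declare the size of the model about to be streamed in (placeholder value)
const modelSize = 1024
await modelRunner.setModel(modelSize)
```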
## Registers

### autoInvokeEvery
When the register contains `N > 0`, the model is run automatically every time `N` new samples are collected. The model may be run less often if it takes longer to execute than `N * sampling_interval`. The `outputs` register will stream its value after each run. This register is not stored in flash.

- type: `Register<number>` (packing format `u16`)
- read and write
```ts
import { ModelRunner } from "@devicescript/core"

const modelRunner = new ModelRunner()
// ...
const value = await modelRunner.autoInvokeEvery.read()
await modelRunner.autoInvokeEvery.write(value)
```
- track incoming values
```ts
import { ModelRunner } from "@devicescript/core"

const modelRunner = new ModelRunner()
// ...
modelRunner.autoInvokeEvery.subscribe(async (value) => {
    // ...
})
```
`write` and `read` will block until a server is bound to the client.
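To illustrate the auto-invoke flow described above, here is a minimal sketch that writes the sample count and then watches the model outputs through the `reading` register (8 samples is an arbitrary example value):

```ts
import { ModelRunner } from "@devicescript/core"

const modelRunner = new ModelRunner()
// run the model every time 8 new samples have been collected (example value)
await modelRunner.autoInvokeEvery.write(8)
// the outputs of each run are streamed through the reading register
modelRunner.reading.subscribe(async (outputs) => {
    console.log(outputs)
})
```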
### reading

Results of the last model invocation, as a `float32` array.

- type: `Register<any[]>` (packing format `r: f32`)
- track incoming values

```ts
import { ModelRunner } from "@devicescript/core"

const modelRunner = new ModelRunner()
// ...
modelRunner.reading.subscribe(async (value) => {
    // ...
})
```

`write` and `read` will block until a server is bound to the client.
### inputShape

The shape of the input tensor.

- type: `Register<any[]>` (packing format `r: u16`)
- track incoming values

```ts
import { ModelRunner } from "@devicescript/core"

const modelRunner = new ModelRunner()
// ...
modelRunner.inputShape.subscribe(async (value) => {
    // ...
})
```

`write` and `read` will block until a server is bound to the client.
### outputShape

The shape of the output tensor.

- type: `Register<any[]>` (packing format `r: u16`)
- track incoming values

```ts
import { ModelRunner } from "@devicescript/core"

const modelRunner = new ModelRunner()
// ...
modelRunner.outputShape.subscribe(async (value) => {
    // ...
})
```

`write` and `read` will block until a server is bound to the client.
### lastRunTime

The time consumed by the last model execution.

- type: `Register<number>` (packing format `u32`)
- read only

```ts
import { ModelRunner } from "@devicescript/core"

const modelRunner = new ModelRunner()
// ...
const value = await modelRunner.lastRunTime.read()
```

- track incoming values

```ts
import { ModelRunner } from "@devicescript/core"

const modelRunner = new ModelRunner()
// ...
modelRunner.lastRunTime.subscribe(async (value) => {
    // ...
})
```

`write` and `read` will block until a server is bound to the client.
### allocatedArenaSize

Number of RAM bytes allocated for model execution.

- type: `Register<number>` (packing format `u32`)
- read only

```ts
import { ModelRunner } from "@devicescript/core"

const modelRunner = new ModelRunner()
// ...
const value = await modelRunner.allocatedArenaSize.read()
```

- track incoming values

```ts
import { ModelRunner } from "@devicescript/core"

const modelRunner = new ModelRunner()
// ...
modelRunner.allocatedArenaSize.subscribe(async (value) => {
    // ...
})
```

`write` and `read` will block until a server is bound to the client.
### modelSize

The size of the model in bytes.

- type: `Register<number>` (packing format `u32`)
- read only

```ts
import { ModelRunner } from "@devicescript/core"

const modelRunner = new ModelRunner()
// ...
const value = await modelRunner.modelSize.read()
```

- track incoming values

```ts
import { ModelRunner } from "@devicescript/core"

const modelRunner = new ModelRunner()
// ...
modelRunner.modelSize.subscribe(async (value) => {
    // ...
})
```

`write` and `read` will block until a server is bound to the client.
### lastError

Textual description of the last error encountered when running or loading the model (if any).

- type: `Register<string>` (packing format `s`)
- read only

```ts
import { ModelRunner } from "@devicescript/core"

const modelRunner = new ModelRunner()
// ...
const value = await modelRunner.lastError.read()
```

- track incoming values

```ts
import { ModelRunner } from "@devicescript/core"

const modelRunner = new ModelRunner()
// ...
modelRunner.lastError.subscribe(async (value) => {
    // ...
})
```

`write` and `read` will block until a server is bound to the client.
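For example, after streaming in a model one might check this register to confirm that loading succeeded; a minimal sketch, assuming the model has already been provided via `setModel`:

```ts
import { ModelRunner } from "@devicescript/core"

const modelRunner = new ModelRunner()
// an empty or undefined value suggests no error was recorded
const error = await modelRunner.lastError.read()
if (error) {
    console.log("model error: " + error)
}
```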
### format

The type of ML models supported by this service. `TFLite` is a flatbuffer `.tflite` file. `ML4F` is a compiled machine-code model for the Cortex-M4F. The format identifier is typically present as the first or second little-endian word of the model file.

- type: `Register<number>` (packing format `u32`)
- constant: the register value will not change (until the next reset)
- read only

```ts
import { ModelRunner } from "@devicescript/core"

const modelRunner = new ModelRunner()
// ...
const value = await modelRunner.format.read()
```

`write` and `read` will block until a server is bound to the client.
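As an illustration of the note about the format word, the sketch below reads the first two little-endian 32-bit words of a model file and compares them with the reported format. It uses plain TypeScript typed-array helpers rather than any DeviceScript-specific buffer API, and `modelFileBytes` is a hypothetical array holding the model file contents:

```ts
import { ModelRunner } from "@devicescript/core"

const modelRunner = new ModelRunner()
const supportedFormat = await modelRunner.format.read()

// hypothetical model file contents, obtained elsewhere
declare const modelFileBytes: Uint8Array

// read the n-th little-endian 32-bit word of the file
function word(bytes: Uint8Array, n: number): number {
    return new DataView(bytes.buffer, bytes.byteOffset).getUint32(n * 4, true)
}

// the format identifier is typically the first or second word of the file
const matchesDevice =
    word(modelFileBytes, 0) === supportedFormat ||
    word(modelFileBytes, 1) === supportedFormat
```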
### formatVersion

A version number for the format.

- type: `Register<number>` (packing format `u32`)
- constant: the register value will not change (until the next reset)
- read only

```ts
import { ModelRunner } from "@devicescript/core"

const modelRunner = new ModelRunner()
// ...
const value = await modelRunner.formatVersion.read()
```

`write` and `read` will block until a server is bound to the client.
### parallel

If present and true, this service can run models independently of other instances of this service on the device.

- type: `Register<boolean>` (packing format `u8`)
- optional: this register may not be implemented
- constant: the register value will not change (until the next reset)
- read only

```ts
import { ModelRunner } from "@devicescript/core"

const modelRunner = new ModelRunner()
// ...
const value = await modelRunner.parallel.read()
```

`write` and `read` will block until a server is bound to the client.