## AIServerModel
A comprehensive class for handling AI model inference using an AIServer
over WebSocket. Designed to provide a streamlined interface for sending data to the server for inference, receiving
processed results, and displaying or further processing these results as needed.
Features:
- WebSocket Communication: Handles the full lifecycle of a WebSocket connection for real-time data streaming.
- Preprocessing & Postprocessing: Integrates with PreProcess and PostProcess classes to prepare data for the model and visualize results.
- Queue Management: Uses AsyncQueue instances to manage inbound and outbound data flow.
- Concurrency Control: Ensures thread-safe operations through mutex usage.
- Dynamic Configuration: Allows runtime modification of model and overlay parameters.
- Callback Integration: Supports custom callback functions for handling results outside the class.
**Kind**: global class

* AIServerModel
    * new AIServerModel(options, [additionalParams])
    * .predict(imageFile, [info], [bypassPreprocessing]) ⇒ `Promise.<Object>`
    * .predict_batch(data_source, [bypassPreprocessing])
    * .displayResultToCanvas(combinedResult, outputCanvasName, [justResults])
    * .setModelParameter(key, value) ⇒ `Promise.<void>`
    * .getModelParameter(key) ⇒ `*`
    * .resetTimeStats()
    * .getTimeStats() ⇒ `string`
    * .modelInfo() ⇒ `Object`
    * .labelDictionary() ⇒ `Object`
    * .processImageFile(combinedResult) ⇒ `Promise.<Blob>`
    * .cleanup()
### new AIServerModel(options, [additionalParams])

Do not call the constructor directly. Use the `loadModel` method of an `AIServerZoo` instance to create an `AIServerModel`.
| Param | Type | Default | Description |
|---|---|---|---|
| options | `Object` | | Options for initializing the model. |
| options.modelName | `string` | | The name of the model to load. |
| options.serverUrl | `string` | | The URL of the server. |
| options.modelParams | `Object` | | The default model parameters. |
| [options.max_q_len] | `number` | `10` | Maximum queue length. |
| [options.callback] | `function` | | Callback function for handling results. |
| [options.labels] | `Object` | | Label dictionary for the model. |
| [additionalParams] | `Object` | | Additional parameters for the model. |
**Example** *(Usage)*

1. Create an instance with the required model details and server URL.

   ```js
   let model = zoo.loadModel('some_model_name', {});
   ```

2. Use the `predict` method for inference with individual data items, or `predict_batch` for multiple items.

   ```js
   let result = await model.predict(someImage);

   for await (let result of model.predict_batch(someDataGeneratorFn)) { ... }
   ```

3. Access processed results directly, or set up a callback function for custom result handling.

4. You can display results to a canvas to view the drawn overlays.

   ```js
   await model.displayResultToCanvas(result, canvas);
   ```
### aiServerModel.predict(imageFile, [info], [bypassPreprocessing]) ⇒ `Promise.<Object>`

Predicts the result for a given image.

**Kind**: instance method of `AIServerModel`
**Returns**: `Promise.<Object>` - The prediction result.
| Param | Type | Default | Description |
|---|---|---|---|
| imageFile | `Blob` \| `File` \| `string` \| `HTMLImageElement` \| `HTMLVideoElement` \| `HTMLCanvasElement` \| `ArrayBuffer` \| `TypedArray` \| `ImageBitmap` | | The image to run inference on. |
| [info] | `string` | `performance.now()` | Unique frame information provided by the user (such as a frame number). Used for matching results back to input images within the callback. |
| [bypassPreprocessing] | `boolean` | `false` | Whether to bypass preprocessing. Used to send Blob data directly to the socket without any preprocessing. |
**Example**

If a callback is provided, the WebSocket `onmessage` handler invokes the callback directly when the result arrives. If no callback is provided, the function waits for the result queue to receive a result, then returns it.

```js
let result = await model.predict(someImage);
```
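The two delivery modes can be sketched with a minimal mock. The `MockModel` class below is a hypothetical stand-in, not the real `AIServerModel` implementation: real inference happens over a WebSocket, while here a result is produced instantly.

```javascript
// Hypothetical stand-in for AIServerModel's two result-delivery modes.
class MockModel {
  constructor({ callback } = {}) {
    this.callback = callback;
  }

  async predict(imageFile, info = String(performance.now())) {
    // Fake "server round trip": the result arrives paired with its frame info.
    const result = { info, detections: [] };
    if (this.callback) {
      this.callback(result); // callback mode: deliver the result to the callback
      return null;
    }
    return result;           // await mode: resolve with the result
  }
}

// Await mode:
const model = new MockModel();
// Callback mode: results accumulate via the callback instead.
const seen = [];
const cbModel = new MockModel({ callback: (r) => seen.push(r) });
```

Matching results back to frames via `info` is why a unique value per frame matters when a callback is used.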
### aiServerModel.predict_batch(data_source, [bypassPreprocessing])

Predicts results for a batch of data. Yields results if a callback is not provided.

**Kind**: instance method of `AIServerModel`
| Param | Type | Default | Description |
|---|---|---|---|
| data_source | `AsyncIterable` \| `ReadableStream` | | Either an async iterable or a ReadableStream. |
| [bypassPreprocessing] | `boolean` | `false` | Whether to bypass preprocessing. |
**Example**

The function processes results asynchronously. If a callback is not provided, it yields results.

```js
for await (let result of model.predict_batch(data_source)) { console.log(result); }
```
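A `data_source` can be any async iterable, such as an async generator. A minimal sketch (the yielded values here are placeholder strings; in practice each item would be an image Blob or File):

```javascript
// Hypothetical data source: an async generator that yields one frame per iteration.
async function* frameSource(frames) {
  for (const frame of frames) {
    yield frame;
  }
}

// Helper that drains any async iterable, mirroring the for-await loop
// used to consume predict_batch results.
async function collect(iterable) {
  const out = [];
  for await (const item of iterable) out.push(item);
  return out;
}
```

The same `for await` pattern applies whether the source is a generator or a `ReadableStream`, since both implement async iteration.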
### aiServerModel.displayResultToCanvas(combinedResult, outputCanvasName, [justResults])

Overlays the result onto the image frame and displays it on the canvas.

**Kind**: instance method of `AIServerModel`
| Param | Type | Default | Description |
|---|---|---|---|
| combinedResult | `Object` | | The result object combined with the original image frame, typically as returned by `predict`. |
| outputCanvasName | `string` \| `HTMLCanvasElement` \| `OffscreenCanvas` | | The canvas to draw the image onto: either the canvas element or the ID of the canvas element. |
| [justResults] | `boolean` | `false` | Whether to show only the result overlay without the image frame. |
### aiServerModel.setModelParameter(key, value) ⇒ `Promise.<void>`

Updates a single parameter in the model's parameters. Use this function to set arbitrary parameters in the model's configuration. Get a list of available parameters with `model.modelInfo()`.

**Kind**: instance method of `AIServerModel`
| Param | Type | Description |
|---|---|---|
| key | `string` | The name of the parameter to set. |
| value | `*` | The new value for the parameter. |
### aiServerModel.getModelParameter(key) ⇒ `*`

Retrieves a model parameter from the local copy of the model parameters JSON.

**Kind**: instance method of `AIServerModel`
**Returns**: `*` - The value of the parameter.
| Param | Type | Description |
|---|---|---|
| key | `string` | The key of the parameter to retrieve. |

**Example**
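A minimal sketch of the get/set semantics described above, using a plain object as a stand-in for the model's local copy of the parameters JSON (the parameter names here are hypothetical, not guaranteed to exist on any given model):

```javascript
// Hypothetical local copy of the model parameters JSON.
const modelParams = { confidence_threshold: 0.5, max_detections: 100 };

// Mirrors getModelParameter: read from the local copy.
function getModelParameter(key) {
  return modelParams[key];
}

// Mirrors setModelParameter: update the local copy. (The real method also
// pushes the change to the server, hence its Promise.<void> return type.)
function setModelParameter(key, value) {
  modelParams[key] = value;
}
```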
### aiServerModel.resetTimeStats()

Resets the internal performance statistics dictionary.

**Kind**: instance method of `AIServerModel`
### aiServerModel.getTimeStats() ⇒ `string`

Returns the internal performance statistics as a string.

**Kind**: instance method of `AIServerModel`
**Returns**: `string` - A string representation of the performance statistics.
### aiServerModel.modelInfo() ⇒ `Object`

Returns a read-only copy of the model parameters.

**Kind**: instance method of `AIServerModel`
**Returns**: `Object` - The model parameters.
### aiServerModel.labelDictionary() ⇒ `Object`

Returns the label dictionary for this AIServerModel instance.

**Kind**: instance method of `AIServerModel`
**Returns**: `Object` - The label dictionary.
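A sketch of how such a dictionary is typically used to turn numeric class IDs from a result into readable names. The dictionary contents and the fallback naming below are assumptions for illustration, not the real model's labels:

```javascript
// Hypothetical label dictionary: class id -> human-readable label.
const labels = { 0: 'person', 1: 'car', 2: 'bicycle' };

// Resolve a class id to a label, falling back to a generic name for unknown ids.
function labelFor(classId, dict) {
  return dict[classId] ?? `class_${classId}`;
}
```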
### aiServerModel.processImageFile(combinedResult) ⇒ `Promise.<Blob>`

Processes the original image and draws the results on it, returning a PNG image with the overlaid results.

**Kind**: instance method of `AIServerModel`
**Returns**: `Promise.<Blob>` - The processed image file as a Blob of a PNG image.

| Param | Type | Description |
|---|---|---|
| combinedResult | `Object` | The result object combined with the original image frame. |
### aiServerModel.cleanup()

Cleans up resources and closes the server connection.
Follows a destructor-like pattern that must be called manually by the user.
Call this whenever you switch models or when the model instance is no longer needed!

**Kind**: instance method of `AIServerModel`
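Because cleanup is manual, a `try/finally` block is a safe way to guarantee it runs even when inference throws. Sketched here with a mock model object as a hypothetical stand-in for a real `AIServerModel` instance:

```javascript
// Hypothetical stand-in that tracks whether cleanup ran.
const mockModel = {
  closed: false,
  async predict(image) { return { detections: [] }; },
  cleanup() { this.closed = true; }, // the real cleanup closes the WebSocket
};

// Run one prediction and guarantee cleanup, even if predict rejects.
async function runOnce(model, image) {
  try {
    return await model.predict(image);
  } finally {
    model.cleanup();
  }
}
```

The same pattern applies when switching models: clean up the old instance before loading the new one.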