Release Notes
Version 0.1.5 (8/12/2025)
New Features and Modifications
- Improved Error Handling
  - Added automatic cleanup of verbose internal error messages from Core to make them more readable.
  - Socket transports for Model classes now disconnect immediately on unrecoverable result errors.
- Expanded Postprocess Options
  - `outputPostprocessType` now supports two additional values: `"Null"` and `"Dequantization"`.
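As an illustration, the new values might be selected like this — the option-object shape and the validation helper below are assumptions for the sketch, not part of the SDK:

```javascript
// The two post-processing types added in this release. NOTE: this helper is
// purely illustrative; the real SDK accepts outputPostprocessType directly
// as a model parameter.
const NEW_POSTPROCESS_TYPES = ['Null', 'Dequantization'];

function makeModelOptions(postprocessType) {
  if (!NEW_POSTPROCESS_TYPES.includes(postprocessType)) {
    throw new Error(`Unexpected outputPostprocessType: ${postprocessType}`);
  }
  // Pass the resulting object when configuring a model (assumed shape).
  return { outputPostprocessType: postprocessType };
}
```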
Bug Fixes
- Reliable Socket Reset
  - Fixed race conditions when resetting sockets to ensure proper reconnection for CloudServerModel.
- Sequential Result Processing
  - Improved strict result processing during `predict_batch` for CloudServerModel.
- Color Table Generation
  - Fixed an issue where models with no labels could produce black colors in visualizations by ensuring default colors are assigned when labels are missing.
Version 0.1.4 (7/7/2025)
New Features and Modifications
- Performance Optimizations:
  - Improved the performance of `predict` and `predict_batch` by optimizing the handling of internal asynchronous operations and removing internal queue overhead.
  - Internal timeout mechanisms have been optimized.
- New Model Parameters:

  | Parameter | Type | Description |
  |---|---|---|
  | `eagerBatchSize` | integer | Controls the server-side maximum batch size. Use this to improve throughput on some models. |
  | `outputPoseThreshold` | float | A dedicated confidence threshold for pose estimation models. |
  | `outputPostprocessType` | string | The list of valid post-processing types has been expanded for full compatibility. |
  | `inputShape` | Array | (Advanced) Allows you to get or set the input shapes for models. The format is an array of shape arrays, e.g., `[[1, 224, 224, 3]]`. |
- New Timing Statistics
  - Added timing statistics to profile the full inference lifecycle of a frame inside DeGirumJS. The `model.getTimeStats()` method now includes more detailed metrics to help you pinpoint performance bottlenecks:
    - `InputFrameConvert_ms`: Time spent converting the input image to a usable format.
    - `EncodeEmit_ms`: Time spent encoding and sending the payload.
    - `ResultProcessing_ms`: Time spent on the client processing the result from the server.
    - `ResultQueueWaitingTime_ms`: Time a result spent in the queue before being returned to your code.
    - `MutexWait_ms`: Time spent waiting for the prediction lock (for single `predict` calls).
- New `model.printLatencyInfo()` Method
  - After running inference with `measureTime` enabled, you can call this new method to get a clean, human-readable breakdown of where time is being spent:
    - Total End-to-End Latency
    - Total Client-Side Processing Time (preprocessing, etc.)
    - Total Server-Side Processing Time (inference, etc.)
Bug Fixes
- Cloud Model Stability: Automatic Parameter Hydration
  - When a model is loaded from the cloud, the SDK now automatically hydrates the partial parameters received from the server, filling in any missing values with their correct defaults. As a result, any parameter of a CloudServerModel instance can now be modified.
Version 0.1.3 (1/8/2025)
New Features and Modifications
- New drawing parameters for `autoScaleDrawing` in the model classes
  - Added two optional parameters, `targetDisplayWidth` and `targetDisplayHeight`, to specify a custom reference resolution when `autoScaleDrawing` is enabled. (Previously, the reference resolution was fixed at 1920x1080.)
  - Defaults to `1920x1080` if no values are provided.
  - Ensures consistent scaling of overlays (e.g., bounding boxes, labels, keypoints) across varying input image dimensions.
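The SDK's actual scaling math may differ; the sketch below just illustrates the reference-resolution idea behind `targetDisplayWidth` / `targetDisplayHeight`:

```javascript
// Illustrative only: derive an overlay scale factor by comparing the actual
// canvas size to a reference resolution (1920x1080 by default, now
// overridable via targetDisplayWidth / targetDisplayHeight).
function overlayScale(
  canvasWidth,
  canvasHeight,
  targetDisplayWidth = 1920,
  targetDisplayHeight = 1080
) {
  // Use the smaller ratio so overlays never overflow the shorter dimension.
  return Math.min(
    canvasWidth / targetDisplayWidth,
    canvasHeight / targetDisplayHeight
  );
}

console.log(overlayScale(3840, 2160)); // → 2 (a 4K canvas doubles overlay sizes)
```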
Bug Fixes
- Fixed a bug where backend errors were thrown asynchronously from the `predict` and `predict_batch` functions. Now, the user can catch these errors and handle them gracefully.
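A minimal sketch of what this enables — the `model` object below is a stand-in stub that always fails (a real one would come from a zoo class), used only to show the try/catch pattern:

```javascript
// Stand-in stub simulating a backend failure; not a real SDK model.
const model = {
  async predict(frame) {
    throw new Error('backend error');
  },
};

// Because predict() now rejects its returned promise instead of throwing
// asynchronously, the error can be handled with ordinary try/catch.
async function runOnce(frame) {
  try {
    return await model.predict(frame);
  } catch (err) {
    return `handled: ${err.message}`;
  }
}
```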
Version 0.1.2 (1/7/2025)
New Features and Modifications
- Lightweight `listModels()` function
  - Querying the list of models from the cloud (for CloudZoo classes) now fetches only the model names. The parameters can be fetched with a new function: `getModelInfo(modelName)`.
- Updated `autoScaleDrawing` parameter for the model classes' `displayResultToCanvas()` function
  - The parameter now scales all results for optimal viewing at 1080p resolution. `autoScaleDrawing` saves you from guesswork about how to size overlays for various input image dimensions by comparing the actual canvas size to a reference (e.g., 1080p) and scaling accordingly.
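The two-step flow might look like the sketch below. The `zoo` object here is a stand-in stub (a real one would come from `dg.connect(...)`), and the model names and returned fields are hypothetical:

```javascript
// Stand-in stub zoo illustrating the new split: listModels() returns names
// only; getModelInfo(modelName) fetches full parameters on demand.
const zoo = {
  async listModels() {
    return ['model_a', 'model_b']; // hypothetical names, no parameters attached
  },
  async getModelInfo(modelName) {
    // In the real SDK this triggers a separate request for the parameters.
    return { name: modelName, inputShape: [[1, 224, 224, 3]] };
  },
};

async function describeFirstModel() {
  const names = await zoo.listModels();          // cheap: names only
  const info = await zoo.getModelInfo(names[0]); // detailed info on demand
  return `${info.name}: input ${JSON.stringify(info.inputShape)}`;
}
```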
Version 0.1.1 (12/31/2024)
New Features and Modifications
- Asynchronous `dg.connect(...)`
  - The `dg.connect(...)` method is now asynchronous. You should use `await dg.connect(...)` to properly wait for initialization. This improvement ensures the AI Server or Cloud connections (and their respective zoo classes) are fully ready before objects are returned.
  ```javascript
  let dg = new dg_sdk();
  // Old:
  // let zoo = dg.connect('ws://localhost:8779');
  // New:
  let zoo = await dg.connect('ws://localhost:8779');
  ```
- `predict_batch` Accepts `ReadableStream`
  - Both `AIServerModel` and `CloudServerModel` now accept a `ReadableStream` in addition to an async iterable for the `predict_batch(...)` method. This makes it easier to stream frames or data chunks directly from sources like the new WebCodecs API or other stream-based pipelines.
- `predict()` and `predict_batch()` Accept `VideoFrame`
  - These methods now also allow `VideoFrame` objects as valid inputs.
- OffscreenCanvas Support in `displayResultToCanvas()`
  - You can now draw inference results onto an `OffscreenCanvas` as well as a standard `<canvas>` element.
- Brighter Overlay Colors
  - Default generated overlay colors have been adjusted to be more visible on dark backgrounds.
- Support for SegmentationYoloV8 Postprocessing
  - Added the ability to draw results from models that use the SegmentationYoloV8 postprocessor.
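Consuming a `ReadableStream` with `predict_batch(...)` might look like the sketch below. The model is a stand-in stub that echoes frames (a real one would come from a zoo class); the frames are plain strings standing in for e.g. WebCodecs output:

```javascript
// Convert a web ReadableStream into an async iterable of its chunks.
async function* streamFrames(stream) {
  const reader = stream.getReader();
  try {
    for (;;) {
      const { done, value } = await reader.read();
      if (done) return;
      yield value;
    }
  } finally {
    reader.releaseLock();
  }
}

// Stand-in stub; shows only that predict_batch can take a ReadableStream
// (Node >= 18 or a browser) or any async iterable.
const stubModel = {
  async *predict_batch(source) {
    const frames =
      typeof source.getReader === 'function' ? streamFrames(source) : source;
    for await (const frame of frames) {
      yield { frame, label: `result for ${frame}` };
    }
  },
};

// Strings stand in for real frames here.
const frameStream = new ReadableStream({
  start(controller) {
    ['f0', 'f1', 'f2'].forEach((f) => controller.enqueue(f));
    controller.close();
  },
});
```

Results are then consumed with `for await`, exactly as with an async-iterable source.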
Bug Fixes
- Proper Overlay Color for Age Classification
  - Overlay colors for per-person text in age classification models are now correctly set.
- Postprocessing Improvements
  - Various fixes and optimizations have been implemented in the postprocessing code.
Version 0.1.0 (10/4/2024)
New Features and Modifications
- Optimized Cloud inference connection handling: resources are now used only when needed and released properly.
- New default color generation logic creates a more visually appealing set of colors for different types of models when viewing inference results.
Version 0.0.9 (9/17/2024)
New Features and Modifications
- Optimized mask drawing in `displayResultToCanvas()` for results from Detection models with masks per detected object.
Bug Fixes
- Postprocessing for Detection models that return masks now handles `inputPadMethod` options properly.