Getting Started with DeGirumJS
Welcome to DeGirumJS, a JavaScript AI Inference SDK! This guide will help you get started with integrating AI capabilities into your web application.
Table of Contents
- Introduction & Core Concepts
- Setup
- A 5-Step Guide to Your First Prediction
- Step 1: Connect to an Inference Provider
- Step 2: Load a Model
- Step 3: Run Inference
- Step 4: Understand the Output
- Step 5: Visualize the Results
- Putting It All Together: A Complete Example
- Cleaning Up
- Where to Go Next
- API Reference
Introduction
DeGirumJS allows you to connect to AI Server or Cloud Zoo instances, load AI models, and perform inference on various data types. This guide provides a step-by-step tutorial on how to get started.
Core Concepts
There are three main objects that you will work with in DeGirumJS:
- dg_sdk: The main entry point to the library.
- zoo: Your connection to a model repository (either local or cloud). You use this to find and load models.
- model: The loaded model instance that you use to run predictions.
Setup
Import the SDK
To start using the SDK, include the following script tag in your HTML file:
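<script src="https://assets.degirum.com/degirumjs/0.1.4/degirum-js.min.obf.js"></script>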
A 5-Step Guide to Your First Prediction
DeGirumJS allows you to load models from an AI server or Cloud Zoo and perform inference on the AI Server hardware or in the cloud. For AI server inference (local or LAN server), you need to have an AI Server running with the HTTP protocol enabled: see the AI Server documentation.
For running cloud inference, or to load a model from the cloud, you need to specify your cloud token.
Step 1: Connect to an Inference Provider
Connect to an AI Server
Instantiate the dg_sdk class and connect to the AI server using the connect method. Provide the server's IP address and port.
let dg = new dg_sdk();
const AISERVER_IP = 'localhost:8779';
let zoo = await dg.connect(AISERVER_IP);
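If you also want to load models from a cloud zoo while running inference on the AI Server, pass the cloud zoo URL and your cloud token as well (see Connection Modes for details):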
let dg = new dg_sdk();
const AISERVER_IP = 'localhost:8779';
const ZOO_URL = 'https://hub.degirum.com/degirum/public';
const secretToken = prompt('Enter secret token:');
let zoo = await dg.connect(AISERVER_IP, ZOO_URL, secretToken);
Connect to the Cloud
For running Cloud inference, specify 'cloud' as the first argument, and include the URL of the cloud zoo and your token:
let dg = new dg_sdk();
const ZOO_URL = 'https://hub.degirum.com/degirum/public';
const secretToken = prompt('Enter secret token:');
let zoo = await dg.connect('cloud', ZOO_URL, secretToken);
Step 2: Load a Model
Now, you can load a model using the zoo class instance's loadModel method:
const MODEL_NAME = 'yolo_v5s_coco--512x512_quant_n2x_cpu_1';
const modelOptions = {
overlayShowProbabilities: true
// Any other custom options for your Model (see Model Options documentation)
};
let model = await zoo.loadModel(MODEL_NAME, modelOptions);
You can use zoo.listModels() to discover the models available for inference on the selected inference provider.
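For example, here is a minimal sketch of listing models before choosing one (the exact shape of the returned value may vary; see the API Reference):
// Discover models available on the connected inference provider
const availableModels = await zoo.listModels();
console.log('Available models:', availableModels);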
Step 3: Run Inference
Use the predict method to perform inference on an input image. The input for predict is flexible and supports a variety of types, including Blob, File, base64 string, HTMLImageElement, HTMLVideoElement, HTMLCanvasElement, ArrayBuffer, TypedArray, ImageBitmap, and a URL to an image (the full list of supported input types can be found in the Working with Input and Output Data documentation).
const image = ''; // Some input image
const result = await model.predict(image);
console.log('Result:', result);
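For instance, since a URL is among the supported input types, you can pass an image URL directly (the URL below is only a placeholder):
// Run inference on an image referenced by URL (placeholder URL)
const urlResult = await model.predict('https://example.com/sample.jpg');
console.log('Result from URL:', urlResult);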
Step 4: Understand the Output
The result object contains the results from the model and the original imageFrame. For more details, see the Result Object Structure documentation.
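For a quick look during development, you can log the pieces you will typically use; the exact fields inside the model results depend on the model type, so treat this as a sketch and refer to the Result Object Structure documentation:
// Inspect the result: model predictions plus the original input frame
console.log('Full result object:', result);
console.log('Original input frame:', result.imageFrame);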
Step 5: Visualize the Results
You can display prediction results to an HTMLCanvasElement or OffscreenCanvas:
// Assuming your Canvas Element has the id 'outputCanvas'
let canvas = document.getElementById('outputCanvas');
model.displayResultToCanvas(result, canvas);
Putting It All Together: A Complete Example
To get started with a simple example page, we need the following HTML elements on the page:
- The script tag to import DeGirumJS.
- A canvas element to display inference results.
- An input element to browse and upload images.
Here is an HTML page that will perform inference on uploaded images and display the results:
<script src="https://assets.degirum.com/degirumjs/0.1.4/degirum-js.min.obf.js"></script>
<canvas id="outputCanvas" width="400" height="400"></canvas>
<input type="file" id="imageInput" accept="image/*">
<script type="module">
// Grab the outputCanvas and imageInput elements by ID:
const canvas = document.getElementById('outputCanvas');
const input = document.getElementById('imageInput');
// Initialize the SDK
let dg = new dg_sdk();
// Query the user for the cloud token:
const secretToken = prompt('Enter your cloud token:');
// Inference settings
const MODEL_NAME = 'yolo_v5s_coco--512x512_quant_n2x_cpu_1';
const ZOO_URL = 'https://hub.degirum.com/degirum/public';
const AISERVER_IP = 'localhost:8779';
// Connect to the AI Server, loading the model from the cloud zoo
let zoo = await dg.connect(AISERVER_IP, ZOO_URL, secretToken);
// Model options
const modelOptions = {
overlayShowProbabilities: true
};
// Load the model with the options
let model = await zoo.loadModel(MODEL_NAME, modelOptions);
// Function to run inference on uploaded files
input.onchange = async function () {
let file = input.files[0];
// Predict
let result = await model.predict(file);
console.log('Result from file:', result);
// Display result to canvas
model.displayResultToCanvas(result, canvas);
};
</script>
Cleaning Up
To clean up a model instance and release its resources when the model is no longer needed, use the cleanup method:
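// Release resources held by the model (await is safe even if cleanup is synchronous)
await model.cleanup();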
Where to Go Next
Ready to keep going? Check out the next sections to learn about what else DeGirumJS has to offer!
- Model Parameters
- Connection Modes
- Real-Time Batch Inference
- Performance & Timing Statistics
- Customizing Pre-processing and Visual Overlays
- Working with Input and Output Data
- Device Management for Inference
- Result Object Structure + Examples
- WebCodecs Example
- Release Notes
API Reference
For detailed information on the SDK's classes, methods, and properties, refer to the API Reference.