Local AI

HomeGenie Server 2.0 brings true Artificial Intelligence directly to the Edge. By integrating both Generative AI (for understanding language) and Computer Vision (for understanding the physical world), it transforms your server into a cognitive hub that can see, hear, and reason—all while running 100% offline.


1. Generative AI (Lailama)

With the integrated Lailama program (Package ID: lailama, Program #940), you can run state-of-the-art Large Language Models (LLMs) like Llama 3, Phi-3, or DeepSeek.

Why Local?

Everything runs on your own hardware: your prompts and data never leave the local network, there are no cloud subscriptions or API keys, and the assistant keeps working even without an internet connection.

AI Chat Widget

The chat interface running a local LLM via HomeGenie.



The "AI as a Sensor" Concept

In HomeGenie, Lailama is a module. Just like a temperature sensor emits a value when heat changes, Lailama emits Tokens (text fragments) as they are generated. This architecture allows for powerful Post-Processing Pipelines.

Example: Listening to the AI (C#)

// Variable to accumulate the sentence
string buffer = "";

// Subscribe to the AI output stream
When.ModuleParameterChanged((module, parameter) => {
    
    if (module.Address == "940" && parameter.Name == "LLM.TokenStream") {
        string token = parameter.Value;
        buffer += token;

        // Simple example: Speak the sentence when complete
        if (token.Contains(".") || token.Contains("\n")) {
            Program.Say(buffer); 
            buffer = "";
        }
    }
    return true; 
});
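Because tokens arrive as discrete events, downstream logic can react while the model is still generating: in this example, speech begins as soon as a sentence is complete instead of waiting for the full response.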

Open Architecture

Lailama is not a "black box" plugin. It is a standard HomeGenie automation program paired with a custom widget, so both its logic and its interface can be inspected and customized like any other program.


2. Computer Vision (YoloSharp)

HomeGenie Server integrates YoloSharp, a high-performance wrapper for YOLO (You Only Look Once) models via ONNX Runtime. This allows your system to "see" and understand video feeds from your cameras in real-time.

Beyond Motion Detection

Traditional motion detection is dumb; it triggers on moving trees or shadows. HomeGenie's AI Vision understands what is moving. By enabling the specific ML features on your camera module, you can activate:

Object Detection

Identifying people and objects in real-time.



Vision Automation

The Vision system emits the analysis results via the Sensor.ObjectDetect.Subject parameter. You can consume this data in two ways: by parsing the JSON string for basic logic, or by accessing the strongly-typed YoloData objects for advanced math and tracking.
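To make the examples easier to follow, here is the approximate shape of a single detection, inferred from the fields the examples access (Label, Confidence, Bounds). This is an illustrative sketch only, not the actual YoloData.Detection declaration from YoloSharp:

// Illustrative sketch of a detection record, inferred from the fields
// used in the examples below; check the YoloData source for the real type.
public class Detection
{
    public string Label;                     // class name, e.g. "person"
    public double Confidence;                // detection confidence, 0.0 to 1.0
    public System.Drawing.Rectangle Bounds;  // bounding box: Location (X, Y), Width, Height
}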

Example 1: Smart Security (JSON)

Trigger an alert only when a person is detected with high confidence.

When.ModuleParameterChanged((module, parameter) => {
    if (module.Is("IpCamera 1") && parameter.Is("Sensor.ObjectDetect.Subject")) {
        
        // Parse the JSON string from parameter.Value
        // "dynamic" allows accessing properties like .Label without creating a class
        dynamic subjects = JsonConvert.DeserializeObject(parameter.Value);
        
        if (subjects == null) return true;

        foreach (var subject in subjects) {
            // Check Label and Confidence
            if ((string)subject.Label == "person" && (double)subject.Confidence > 0.7) {
                Modules.InGroup("Garden Lights").On();
                Program.Notify("Person detected in the garden!");
                break;
            }
        }
    }
    return true; 
});
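The dynamic approach re-parses the JSON payload on every event and gives up compile-time type checking: fine for occasional triggers, but for per-frame processing the strongly-typed access shown in Example 2 avoids the deserialization cost.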

Example 2: Active Tracking (Advanced C#)

Calculate the relative position of an object to drive a motor or a robotic eye.

const double ImageWidth = 640;

When.ModuleParameterChanged((module, parameter) => {
    if (module.Is("IpCamera 1") && parameter.Is("Sensor.ObjectDetect.Subject")) {
        
        // Access the raw List<YoloData.Detection> object via GetData() for high performance
        var detections = parameter.GetData() as List<YoloData.Detection>;
        
        if (detections == null) return true;

        // Find the first "teddy bear" in the frame
        var target = detections.FirstOrDefault(d => d.Label == "teddy bear");

        if (target != null) {
            // Calculate the Center X coordinate based on object bounds
            var cx = target.Bounds.Location.X + (target.Bounds.Width / 2);
            
            // Calculate relative position (0.0 to 1.0)
            // 0.5 = Center, 0.0 = Far Right, 1.0 = Far Left
            double position = 1 - (cx / ImageWidth);
            
            // Drive a servo motor to follow the object
            Modules.WithName("Servo Motor Horizontal").Level = position * 100;
        }
    }
    return true; 
});
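To check the math: with ImageWidth = 640 and a bounding box centered at cx = 480 (the right side of the frame), position = 1 - 480/640 = 0.25, so the servo Level is set to 25, consistent with the 0.0 = Far Right convention in the comments.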

Hardware Requirements

Running AI models locally is computationally demanding: inference speed, and the size of the models you can load at all, depend directly on the CPU, RAM, and (where available) GPU of the host machine.

Tip: For Single Board Computers like the Raspberry Pi, we recommend using "Quantized" LLM models (Q4_K_M) and smaller YOLO models (Nano or Small versions) to balance output quality and speed.
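As a rough sizing estimate (not a measured figure): a 4-bit quantization stores roughly half a byte per weight, so an 8-billion-parameter model such as Llama 3 8B at Q4_K_M occupies on the order of 5 GB of disk and memory before context buffers, which is why smaller or more heavily quantized variants are the practical choice on 4 GB and 8 GB boards.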