AI Chat Widget
The chat interface running a local LLM via HomeGenie.

HomeGenie Server 2.0 brings true Artificial Intelligence directly to the Edge. By integrating both Generative AI (for understanding language) and Computer Vision (for understanding the physical world), it transforms your server into a cognitive hub that can see, hear, and reason—all while running 100% offline.
With the integrated Lailama program (Package ID: lailama, Program #940), you can run state-of-the-art Large Language Models (LLMs) like Llama 3, Phi-3, or DeepSeek.
In HomeGenie, Lailama is a module. Just like a temperature sensor emits a value when heat changes, Lailama emits Tokens (text fragments) as they are generated. This architecture allows for powerful Post-Processing Pipelines.
Example: Listening to the AI (C#)
// Variable to accumulate the sentence
string buffer = "";
// Subscribe to the AI output stream
When.ModuleParameterChanged((module, parameter) => {
    if (module.Address == "940" && parameter.Name == "LLM.TokenStream") {
        string token = parameter.Value;
        buffer += token;
        // Simple example: Speak the sentence when complete
        if (token.Contains(".") || token.Contains("\n")) {
            Program.Say(buffer);
            buffer = "";
        }
    }
    return true;
});

Lailama is not a "black box" plugin. It is a standard HomeGenie automation program paired with a custom widget. You can open the program (Program #940) to modify the C# code. Here you can tweak the System Prompt to change the AI's personality, adjust inference parameters (like temperature), or add new API handlers.
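For illustration only, the kind of settings to look for near the top of the program's source might resemble the following; the variable names here (systemPrompt, temperature, maxTokens) are hypothetical and may not match the actual code of Program #940.

// Hypothetical names, for illustration only; check the actual source of Program #940.
string systemPrompt = "You are the voice of this house. Keep answers short and practical.";
double temperature = 0.7; // lower = more deterministic replies, higher = more creative
int maxTokens = 256;      // caps the length of each generated reply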
HomeGenie Server integrates YoloSharp, a high-performance wrapper for YOLO (You Only Look Once) models via ONNX Runtime. This allows your system to "see" and understand video feeds from your cameras in real time.
Traditional motion detection is dumb; it triggers on moving trees or shadows. HomeGenie's AI Vision understands what is moving. By enabling specific ML features on your camera module, you can activate:
- Object Detection (ML.ObjectDetection): Identify and label objects (Person, Car, Dog, Handbag, etc.).
- Pose Estimation (ML.PoseEstimation): Detect the skeleton of a human body to understand posture.
- Instance Segmentation (ML.InstanceSegmentation): Precisely identify the pixel contours of an object.

The Vision system emits the analysis results via the Sensor.ObjectDetect.Subject parameter. You can consume this data in two ways: simple JSON parsing for basic logic, or accessing the strongly-typed YoloData objects for advanced math and tracking.
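If you would rather avoid dynamic when going the JSON route, the same payload can also be parsed with Newtonsoft.Json's LINQ-to-JSON types. This is only a sketch: the field names (Label, Confidence) are taken from the examples below, not from a formal schema.

// Inside a handler for Sensor.ObjectDetect.Subject:
// parse the JSON payload with Newtonsoft.Json's LINQ-to-JSON types (JArray)
var subjects = Newtonsoft.Json.Linq.JArray.Parse(parameter.Value);
foreach (var subject in subjects) {
    string label = (string)subject["Label"];            // e.g. "person", "car", "dog"
    double confidence = (double)subject["Confidence"];  // detection confidence, 0.0 to 1.0
    Program.Notify("Detected: " + label + " (" + confidence + ")");
}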
Trigger an alarm only if a person is detected with high confidence.
When.ModuleParameterChanged((module, parameter) => {
    if (module.Is("IpCamera 1") && parameter.Is("Sensor.ObjectDetect.Subject")) {
        // Parse the JSON string from parameter.Value
        // "dynamic" allows accessing properties like .Label without creating a class
        dynamic subjects = JsonConvert.DeserializeObject(parameter.Value);
        if (subjects == null) return true;
        foreach (var subject in subjects) {
            // Check Label and Confidence
            if ((string)subject.Label == "person" && (double)subject.Confidence > 0.7) {
                Modules.InGroup("Garden Lights").On();
                Program.Notify("Person detected in the garden!");
                break;
            }
        }
    }
    return true;
});

Calculate the relative position of an object to drive a motor or a robotic eye.
const double ImageWidth = 640;
When.ModuleParameterChanged((module, parameter) => {
    if (module.Is("IpCamera 1") && parameter.Is("Sensor.ObjectDetect.Subject")) {
        // Access the raw List<YoloData.Detection> object via GetData() for high performance
        var detections = parameter.GetData() as List<YoloData.Detection>;
        if (detections == null) return true;
        // Find the first "teddy bear" in the frame
        var target = detections.FirstOrDefault(d => d.Label == "teddy bear");
        if (target != null) {
            // Calculate the Center X coordinate based on object bounds
            var cx = target.Bounds.Location.X + (target.Bounds.Width / 2);
            // Calculate relative position (0.0 to 1.0)
            // 0.5 = Center, 0.0 = Far Right, 1.0 = Far Left
            double position = 1 - (cx / ImageWidth);
            // Drive a servo motor to follow the object
            Modules.WithName("Servo Motor Horizontal").Level = position * 100;
        }
    }
    return true;
});

Running local AI requires computational resources.