Scenes

With HomeGenie, you can create common automation tasks and scenes effortlessly. No need to learn or use any programming language to customize your smart system.

The Visual Program editor lets you build scenes and sophisticated tasks, including those powered by AI, through an intuitive interface. Simply interact with widgets or drag and drop commands from the Blocks toolbox located on the left side of the screen.

The logic and commands of your scene are visually represented as colored blocks, allowing you to easily modify and rearrange them with a touch – no coding required.

The visual editor automatically generates the code that powers your scenes. You can view this generated code by clicking the Generated Code tab. This is a great starting point for learning the Automation Programs API and building more complex scenes or advanced programs using languages like C#, JavaScript, and Python.

Visual programming

Live recording a scene

Using the visual editor toolbar, you can enable the widgets preview (preview), which lets you display and interact with the widgets of devices and other things involved in the scene's script.

It's also possible to manually select the scene's actors using the "Select modules" (dashboard_customize) button, even if they are not yet employed in the scene's script. This is especially useful when enabling the Capture commands (fiber_manual_record) functionality to record the script in real time by interacting with widgets.

When the Capture commands functionality is enabled, any command executed on a widget in the preview area is automatically added to the currently selected program block, with a user-configurable pause inserted between each command.

Using the Blocks toolbox

On the left side of the visual programming editor there is a toolbox containing all kinds of blocks, grouped by category.

Context Functions

Visual programs aren't limited to simple inputs like numbers or strings; they can also process structured or binary data (e.g., images, videos). Context Functions are used for this purpose. These are typically small code snippets designed to calculate, transform, or parse complex data, returning a simple type usable by visual blocks.
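
As a minimal sketch (not taken from the official API reference), a Context Function could reduce a raw parameter value to a number that visual blocks can consume. The function name and the fallback value below are hypothetical, and the sketch assumes the parameter exposes its raw value as a string through Value:

// Hypothetical Context Function: parse a sensor reading stored as text
double ReadTemperature(ModuleParameter parameter)
{
    double celsius;
    // Parse the raw string value; fall back to 0 if it is not numeric
    if (!double.TryParse(parameter.Value, out celsius))
    {
        celsius = 0; // hypothetical fallback value
    }
    // Return a simple type (double) directly usable by visual blocks
    return celsius;
}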

AI / Machine Learning example

Drawing a simple analogy from AI, think of each Visual Program as acting like an 'Artificial Neuron'. It receives inputs and uses its internal logic to produce an output or action. Context Functions are key here: they represent the specific calculations or data transformations happening inside the 'neuron' (program), often processing complex data to yield simpler results for the program's logic.

Consider the example where a visual program controls a robotic eye to follow an object detected by an Object Detection program (another 'neuron'). The eye-control program needs to process the complex detection data. It uses the following Context Function:

// Context Function: internal processing within the eye-control 'neuron'
const double ImageWidth = 640;
double FollowX(ModuleParameter parameter)
{
    // Input: complex data from the object detection 'neuron'
    var teddyBear = parameter.GetData() as YoloData.Detection;
    // Guard against frames that carry no detection data
    if (teddyBear == null) return 0.5; // default to center
    // Internal processing: calculate the center X from the object bounds
    var cx = teddyBear.Bounds.Location.X + (teddyBear.Bounds.Width / 2);
    // Return a simple value for the main program logic
    return 1 - (cx / ImageWidth); // Output: relative X position (0.0 to 1.0)
}

This FollowX function takes complex input (from the object detection 'neuron'), performs the internal calculation, and returns a simple value. The visual program logic (the rest of the eye-control 'neuron') then uses this simple value to move the robotic eye (its output action). You can see the results in the video below.
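
As a purely illustrative sketch, the consuming logic could map FollowX's 0.0 to 1.0 output onto a servo pan angle. The angle limits and the SetServoAngle helper are hypothetical placeholders for whatever actuator command the scene actually issues:

// Hypothetical glue code: convert the relative X position returned
// by FollowX into a pan angle for the eye servo.
const double MinAngle = -45; // hypothetical leftmost mechanical limit
const double MaxAngle = 45;  // hypothetical rightmost mechanical limit

void PanEyeTo(double relativeX)
{
    // Linear interpolation between the two mechanical limits
    var angle = MinAngle + relativeX * (MaxAngle - MinAngle);
    SetServoAngle(angle); // hypothetical actuator command
}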

As demonstrated in the video, input signals from the object detection program directly cause the eye to track the subject (teddy bear).

Additionally, the video shows secondary animations: the eye blinks and the eyelid moves.

These animations are handled by a distinct Visual Program executing in parallel. This program employs four custom blocks, with each block implementing a different eyelid motion. A loop repeatedly calls these blocks to simulate the seemingly unconscious blinking patterns of a human eye.
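
The following outline is a hypothetical reconstruction of that structure, not the actual program: the four method names stand in for the four custom blocks, and the randomized pauses are an assumption made to suggest involuntary blinking.

// Hypothetical outline of the parallel eyelid-animation program.
// Blink, HalfClose, SlowClose and Flutter stand in for the four custom blocks.
var random = new Random();
Action[] eyelidMotions = { Blink, HalfClose, SlowClose, Flutter };
while (true)
{
    // Call one of the four eyelid motions at random
    eyelidMotions[random.Next(eyelidMotions.Length)]();
    // Pause for a randomized interval so the blinking feels unconscious
    Thread.Sleep(random.Next(2000, 6000));
}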

The video below shows how the eye animation was implemented with this separate program.

This program structure is inherently scalable. Its power comes from building sophisticated behaviors by interconnecting many simple visual programs. Imagine easily adding more specialized units to orchestrate increasingly complex tasks – controlling robotic movements and reactions, managing intricate home automation sequences, or driving advanced AI responses.
