
With HomeGenie, you can create common automation tasks and scenes effortlessly. No need to learn or use any programming language to customize your smart system.
The Visual Program editor lets you build scenes and sophisticated tasks, including those powered by AI, through an intuitive interface. Simply interact with widgets or drag and drop commands from the Blocks toolbox located on the left side of the screen.
The logic and commands of your scene are visually represented as colored blocks, allowing you to easily modify and rearrange them with a touch – no coding required.
The visual editor automatically generates the code that powers your scenes. You can view this generated code by clicking the Generated Code tab. This is a great starting point for learning the Automation Programs API and building more complex scenes or advanced programs using languages like C#, JavaScript, and Python.
Using the visual editor toolbar you can enable the widgets preview, where it is possible to display and interact with the widgets of devices and other things involved in the scene's script.
It's also possible to manually select the actors of the scene using the "Select modules" button, even if they are not yet employed in the scene script. This is especially useful when enabling the "Capture commands" functionality, which records the script in real time as you interact with widgets: any command executed on a widget in the preview area is automatically added to the currently selected program block, with a user-configurable pause inserted between each command.
On the left side of the visual programming editor there is a toolbar containing all kinds of blocks, grouped by category.
This category contains three program-related blocks: Setup, Main, and Function.
This program block allows you to configure the start conditions for the Main block and define which input signals will be processed. A practical example is shown in the image, where the Main block is activated by the detection of motion.
You'll notice the visual code is designed to be intuitive, reading almost like a natural language sentence: "When the 'Multi Sensor' 'MotionDetect' value changes, execute the 'Main' program."
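For reference, a hand-written C# equivalent of this rule might look like the sketch below. This is not the literal generator output: it assumes HomeGenie's C# program context (where helpers such as When and Modules are in scope) and reuses the "Multi Sensor" module name from the example.

When.ModuleParameterChanged((module, parameter) => {
    // React only when the Multi Sensor's MotionDetect value changes
    if (module.Instance.Name == "Multi Sensor"
        && parameter.Name == "Sensor.MotionDetect")
    {
        // ...the Main block's logic would run here...
    }
    return true; // let the event propagate to other programs
});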
The Main block can trigger automatically in response to an input event, such as the motion detection shown previously, or it can be executed manually using either the "action button" widget or the "Play" button in the editor toolbar.
It can contain various elements, including commands, loops, conditional blocks, and calls to other Function blocks.
The Main block in the example above blinks the Porch Light. Therefore, if combined with the Setup block shown previously (which runs Main when motion is detected), the Porch Light will blink whenever motion occurs.
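Written by hand in C#, an equivalent blink sequence might look like this minimal sketch (same assumptions as above; the "Porch Light" module name comes from the example and the half-second pause is an arbitrary choice):

var porchLight = Modules.WithName("Porch Light");
porchLight.On();  // turn the light on
Pause(0.5);       // wait half a second
porchLight.Off(); // and turn it off again
Pause(0.5);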
Custom function blocks provide an effective way to structure program logic into modular, named units that can be called as needed.
Function blocks may be used within Main blocks or other Function blocks; they cannot, however, be placed inside Setup blocks, which are restricted to using only Context Functions.
The example below demonstrates this by encapsulating the Porch Light blinking logic within a Function block named Blink, which is subsequently called twice by the Main block.
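In hand-written C#, the same structure could be expressed with a local function, as in this sketch (same assumptions as the earlier examples):

// Function block "Blink": one on/off cycle of the Porch Light
void Blink()
{
    Modules.WithName("Porch Light").On();
    Pause(0.5);
    Modules.WithName("Porch Light").Off();
    Pause(0.5);
}

// Main block: call the function twice
Blink();
Blink();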
Note: while multiple Setup and Main blocks can be added, the generator merges blocks of the same type into single Setup and Main functions based on their visual order.
An exception is the When event handler, which is always placed at the beginning of the generated Setup function, regardless of its visual placement.
This category contains blocks for controlling the program's logic flow. For example, the blocks below turn the lights on if the last "motion detected" value is greater than 0, and turn them off if the value is 0 (meaning the sensor is idle).
This category provides blocks that return values, usable in logical expressions or as input to Context Functions. This is illustrated in the previous example, which shows the Parameter Value block ("Sensor.MotionDetect") and the Number Input block ("0") – both are Value type blocks.
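Expressed in hand-written C#, that logic might look like the sketch below, where the motion value and the literal 0 play the role of the Value blocks. The Parameter accessor used to read the sensor value and the group name are assumptions; the Generated Code tab shows the exact form HomeGenie produces.

// Assumed accessor for reading the sensor's last reported value
var motion = Modules.WithName("Multi Sensor").Parameter("Sensor.MotionDetect").DecimalValue;
if (motion > 0)
{
    Modules.InGroup("Living Room").On();  // motion detected: lights on
}
else
{
    Modules.InGroup("Living Room").Off(); // sensor idle: lights off
}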
This category contains blocks used to repeat a specific sequence of commands. These loops allow you to execute actions multiple times, either a fixed number of times or while a certain condition remains true.
The example below uses a loop to call the Blink function repeatedly while motion is detected. This results in the Porch Light blinking continuously during motion.
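In C#, this looping behavior could be sketched as follows, reusing the Blink function and the hedged parameter accessor from the earlier examples:

// Keep blinking for as long as the sensor still reports motion
while (Modules.WithName("Multi Sensor").Parameter("Sensor.MotionDetect").DecimalValue > 0)
{
    Blink();
}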
This category provides blocks to control program execution and produce output. Use these to pause (Pause), run or wait for programs, generate audio or speech (Play, Say), or call functions.
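Their C# counterparts might be used as in this short sketch, assuming the Program helper exposes Play and Say as the block names suggest; the media URL and the spoken sentence are placeholders:

Pause(2.5);                                   // Pause block: wait 2.5 seconds
Program.Play("http://example.com/chime.mp3"); // Play block: play an audio file (placeholder URL)
Program.Say("Motion detected on the porch");  // Say block: text-to-speech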
This category contains blocks specifically designed to control devices, modules, or interact with external APIs. Use these to select targets (Group/Module Select), change states (On, Off, Toggle), set values like level or color (Level, Color), manage specific devices (e.g., thermostats), or send other custom commands.
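A hedged C# sketch of a few of these command blocks follows. The module and group names are placeholders, and the Command(...).Set(...) form used for level and color is an assumption based on HomeGenie's generic command API; the Generated Code tab shows the exact calls.

Modules.InGroup("Living Room").Toggle();                                   // Toggle block
Modules.WithName("Dimmer 1").Command("Control.Level").Set("50");           // Level block (assumed call)
Modules.WithName("RGB Strip").Command("Control.ColorHsb").Set("0.3,1,1");  // Color block (assumed call)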
Visual programs aren't limited to simple inputs like numbers or strings; they can also process structured or binary data (e.g., images, videos). Context Functions are used for this purpose. These are typically small code snippets designed to calculate, transform, or parse complex data, returning a simple type usable by visual blocks.
Drawing a simple analogy from AI, think of each Visual Program as acting like an 'Artificial Neuron'. It receives inputs and uses its internal logic to produce an output or action. Context Functions are key here: they represent the specific calculations or data transformations happening inside the 'neuron' (program), often processing complex data to yield simpler results for the program's logic.
Consider the example where a visual program controls a robotic eye to follow an object detected by an Object Detection program (another 'neuron'). The eye-control program needs to process the complex detection data, and uses the following Context Function:
// Context Function: Internal processing within the eye-control 'neuron'
const double ImageWidth = 640;
double FollowX(ModuleParameter parameter)
{
    // Input: Complex data from the object detection 'neuron'
    var teddyBear = parameter.GetData() as YoloData.Detection;
    // Guard: if the payload is not a detection, keep the eye centered
    if (teddyBear == null) return 0.5;
    // Internal Processing: Calculate center X based on object bounds
    var cx = teddyBear.Bounds.Location.X + (teddyBear.Bounds.Width / 2);
    // Return simple value for the main program logic
    return 1 - (cx / ImageWidth); // Output: Simple relative X position (0.0 to 1.0)
}
This FollowX function takes complex input (from the camera 'neuron'), performs the internal calculation, and returns a simple value. The visual program logic (the rest of the eye-control 'neuron') then uses this simple value to move the robotic eye (its output action). You can see the results in the video below.
As demonstrated in the video, input signals from the object detection program directly cause the eye to track the subject (teddy bear).
Additionally, the video shows secondary animations: the eye blinks and the eyelid moves.
These animations are handled by a distinct Visual Program executing in parallel. This program employs four custom blocks, with each block implementing a different eyelid motion. A loop repeatedly calls these blocks to simulate the seemingly unconscious blinking patterns of a human eye.
The video below shows how the eye animation was implemented with this separate program.
This program structure is inherently scalable. Its power comes from building sophisticated behaviors by interconnecting many simple visual programs. Imagine easily adding more specialized units to orchestrate increasingly complex tasks – controlling robotic movements and reactions, managing intricate home automation sequences, or driving advanced AI responses.