Agents can also run close to where data is generated.
Using Small Language Models (SLMs) and local inference, the platform can support deployments on edge devices or in constrained environments, enabling AI capabilities even when connectivity to large cloud models is limited or unavailable.
For highly secure environments, agents can also operate using offline LLMs without external connectivity.
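The routing decision described above can be sketched as follows. This is a minimal illustration, not the platform's actual implementation: the endpoint names, URLs, and the `select_endpoint` helper are all hypothetical, and the local URL assumes an SLM served on the device itself.

```python
from dataclasses import dataclass

@dataclass
class ModelEndpoint:
    name: str
    url: str       # where inference requests are sent
    local: bool    # True if the model runs on-device

# Hypothetical endpoints for illustration only.
CLOUD_LLM = ModelEndpoint("cloud-llm", "https://api.example.com/v1/chat", local=False)
LOCAL_SLM = ModelEndpoint("local-slm", "http://localhost:11434/v1/chat", local=True)

def select_endpoint(cloud_reachable: bool, offline_only: bool) -> ModelEndpoint:
    """Route inference to the local SLM when the cloud model is
    unreachable, or when the deployment is pinned offline for security."""
    if offline_only or not cloud_reachable:
        return LOCAL_SLM
    return CLOUD_LLM
```

In this sketch, a secure air-gapped deployment would set `offline_only=True` so every request stays on-device, while an edge deployment with intermittent connectivity falls back to the local SLM only when the cloud endpoint cannot be reached.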