We are building Beddel, an open protocol and secure runtime designed to bring governance and predictability to AI Agents. When we started building agents, we realized that opaque "black box" execution loops are a nightmare for security and compliance, so we wanted a way to define agents that is transparent, auditable, and safe by default. Beddel treats agent definitions as declarative YAML schemas rather than just code, allowing our runtime to wrap every interaction in a security layer that enforces rules before execution ever happens.

In this alpha release, we are introducing a declarative protocol for portable agent definitions, built-in compliance engines (GDPR/LGPD) that automatically check PII handling and consent flows, real-time threat detection, and immutable audit trails for forensic logging.

We believe the future of AI Agents requires a standard, safe protocol that enterprises can actually trust, and we are open-sourcing the core runtime to start that conversation. The repo includes the parser, the compliance engines, and the isolated runtime environment. We'd love your feedback on the schema design and our approach to compliance enforcement: https://github.com/botanarede/beddel-alpha.
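To make the idea concrete, here is a rough sketch of what a declarative definition in this style could look like. The field names below are purely illustrative, not the actual Beddel schema; see the repo for the real format:

```yaml
# Hypothetical agent definition -- field names are illustrative only.
agent:
  name: support-triage
  description: Routes inbound tickets and redacts PII before handoff.
  compliance:
    regimes: [gdpr, lgpd]
    pii_handling: redact              # checked by the runtime before execution
  permissions:
    tools: [ticket.read, ticket.route]   # explicit allowlist, nothing else
  audit:
    trail: immutable                  # every interaction logged for forensics
```

Because the whole agent is configuration like this, the runtime can statically inspect compliance and permission rules before any interaction runs.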
Finally, we designed Beddel to be a safe substrate for generative agent creation. Many current frameworks rely on LLMs generating executable code (such as Python scripts) to spawn new behaviors, which creates a serious sandboxing and security problem. Beddel avoids this through its strict schema architecture: because our agents are defined purely by YAML configuration rather than arbitrary code, an AI can safely act as an "Architect," composing and deploying new sub-agents on the fly without ever introducing arbitrary code execution risks. This enables self-organizing agent swarms where the logic is generated by GenAI, but the security boundaries remain rigidly enforced by our deterministic runtime.
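The "Architect" pattern above can be sketched in a few lines: the runtime accepts a generated definition only if it is pure configuration whose actions come from a fixed allowlist. This is a minimal illustration of the idea, not Beddel's actual implementation; the field names (`name`, `steps`, `action`) and the allowlist are assumptions made up for the example.

```python
# Hypothetical sketch: vetting an LLM-generated agent definition before
# deployment. The schema fields and allowlist below are illustrative only.

ALLOWED_ACTIONS = {"llm.call", "http.get", "memory.read", "memory.write"}

def validate_agent(defn: dict) -> bool:
    """Accept a definition only if it is pure declarative configuration:
    a string name plus steps drawn from a fixed allowlist of actions."""
    if not isinstance(defn.get("name"), str):
        return False
    steps = defn.get("steps")
    if not isinstance(steps, list) or not steps:
        return False
    for step in steps:
        # Reject anything that is not a declarative step, and any action
        # outside the allowlist -- there is no path to arbitrary code.
        if not isinstance(step, dict) or step.get("action") not in ALLOWED_ACTIONS:
            return False
    return True

# A definition the "Architect" model might emit (already parsed from YAML):
generated = {
    "name": "summarizer",
    "steps": [
        {"action": "http.get", "url": "https://example.com/doc"},
        {"action": "llm.call", "prompt": "Summarize: {input}"},
    ],
}

# A definition that tries to smuggle in code execution is rejected:
malicious = {"name": "evil", "steps": [{"action": "os.system", "cmd": "rm -rf /"}]}

print(validate_agent(generated))   # True
print(validate_agent(malicious))   # False
```

The key point is that validation happens on data, not code: the runtime never evaluates anything the model produced, it only checks a configuration against rules it controls.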