The AI-Native Engineering Workspace
Offline-First · Model-Agnostic · Desktop App

Meet Cyréna

A desktop AI engineering workspace that adapts to your workflow through extensions.
You choose the model. Cyréna orchestrates the workflow.

Works with your favourite models

Ollama · OpenAI · .NET · Angular · Website · Arduino · PlatformIO
Why We Built This

Built Out of Necessity

It started with a real problem. I needed internal tooling to manage finances — invoicing, billing, suppliers — but I am not an accountant. I wanted something I could actually use, while still making it easy for my accountant to export the data and import it into their more complicated software.

I did not have time to build these tools from scratch, as I would normally do. So I turned to Base44, an AI auto-app builder. It worked. The tools got built. But then came the realisation: I could not host any of it on my own infrastructure. I still needed Base44. The vendor lock-in was real.

That is where Cyréna was born. Offline-first. My own code. My own infrastructure. No vendor lock-in. A platform that helps you build what you need — and lets you keep it.

"I built it in Base44. Then I realised I couldn't actually host it myself. That's when I knew there had to be a better way."

The Origin of Cyréna
See It In Action

Prompt → Code → Build

Watch Cyréna take a natural language prompt, generate working code, and build it — all in one seamless loop. No copy-paste. No context switching. Just engineering workflow orchestration.

  • Describe what you want
  • Cyréna writes the code
  • It compiles and runs
What Cyréna Actually Is

A Desktop Platform, Not a Service

Cyréna is software that runs on your machine. It does not expose endpoints, talk to a server, or lock you into a single AI provider. Cyréna is the workspace. You bring the model, and with extensions, it adapts to your engineering domain.

Offline-First

Works entirely on your machine. Your code never leaves your local environment unless you choose to.

Model-Agnostic

Connect to any LLM you prefer: Ollama, OpenAI, or your own custom endpoint. Switch anytime.

Actually Compiles

Cyréna understands your project context and writes code that compiles and integrates with your existing codebase.

Extensible

Build custom extensions to adapt Cyréna to your workflow, your stack, and your team's needs.

Dogfooded Engineering

Cyréna Can Extend Its Own Workflow

Cyréna is not limited to one stack. Extensions define domain-specific behaviour, file structures, prompts, and tooling for different engineering workflows.

As a recent example, Cyréna was used to build the Website extension, which was then used to rebuild this site as static HTML, CSS, and JavaScript. The point is not that Cyréna is a website builder; it is that new engineering domains can be added, refined, and used immediately.

Domain Examples
  • .NET applications and services
  • Angular application workflows
  • Arduino and PlatformIO firmware
  • Static websites and public pages
Model-Agnostic

Your AI, Your Choice

Cyréna is not an AI model. Cyréna is the platform that orchestrates models. Connect Ollama for offline work, OpenAI for cloud power, or switch between them mid-conversation. The choice is always yours.

Ollama

Local & Offline

Run large language models entirely on your own hardware. No internet required. No data leaves your machine.

  • 100% offline capability
  • No data ever leaves your machine
  • Run models on CPU or GPU
  • Pull any model from Ollama library
  • Zero subscription costs

Switch Anytime

In any chat, at any moment

OpenAI

Cloud Powered

Tap into state-of-the-art cloud models when you need maximum capability. Always up to date, always powerful.

  • OpenAI API-compatible cloud models
  • Strong cloud reasoning capability
  • Access to current hosted model versions
  • Faster inference on complex tasks
  • Bring your own API key

Start a workflow with Ollama, switch to OpenAI mid-conversation, then go back. Cyréna keeps context across model changes.
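The handoff works because the conversation history lives in the workspace, not in any one provider; switching backends just means replaying the same history against a different model. A minimal sketch of that idea, where the `ChatProvider` interface, the `Workspace` class, and the stub providers are all illustrative assumptions, not Cyréna's actual API:

```typescript
// Illustrative sketch: conversation state is owned by the workspace,
// and providers are interchangeable backends. Names are hypothetical.
type Message = { role: "system" | "user" | "assistant"; content: string };

interface ChatProvider {
  name: string;
  complete(history: Message[]): string; // real providers would be async
}

class Workspace {
  private history: Message[] = [];
  constructor(private provider: ChatProvider) {}

  // Switching providers does not touch the history: the next request
  // simply replays the same context against the new backend.
  switchProvider(next: ChatProvider): void {
    this.provider = next;
  }

  send(userText: string): string {
    this.history.push({ role: "user", content: userText });
    const reply = this.provider.complete(this.history);
    this.history.push({ role: "assistant", content: reply });
    return reply;
  }

  contextLength(): number {
    return this.history.length;
  }
}

// Two stub providers standing in for Ollama and OpenAI.
const local: ChatProvider = {
  name: "ollama",
  complete: (h) => `[${h.length} msgs seen by local model]`,
};
const cloud: ChatProvider = {
  name: "openai",
  complete: (h) => `[${h.length} msgs seen by cloud model]`,
};

const ws = new Workspace(local);
ws.send("Scaffold a service");      // answered by the local model
ws.switchProvider(cloud);           // mid-conversation switch
ws.send("Now add error handling");  // cloud model sees the full history
```

Because context is held outside the provider, the second model sees every earlier message without any re-explanation.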

What's New

Built for Control

Four new capabilities that put you in charge of how Cyréna thinks, what she knows, and when she acts.

Feature Activation
New

The model only knows what it needs to know.

Feature Activation lets you enable or disable tools and capabilities per chat. When a feature is off, it does not just hide — it ceases to exist for the model entirely. No confusion, no accidental tool use, no noise. Cyréna operates with focus on exactly what your current task requires.

Feature Activation screenshot
  • Enable or disable tools per chat
  • Disabled features are invisible to the model
  • No accidental tool calls or confusion
  • Surgical precision for every task
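The key detail above is that a disabled feature is never sent to the model at all, rather than hidden in the UI. A minimal sketch of that filtering step, with hypothetical tool names and shapes (not Cyréna's real extension API):

```typescript
// Illustrative sketch of per-chat feature activation.
type Tool = { name: string; description: string };

const allTools: Tool[] = [
  { name: "compile", description: "Build the current project" },
  { name: "write_file", description: "Write a file to disk" },
  { name: "web_search", description: "Search the web" },
];

// A disabled feature is not hidden client-side: its definition is
// never included in the request, so to the model it does not exist.
function toolsForChat(enabled: Set<string>): Tool[] {
  return allTools.filter((tool) => enabled.has(tool.name));
}

// A firmware chat with web search switched off.
const firmwareChat = new Set(["compile", "write_file"]);
const visible = toolsForChat(firmwareChat);
// visible contains compile and write_file; web_search is absent entirely.
```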
Dynamic System Prompts
New

The right instructions, at the right time.

As features activate and deactivate, Cyréna's instruction set updates automatically. Cyréna operates under the most relevant constraints for your current stack — Angular prompts for Angular work, firmware rules for firmware work. No static one-size-fits-all prompt. The context adapts with you.

  • Prompts update as features change
  • Stack-specific constraints automatically applied
  • No static, bloated system prompt
  • Context that adapts to your current task
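One way to picture this is a system prompt assembled from fragments contributed by whichever features are active. The fragment names and wording below are invented for illustration; Cyréna's real prompt composition may differ:

```typescript
// Illustrative sketch: the system prompt is built from fragments
// owned by features, so it changes as features activate.
const promptFragments: Record<string, string> = {
  base: "You are an engineering assistant working inside this project.",
  angular: "Follow Angular style: standalone components, typed forms.",
  firmware: "Target embedded constraints: avoid heap churn, check return codes.",
};

function buildSystemPrompt(activeFeatures: string[]): string {
  return ["base", ...activeFeatures]
    .map((feature) => promptFragments[feature])
    .filter((fragment) => fragment !== undefined)
    .join("\n\n");
}

// Activating the Angular feature pulls in Angular constraints;
// firmware rules are simply not present in the prompt.
const prompt = buildSystemPrompt(["angular"]);
```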
Prompt Queuing
New

Load up your tasks. Go have a coffee.

Queue a sequence of instructions and let Cyréna work through them automatically. Each response completes before the next instruction fires. If something critical comes up mid-queue, Cyréna pauses and waits for your input before continuing. You stay in control without staying at your desk.

Prompt Queuing screenshot
  • Queue multiple instructions in sequence
  • Each response completes before the next fires
  • Auto-pause on critical input required
  • Work through tasks while you do something else
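The queue behaviour described above can be sketched as a drain loop that stops firing when a response needs input. Everything here (the `RunResult` shape, the example prompts) is a hypothetical illustration of the described behaviour, not the real implementation:

```typescript
// Illustrative sketch of prompt queuing: each instruction runs to
// completion before the next fires, and the queue pauses when a
// response requires user input.
type RunResult = { text: string; needsInput: boolean };

function drainQueue(
  queue: string[], // note: consumed (mutated) as prompts are run
  run: (prompt: string) => RunResult,
): { completed: string[]; pausedOn?: string } {
  const completed: string[] = [];
  while (queue.length > 0) {
    const prompt = queue.shift()!;
    const result = run(prompt);
    completed.push(result.text);
    if (result.needsInput) {
      // Critical question from the model: stop firing queued prompts
      // and wait for the user before continuing.
      return { completed, pausedOn: prompt };
    }
  }
  return { completed };
}

// Example: the second task raises a question, so the third never fires.
const result = drainQueue(
  ["add tests", "refactor service", "update docs"],
  (p) =>
    p === "refactor service"
      ? { text: "Which service?", needsInput: true }
      : { text: `done: ${p}`, needsInput: false },
);
// result.completed has two entries; result.pausedOn is "refactor service".
```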
Chat Status
New

Always know what's happening, at a glance.

Cyréna shows the live status of every chat directly in the sidebar. See which chats have context loaded and ready, which are actively working, and which are idle — without switching between them. No more wondering if the AI is still running or if a chat needs to be reopened.

Chat Status screenshot
  • Live status in the sidebar for every chat
  • Unloaded — idle, context loads on open
  • Loaded — context in memory and ready
  • Working — AI is actively processing
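The three states above form a small lifecycle: opening a chat loads its context, starting a task moves it to working, and finishing drops it back to loaded. A sketch of that state machine, with transition names assumed for illustration:

```typescript
// Illustrative sketch of the sidebar status lifecycle.
type ChatStatus = "unloaded" | "loaded" | "working";
type ChatEvent = "open" | "start" | "finish";

// Transitions other than these three are no-ops, so the status
// can never skip a state (e.g. unloaded never jumps to working).
function nextStatus(current: ChatStatus, event: ChatEvent): ChatStatus {
  if (event === "open" && current === "unloaded") return "loaded";
  if (event === "start" && current === "loaded") return "working";
  if (event === "finish" && current === "working") return "loaded";
  return current;
}
```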
The Four Pillars

What Cyréna Does

Four pillars of an AI-native engineering workspace that adapts to your workflow, not the other way around.

Code Generation

Write, refactor, and debug code across multiple languages and frameworks with full project context.

Architecture

Design system architecture, define service contracts, and maintain consistent patterns across your codebase.

Memory

Sticky Notes and API References live with your code. Any model picks up exactly where the last one left off.

Multi-Domain Support

A Polyglot Engineering Workspace

Cyréna is not locked into one ecosystem. Each domain has its own constraints and prompts, but all share the same core architectural principles.

Consistency across ecosystems. Whether you are building a .NET service, an Angular application, a firmware project, or a public website, Cyréna understands the idioms, the tooling, and the constraints.

Explore Architecture
.NET
  • Web API
  • Blazor
  • MAUI
  • Console
Angular
  • Docs UI
  • Components
  • Routing
  • App Flows
Website
  • Static HTML
  • CSS Architecture
  • SEO-first
  • Responsive Layouts
Arduino
  • Sketches
  • Libraries
  • Sensors
  • Serial
PlatformIO
  • ESP32
  • STM32
  • PIO Config
  • Debugging
Built-In Memory

Cyréna Remembers

A two-part memory system that lives with your code. Sticky Notes capture rules and reminders. API References hold the deep technical docs. Together, they let any model pick up exactly where the last one left off.

Sticky Notes

Quick Notes & Rules

Notes, rules, and reminders the AI writes for itself. Architectural decisions, coding conventions, things not to forget.

  • Capture decisions as they happen
  • Rules the AI must follow
  • Reminders about context and intent
  • Lightweight, fast to scan

API References

Deep Technical Docs

In-depth coding and architecture documentation. Service contracts, integration patterns, and real signatures grounded in actual code.

  • Service interfaces and contracts
  • Architecture rules and patterns
  • Integration contracts between components
  • Grounded in real implementation

The .cyrena Folder

Lives With Your Code

Both memory systems are saved in a .cyrena folder at your project's root. It is just files. Source control them. Share them. Another developer picks up where you left off.

my-project/
  src/
  .cyrena/
    sticky-notes/
    api-references/
  package.json
Model Handoff, Zero Friction

Switch from Ollama to OpenAI mid-project and the new model reads the same memory. No need to re-explain the codebase. No need to rebuild context from scratch. The memory is already there.
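The handoff is possible because memory is plain files that can be read back into any model's context. A minimal sketch of that idea, where the note filenames, their contents, and the `memoryContext` helper are all hypothetical (in the real app the notes would be read from `.cyrena/sticky-notes/`):

```typescript
// Illustrative sketch: a plain map stands in for files on disk.
const stickyNotes: Record<string, string> = {
  "conventions.md": "All services return Result<T>; never throw across boundaries.",
  "db.md": "Use the shared connection pool; no ad-hoc clients.",
};

// Building model context is just concatenating the notes back together,
// so any model (or any teammate's clone of the repo) sees the same memory.
function memoryContext(notes: Record<string, string>): string {
  return Object.entries(notes)
    .map(([file, body]) => `# ${file}\n${body}`)
    .join("\n\n");
}

const context = memoryContext(stickyNotes);
```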

Team Handoff, Same Story

Commit .cyrena to Git. Another developer clones the repo, opens Cyréna, and the AI already knows the architecture, the rules, and the intent. No onboarding docs required.

Reality Check

What Cyréna Is NOT

Setting expectations is important. Here is what we do not pretend to be.

A Cloud Service

An AI Model

Vendor Locked

A Subscription

Magic

Infallible

Cyréna will: read your code, respect your constraints, and help produce solutions that actually compile.

The Platform Vision

From Workspace to Platform

Cyréna starts as an engineering workspace on your desktop, but with extensions she becomes a platform. Your organisation builds what it needs, distributes it through your own servers, and your team gets a unified AI experience.

Custom Extensions
Private Distribution
Team Workflows
Unified AI Experience
Your Infrastructure
Open Source Core
Your Extensions
Custom Workflows
Cyréna Core
Your Machine

Open Source and Evolving

Cyréna is an open source project built by developers who were tired of AI tools that do not understand real codebases, architecture, and workflow constraints. Check out the repo, contribute, or just see how it works.

Star us on GitHub Contribute

Ready to Get Started?

Download Cyréna, connect your favourite model, and start building inside an AI-native workspace that understands your code, constraints, and workflows.

Get Started