A desktop AI engineering workspace that adapts to your workflow through extensions.
You choose the model. Cyréna orchestrates the workflow.
Works with your favourite models
It started with a real problem. I needed internal tooling to manage finances — invoicing, billing, suppliers — but I am not an accountant. I wanted something I could actually use, while still making it easy for my accountant to export the data and import it into their more complicated software.
I did not have time to build these tools from scratch, as I would normally do. So I turned to Base44, an AI auto-app builder. It worked. The tools got built. But then came the realisation: I could not host any of it on my own infrastructure. I still needed Base44. The vendor lock-in was real.
That is how Cyréna was born. Offline-first. My own code. My own infrastructure. No vendor lock-in. A platform that helps you build what you need — and lets you keep it.
"I built it in Base44. Then I realised I couldn't actually host it myself. That's when I knew there had to be a better way."
Watch Cyréna take a natural language prompt, generate working code, and build it — all in one seamless loop. No copy-paste. No context switching. Just engineering workflow orchestration.
Cyréna is software that runs on your machine. It does not expose endpoints, talk to a server, or lock you into a single AI provider. Cyréna is the workspace. You bring the model, and with extensions, it adapts to your engineering domain.
Works entirely on your machine. Your code never leaves your local environment unless you choose to.
Connect to any LLM you prefer. Ollama, OpenAI, or your own custom endpoint. Switch anytime.
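As a rough illustration of why this is possible: Ollama serves an OpenAI-compatible chat API on localhost by default, so switching providers can be as small as swapping a base URL and model name. The endpoint paths below are the providers' documented defaults; the `build_request` helper is an illustrative sketch, not Cyréna's actual API.

```python
PROVIDERS = {
    # Ollama exposes an OpenAI-compatible API on localhost by default.
    "ollama": {"base_url": "http://localhost:11434/v1", "model": "llama3"},
    "openai": {"base_url": "https://api.openai.com/v1", "model": "gpt-4o"},
}

def build_request(provider: str, messages: list[dict]) -> dict:
    """Assemble a chat-completion request for the chosen backend."""
    cfg = PROVIDERS[provider]
    return {
        "url": cfg["base_url"] + "/chat/completions",
        "json": {"model": cfg["model"], "messages": messages},
    }

req = build_request("ollama", [{"role": "user", "content": "Hello"}])
print(req["url"])  # http://localhost:11434/v1/chat/completions
```

Because both backends speak the same request shape, "switch anytime" is a configuration change, not a rewrite.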
Cyréna understands your project context and writes code that integrates with your existing codebase.
Build custom extensions to adapt Cyréna to your workflow, your stack, and your team's needs.
Cyréna is not limited to one stack. Extensions define domain-specific behaviour, file structures, prompts, and tooling for different engineering workflows.
As a recent example, Cyréna was used to build the Website extension, and that extension was then used to rebuild this site as static HTML, CSS, and JavaScript. The point is not that Cyréna is a website builder. The point is that new engineering domains can be added, refined, and used immediately.
Cyréna is not an AI model. Cyréna is the platform that orchestrates models. Connect Ollama for offline work, OpenAI for cloud power, or switch between them mid-conversation. The choice is always yours.
Run large language models entirely on your own hardware. No internet required. No data leaves your machine.
Switch Anytime
In any chat, at any moment
Tap into state-of-the-art cloud models when you need maximum capability. Always up to date, always powerful.
Start a workflow with Ollama, switch to OpenAI mid-conversation, then go back. Cyréna keeps context across model changes.
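One way to see why context survives a model change: a chat transcript is just a provider-neutral list of role/content messages, so continuing on a different backend means sending the same history to a different endpoint. The helper below is a hypothetical sketch, not Cyréna's internals.

```python
# A transcript as a provider-neutral message list.
history = [
    {"role": "system", "content": "You are a coding assistant."},
    {"role": "user", "content": "Scaffold an Angular service."},
    {"role": "assistant", "content": "Here is a first draft..."},
]

def continue_with(backend: str, history: list[dict], prompt: str) -> dict:
    """Append the next user turn and target a (possibly different) backend."""
    return {
        "backend": backend,
        "messages": history + [{"role": "user", "content": prompt}],
    }

# Start on Ollama, then hand the same context to OpenAI.
first = continue_with("ollama", history, "Add caching.")
second = continue_with("openai", first["messages"], "Now add tests.")
print(len(second["messages"]))  # full context survives the switch: 5
```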
Four new capabilities that put you in charge of how Cyréna thinks, what she knows, and when she acts.
The model only knows what it needs to know.
Feature Activation lets you enable or disable tools and capabilities per chat. When a feature is off, it does not just hide — it ceases to exist for the model entirely. No confusion, no accidental tool use, no noise. Cyréna operates with focus on exactly what your current task requires.
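The "off means gone" idea can be sketched as filtering the tool list before the request is ever built, so the model never sees a disabled capability. Feature and tool names here are invented for illustration.

```python
# Hypothetical feature -> tool mapping; names are made up for illustration.
ALL_TOOLS = {
    "web": [{"name": "fetch_url"}],
    "files": [{"name": "read_file"}, {"name": "write_file"}],
    "shell": [{"name": "run_command"}],
}

def visible_tools(active_features: set[str]) -> list[dict]:
    """Only tools for active features exist from the model's point of view."""
    return [
        tool
        for feature, tools in ALL_TOOLS.items()
        if feature in active_features
        for tool in tools
    ]

print([t["name"] for t in visible_tools({"files"})])
# ['read_file', 'write_file'] -- fetch_url and run_command do not exist here
```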
The right instructions, at the right time.
As features activate and deactivate, Cyréna's instruction set updates automatically. Cyréna operates under the most relevant constraints for your current stack — Angular prompts for Angular work, firmware rules for firmware work. No static one-size-fits-all prompt. The context adapts with you.
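An adaptive instruction set can be pictured as a system prompt assembled from per-feature fragments, so toggling a feature rewrites the model's constraints automatically. The fragment text below is invented for illustration.

```python
# Hypothetical per-feature instruction fragments.
FRAGMENTS = {
    "angular": "Follow the Angular style guide; prefer standalone components.",
    "firmware": "Target C99; no dynamic allocation after init.",
}

BASE = "You are Cyréna, an engineering assistant."

def compose_prompt(active: list[str]) -> str:
    """Base instructions plus one fragment per active feature, in order."""
    return "\n".join([BASE] + [FRAGMENTS[f] for f in active])

print(compose_prompt(["angular"]))
# Angular rules are present; firmware rules are absent until activated.
```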
Load up your tasks. Go have a coffee.
Queue a sequence of instructions and let Cyréna work through them automatically. Each response completes before the next instruction fires. If something critical comes up mid-queue, Cyréna pauses and waits for your input before continuing. You stay in control without staying at your desk.
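The queue behaviour described above can be sketched as a simple drain loop: each step completes before the next fires, and a step flagged as needing input pauses the run. The `run_step` callback stands in for a full model round-trip; this is an illustrative sketch, not Cyréna's implementation.

```python
from collections import deque

def drain(queue: deque, run_step) -> list[str]:
    """Run queued instructions in order; stop if a step requests user input."""
    done = []
    while queue:
        instruction = queue.popleft()
        result = run_step(instruction)   # completes before the next fires
        done.append(instruction)
        if result.get("needs_input"):    # something critical came up
            break                        # pause and wait for the user
    return done

q = deque(["refactor service", "add tests", "update docs"])
finished = drain(q, lambda i: {"needs_input": i == "add tests"})
print(finished, list(q))
# ['refactor service', 'add tests'] ['update docs']
```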
Always know what's happening, at a glance.
Cyréna shows the live status of every chat directly in the sidebar. See which chats have context loaded and ready, which are actively working, and which are idle — without switching between them. No more wondering if the AI is still running or if a chat needs to be reopened.
Four pillars of an AI-native engineering workspace that adapts to your workflow, not the other way around.
Write, refactor, and debug code across multiple languages and frameworks with full project context.
Design system architecture, define service contracts, and maintain consistent patterns across your codebase.
Sticky Notes and API References live with your code. Any model picks up exactly where the last one left off.
Cyréna is not locked into one ecosystem. Each domain has its own constraints and prompts, but all share the same core architecture principles.
Consistency across ecosystems. Whether you are building a .NET service, an Angular application, a firmware project, or a public website, Cyréna understands the idioms, the tooling, and the constraints.
Explore Architecture
A two-part memory system that lives with your code. Sticky Notes capture rules and reminders. API References hold the deep technical docs. Together, they let any model pick up exactly where the last one left off.
Notes, rules, and reminders the AI writes for itself. Architectural decisions, coding conventions, things not to forget.
In-depth coding and architectural documentation. Service contracts, integration patterns, real signatures grounded in actual code.
Both memory systems are saved in a .cyrena folder at your project's root.
It is just files. Source control them. Share them. Another developer picks up where you left off.
Switch from Ollama to OpenAI mid-project and the new model reads the same memory. No need to re-explain the codebase. No need to rebuild context from scratch. The memory is already there.
Commit .cyrena to Git. Another developer clones the repo, opens Cyréna, and the AI already knows the architecture, the rules, and the intent. No onboarding docs required.
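The reason this memory travels with the repo is that it is plain files under `.cyrena/`, so any clone (and any model) can read the same notes. The exact file layout below is a hypothetical sketch; only the `.cyrena` folder name comes from the text above.

```python
from pathlib import Path

def load_memory(project_root: str) -> dict:
    """Read every Markdown file under .cyrena/ into a name -> content map."""
    memory_dir = Path(project_root) / ".cyrena"
    return {p.name: p.read_text() for p in sorted(memory_dir.glob("*.md"))}

# A fresh clone needs no onboarding: the notes are already in the tree.
# notes = load_memory("path/to/repo")
```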
Setting expectations is important. Here is what we do not pretend to be.
A Cloud Service
An AI Model
Vendor Locked
A Subscription
Magic
Infallible
Cyréna will: read your code, respect your constraints, and help produce solutions that actually compile.
Cyréna starts as an engineering workspace on your desktop, but with extensions she becomes a platform. Your organisation builds what it needs, distributes it through its own servers, and your team gets a unified AI experience.
Cyréna is an open source project built by developers who were tired of AI tools that do not understand real codebases, architecture, and workflow constraints. Check out the repo, contribute, or just see how it works.
Download Cyréna, connect your favourite model, and start building inside an AI-native workspace that understands your code, constraints, and workflows.
Get Started