Lyra by Roxabi — Marketing Strategy

Brand Voice & Messaging Framework

How we talk about Lyra. The words we choose, the stories we tell, and the copy that makes people stop and read.

1 Marketing Voice Guide
How we sound when we talk about Lyra — not how Lyra talks.

1.1 Voice Attributes

Confident — not arrogant
This: "Lyra runs on your hardware. Your data never leaves."
Not this: "Lyra is the most powerful AI agent platform ever built."

Technical — not jargon-heavy
This: "A lightweight hub — 300 lines — that you can read in an afternoon."
Not this: "Leverages asyncio-based hub-and-spoke microkernel architecture for agent orchestration."

Warm — not sentimental
This: "It remembers who you are. What matters to you. What you're building."
Not this: "Your AI companion that loves you and is always by your side forever."

Declarative — not hedged
This: "It runs 24/7. It answers on Telegram. It uses your LLMs."
Not this: "Lyra can potentially help you explore various AI interactions across multiple platforms."

Principled — not preachy
This: "Your data stays on your machine. That's a design decision, not a feature."
Not this: "We believe deeply in your fundamental right to AI privacy and digital sovereignty."

1.2 Tone Spectrum

Context shifts the register — not the voice. The spectrum runs from casual to technical: warm & inviting → crisp & clear → precise & evidenced → dense & rigorous.
Landing Page
Warm, declarative, short sentences. Lead with the human benefit. Let architecture emerge naturally. No bullet lists in hero copy.
Tweet / Social
One sharp idea per post. Provocative or concrete — not both at once. Max two sentences before a line break.
Video VO
Slower pace. Spoken rhythm — no commas that don't pause. Open with the problem, not the product. Trust the visuals to carry the rest.
Docs / README
Precise and dry. Imperative verbs. No padding. Technical terms are fine here — the reader chose to be here.

1.3 Writing Rules

Do
Lead with the outcome, not the feature
Before: Lyra has a persistent memory system.
After: It remembers the context from three weeks ago. You don't have to repeat yourself.
Make the architecture tangible
Before: Lyra uses a hub-and-spoke model.
After: One message on Telegram. The same answer on Discord. One intelligence behind both.
Use contrasts to create clarity
Before: Lyra is better than ChatGPT in several ways.
After: ChatGPT resets every conversation. Lyra remembers.
Write in short, active sentences
Subject-verb-object. One idea per sentence. Full stop.
Use "your" and "you" deliberately
"Your hardware. Your data. Your agent." — possession signals sovereignty.
Don't
Don't use AI marketing clichés
Banned: "AI-powered", "next-generation", "cutting-edge", "seamless", "intelligent assistant", "game-changer", "revolutionize"
Don't bury the lede with architecture
Before: Lyra uses asyncio-based concurrent pool processing to handle…
After: Start with what the user gains. Link to the architecture for people who want to verify it.
Don't promise autonomy you don't deliver
Lyra is not an autonomous agent. It acts when you message it. Don't write "Lyra handles everything" — write "Lyra handles it when you ask."
Don't use passive voice for key claims
Before: Your data is kept private and never sent to the cloud.
After: Your data never leaves your machine.

1.4 Vocabulary

Words We Use
  • personal — signals intimacy and ownership; not "consumer" (implies passive mass market)
  • runs on your hardware — concrete, physical; not "on-premise" (enterprise jargon)
  • memory — human, understood; not "persistent state" or "vector store"
  • adapters — specific to Lyra's model; not "integrations" or "plugins"
  • channels — reflects the lyre/hub metaphor; not "platforms" or "interfaces"
  • sovereignty — principled, not defensive; captures the full value of local-first
  • auditable — technical trust signal; not "transparent" (overused)
  • always-on — conveys 24/7 availability naturally; not "persistent" (abstract)
  • hub — core architectural term with visual resonance; use consistently
Words We Avoid
  • AI-powered — redundant; everything is AI-powered now
  • on-premise — enterprise procurement language, cold and distancing
  • consumer — strips the user of agency; use "personal" or "individual"
  • chatbot — implies simple, stateless interaction; Lyra is an agent engine
  • seamless — filler word; say exactly what works and how
  • leverage — corporate filler; use "use", "run", "build on"
  • ecosystem — vague; be specific about what connects to what
  • democratize — overused, hollow tech manifesto language
  • solution — use the actual word: agent, tool, system, engine

1.5 Sentence Style

Short sentences as the default
Target 8–14 words per sentence in hero copy. Long sentences only when the rhythm demands it — and earn them.
"It runs 24/7. It learns from every conversation."
Active voice for claims
The subject acts. "Lyra routes your message" not "Your message is routed by Lyra."
"Lyra connects, remembers, and responds."
No fluff openers
Never open with "We believe…", "In a world where…", or "Introducing…". Start with the problem or the product.
"Your AI forgets you every time. Lyra doesn't."
Concrete nouns over abstract concepts
Not "an intelligent knowledge system" — "an agent that remembers your projects, your conversations, your decisions."
"Your machine. Your models. Your rules."
The rule of three for rhythm
Three parallel items create momentum without needing a transition. Use sparingly — once per block is enough.
"One hub. Any channel. Every conversation remembered."
Monospace for code context only
CLI commands, config values, file paths. Never use monospace for general emphasis or decoration.
lyra agent init
2 Messaging Framework
The story we tell — consistently, across every surface.

2.1 Core Narrative Arc

Problem
Every AI you use resets the moment you close the tab. It doesn't know your projects, your preferences, or anything you told it last week. It lives on someone else's server — which means your data, your context, your work belongs to them by default.
Agitate
You rebuild context every single time. You explain the same project. You trust the same company with everything you're working on. You're renting intelligence from a system that will be updated, changed, or paywalled at someone else's discretion. You have no continuity. You have no control.
Solution
Lyra is a personal AI agent engine that runs 24/7 on your own hardware — connecting to the channels you already use, remembering everything across conversations, and growing more useful the longer it runs.
Proof
A 300-line auditable core. Stateless agents over stateful pools — no hidden side effects, no race conditions. Local LLM support: your most sensitive documents never leave your machine. Two messages from you are processed in order. Two messages from different people are processed in parallel. Zero configuration. Read the architecture in an afternoon — you'll find no magic.
Call to Action
Run it on your hardware this weekend. Clone the repo, follow the getting-started guide, and send your first message on Telegram in under an hour. Or read the architecture first — it's designed to be understood, not trusted blindly.
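The ordering claim in the Proof block ("two messages from you are processed in order; two messages from different people are processed in parallel") can be sketched in a few lines of asyncio. This is a hypothetical illustration with a per-user lock, not Lyra's actual code:

```python
import asyncio
from collections import defaultdict


class Hub:
    """Toy hub: serializes each user's messages, runs users in parallel."""

    def __init__(self) -> None:
        # One lock per user: messages from the same user queue up behind it.
        self._locks: dict[str, asyncio.Lock] = defaultdict(asyncio.Lock)
        self.log: list[str] = []

    async def handle(self, user: str, text: str) -> None:
        async with self._locks[user]:    # same user: strictly in order
            await asyncio.sleep(0.01)    # stand-in for real agent work
            self.log.append(f"{user}:{text}")


async def main() -> list[str]:
    hub = Hub()
    # Alice's two messages must finish in order; Bob's runs alongside them.
    await asyncio.gather(
        hub.handle("alice", "first"),
        hub.handle("alice", "second"),
        hub.handle("bob", "hello"),
    )
    return hub.log


log = asyncio.run(main())
```

Alice never sees "second" answered before "first", and Bob never waits for Alice — which is the stateless-agents claim in miniature.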
2.2 Messaging Pillars
Four load-bearing ideas. Every piece of copy should touch at least one.
Pillar 01 — Sovereignty
Your AI. Your hardware. Your data.
Lyra runs entirely on machines you control. No subscription. No cloud handoff. The inference, the memory, the logs — all of it stays in your home, under your rules.
Proof point: Local LLM support via OpenAI-compatible API on Machine 2. Sensitive documents (legal, medical) never leave the local network by design — not by policy.
"Your AI runs where you decide. On your machine. On your terms. Lyra never phones home — because it doesn't need to."
"Most AI tools: your data leaves the moment you type. Lyra: your data lives on your machine. Full stop."
"Every AI you've used? Your data left the building the moment you typed. Lyra runs at home. Your hardware. Your rules. Nothing else."
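The proof point above is concrete in practice: any OpenAI-compatible client can be pointed at a model served on the local network instead of a cloud endpoint. A minimal sketch — the LAN address and model name below are illustrative examples, not Lyra's actual config:

```python
import json

# Hypothetical local endpoint: an OpenAI-compatible server on the LAN
# ("Machine 2" in the proof point). Address and model name are examples.
LOCAL_LLM_URL = "http://192.168.1.20:8080/v1/chat/completions"


def build_request(prompt: str) -> tuple[str, bytes]:
    """Build the request a hub would POST to the local model server.

    Swapping cloud for local is just a base-URL change: the payload
    is the same OpenAI-style chat-completions body either way.
    """
    body = {
        "model": "local-model",  # whatever the local server exposes
        "messages": [{"role": "user", "content": prompt}],
    }
    return LOCAL_LLM_URL, json.dumps(body).encode()


url, payload = build_request("Summarize this contract.")
```

Nothing in the payload differs from a cloud call; sovereignty is just where the URL points.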
Pillar 02 — Memory
An AI that actually remembers you.
Lyra builds semantic memory across every conversation — projects, preferences, decisions, context. It grows more useful the longer it runs. You never start from zero again.
Proof point: Persistent memory namespaced per agent, semantic retrieval via embeddings, cross-conversation context. Lyra remembers what you told it last Tuesday without you mentioning it.
"ChatGPT forgets you the moment the tab closes. Lyra remembers your projects, your decisions, your context — indefinitely."
"The best AI interaction is one where you don't have to explain yourself again. Lyra remembers. That's the entire point."
"Every AI chat starts from scratch. You explain. It answers. You close the tab. You explain again tomorrow. Lyra breaks that loop. It holds the thread."
Pillar 03 — Always-On
Running while you sleep. Answering when you ask.
Lyra runs 24/7 on your hub machine. Telegram at 6am. Discord during the day. Whatever channel you prefer — one intelligence behind all of it, always available.
Proof point: Hub process with asyncio event loop and supervisord management. Sequential per-user processing with parallel cross-user handling. No queues to configure. No availability SLA to pay for.
"One message on Telegram. The same Lyra on Discord. Running on your hardware 24/7 — not on demand, not with rate limits. Just always there."
"Lyra doesn't sleep. It doesn't have rate limits. It doesn't have a free tier. It runs on your machine, on your schedule, forever."
"You message it on Telegram at midnight. You get an answer. Not from a cloud. From your machine, running in your home, where it's been running all along."
Pillar 04 — Extensible
A core you own. A system you extend.
Lyra's hub is 300 auditable lines. Every other capability — adapters, agents, skills, memory — plugs in without touching the core. Build exactly what you need. Nothing more.
Proof point: Clean extension model: adapter interface (Telegram, Discord, future channels), agent TOML configs, skills as composable units. Phase 3 envisions a LegalTech SaaS built on top of this same core.
"The core is 300 lines. Read it in an afternoon. Then extend it: a new channel adapter, a new agent, a new skill. You own all of it."
"LangChain gives you a framework. Lyra gives you a running system — and the architecture is clean enough to change without breaking anything."
"Most frameworks give you a foundation and wish you luck. Lyra gives you a working agent on day one — and a core you can read, understand, and extend without asking permission."
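The extension claim can be sketched as an interface: the hub talks to every channel through one small contract, so a new adapter never touches the core. The method names here are hypothetical, not Lyra's actual API:

```python
import asyncio
from abc import ABC, abstractmethod


class ChannelAdapter(ABC):
    """One channel (Telegram, Discord, ...) the hub can speak through."""

    @abstractmethod
    async def receive(self) -> tuple[str, str]:
        """Wait for the next (user_id, text) pair from this channel."""

    @abstractmethod
    async def send(self, user_id: str, text: str) -> None:
        """Deliver the hub's reply back on this channel."""


class EchoAdapter(ChannelAdapter):
    """Toy adapter: hands the hub one message, records the reply."""

    def __init__(self, user_id: str, text: str) -> None:
        self._inbox = [(user_id, text)]
        self.sent: list[tuple[str, str]] = []

    async def receive(self) -> tuple[str, str]:
        return self._inbox.pop(0)

    async def send(self, user_id: str, text: str) -> None:
        self.sent.append((user_id, text))


async def roundtrip(adapter: ChannelAdapter) -> None:
    # The hub only ever sees the interface, never the channel specifics.
    user, text = await adapter.receive()
    await adapter.send(user, text.upper())  # stand-in for the agent's reply


adapter = EchoAdapter("alice", "hello")
asyncio.run(roundtrip(adapter))
```

Adding a channel means writing one class like this; the core stays at 300 lines.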
2.3 Tagline Exploration
Six directions. One will feel right when you see it on the mark.
Sovereignty
The AI that never phones home.
Memory
Context doesn't reset. Neither does Lyra.
Personal Relationship
Personal AI, the way personal was always meant.
Technical Craft
300 lines. Zero magic. Fully yours.
Technical Craft
Auditable by design. Extensible by default.
Intelligence
One hub. Any channel. All remembered.


3 Copy Examples
Ready-to-use copy across five surfaces. Two or three variations each.
Variant A — Sovereignty-first
Your AI. On your machine. Always on. Lyra is a personal AI agent engine that runs 24/7 on your hardware — remembering your context, answering on any channel, and keeping your data exactly where it belongs: with you. Run it yourself
Variant B — Memory-first
The AI that remembers you. No more starting from scratch. Lyra runs on your own hardware, connects to the channels you already use, and holds every conversation you've ever had with it. It grows more useful every day. Start in an afternoon
Variant C — Problem-first
Your AI forgets you. Lyra doesn't. ChatGPT resets every tab. Your data leaves with every prompt. Lyra runs at home, remembers everything, and never hands your context to someone else's server. Own your AI stack
Variant A — Direct
Personal AI agent engine. Runs 24/7 on your hardware. Remembers your context. Answers on Telegram & Discord. No cloud. No subscription. Fully yours.
Variant B — Contrast-led
Not a chatbot. Not a framework. Lyra is a running AI agent — on your machine, connected to your channels, with memory that doesn't reset. Built by Roxabi.
Variant C — Minimal
Personal AI agent engine. Your hardware. Your data. Your rules. → github.com/Roxabi/lyra
Variant A — Problem-first
Every AI you use resets when you close the tab. Lyra doesn't. It's a personal AI agent engine that runs 24/7 on your own hardware, connects to Telegram and Discord, and remembers your context across every conversation.
Variant B — Architecture-first
Lyra is a hub-and-spoke AI agent engine: a 300-line auditable core, clean adapter interfaces, persistent memory, and local LLM support. Runs on your hardware. Your data never leaves.
Variant C — Outcome-first
Lyra is a personal AI agent that runs on your machine and answers on any channel. It remembers. It doesn't reset. It never sends your data anywhere you didn't choose.
Variant A — Problem → Relief
You open a new chat. You explain everything. Again. The same project. The same context. The same person you were yesterday. Every AI you've used resets the moment you close the tab. Lyra doesn't. It runs on your machine. It remembers. It was waiting for you.
Variant B — Contrast
ChatGPT is brilliant — and it forgets you. Every. Single. Time. Your data leaves with every message. Your context starts over with every tab. Lyra runs at home. On your hardware. And it holds the thread.
Variant C — Architecture as story
There's a machine in your home — or there could be. Running 24/7. Waiting on Telegram. Ready on Discord. Holding every conversation you've had, every context you've built. That machine is running Lyra. And it's entirely yours.
Variant A — 30 seconds spoken — Problem-driven
"Every AI tool I've used has the same problem: it forgets me the moment I close the tab. And my data goes somewhere I didn't choose. So I built Lyra — a personal AI agent engine that runs 24/7 on my own hardware. It connects to Telegram and Discord, it remembers everything across every conversation, and it uses my own LLMs for sensitive work. The architecture is 300 lines — auditable in an afternoon. It's not a product I sell you. It's a system you own."
Variant B — 30 seconds spoken — Technical credibility
"Lyra is an AI agent engine you run on your own machine. Hub-and-spoke architecture — one central intelligence, multiple channels. It talks to you on Telegram, Discord, wherever you want. It keeps persistent semantic memory across conversations. And the whole core is 300 auditable lines — no magic, no black box. Your data never leaves. You can swap the LLM. You can add channels. You own the entire stack. That's the design principle: sovereignty first."