How to Write AI Prompts to Build Usable Bitcoin Lightning Apps
A guide for developers who want AI agents to build payment-enabled applications — not toy demos.
AI coding agents (Cursor, Windsurf, Claude Code, Copilot, etc.) can build full applications from a single prompt. But most prompts produce generic, half-broken prototypes. The gap between "build me an app" and "build me a production-ready app" is entirely in how you write the prompt.
We recently wrote prompts for two bitcoin lightning applications: ZapWall (instant paywalls for any content) and ZapVoice (professional invoicing for freelancers). In the process we learned a lot about what makes AI-generated code actually work. This post breaks down the principles, with concrete examples from both projects.
The Two Apps
ZapWall is a dead-simple paywall tool. Creators paste content (text, images, links, files), set a price in sats, and get a shareable URL. Visitors pay a lightning invoice to unlock the content. No accounts, no sign-ups.
ZapVoice is invoicing for freelancers and small businesses. Create a polished invoice, send a link to your client, they pay in bitcoin via lightning. Money arrives in seconds instead of days, without payment processor fees for the receiver.
Both are full-stack apps (React + Express + SQLite) that handle real payments via NWC (Nostr Wallet Connect). The prompts that generated them are around 300-400 lines each. That length is intentional.
Why Long Prompts Work Better
A short prompt like "Build me a paywall app with lightning payments" gives the AI too much room to improvise. It'll make assumptions about your tech stack, invent its own payment flow, and probably hallucinate API methods that don't exist.
Longer prompts aren't about being verbose. They're about being *precise* where it matters and leaving room where it doesn't. The goal is to constrain the decisions that are hard to fix later while giving the agent freedom on implementation details.
The Anatomy of a Good App Prompt
After writing and iterating on these prompts, a clear structure emerged. Here's what matters and why.
1. Start With the "Why" — Not the "What"
Both prompts open with a clear description of what the app does and *why it matters*:
ZapWall: "Think of it as 'Linktree meets a cash register' — perfect for selling guides, exclusive memes, spicy takes, recipes, discount codes, or literally anything you want to put behind a sats wall."
ZapVoice: "This solves a massive pain point: freelancers (especially international ones) wait days/weeks for bank transfers, lose 3-5% to payment processors, and deal with chargebacks. Bitcoin on the lightning network fixes all of it."
Why this matters: AI agents make better design decisions when they understand the product context. An agent that knows ZapVoice targets "corporate clients" will make the payment page look professional and trustworthy. One that knows ZapWall is for "exclusive memes" will keep the UI minimal and fun. The "why" shapes hundreds of micro-decisions throughout the codebase.
2. Lock the Tech Stack Explicitly
Both prompts specify exact technologies with version numbers:
Frontend: React + TypeScript (Vite)
Backend: Node.js + TypeScript (Express)
Database: SQLite (via better-sqlite3)
Payments: @getalby/sdk (v7.0.0), @getalby/lightning-tools (v6.1.0)
Why this matters: Without pinned versions, AI agents will use whatever they were trained on, which might be outdated APIs with breaking changes. Specifying `@getalby/sdk (v7.0.0)` prevents the agent from generating code for v3 with completely different method signatures.
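In a Node project this translates directly into exact, unranged entries in `package.json`. A fragment with just the two versions the prompts name:

```json
{
  "dependencies": {
    "@getalby/sdk": "7.0.0",
    "@getalby/lightning-tools": "6.1.0"
  }
}
```

No `^` or `~` prefix, so the install matches exactly what the prompt's code snippets were written against.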
3. Show the Actual Code — Don't Describe It
This is the single biggest difference between prompts that produce working apps and prompts that produce broken ones. Instead of writing "use NWC to create an invoice," I include the actual code:
```typescript
import { NWCClient } from "@getalby/sdk/nwc";

const client = new NWCClient({ nostrWalletConnectUrl: nwcUrl });

const invoice = await client.makeInvoice({
  amount: priceSats * 1000, // millisats
  description: `ZapWall: "${paywall.title}"`,
  expiry: 600,
});
```
Why this matters: AI models are great at *pattern-matching* and terrible at *inventing APIs they haven't seen*. When you show the exact import paths, method names, and parameter shapes, the agent replicates them correctly throughout the codebase. When you just say "create an invoice using NWC," it'll guess and might guess wrong.
The ZapVoice prompt includes code snippets for:
Wallet initialization and info retrieval
Invoice creation with locked vs. dynamic sats pricing
Payment detection via both notifications and polling
Fiat conversion with proper import paths
Balance queries and transaction listing
Each snippet establishes the correct patterns and is a working example the agent can build on.
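The polling side of payment detection can be sketched as a small generic helper. Here `checkSettled` stands in for whatever wallet call reports settlement (an NWC invoice lookup in the prompts); the helper itself is our sketch, not SDK code:

```typescript
// Generic polling loop: keep checking invoice status until it settles
// or the timeout (matching the invoice expiry) runs out.
async function waitForPayment(
  checkSettled: () => Promise<boolean>,
  { intervalMs = 2000, timeoutMs = 600_000 } = {}
): Promise<boolean> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (await checkSettled()) return true; // paid: unlock the content
    await new Promise<void>((r) => setTimeout(r, intervalMs));
  }
  return false; // invoice expired unpaid
}
```

In the real apps this runs alongside NWC payment notifications, with polling as the fallback the prompts explicitly call for.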
4. Specify Import Paths — Seriously
Both prompts end with this line:
Prefer ES module imports from subpaths (e.g. `@getalby/sdk/nwc`, `@getalby/lightning-tools/fiat`, `@getalby/lightning-tools/lnurl`).
And the SKILL.md of the Alby skill explicitly states:
Do NOT import from the dist directory.
Why this matters: Wrong import paths are the #1 source of build failures in AI-generated code. The agent might write `import { NWCClient } from "@getalby/sdk"` instead of `import { NWCClient } from "@getalby/sdk/nwc"`. The app won't compile. Subpath exports are a relatively new pattern that many models don't default to. Being explicit here saves hours of debugging.
5. Define the Data Model
Both prompts include complete SQL schemas:
```sql
CREATE TABLE paywalls (
  id TEXT PRIMARY KEY,
  slug TEXT UNIQUE NOT NULL,
  title TEXT NOT NULL,
  content_type TEXT NOT NULL,
  content TEXT NOT NULL,
  price_sats INTEGER NOT NULL,
  ...
);
```
Why this matters: The data model is the backbone of the app. If the agent invents its own schema, you'll spend more time refactoring the database than you saved by using AI. By defining the schema explicitly, you ensure consistent field names, proper relationships, and the right constraints across the entire codebase — from backend queries to frontend types.
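One payoff of pinning the schema: the same column names can be mirrored in a shared TypeScript type that both backend queries and frontend components consume. A sketch based on the columns above (the union of content types is our assumption, inferred from what ZapWall accepts):

```typescript
// Mirrors the `paywalls` table so field names stay consistent end to end.
// Columns elided ("...") in the schema are omitted here as well.
type ContentType = "text" | "image" | "link" | "file";

interface Paywall {
  id: string;
  slug: string;
  title: string;
  content_type: ContentType;
  content: string;
  price_sats: number;
}

// Example row, e.g. as returned by a better-sqlite3 query:
const row: Paywall = {
  id: "pw_123",
  slug: "secret-recipe",
  title: "Secret Recipe",
  content_type: "text",
  content: "Step 1: ...",
  price_sats: 2100,
};
```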
6. Describe the User Flows, Not Just Features
Instead of listing features ("paywall creation, payment, analytics"), both prompts walk through the complete user journey:
ZapWall Creator Flow:
Connect Wallet → paste NWC URL
Create a Paywall → pick content type, set price, get shareable URL
Dashboard → see all paywalls, revenue, stats
ZapWall Visitor Flow:
Open link → see title, preview, price
Scan/pay lightning invoice
Content unlocks instantly
Why this matters: User flows reveal the interactions between components. A feature list says "payment detection." A user flow says "visitor sees 'Waiting for payment...' → '⚡ Payment received!' → content reveals with a blur-to-clear animation." The agent needs to understand the choreography, not just the parts.
7. Include Security Constraints
The ZapWall prompt includes critical security decisions:
Protected content is NEVER sent to the client before payment verification.
NWC URL stored only in the creator's browser (encrypted localStorage), never on the server.
The ZapVoice prompt specifies:
NWC URL encrypted in database (server-side) using a derived key.
Invoice content only accessible via unique slug (UUID-based, not sequential).
Why this matters: AI agents will happily generate insecure code if you don't tell them not to. Without the "content is NEVER sent before payment" constraint, the agent might embed the paywall content in the HTML and "hide" it with CSS — which any user could inspect to bypass payment. Security constraints must be *stated explicitly* because the agent optimizes for "working" not "secure."
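One way to make the "never sent before payment" rule structural rather than aspirational is a single response builder with no code path that includes the protected content unless the server has verified settlement. A minimal sketch (the names are ours, not from either app):

```typescript
interface PaywallRecord {
  title: string;
  priceSats: number;
  content: string; // the protected payload, stored server-side only
}

// `isPaid` must come from server-side payment verification (e.g. an
// invoice lookup), never from anything the client sends.
function paywallResponse(
  record: PaywallRecord,
  isPaid: boolean
): { title: string; priceSats: number; locked: boolean; content?: string } {
  if (!isPaid) {
    // Locked view: metadata only; `content` never leaves the server.
    return { title: record.title, priceSats: record.priceSats, locked: true };
  }
  return {
    title: record.title,
    priceSats: record.priceSats,
    locked: false,
    content: record.content,
  };
}
```

Because the unpaid branch never touches `record.content`, there is nothing for a visitor to un-hide in the browser's dev tools.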
8. Provide Test Infrastructure
Both prompts include a complete testing setup using faucet wallets:
```typescript
async function createTestWallet() {
  const response = await fetch("https://faucet.nwc.dev?balance=10000", { method: "POST" });
  const nwcUrl = (await response.text()).trim();
  return { nwcUrl };
}
```
And then list specific test scenarios:
Create paywall → open link → pay → verify content unlocks
Test all content types
Test expiry and purchase limits
Test error cases (expired invoice, insufficient balance)
Why this matters: Without test infrastructure, the agent can't verify its own work. The faucet wallets let the agent run actual end-to-end payment flows without spending real bitcoin. This is crucial because it turns the agent from "code generator" into "code generator that can debug itself."
9. UI/UX Direction — Enough to Guide, Not Enough to Constrain
The prompts describe *what* the UI should achieve without specifying pixel-level layouts:
ZapVoice payment page:
Must look professional and trustworthy — no 'crypto bro' vibes. Your business branding (logo, colors). Mobile-optimized (clients will open this on phones from email).
ZapWall visitor page:
Smooth reveal animation (blur → clear, or slide-down). After unlock: content + optional 'Tip more' button.
Why this matters: Too much UI detail makes the prompt enormous and constrains the agent unnecessarily. Too little means the agent produces a generic Bootstrap-looking page. The sweet spot is *design intent* — tell the agent what feeling the UI should convey, and let it figure out the CSS.
10. End With a Clear Deliverables List
Both prompts end with a numbered list of expected outputs. ZapVoice has 13 deliverables, from "Full-stack web app" to "E2E tests using faucet test wallets" to "README with setup instructions."
Why this matters: This is the agent's checklist. Without it, the agent will decide it's "done" when the main feature works, skipping tests, documentation, error handling, and edge cases.
How the Alby Bitcoin Builder Skill Makes This Possible
All of this works because of the Alby Bitcoin Builder Skill — an agent skill that gives AI coding agents the knowledge to build bitcoin lightning applications correctly.
Here's the problem: AI models know about bitcoin conceptually, but they don't know the current APIs, library versions, or correct import paths. They'll hallucinate methods, use deprecated patterns, or mix up libraries. The Alby skill solves this by providing:
Structured API References
The skill includes detailed reference files for:
NWC Client (`@getalby/sdk`): How to connect wallets, create invoices, subscribe to payment notifications, pay invoices, handle hold invoices for conditional payments
Lightning Tools (`@getalby/lightning-tools`): Fiat-to-sats conversion, lightning address resolution, BOLT-11 invoice parsing and verification
The docs are structured specifically for AI consumption, with TypeScript type definitions, import paths, and working examples.
TypeScript Typings as Ground Truth
The skill includes actual `.d.ts` type definition files. When an AI agent reads these, it knows *exactly* what methods exist, what parameters they take, and what they return. No hallucination possible — the types are the contract.
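For illustration, here is what a fragment in the style of those definition files could look like, shaped after the `makeInvoice` snippet shown earlier. These are not the SDK's actual types, just a sketch of the idea:

```typescript
// Illustrative types only, modeled on the makeInvoice call above.
interface MakeInvoiceArgs {
  amount: number;       // millisats
  description?: string;
  expiry?: number;      // seconds
}

interface NWCClientLike {
  makeInvoice(args: MakeInvoiceArgs): Promise<{ invoice: string }>;
}

// With such types in scope, a typo like `makeInvoce` or a string
// `amount` becomes a compile error instead of a runtime surprise.
const fakeClient: NWCClientLike = {
  async makeInvoice(args) {
    return { invoice: `lnbc-fake-${args.amount}` };
  },
};
```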
Test Wallet Infrastructure
The skill provides the faucet wallet pattern (`https://faucet.nwc.dev`) that both prompts use. This is critical because it enables:
Self-verifying agents: The agent can test its own code end-to-end
Reproducible tests: Each test run creates fresh wallets with known balances
No real money risk: Test wallets exist in an isolated system
Security Guidance Built In
The skill explicitly states:
Never share NWC connection strings
Never log connection secrets
Handle NWC URLs like secure API keys
This means every app the agent builds inherits these security practices — you don't have to remember to mention them in every prompt.
The Skill's Role in Prompt Writing
When writing the ZapWall and ZapVoice prompts, we didn't have to explain what NWC is, how lightning invoices work, or what the Alby SDK API looks like. The skill already provides all of that context to the agent. Our prompts could focus on application-specific logic such as the paywall mechanics, the invoice workflow, or the UI requirements. The skill handled the payment infrastructure knowledge.
This is the real power of agent skills: they're reusable domain knowledge. Write a good skill once, and every prompt that builds on it starts from a higher baseline. The 300 lines in our ZapWall prompt would have been 600+ without the skill handling the payment layer.
Practical Tips for Your Own Prompts
1. Include working code for any API the model might not know well. If you're using a library released after the model's training cutoff, show the actual usage patterns.
2. Pin dependency versions. `@getalby/sdk (v7.0.0)` is not optional. It prevents the agent from generating code for a version that doesn't exist.
3. Define the data model yourself. Don't let the AI invent your schema. You'll live with it longer than the AI will.
4. State security constraints as rules, not suggestions. "Content is NEVER sent before payment" is better than "make sure the content is protected."
5. Provide test infrastructure that works with real (test) services. Mocks are fine for unit tests, but the agent needs to run actual payment flows to verify the integration works.
6. Describe user journeys, not just feature lists. The agent needs to understand the sequence of interactions, not just the individual capabilities.
7. Give UI direction as intent, not specification. For example: "Professional and trustworthy, no crypto bro vibes".
8. Use agent skills for reusable domain knowledge. If you're building multiple apps on the same stack, extract the common knowledge into a skill so you don't repeat yourself in every prompt.
9. End with deliverables. Give the agent a definition of "done" — including tests, docs, and error handling.
10. Show the imports. It sounds trivial, but correct import paths prevent more build failures than any other single thing in the prompt.
The Bottom Line
AI agents are good at filling in details, but they need the right constraints. A well-written prompt doesn't micromanage. It makes the hard decisions (architecture, data model, security rules, API patterns) and lets the agent handle the implementation. The Alby Bitcoin Builder Skill extends this by providing battle-tested payment infrastructure knowledge, so your prompts can focus on what makes your app unique.
The prompts for ZapWall and ZapVoice are available as open examples. Use them as templates, adapt the patterns to your own apps, and let the AI do what it's good at: writing the code between the constraints you set.
