I spent years doing solution architecture by hand. Discovery calls, whiteboard sessions, integration proposals. The good ones took weeks. Most of that time went to understanding the customer's world before building anything in it.

So we turned the process into an AI Skill.

A Skill is an agent capability installed into an IDE — Claude Code, Cursor, Windsurf, others. One command:

npx skills add personizeai/personize-skills

Once installed, the AI has domain expertise it didn't have before. Not a chatbot with a prompt — structured knowledge: frameworks, decision trees, industry patterns, constraint systems, and a nine-area production checklist. The AI uses all of it to reason about your specific situation.

The Solution Architect encodes the full journey from "I have a product and I want to add governed memory and personalization" to "here's a production-ready integration." That journey has seven phases. This post covers every one of them.


Phase 1: DISCOVER

The skill doesn't start with a product pitch. It starts by understanding your product.

Core user journey. Interaction surfaces (web app, mobile, email, Slack, SMS, dashboards). Existing data landscape. What your team does manually that involves knowing about a specific user. What you wish the product knew that it doesn't.

If you share your codebase, the skill reads it. It looks for signals that humans produce when building for users: every user.email, every event.track(), every sendEmail(), every notify(). Each one is a personalization opportunity that already exists in production.

The skill maps what it finds into three categories:

  • Structured fields — stored directly with upsert(), no extraction needed (user ID, plan tier, company domain)
  • Unstructured content — stored via memorize() with extractMemories: true, AI pulls the signal out (support tickets, sales notes, call transcripts)
  • Real-time events — memorized individually as they happen, tagged for context (feature activations, page views, purchases)
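The three-way routing can be sketched as a small classifier. This is an illustrative sketch, not the skill's internals — the `kind` labels and record shape are assumptions made for the example:

```typescript
type IngestionRoute = 'upsert' | 'memorize-extract' | 'memorize-event';

// Hypothetical source labels standing in for what discovery actually finds.
interface IncomingRecord {
    kind: 'profile-field' | 'document' | 'event';
    content: string | number | boolean;
}

// Route each record to the ingestion path described in the three categories above.
function routeRecord(record: IncomingRecord): IngestionRoute {
    switch (record.kind) {
        case 'profile-field':
            return 'upsert';            // structured — stored directly, no extraction
        case 'document':
            return 'memorize-extract';  // unstructured — extractMemories: true
        case 'event':
            return 'memorize-event';    // memorized individually, tagged for context
    }
}
```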

The first time a user ran this on a codebase we'd never seen, the discovery document mapped data flows we would have needed three meetings to find. That was the moment this stopped feeling like automation and started feeling like a different kind of analysis entirely.

Discovery outputs a structured brief: product description, user personas, existing data sources, missing data, current personalization surface, recommended starting point. That brief drives everything that follows.


Phase 2: PROPOSE

After discovery, the skill proposes personalization opportunities. But it applies a filter before proposing anything: Could they build this with a CRM and a basic LLM call?

If yes, the skill doesn't propose it. Swapping a CTA based on user role is template logic. Writing a unique CTA that references the visitor's industry, journey stage, open support tickets, and your brand voice — that requires unified memory, governance, and generation working together. The skill only proposes things that need all three layers.

Proposals are organized across five surface areas:

Software / Web App — Personalized dashboards, smart onboarding that adapts to what the user already knows, contextual feature tips triggered by usage patterns, AI-generated insights surfaced inline, smart defaults pre-populated from memory. The onboarding example is often the fastest win: instead of a generic checklist, the first screen references what the AI already knows about the user's role, company size, and stated goals from the signup flow.

Marketing Campaigns — Hyper-personalized cold outreach using memory of the prospect's role, company stage, and known pain points. Intelligent nurture sequences that change message and cadence based on engagement. ABM playbooks for high-value accounts where each message references company-specific context. Personalized landing pages where the headline and body adapt to the visitor's segment.

Mobile App — Smart push notifications that know not just what to say but when to say it, based on usage patterns stored in memory. A personalized home screen that surfaces the features most relevant to the user's job to be done. Contextual in-app messages that appear when the AI detects a pattern indicating the user is stuck.

Notifications — smartDigest() powers daily or weekly digest emails that assemble the most relevant updates from across memory. Escalation notifications that know which support issues are still open. Product usage nudges that only fire when the AI judges the timing is right based on behavioral history.

Customer Success — AI-powered ticket routing based on product knowledge and account health stored in memory. Proactive churn prevention that surfaces early signals before the CSM would otherwise see them. QBR prep: every customer's usage, tickets, and milestones assembled into a briefing in seconds.


Phase 3: PLAN

When you say "let's do it," the skill generates a phased implementation roadmap. Not generic advice — actual TypeScript using your specific data models and field names from the discovery session.

The plan is structured across five areas:

Data Ingestion — What to memorize, which fields, whether to use extractMemories: true or false, which collection to store to. The skill distinguishes data that needs AI processing from data that doesn't, because running extraction on structured numeric fields wastes budget.

Personalization Points — Every touchpoint mapped: UI components, email sends, notification triggers. For each: the SDK method chain, the prompt template skeleton, the delivery integration.

Governance Setup — Variables to create (brand voice, ICP definition, tone guidelines, compliance rules), with draft content. These become the shared layer every agent fetches before generating.

Architecture — Where Personize sits in the stack. Caching strategy. Rate limit budget (always calculated from client.me(), never hardcoded).

Phased Timeline:

  • Week 1 — npm install @personize/sdk, client.me() to verify auth, client.collections.list() to find real collection IDs, first memorizeBatch() script for the primary data source
  • Week 2 — First personalized feature in production. One touchpoint, measured against the non-personalized baseline.
  • Weeks 3–4 — Full pipeline: cron scheduling, delivery integration (SendGrid, Slack, Twilio, or webhook), feedback loop where generated output gets memorized back
  • Ongoing — Advanced: cross-entity context, governance refinement, second and third collections, audit against the integration checklist

The plan includes the 10-step agentic loop adapted to the specific use case:

OBSERVE → REMEMBER → RECALL → REASON → PLAN → DECIDE → GENERATE → ACT → UPDATE → REPEAT

Each step maps to a specific SDK call. The skill writes the loop out with the developer's actual entity types and field names.
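The loop's shape can be sketched as an ordered pipeline where each step is a pluggable function. The step names come from the loop above; everything else — the state shape, the runner — is illustrative, with the real SDK calls plugged in per step:

```typescript
type StepName =
    | 'OBSERVE' | 'REMEMBER' | 'RECALL' | 'REASON' | 'PLAN'
    | 'DECIDE' | 'GENERATE' | 'ACT' | 'UPDATE';

type LoopState = Record<string, unknown>;
type Step = (state: LoopState) => LoopState;

const STEP_ORDER: StepName[] = [
    'OBSERVE', 'REMEMBER', 'RECALL', 'REASON', 'PLAN',
    'DECIDE', 'GENERATE', 'ACT', 'UPDATE',
];

// Runs one pass through the loop; REPEAT is the caller invoking runLoop
// again on a schedule (cron, queue worker, or event trigger).
function runLoop(steps: Record<StepName, Step>, initial: LoopState = {}): LoopState {
    return STEP_ORDER.reduce((state, name) => steps[name](state), initial);
}
```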


Phase 4: SCHEMA

Property description quality is the single biggest factor in AI extraction accuracy. This is where most teams get it wrong.

The skill helps design two things: collections (entity types) and properties (fields within those entities).

Designing Collections

Collections are the entity types your product revolves around. The skill recommends starting schemas based on product type:

| Product Type | Recommended Collections |
|---|---|
| B2B SaaS | Contact, Company, Deal |
| E-commerce / DTC | Customer, Product, Order |
| Marketplace | Buyer, Seller, Listing |
| Healthcare / HealthTech | Patient, Provider, Appointment |
| Recruiting / HR Tech | Candidate, Position, Client |
| EdTech | Student, Instructor, Course |
| Real Estate / PropTech | Buyer, Seller, Property, Agent |
| Financial Services | Client, Portfolio, Advisor |

Each collection has a slug, a primaryKeyField (usually email or an ID), and an identifierColumn — the field the AI uses to refer to the entity in prose.
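As a minimal sketch, a B2B SaaS Contact collection might look like this. The three field names follow the text above; the values are assumptions for illustration, not the real schema:

```typescript
// Illustrative collection definition — slug, key, and identifier values
// are example choices, not platform defaults.
const contactCollection = {
    slug: 'contact',
    primaryKeyField: 'email',       // how records are keyed on upsert
    identifierColumn: 'full_name',  // how the AI refers to the entity in prose
};
```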

Designing Properties

Properties come in six types: text, number, date, boolean, options, and array. Two decisions matter most per property:

autoSystem: true or false — true means the AI auto-extracts this field during memorization. Use it for everything you want populated from unstructured content: job title from a LinkedIn profile, pain point from a support ticket, key milestone from a sales note.

Update mode: replace or append — Replace for current-state fields (current job title, deal stage, plan tier). Append for accumulating data (pain points mentioned, topics discussed, features requested). Getting this backwards creates stale memory.

What a Good Description Looks Like

This is the most important design decision and the least obvious one:

❌ Weak: "Job title"

✅ Strong: "The person's current professional title as it appears on LinkedIn or in their email signature.
           Examples: 'VP of Engineering', 'Head of Growth', 'Senior Product Manager'.
           Do not infer or guess — only extract when explicitly stated.
           If multiple titles are mentioned, use the most recent one."

The weak version tells the AI nothing about what counts as a valid value. The strong version defines source, format, examples, and edge cases. The difference in extraction accuracy is measurable.

Update Mode: Replace vs Append

// Replace — always the current value
{
  propertyName: 'deal_stage',
  description: 'Current stage in the sales pipeline: Prospect, Discovery, Proposal, Negotiation, Closed Won, Closed Lost.',
  autoSystem: true,
  // replaceExisting: true (default for current-state properties)
}
 
// Append — accumulate over time
{
  propertyName: 'pain_points',
  description: 'Specific business problems or frustrations the person has mentioned, quoted or paraphrased directly. Add new ones; do not remove old ones.',
  autoSystem: true,
  // replaceExisting: false (append mode)
}

After designing the schema, the skill either generates client.collections.create() code to create it programmatically, or provides a specification for manual creation in the Personize web app, then confirms with client.collections.list().


Phase 5: GENERATE

This phase is not about generating content. It's about generating content you can trust to send without human review — or knowing exactly when you can't.

The skill encodes eight production constraints, all required before any message leaves the system:

  1. Channel format matching — Email requires HTML tags (<p>, <b>, <a>). SMS is plain text, 160 characters. Slack uses markdown. Push notifications cap at 100 characters for the body. Misformatted output renders incorrectly or gets rejected by delivery APIs.

  2. No invented claims — Never invent stats, case studies, endorsements, or promises not present in governance variables or entity memory. Hallucinated facts in outbound communication create legal liability.

  3. Subject and body as separate fields — Email subject line and body are two distinct outputs. Concatenated output breaks template rendering in every delivery system.

  4. Channel length limits — SMS ≤ 160 chars. Push ≤ 100 chars. Slack DMs ≤ 150 words. Exceeding limits causes truncation or split messages.

  5. URL validation — No placeholder URLs. Every link prefixed https://, every deep link verified against the routing scheme.

  6. Relevance over personalization depth — If a personal detail doesn't serve the message's goal, drop it. Tangential personal details feel intrusive, not helpful.

  7. Sensitive content flagging — Pricing, legal, medical, financial, compliance topics → flag for human review, do not send autonomously.

  8. Governance-first generation — smartGuidelines() is called before any content is generated. Every governance constraint (competitor mentions, required disclaimers, tone) is enforced before the first word is written.
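The mechanical constraints (1, 3, 4, and 5) can be checked before delivery with a small validator. This is a sketch of the idea, not the skill's actual enforcement code — the limits mirror the list above, and the draft shape is an assumption:

```typescript
type Channel = 'email' | 'sms' | 'push' | 'slack';

interface Draft {
    channel: Channel;
    subject?: string;  // required for email, always as a separate field
    body: string;
    links?: string[];
}

// Pre-send checks for constraints 1, 3, 4, and 5. Returns a list of
// violations; an empty list means the draft passes these checks.
function preSendViolations(draft: Draft): string[] {
    const violations: string[] = [];
    if (draft.channel === 'email' && !draft.subject) {
        violations.push('email needs a subject as a separate field');
    }
    if (draft.channel === 'sms' && draft.body.length > 160) {
        violations.push('sms body exceeds 160 characters');
    }
    if (draft.channel === 'push' && draft.body.length > 100) {
        violations.push('push body exceeds 100 characters');
    }
    if (draft.channel === 'slack' && draft.body.split(/\s+/).length > 150) {
        violations.push('slack DM exceeds 150 words');
    }
    for (const link of draft.links ?? []) {
        if (!link.startsWith('https://')) violations.push(`unvalidated URL: ${link}`);
    }
    return violations;
}
```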

The Generation Pattern

const result = await client.ai.prompt({
    context,   // assembled from smartGuidelines + smartDigest + recall
    instructions: [
        // Step 1: Analyze — who, what, why
        { prompt: 'Analyze the recipient and the goal. What facts do we know? What is the ONE outcome we want?', maxSteps: 3 },
        // Step 2: Guardrails check
        { prompt: 'Review governance guidelines. List constraints: forbidden topics, required disclaimers, tone rules, format rules.', maxSteps: 2 },
        // Step 3: Generate
        { prompt: 'Generate the email. Follow ALL guardrails. Output:\n\nSUBJECT: ...\nBODY_HTML: ...\nBODY_TEXT: ...', maxSteps: 5 },
    ],
    evaluate: true,
    evaluationCriteria: 'Content must: (1) match channel format, (2) contain no invented facts, (3) have subject and body as separate fields, (4) stay within length limits, (5) follow all governance constraints.',
});

evaluate: true instructs the AI to grade its own output before it reaches anyone. If the score is below threshold, the message is flagged instead of sent.


Phase 6: WIRE

The gap between "I generated great content" and "it actually reaches the right person through the right system" is where most integrations stall. The skill knows five wiring patterns.

Pattern 1: Wrap an Existing Function — The cleanest integration. You have a sendWelcomeEmail() function already in production. The skill wraps it: call smartGuidelines() + smartDigest() in parallel, pass the context to client.ai.prompt(), parse the output, then call your original function with personalized content. The existing function handles delivery exactly as before.

// Before
async function sendWelcomeEmail(userId: string, email: string) {
    const template = getDefaultTemplate('welcome');
    await emailService.send({ to: email, ...template });
}
 
// After — wraps the original, doesn't replace it
async function sendPersonalizedWelcomeEmail(userId: string, email: string) {
    const [governance, digest] = await Promise.all([
        client.ai.smartGuidelines({ message: 'welcome email tone and guidelines', mode: 'fast' }),
        client.memory.smartDigest({ email, type: 'Contact', token_budget: 1500 }),
    ]);
 
    const context = [governance.data?.compiledContext, digest.data?.compiledContext].join('\n\n---\n\n');
    const result = await client.ai.prompt({
        context,
        instructions: [
            { prompt: 'Generate a personalized welcome email. Output SUBJECT: and BODY_HTML: as separate fields.', maxSteps: 5 },
        ],
    });
 
    const output = String(result.data || '');
    const subject = output.match(/SUBJECT:\s*(.+)/i)?.[1]?.trim() || 'Welcome!';
    const bodyHtml = output.match(/BODY_HTML:\s*([\s\S]+?)(?=\n[A-Z_]+:|$)/i)?.[1]?.trim() || '';
 
    await emailService.send({ to: email, subject, html: bodyHtml });
 
    // Close the loop — remember what was sent
    await client.memory.memorize({
        content: `[WELCOME EMAIL] Sent ${new Date().toISOString()}. Subject: ${subject}`,
        email, enhanced: true, tags: ['generated', 'welcome', 'email'],
    });
}

Pattern 2: Webhook Receiver — External events (CRM update, Stripe payment, support ticket created) trigger a Personize pipeline. The event is memorized first, then the pipeline decides whether to generate and deliver based on the event type.
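The memorize-first-then-decide step of Pattern 2 can be sketched as a pure decision function. The event type names and routing rules here are illustrative assumptions, not a fixed contract:

```typescript
interface WebhookEvent {
    type: string;                       // e.g. 'ticket.created' (hypothetical names)
    email: string;                      // entity the event belongs to
    payload: Record<string, unknown>;
}

interface PipelineDecision {
    memorize: true;     // every event is memorized first, unconditionally
    generate: boolean;  // only some event types trigger content generation
    tags: string[];     // tags attached when the event is memorized
}

function decideOnEvent(event: WebhookEvent): PipelineDecision {
    // Example routing rule: only these (assumed) event types warrant generation.
    const generativeEvents = new Set(['ticket.created', 'payment.failed']);
    return {
        memorize: true,
        generate: generativeEvents.has(event.type),
        tags: ['webhook', event.type],
    };
}
```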

Pattern 3: Middleware Enrichment — Personize sits between request and response. Every API route gets personalization context injected into req.personalization. If Personize is down, req.personalization is null and the route continues unaffected.
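The graceful-degradation contract of Pattern 3 can be sketched with a minimal, Express-like middleware. The request shape and the fetcher are illustrative stand-ins for the real framework types and SDK call:

```typescript
// Minimal request shape standing in for an Express req object.
interface Req { userEmail?: string; personalization?: unknown | null; }

// Stand-in for the real context-assembly SDK call.
type ContextFetcher = (email: string) => Promise<unknown>;

function personalizationMiddleware(fetchContext: ContextFetcher) {
    return async (req: Req, next: () => void): Promise<void> => {
        try {
            req.personalization = req.userEmail
                ? await fetchContext(req.userEmail)
                : null;
        } catch {
            // Personize is down → null, and the route continues unaffected.
            req.personalization = null;
        }
        next();
    };
}
```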

Pattern 4: Cron → Generate → Deliver — Scheduled job (GitHub Actions or a cron function) runs daily or weekly. Fetches entities, assembles context from memory, generates personalized content, delivers to each channel. This pattern powers digest emails, renewal reminders, and weekly usage reports.

Pattern 5: Event-Driven Queue — High-volume pipelines use a queue (BullMQ, SQS) to process memorization and generation without overwhelming rate limits. Each worker handles one entity at a time.

The critical constraint across all five patterns: the existing function must work if Personize is down. Personalization is an enhancement, not a dependency. Graceful degradation is not optional.


Phase 7: REVIEW

For teams that already have an integration, the skill runs a full production audit against nine areas.

  1. Connection — SDK installed, client.me() returns org name, rate limits readable
  2. Schema — Collections created, every property has a description (not just a name), display types match the data
  3. Memorize — Rich text uses extractMemories: true, batch operations have 429 retry logic, not pre-processing with an LLM before memorizing (the platform does the extraction)
  4. Recall — Right method for each use case: smartRecall() for semantic search, smartDigest() for entity context bundles, recall() for direct lookup
  5. Governance — At least one guideline created, triggerKeywords set on each, smartGuidelines() returns meaningful content for real tasks
  6. Generate — Multi-step instructions[] used, structured outputs for machine-readable results, generated content memorized back, evaluate: true on production runs
  7. Agents — Common prompt patterns saved as agents, input variables documented, tested with real entity data
  8. Workspaces — Workspace schema attached to entities needing multi-agent coordination, agents reading and writing the workspace via memorize()
  9. Production readiness — Context assembled from all three layers (smartGuidelines + smartDigest + smartRecall), generated outputs memorized after delivery, sensitive content flagged, rate limits read from client.me() not hardcoded
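The 429 retry logic from item 3 usually means exponential backoff. A sketch of the schedule computation — the base delay and cap are illustrative choices, not values mandated by the platform:

```typescript
// Exponential backoff schedule for 429 retries: 1s, 2s, 4s, ... capped.
function backoffDelaysMs(attempts: number, baseMs = 1000, capMs = 30000): number[] {
    return Array.from({ length: attempts }, (_, i) =>
        Math.min(baseMs * 2 ** i, capMs),
    );
}
```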

The skill reads the existing integration code, identifies data that exists but isn't being memorized, and finds UI and notification touchpoints that currently have no personalization. It presents findings as: here's what you're doing well, here's what's missing, here's how to improve — with specific code diffs for the most impactful gaps.


Design Patterns and Architectural Nuances

The seven phases are the journey. These are the decision frameworks the skill uses within them — the patterns a solution architect needs to internalize to make the right calls.

The Data Intelligence Taxonomy

Not all data is created equal. The skill classifies what to memorize into eleven categories, each with a different ingestion strategy:

| Category | What It Is | Extraction? | Example |
|---|---|---|---|
| Identity & Profile | Structured attributes | No — upsert() | Name, email, company, plan tier |
| Behavioral Signals | Usage events and engagement | Yes — patterns | Feature activations, page views, session frequency |
| Search Queries | What the user asked for | Yes — intent | "how to export CSV", "cancel subscription" |
| Content Consumption | What they read or watched | Yes — interests | Help articles visited, webinar attendance |
| User-Generated Content | What they created | Yes — context | Uploaded documents, form responses, survey answers |
| Expressed Preferences | Natural language statements | Yes — preferences | "I prefer weekly updates", "don't email on Fridays" |
| Interaction History | Support, calls, emails | Yes — relationship | Ticket transcripts, call notes, email threads |
| Purchase & Usage | Transaction and metric data | No — upsert() | MRR, seats used, last invoice date |
| ML Model Outputs | Predictions with explanations | No — upsert() | Churn score: 0.73, reason: "usage dropped 40%" |
| Generated Content | What the AI already sent | Yes — avoid repeats | Previous outreach emails, notification history |
| External Signals | Third-party intelligence | Yes — context | Funding rounds, job changes, news mentions |

The critical nuance: categories 1, 8, and 9 are already structured. Running extractMemories: true on a number like MRR: 45000 wastes API calls and extraction budget. The skill distinguishes these automatically — structured data goes through upsert() or memorizeBatch() with extractMemories: false, unstructured content goes through memorize() with extraction on. Getting this wrong is the most common budget mistake in early integrations.

The Three-Layer Operating Model

Every pipeline the skill designs follows the same context assembly pattern. Three layers, assembled in parallel, merged before any generation happens:

Layer 1: GuidelinessmartGuidelines() fetches the governance variables relevant to the current task. Brand voice, compliance rules, ICP definitions, tone requirements, competitor mention policies. This layer answers: what are our rules?

Layer 2: MemorysmartDigest() compiles everything known about the entity into a token-budgeted context block. Past interactions, extracted properties, behavioral patterns, previous generated content. This layer answers: who are we talking to?

Layer 3: WorkspacesmartRecall() pulls coordination state when multiple agents or humans are working on the same entity. Open tasks, recent contributions, handoff notes, pending decisions. This layer answers: what's already in motion?

const [guidelines, memory, workspace] = await Promise.all([
    client.ai.smartGuidelines({ message: taskDescription, mode: 'fast' }),
    client.memory.smartDigest({ email, type: 'Contact', token_budget: 1500 }),
    client.memory.recall({ query: 'workspace updates and pending actions', email, limit: 10 }),
]);
 
const context = [
    guidelines.data?.compiledContext,
    memory.data?.compiledContext,
    workspace.data?.memories?.map(m => m.content).join('\n'),
].filter(Boolean).join('\n\n---\n\n');

The skill never generates with only one or two layers. Context assembled from all three layers is the minimum bar for production-quality output. Skipping governance means ungoverned generation. Skipping memory means generic output. Skipping workspace means agents stepping on each other.

Performance Modes and Token Budgets

Two performance modes change how the skill designs pipelines:

  • mode: 'fast' — ~200ms response. Uses cached governance and lighter recall. For real-time interactions: in-app messages, API middleware, chatbot responses, UI personalization where latency matters.
  • mode: 'full' — ~3 seconds. Deep analysis with full governance retrieval. For async pipelines: email campaigns, QBR prep, digest generation, anything where quality outweighs speed.

Token budgets in smartDigest() control how much context is assembled. The skill calibrates these based on the task:

  • 500 tokens — Lightweight. Enough for a push notification or a single-sentence personalization.
  • 1,500 tokens — Standard. Covers identity, recent interactions, key properties. Good for emails and in-app messages.
  • 3,000+ tokens — Deep. Full history, cross-entity context, behavioral patterns. For QBR prep, account reviews, complex generation.

The budget isn't a guess. The skill calculates it from the model's context window minus the governance tokens minus the prompt template size, leaving room for generation.
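That calculation is simple arithmetic. A sketch — the input numbers in the test are example values, not model or platform constants:

```typescript
// Digest budget = context window minus governance tokens, prompt template
// size, and a reserve left for the model's own generation.
function digestTokenBudget(
    contextWindow: number,
    governanceTokens: number,
    promptTemplateTokens: number,
    generationReserve: number,
): number {
    return Math.max(
        0,
        contextWindow - governanceTokens - promptTemplateTokens - generationReserve,
    );
}
```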

Anti-Patterns the Skill Catches

Phase 7 (REVIEW) audits for these. But the skill also prevents them during design:

Pre-processing with an LLM before memorizing. Teams sometimes run GPT over their data to "clean it" before sending it to Personize. The platform already runs extraction — doing it twice doubles cost and often loses signal the second LLM decided wasn't important.

Hardcoding rate limits. Every plan has different limits. The skill always reads from client.me() and calculates throughput: each record uses ~4–6 API calls, so a 60 calls/min limit means ~10 records/min, not 60. Hardcoding 60 causes 429 errors in the first batch run.
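The throughput arithmetic in that paragraph, as a one-liner — callsPerMinute would come from client.me() at runtime, never a literal:

```typescript
// Records per minute = API call budget divided by calls consumed per record.
// Each record typically costs ~4–6 calls, per the anti-pattern above.
function recordsPerMinute(callsPerMinute: number, callsPerRecord: number): number {
    return Math.floor(callsPerMinute / callsPerRecord);
}
```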

Using smartDigest() when recall() is enough. Digest compiles a full entity profile. If you only need "what support tickets are open," a targeted recall() with a specific query is faster, cheaper, and returns more focused context.

Memorizing without tags. Tags make recall precise. An email sent without a ['generated', 'outreach', 'email'] tag set is a memory the system can recall but can't filter. When you later need "all outreach emails sent to this contact," you're doing semantic search instead of a direct filter.

Piling every known detail into one message. A message that references someone's name, title, company, recent purchase, last support ticket, AND preferred communication style doesn't feel personalized — it feels surveilled. The skill enforces constraint 6 from GENERATE: relevance over personalization depth.


Ten Industry Blueprints

The skill ships with blueprints for ten industries. Each blueprint contains: recommended schema, governance setup, unified memory strategy, use cases organized by function (Sales, Marketing, CS, Product, Operations), agent coordination patterns, and code examples specific to that industry's data models.

SaaS / B2B Software — 39 use cases. The densest blueprint. Sales: personalized cold outreach using job change signals, champion tracking through org changes, deal risk scoring from engagement drops. Marketing: intent-scored trial activation, account-level ABM content. CS: proactive churn detection from usage pattern shifts, QBR prep assembled in seconds from memory.

E-Commerce / DTC / Retail — 36 use cases. Browse abandonment sequences that reference the specific products viewed. Post-purchase cross-sell based on product compatibility memory. Return pattern detection for proactive service. Win-back campaigns that know the real reason the customer churned.

Healthcare / HealthTech — 28 use cases. Personalized care reminders calibrated to patient preferences and history. Provider briefings before appointments. Patient education content matched to reading level, language, and condition history. Compliance-first governance that enforces HIPAA constraints across all generated output.

Financial Services / FinTech — 30 use cases. Portfolio review prep using the client's stated goals, risk tolerance, and life changes stored in memory. Wealth management context assembled from every interaction. Regulatory disclosure constraints enforced in governance before any output generates.

Education / EdTech — 30 use cases. Personalized learning paths that adapt to what the student already knows. Instructor support briefings before office hours. Institutional reporting on engagement patterns. Student risk detection from behavioral signals before a drop becomes visible in grades.

Real Estate / PropTech — 30 use cases. Buyer memory that accumulates preferences, rejections, and reasoning across every property shown. Seller listing briefs from comparable sales and market context. Long-term relationship management for clients who transact every 5–7 years — memory bridges the gap.

Recruiting / HR Tech — 34 use cases. Candidate memory across every touchpoint in the process. Personalized outreach that references the specific research a recruiter has done. Interview prep briefs for hiring managers. Client-side relationship management that remembers what the client said about every candidate ever discussed.

Professional Services — 26 use cases. Matter briefings that assemble all relevant case history before client calls. Business development intelligence on prospect companies. Talent utilization visibility across the firm. Engagement health monitoring before a client relationship deteriorates.

Insurance / InsurTech — 24 use cases. Agent productivity tools that surface the right product for each client from stored profile memory. Claims handling context assembled for adjusters. Renewal outreach that references the client's coverage history and life changes.

Travel & Hospitality — 28 use cases. Guest preference memory that persists across stays. Pre-arrival personalized itineraries assembled from past behavior and stated preferences. Revenue management tools informed by customer lifetime value stored in memory. Post-stay sequences that build on what was remembered.


Cross-Entity Memory

Three-layer context assembly — Guidelines, Memory, and Workspace converge into governed generation

The three-layer model described above works within a single entity. Cross-entity memory is where it gets architecturally interesting — and where single-entity thinking breaks down.

Most teams start by memorizing one entity type. Contacts, usually. The real power shows up when memory spans entity types and the AI reasons across the relationships.

A B2B SaaS product has contacts, companies, and deals. When generating content for a contact, pulling only the contact's memory creates a blind spot. The company context — industry, size, growth stage, open support tickets, usage metrics — lives in a separate entity. The skill maps these relationships and builds the cross-entity context assembly pattern:

const [governance, contactDigest, companyDigest, recentTickets] = await Promise.all([
    client.ai.smartGuidelines({ message: 'renewal conversation guidelines' }),
    client.memory.smartDigest({ email: contact.email, type: 'Contact', token_budget: 1500 }),
    client.memory.smartDigest({ website_url: company.domain, type: 'Company', token_budget: 1000 }),
    client.memory.recall({ query: 'support issues and complaints', email: contact.email, limit: 5 }),
]);

The AI doesn't walk into a renewal conversation blind to an active support problem. That's the difference between personalization and context-aware intelligence.

For an energy management platform, the skill mapped four entity types: buildings, building managers, energy auditors, and scenario decisions. Each relationship between them became a structured memory event that any agent could recall. The schema made sense on the first pass — for a product the skill had never worked with before. It wasn't following a template. It was reasoning about the domain.


The Feedback Loop

The feedback loop — Memory feeds Generation feeds Delivery feeds Memory

One thing the skill encodes that's easy to miss: everything generated gets memorized back.

When the AI sends an email, that email and its outcome get stored in the contact's memory. When a push notification is delivered, the delivery is logged. Next time the pipeline runs, it knows what was already said, what worked, and what to avoid repeating. Memory feeds generation. Generation feeds memory.

We ran the same pipeline on the same contacts two weeks apart. The second run referenced what the first run had sent. Nobody told it to do that. The loop just worked because the architecture made it inevitable.

This is why the integration checklist has a dedicated section for feedback loops. It's not enough to generate and deliver. The system only gets smarter if the output goes back in.


The framework — discover, propose, plan, schema, generate, wire, review — has held up across every product we've tested it on, from single-person startups to enterprise platforms with thousands of entities. The source is open. If you want to understand how it reasons about your domain, it's right there.

npx skills add personizeai/personize-skills

If you try it, I want to hear what it surfaces. Especially the things you didn't expect.


Next: a real discovery session — what happens when the AI reads a codebase and maps data flows we would have needed three meetings to find.