<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Product Development on Claude Code Wiki</title><link>http://www.markalston.net/claude-code-wiki/product/</link><description>Recent content in Product Development on Claude Code Wiki</description><generator>Hugo</generator><language>en-us</language><atom:link href="http://www.markalston.net/claude-code-wiki/product/index.xml" rel="self" type="application/rss+xml"/><item><title>Product Thinking for Engineers</title><link>http://www.markalston.net/claude-code-wiki/product/product-thinking/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>http://www.markalston.net/claude-code-wiki/product/product-thinking/</guid><description>&lt;h1 id="product-thinking-for-engineers"&gt;Product Thinking for Engineers&lt;a class="anchor" href="#product-thinking-for-engineers"&gt;#&lt;/a&gt;&lt;/h1&gt;
&lt;h2 id="executive-summary"&gt;Executive Summary&lt;a class="anchor" href="#executive-summary"&gt;#&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;Most engineers build what they&amp;rsquo;re told to build, and build it well. The gap is that &amp;ldquo;well&amp;rdquo; usually means technically sound &amp;ndash; correct algorithms, clean architecture, good test coverage &amp;ndash; while the thing being built may not solve the problem it was meant to solve. Product thinking is the discipline of questioning what you&amp;rsquo;re building and why, before optimizing how.&lt;/p&gt;
&lt;table&gt;
 &lt;thead&gt;
 &lt;tr&gt;
 &lt;th&gt;Article&lt;/th&gt;
 &lt;th&gt;Focus&lt;/th&gt;
 &lt;th&gt;What You&amp;rsquo;ll Get&lt;/th&gt;
 &lt;/tr&gt;
 &lt;/thead&gt;
 &lt;tbody&gt;
 &lt;tr&gt;
 &lt;td&gt;Product Thinking (this page)&lt;/td&gt;
 &lt;td&gt;Why engineers should care about product work&lt;/td&gt;
 &lt;td&gt;A concrete example of product-blind vs. product-aware engineering, and how the section fits together&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;a href="http://www.markalston.net/claude-code-wiki/product/user-research/"&gt;User Research &amp;amp; Validation&lt;/a&gt;&lt;/td&gt;
 &lt;td&gt;Discovering what users need&lt;/td&gt;
 &lt;td&gt;Interview techniques, assumption mapping, validation methods&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;a href="http://www.markalston.net/claude-code-wiki/product/requirements-specifications/"&gt;Requirements &amp;amp; Specifications&lt;/a&gt;&lt;/td&gt;
 &lt;td&gt;Turning research into buildable definitions&lt;/td&gt;
 &lt;td&gt;Spec writing, acceptance criteria, traceability&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;a href="http://www.markalston.net/claude-code-wiki/product/prototyping-iteration/"&gt;Prototyping &amp;amp; Iteration&lt;/a&gt;&lt;/td&gt;
 &lt;td&gt;Building to learn, not to ship&lt;/td&gt;
 &lt;td&gt;Throwaway prototypes, feedback loops, iteration discipline&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;a href="http://www.markalston.net/claude-code-wiki/product/prioritization-tradeoffs/"&gt;Prioritization &amp;amp; Trade-offs&lt;/a&gt;&lt;/td&gt;
 &lt;td&gt;Deciding what to build next&lt;/td&gt;
 &lt;td&gt;Impact vs. effort, RICE scoring, opportunity cost, cost of delay, saying no&lt;/td&gt;
 &lt;/tr&gt;
 &lt;/tbody&gt;
&lt;/table&gt;
&lt;h2 id="table-of-contents"&gt;Table of Contents&lt;a class="anchor" href="#table-of-contents"&gt;#&lt;/a&gt;&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#the-gap-between-building-and-solving"&gt;The Gap Between Building and Solving&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#what-product-thinking-is"&gt;What Product Thinking Is&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#an-example-search-autocomplete"&gt;An Example: Search Autocomplete&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#how-claude-code-changes-the-equation"&gt;How Claude Code Changes the Equation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#how-this-section-is-organized"&gt;How This Section Is Organized&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="the-gap-between-building-and-solving"&gt;The Gap Between Building and Solving&lt;a class="anchor" href="#the-gap-between-building-and-solving"&gt;#&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;Engineers optimize. Given a problem, they reduce latency, improve throughput, eliminate edge cases, and harden failure modes. This is valuable work. It is also, frequently, the wrong work.&lt;/p&gt;</description></item><item><title>User Research &amp; Validation with Claude Code</title><link>http://www.markalston.net/claude-code-wiki/product/user-research/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>http://www.markalston.net/claude-code-wiki/product/user-research/</guid><description>&lt;h1 id="user-research--validation-with-claude-code"&gt;User Research &amp;amp; Validation with Claude Code&lt;a class="anchor" href="#user-research--validation-with-claude-code"&gt;#&lt;/a&gt;&lt;/h1&gt;
&lt;h2 id="executive-summary"&gt;Executive Summary&lt;a class="anchor" href="#executive-summary"&gt;#&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;Engineers skip user research because it feels like a separate discipline &amp;ndash; interviews, surveys, affinity diagrams, personas. Most of that overhead is mechanical: reading, categorizing, synthesizing. Claude Code handles the mechanical parts, which means you can do useful research in 20 minutes from your terminal. This article shows you how.&lt;/p&gt;
&lt;table&gt;
 &lt;thead&gt;
 &lt;tr&gt;
 &lt;th&gt;Research Method&lt;/th&gt;
 &lt;th&gt;What It Tells You&lt;/th&gt;
 &lt;th&gt;Claude Code Technique&lt;/th&gt;
 &lt;/tr&gt;
 &lt;/thead&gt;
 &lt;tbody&gt;
 &lt;tr&gt;
 &lt;td&gt;Support ticket analysis&lt;/td&gt;
 &lt;td&gt;Where users get stuck right now&lt;/td&gt;
 &lt;td&gt;Feed ticket exports, extract patterns and frequency&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Interview transcript synthesis&lt;/td&gt;
 &lt;td&gt;What users say they need (and what they reveal accidentally)&lt;/td&gt;
 &lt;td&gt;Load transcripts, pull out themes and contradictions&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Competitive analysis&lt;/td&gt;
 &lt;td&gt;What alternatives exist and where they fall short&lt;/td&gt;
 &lt;td&gt;Describe competitor features, identify gaps&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Usage data interpretation&lt;/td&gt;
 &lt;td&gt;What users actually do (vs. what they say)&lt;/td&gt;
 &lt;td&gt;Feed metrics, generate hypotheses for observed behavior&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Survey design and analysis&lt;/td&gt;
 &lt;td&gt;Targeted answers to specific questions&lt;/td&gt;
 &lt;td&gt;Generate questions from assumptions, analyze response data&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Feasibility prototyping&lt;/td&gt;
 &lt;td&gt;Whether a solution is technically viable&lt;/td&gt;
 &lt;td&gt;Build proof-of-concept scripts to test core mechanics&lt;/td&gt;
 &lt;/tr&gt;
 &lt;/tbody&gt;
&lt;/table&gt;
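&lt;p&gt;As a sketch of what &amp;ldquo;extract patterns and frequency&amp;rdquo; looks like mechanically: a frequency count over categorized tickets. The ticket records and category names below are hypothetical, and a real export would first need parsing from your support tool&amp;rsquo;s CSV or JSON dump.&lt;/p&gt;

```python
from collections import Counter

# Hypothetical ticket export, already parsed into dicts
# (a real export would come from the support tool's dump).
tickets = [
    {"id": 1, "category": "billing", "text": "charged twice"},
    {"id": 2, "category": "login",   "text": "2FA code never arrives"},
    {"id": 3, "category": "billing", "text": "refund not showing"},
]

# Count how often each category appears, most frequent first.
frequencies = Counter(t["category"] for t in tickets)
for category, count in frequencies.most_common():
    print(category, count)
# prints: billing 2, then login 1
```

&lt;p&gt;The point of the count is not the numbers themselves but the ranking: it turns a stack of vague impressions into an ordered list of where users get stuck most often.&lt;/p&gt;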
&lt;h2 id="table-of-contents"&gt;Table of Contents&lt;a class="anchor" href="#table-of-contents"&gt;#&lt;/a&gt;&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#why-engineers-skip-research"&gt;Why Engineers Skip Research&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#support-ticket-and-bug-report-analysis"&gt;Support Ticket and Bug Report Analysis&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#synthesizing-interview-transcripts"&gt;Synthesizing Interview Transcripts&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#competitive-analysis"&gt;Competitive Analysis&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#usage-data-interpretation"&gt;Usage Data Interpretation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#validation-before-commitment"&gt;Validation Before Commitment&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#when-to-stop-researching"&gt;When to Stop Researching&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="why-engineers-skip-research"&gt;Why Engineers Skip Research&lt;a class="anchor" href="#why-engineers-skip-research"&gt;#&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;Research feels slow because it&amp;rsquo;s unstructured. Writing code has a tight feedback loop &amp;ndash; write, run, see results. Research has no compiler. You read a stack of support tickets and come away with vague impressions. You interview three users and get three different stories. There&amp;rsquo;s no green bar that tells you you&amp;rsquo;re done.&lt;/p&gt;</description></item><item><title>Requirements &amp; Specifications: From Problem to Prompt</title><link>http://www.markalston.net/claude-code-wiki/product/requirements-specifications/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>http://www.markalston.net/claude-code-wiki/product/requirements-specifications/</guid><description>&lt;h1 id="requirements--specifications-from-problem-to-prompt"&gt;Requirements &amp;amp; Specifications: From Problem to Prompt&lt;a class="anchor" href="#requirements--specifications-from-problem-to-prompt"&gt;#&lt;/a&gt;&lt;/h1&gt;
&lt;h2 id="executive-summary"&gt;Executive Summary&lt;a class="anchor" href="#executive-summary"&gt;#&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;You&amp;rsquo;ve done the research. You know what users struggle with. Now you need to turn that understanding into something precise enough that you &amp;ndash; or Claude Code &amp;ndash; can build it. This article covers requirement formats, feature decomposition, edge case discovery, and the handoff to spec-driven development. The goal: close the gap between &amp;ldquo;I understand the problem&amp;rdquo; and &amp;ldquo;I can write a build prompt.&amp;rdquo;&lt;/p&gt;</description></item><item><title>Prototyping &amp; Iteration: Build to Learn</title><link>http://www.markalston.net/claude-code-wiki/product/prototyping-iteration/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>http://www.markalston.net/claude-code-wiki/product/prototyping-iteration/</guid><description>&lt;h1 id="prototyping--iteration-build-to-learn"&gt;Prototyping &amp;amp; Iteration: Build to Learn&lt;a class="anchor" href="#prototyping--iteration-build-to-learn"&gt;#&lt;/a&gt;&lt;/h1&gt;
&lt;h2 id="executive-summary"&gt;Executive Summary&lt;a class="anchor" href="#executive-summary"&gt;#&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;A prototype is a question in code form. You build it to answer a specific question &amp;ndash; &amp;ldquo;can this API handle the load?&amp;rdquo;, &amp;ldquo;does this workflow make sense to users?&amp;rdquo;, &amp;ldquo;will these two systems integrate?&amp;rdquo; &amp;ndash; and then discard it. This article covers the types of prototypes, when to use each, the Claude Code workflow for running them, and the discipline of deciding what comes after.&lt;/p&gt;</description></item><item><title>Feature Prioritization &amp; Trade-offs</title><link>http://www.markalston.net/claude-code-wiki/product/prioritization-tradeoffs/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>http://www.markalston.net/claude-code-wiki/product/prioritization-tradeoffs/</guid><description>&lt;h1 id="feature-prioritization--trade-offs"&gt;Feature Prioritization &amp;amp; Trade-offs&lt;a class="anchor" href="#feature-prioritization--trade-offs"&gt;#&lt;/a&gt;&lt;/h1&gt;
&lt;h2 id="executive-summary"&gt;Executive Summary&lt;a class="anchor" href="#executive-summary"&gt;#&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;Every team has more ideas than capacity. The hard part of product work is deciding what to build next and &amp;ndash; equally important &amp;ndash; what to defer. This article covers prioritization frameworks you can apply to a backlog, the concept of opportunity cost in engineering decisions, the discipline of saying no, and how Claude Code can model trade-offs quantitatively.&lt;/p&gt;
&lt;table&gt;
 &lt;thead&gt;
 &lt;tr&gt;
 &lt;th&gt;Framework&lt;/th&gt;
 &lt;th&gt;Best For&lt;/th&gt;
 &lt;th&gt;Inputs Needed&lt;/th&gt;
 &lt;th&gt;Output&lt;/th&gt;
 &lt;/tr&gt;
 &lt;/thead&gt;
 &lt;tbody&gt;
 &lt;tr&gt;
 &lt;td&gt;Impact vs. effort&lt;/td&gt;
 &lt;td&gt;Comparing features against each other&lt;/td&gt;
 &lt;td&gt;Estimated impact (users, revenue, retention), estimated effort (days, complexity)&lt;/td&gt;
 &lt;td&gt;2x2 matrix ranking features by ROI&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;RICE scoring&lt;/td&gt;
 &lt;td&gt;Large backlogs with mixed feature types&lt;/td&gt;
 &lt;td&gt;Reach, impact, confidence, effort&lt;/td&gt;
 &lt;td&gt;Numeric score for stack-ranking&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Opportunity cost&lt;/td&gt;
 &lt;td&gt;Binary build-or-defer decisions&lt;/td&gt;
 &lt;td&gt;What else the team could build with the same time&lt;/td&gt;
 &lt;td&gt;Explicit comparison of alternatives&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Cost of delay&lt;/td&gt;
 &lt;td&gt;Time-sensitive features&lt;/td&gt;
 &lt;td&gt;Revenue or retention impact per week of delay&lt;/td&gt;
 &lt;td&gt;Urgency-adjusted priority&lt;/td&gt;
 &lt;/tr&gt;
 &lt;/tbody&gt;
&lt;/table&gt;
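&lt;p&gt;The RICE row above can be made concrete with a short sketch using the standard formula, score = (reach &amp;times; impact &amp;times; confidence) / effort. The feature names and numbers are illustrative, not from any real backlog.&lt;/p&gt;

```python
# RICE inputs: reach is users affected per quarter, impact is a
# 0.25-3 scale, confidence is 0.0-1.0, effort is person-weeks.
backlog = [
    {"name": "CSV export", "reach": 800,  "impact": 1.0, "confidence": 0.8, "effort": 2},
    {"name": "SSO login",  "reach": 1200, "impact": 2.0, "confidence": 0.5, "effort": 8},
    {"name": "Dark mode",  "reach": 3000, "impact": 0.5, "confidence": 0.9, "effort": 3},
]

def rice(item):
    """Score = (reach * impact * confidence) / effort."""
    return (item["reach"] * item["impact"] * item["confidence"]) / item["effort"]

# Stack-rank the backlog, highest score first.
for item in sorted(backlog, key=rice, reverse=True):
    print(f"{item['name']}: {rice(item):.0f}")
```

&lt;p&gt;With these illustrative numbers the ranking is Dark mode (450), CSV export (320), SSO login (150) &amp;ndash; note how the confidence term pulls the large-but-uncertain SSO feature to the bottom despite its high impact.&lt;/p&gt;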
&lt;h2 id="table-of-contents"&gt;Table of Contents&lt;a class="anchor" href="#table-of-contents"&gt;#&lt;/a&gt;&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#the-prioritization-problem"&gt;The Prioritization Problem&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#impact-vs-effort"&gt;Impact vs. Effort&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#rice-scoring"&gt;RICE Scoring&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#opportunity-cost"&gt;Opportunity Cost&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#cost-of-delay"&gt;Cost of Delay&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#saying-no"&gt;Saying No&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#modeling-trade-offs-with-claude-code"&gt;Modeling Trade-offs with Claude Code&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#worked-example-the-dashboard-backlog"&gt;Worked Example: The Dashboard Backlog&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#anti-patterns"&gt;Anti-Patterns&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="the-prioritization-problem"&gt;The Prioritization Problem&lt;a class="anchor" href="#the-prioritization-problem"&gt;#&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;A team has twelve features in the backlog, capacity for three this quarter, and stakeholders who each believe their feature is the most important. Without a framework, the loudest voice wins &amp;ndash; or the team compromises by starting five features and finishing none.&lt;/p&gt;</description></item></channel></rss>