<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:blog="https://jonesrussell.github.io/blog/ns"><channel><title>Github-Actions on Web Developer Blog</title><link>https://jonesrussell.github.io/blog/tags/github-actions/</link><description>Recent content in Github-Actions on Web Developer Blog</description><image><title>Web Developer Blog</title><url>https://jonesrussell.github.io/blog/images/og-default.png</url><link>https://jonesrussell.github.io/blog/images/og-default.png</link></image><generator>Hugo -- 0.160.1</generator><language>en-us</language><lastBuildDate>Mon, 06 Apr 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://jonesrussell.github.io/blog/tags/github-actions/feed.xml" rel="self" type="application/rss+xml"/><item><title>Day One of the Content Pipeline: What Broke and What I Fixed</title><link>https://jonesrussell.github.io/blog/refining-content-pipeline-github-actions/</link><pubDate>Mon, 06 Apr 2026 00:00:00 +0000</pubDate><guid>https://jonesrussell.github.io/blog/refining-content-pipeline-github-actions/</guid><category>devops</category><blog:tag>github-actions</blog:tag><blog:tag>automation</blog:tag><blog:tag>content</blog:tag><blog:tag>claude-code</blog:tag><description>First-run lessons from an automated content pipeline. Noise, human-only merges, and a backwards production step surfaced in 24 hours.</description><content:encoded><![CDATA[<p>Ahnii!</p>
<p>Yesterday&rsquo;s post walked through <a href="/blog/automated-content-pipeline-github-actions/">automating a content pipeline with GitHub Actions and Issues</a>. The idea: a daily scheduled job scans recent commits and closed issues across several repos, filters out the noise, and opens what&rsquo;s left as GitHub issues labeled <code>stage:mined</code>. One of those issues looks something like this:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"><code class="language-text" data-lang="text"><span style="display:flex;"><span>Title: [content] feat: add SovereigntyProfile to Layer 0
</span></span><span style="display:flex;"><span>Body:
</span></span><span style="display:flex;"><span>  ## Source
</span></span><span style="display:flex;"><span>  Commit `abc1234` in `waaseyaa/framework`
</span></span><span style="display:flex;"><span>  ## Content Seed
</span></span><span style="display:flex;"><span>  feat: add SovereigntyProfile to Layer 0
</span></span><span style="display:flex;"><span>  ## Suggested Type
</span></span><span style="display:flex;"><span>  text-post
</span></span></code></pre></div><p>Those issues are raw material. You curate them into drafts, produce the copy, and publish. That surfacing step is what the rest of this post calls <em>mining</em>. This post is about what happened the first time I actually ran that pipeline. The short version: it works, but the first real run turned up three problems no amount of planning could have caught. Here are the three fixes and the meta-lesson underneath them.</p>
<h2 id="day-one-output-20-issues-too-much-noise">Day One Output: 20 Issues, Too Much Noise</h2>
<p>The mining workflow fired on schedule and opened 20 <code>stage:mined</code> issues overnight, pulled from three repos. Good news: the pipeline saw everything it was supposed to see. Bad news: &ldquo;everything&rdquo; is not the same as &ldquo;a usable drafting queue.&rdquo; The first run had more noise than I expected, and it had noise the filter couldn&rsquo;t see.</p>
<h2 id="fix-1-tighten-the-mining-filter">Fix 1: Tighten the Mining Filter</h2>
<p>Even with the v1 noise filter, too many low-signal commits made it through. Things like <code>fix: align FileRepositoryInterface usage with Waaseyaa\Media\File contract</code> matter for the codebase and are boring as standalone posts. The first fix was to extend the exclude regex in <code>content-mine.yml</code>:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>COMMITS<span style="color:#f92672">=</span><span style="color:#66d9ef">$(</span>gh api <span style="color:#e6db74">&#34;repos/</span>$REPO<span style="color:#e6db74">/commits?since=</span>$SINCE<span style="color:#e6db74">&amp;per_page=50&#34;</span> <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span>  --jq <span style="color:#e6db74">&#39;.[] | select(.commit.message | test(&#34;^(Merge |chore|docs|fix typo|bump|update dep|Bump |fix:.*([Pp]hp[Ss]tan|namespace|alignment|placeholder|phpunit|mock|ignore|typo))&#34;; &#34;i&#34;) | not) | {sha: .sha[0:7], message: (.commit.message | split(&#34;\n&#34;) | .[0]), date: .commit.author.date}&#39;</span> <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span>  2&gt;/dev/null <span style="color:#f92672">||</span> echo <span style="color:#e6db74">&#34;&#34;</span><span style="color:#66d9ef">)</span>
</span></span></code></pre></div><p>The new patterns (<code>phpstan</code>, <code>namespace</code>, <code>alignment</code>, <code>placeholder</code>, <code>phpunit</code>, <code>mock</code>, <code>ignore</code>, <code>typo</code>) catch categories of real work nobody wants to read about. A minimum message length of 25 characters cuts drive-by fixes. Fewer mined issues per run, and the ones that survive sit closer to &ldquo;actually postable.&rdquo; That handled the mechanical noise. The next problem was harder because no regex could see it.</p>
<h2 id="fix-2-merge-in-curation">Fix 2: Merge-in-Curation</h2>
<p>Filters are a blunt instrument. They cannot tell that eight separate commits all belong to the same post. On day one, the <a href="https://github.com/waaseyaa/giiken">Giiken</a> project alone produced eight mined issues: scaffold, entity types, RBAC, ingestion, wiki schema, query layer, plus two support commits. Every one of them was a valid feature commit. Together they were one post. No filter was going to catch that. Only a human reading them side by side could say &ldquo;these are a story.&rdquo;</p>
<p>So curation got a new action: <strong>merge into target</strong>. Instead of picking one winner and closing the rest, you pick a canonical issue, roll the seeds from the others into its body, and close the sources. The target ends up carrying a combined seed (the whole story), and the source issues get a <code>stage:skipped</code> label and a closed state.</p>
<p>The curation skill now runs like this:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"><code class="language-text" data-lang="text"><span style="display:flex;"><span>→ Approve (move to stage:curated)
</span></span><span style="display:flex;"><span>→ Skip   (close with stage:skipped label)
</span></span><span style="display:flex;"><span>→ Merge  (pick target, combine seeds, close sources)
</span></span><span style="display:flex;"><span>→ Edit   (adjust seed, type, or channels before approving)
</span></span></code></pre></div><p>Running that over the 20 mined issues collapsed them to 4 curated posts: one about the pipeline itself, one about the Giiken project, one about a governance protocol suite in the framework, and one about a specific Symfony refactor. Signal up, count down. Two fixes done. The third was the embarrassing one.</p>
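<p>Under the hood, merge is mostly string work plus a few <code>gh</code> calls. Here is a minimal sketch of the body-combining step; the <code>combine_seeds</code> helper, the sample seeds, and the issue numbers in the comments are illustrative, not the skill&rsquo;s actual code:</p>

```bash
# combine_seeds TARGET_BODY SEED... prints the target body with the
# source seeds rolled into a "Merged Seeds" section.
combine_seeds() {
  body="$1"; shift
  printf '%s\n\n## Merged Seeds\n' "$body"
  for seed in "$@"; do
    printf '\n- %s\n' "$seed"
  done
}

NEW_BODY=$(combine_seeds "feat: Giiken scaffold" \
  "feat: Giiken entity types" \
  "feat: Giiken RBAC policies")
echo "$NEW_BODY"

# The skill would then apply it, roughly:
#   gh issue edit 10 --body "$NEW_BODY"
#   gh issue edit 12 --add-label stage:skipped
#   gh issue close 12 --comment "Merged into #10"
```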
<h2 id="fix-3-put-the-blog-first">Fix 3: Put the Blog First</h2>
<p>The v1 production step went straight from a curated issue to Facebook, X, and LinkedIn copy. That read fine in the design doc. It fell apart the first time I tried to run it, because every one of those social posts had a placeholder where the URL should go. The URL had to point at a blog post. The blog post did not exist yet.</p>
<p>So I rewrote the <code>/content-produce</code> skill — a <a href="https://claude.com/claude-code">Claude Code</a> workflow that turns queue issues into drafts. The new flow:</p>
<pre class="mermaid">flowchart TD
    A[stage:curated issue] --&gt; B[Draft Hugo post&lt;br/&gt;draft: true]
    B --&gt; C[Draft social copy&lt;br/&gt;docs/social/slug.md]
    C --&gt; D[Commit both to blog repo]
    D --&gt; E{Human review}
    E --&gt;|Flip draft: false| F[GitHub Actions deploys]
    F --&gt; G[/content-pipeline/]
    G --&gt; H[Buffer API → X, LinkedIn, Facebook]
</pre>
<p>The human controls publication. The skill commits drafts only and never flips <code>draft: false</code>. Once I flip the flag and push, <a href="https://docs.github.com/en/actions">GitHub Actions</a> deploys the post, and a separate <code>/content-pipeline</code> skill handles the Buffer API for social distribution. Each step has one job. This post you&rsquo;re reading is the first one produced by the new flow.</p>
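<p>For the record, the gate is plain Hugo front matter. A draft the skill commits starts out something like this; every field except <code>draft</code> is illustrative:</p>

```yaml
---
title: "Day One of the Content Pipeline: What Broke and What I Fixed"
date: 2026-04-06
tags: ["github-actions", "automation", "content"]
draft: true   # the skill leaves this true; a human flips it to publish
---
```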
<h2 id="why-content-pipelines-need-continuous-refinement">Why Content Pipelines Need Continuous Refinement</h2>
<p>You cannot design a content pipeline in the abstract. You ship v1, run it against one day of real input, and watch it lie to you. Then you fix the specific lies. That loop is the work.</p>
<p>Three days ago this pipeline did not exist. Two days ago it was a spec. Yesterday it shipped. Today it is already different. None of the three fixes in this post were things I could have known up front. They came from running the thing, staring at the output, and asking &ldquo;what is this queue actually trying to tell me?&rdquo;</p>
<p>If you are building your own version of this, expect the same arc. Your v1 will have noise you cannot see yet. Your first curation session will reveal merges a filter could not find. And your production step will probably be backwards, because writing the fun part first (the tweets) is more tempting than writing the part that does the work (the blog post). The refinement is not a sign something went wrong. It is the point.</p>
<p>Baamaapii</p>
]]></content:encoded></item><item><title>Automate your content pipeline with GitHub Actions and Issues</title><link>https://jonesrussell.github.io/blog/automated-content-pipeline-github-actions/</link><pubDate>Sun, 05 Apr 2026 00:00:00 +0000</pubDate><guid>https://jonesrussell.github.io/blog/automated-content-pipeline-github-actions/</guid><category>devops</category><blog:tag>github-actions</blog:tag><blog:tag>automation</blog:tag><blog:tag>content</blog:tag><description>Build a daily content mining pipeline that scans your repos and queues post ideas as GitHub issues.</description><content:encoded><![CDATA[<p>Ahnii!</p>
<p>You ship work every day, but most of it never becomes a post. The problem isn&rsquo;t writing. It&rsquo;s remembering what you shipped three days ago that was actually worth talking about. This post walks through a content pipeline that mines your <a href="https://github.com">GitHub</a> repos daily and queues content ideas as issues, so nothing slips through.</p>
<h2 id="how-the-pipeline-works">How the Pipeline Works</h2>
<p>The system has three moving parts: a <a href="https://docs.github.com/en/actions">GitHub Actions</a> workflow that runs on a cron schedule, an issue template that standardizes the format, and label-based stages that track each idea from raw commit to published post.</p>
<pre class="mermaid">flowchart TD
    A[commit lands] --&gt; B[Action mines it]
    B --&gt; C[issue created&lt;br/&gt;stage:mined]
    C --&gt; D[you curate&lt;br/&gt;stage:curated]
    D --&gt; E[produce copy&lt;br/&gt;stage:ready]
    E --&gt; F[distribute]
</pre>
<p>Every stage is a GitHub label. You always know where each content idea sits, and nothing moves forward without your decision.</p>
<h2 id="the-mining-workflow">The Mining Workflow</h2>
<p>The workflow runs daily at 8am ET (cron schedules run in UTC, so <code>0 12 * * *</code> is 8am only during daylight saving and drifts to 7am in winter), scans a list of repos, and creates issues for commits that look like real work.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"><code class="language-yaml" data-lang="yaml"><span style="display:flex;"><span><span style="color:#f92672">name</span>: <span style="color:#ae81ff">Content Mining</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">on</span>:
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">schedule</span>:
</span></span><span style="display:flex;"><span>    - <span style="color:#f92672">cron</span>: <span style="color:#e6db74">&#39;0 12 * * *&#39;</span>
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">workflow_dispatch</span>:
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">permissions</span>:
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">issues</span>: <span style="color:#ae81ff">write</span>
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">contents</span>: <span style="color:#ae81ff">read</span>
</span></span></code></pre></div><p>The <code>workflow_dispatch</code> trigger lets you run it manually when you want to catch up. Permissions on the built-in token are scoped to just what the job needs in the host repo: reading contents and writing issues. Reading the other repos happens through a separate token, covered below.</p>
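<p>The triggers and permissions sit above a single job. A minimal skeleton to show the shape (the job name, runner, and step name here are assumptions, not pulled from the actual workflow file):</p>

```yaml
jobs:
  mine:
    runs-on: ubuntu-latest   # the gh CLI is preinstalled on GitHub-hosted runners
    steps:
      - name: Mine commits and closed issues
        env:
          GH_TOKEN: ${{ secrets.CROSS_REPO_TOKEN }}
        run: |
          # repo loop, filtering, dedup, and issue creation go here
          echo "mining..."
```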
<p>The core loop iterates over repos and fetches recent commits via the GitHub API:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"><code class="language-yaml" data-lang="yaml"><span style="display:flex;"><span><span style="color:#f92672">env</span>:
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">GH_TOKEN</span>: <span style="color:#ae81ff">${{ secrets.CROSS_REPO_TOKEN }}</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">run</span>: |<span style="color:#e6db74">
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">  SINCE=$(date -u -d &#39;1 day ago&#39; +%Y-%m-%dT%H:%M:%SZ)
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">  REPOS=&#34;waaseyaa/framework waaseyaa/giiken jonesrussell/jonesrussell&#34;
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">  for REPO in $REPOS; do
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">    COMMITS=$(gh api &#34;repos/$REPO/commits?since=$SINCE&amp;per_page=50&#34; \
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">      --jq &#39;.[] | select(.commit.message | test(&#34;...filter...&#34;) | not) | ...&#39;)
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">  done</span>
</span></span></code></pre></div><p>The <code>CROSS_REPO_TOKEN</code> is a personal access token with read access to all repos you want to mine. Without it, the workflow can only see public repos.</p>
<h2 id="filtering-noise">Filtering Noise</h2>
<p>Not every commit is content. The filter regex excludes merge commits, dependency bumps, docs changes, and housekeeping fixes:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>test<span style="color:#f92672">(</span><span style="color:#e6db74">&#34;^(Merge |chore|docs|fix typo|bump|update dep|Bump |fix:.*
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74">  ([Pp]hp[Ss]tan|namespace|alignment|placeholder|phpunit|mock|ignore|typo))&#34;</span>; <span style="color:#e6db74">&#34;i&#34;</span><span style="color:#f92672">)</span>
</span></span></code></pre></div><p>This catches the patterns that showed up as noise in practice: PHPStan fixes, namespace alignment, test placeholders. Commits also need a minimum message length of 25 characters to filter out low-context changes like &ldquo;fix test&rdquo; or &ldquo;update readme&rdquo;.</p>
<p>The filter will evolve. After your first curation pass, you&rsquo;ll know which patterns your repos produce that aren&rsquo;t worth posting about. Update the regex and the next run gets cleaner.</p>
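<p>A quick way to preview a regex change is to replay recent commit subjects through it locally. This sketch approximates the jq <code>test()</code> call with <code>grep -Ei</code> and hard-codes sample subjects; in practice you would pipe in output from <code>gh api</code>. Matching lines are the ones the filter would exclude:</p>

```bash
# Lines that match are noise the filter drops; non-matching lines survive.
printf '%s\n' \
  "Merge pull request #42 from some/branch" \
  "chore: bump dependencies" \
  "feat: add SovereigntyProfile to Layer 0" \
  "fix: phpunit mock cleanup" |
grep -Ei '^(Merge |chore|docs|fix typo|bump|update dep|Bump |fix:.*(phpstan|namespace|alignment|placeholder|phpunit|mock|ignore|typo))'
# prints the Merge, chore, and fix: phpunit lines; the feat: commit survives
```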
<h2 id="deduplication">Deduplication</h2>
<p>Before creating an issue, the workflow checks whether a commit has already been queued:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>EXISTING<span style="color:#f92672">=</span><span style="color:#66d9ef">$(</span>gh issue list --repo jonesrussell/jonesrussell <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span>  --label <span style="color:#e6db74">&#34;content-queue&#34;</span> --search <span style="color:#e6db74">&#34;</span>$SHA<span style="color:#e6db74">&#34;</span> <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span>  --json number --jq <span style="color:#e6db74">&#39;length&#39;</span><span style="color:#66d9ef">)</span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">if</span> <span style="color:#f92672">[</span> <span style="color:#e6db74">&#34;</span>$EXISTING<span style="color:#e6db74">&#34;</span> !<span style="color:#f92672">=</span> <span style="color:#e6db74">&#34;0&#34;</span> <span style="color:#f92672">]</span>; <span style="color:#66d9ef">then</span>
</span></span><span style="display:flex;"><span>  echo <span style="color:#e6db74">&#34;Skipping (already queued): </span>$MSG<span style="color:#e6db74">&#34;</span>
</span></span><span style="display:flex;"><span>  <span style="color:#66d9ef">continue</span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">fi</span>
</span></span></code></pre></div><p>This prevents duplicate issues when you re-run the workflow manually or when the cron overlaps with a manual trigger.</p>
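<p>When a commit survives the filter and the dedup check, it becomes an issue. A sketch of the creation step; the body mirrors the template shown in the next section, and the <code>gh issue create</code> call is left as a comment because it needs a live repo:</p>

```bash
SHA="abc1234"
REPO="waaseyaa/framework"
MSG="feat: add SovereigntyProfile to Layer 0"

# Build the issue body from the mined commit.
BODY=$(printf '## Source\nCommit `%s` in `%s`\n\n## Content Seed\n%s\n\n## Suggested Type\ntext-post\n' \
  "$SHA" "$REPO" "$MSG")
echo "$BODY"

# Then, in the workflow:
#   gh issue create --repo jonesrussell/jonesrussell \
#     --title "[content] $MSG" \
#     --label content-queue --label stage:mined \
#     --body "$BODY"
```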
<h2 id="the-issue-template">The Issue Template</h2>
<p>Each mined commit becomes an issue with a structured body:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"><code class="language-markdown" data-lang="markdown"><span style="display:flex;"><span><span style="color:#75715e">## Source
</span></span></span><span style="display:flex;"><span>Commit <span style="color:#e6db74">`abc1234`</span> in <span style="color:#e6db74">`waaseyaa/framework`</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e">## Content Seed
</span></span></span><span style="display:flex;"><span>feat(#571): add DomainRouterInterface, EntityTypeLifecycleRouter, SchemaRouter
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e">## Suggested Type
</span></span></span><span style="display:flex;"><span>text-post
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e">## Suggested Channels
</span></span></span><span style="display:flex;"><span>x, linkedin, facebook
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e">## Generated Artifacts
</span></span></span><span style="display:flex;"><span>&lt;!-- To be filled by production skill --&gt;
</span></span></code></pre></div><p>The &ldquo;Generated Artifacts&rdquo; section stays empty until you curate and produce the content. Labels track the stage: <code>stage:mined</code>, <code>stage:curated</code>, <code>stage:ready</code>, <code>stage:distributed</code>.</p>
<h2 id="the-curation-step">The Curation Step</h2>
<p>Mining is automated. Curation is not. You review each <code>stage:mined</code> issue and decide: approve, skip, merge with another item, or edit the seed. Skipped items get closed with an audit comment explaining why. Approved items move to <code>stage:curated</code>.</p>
<p>This is where judgment lives. A commit that says &ldquo;feat: Community RBAC policies&rdquo; might be a standalone post or might merge with two other commits into a broader story about your data model. The pipeline gives you the raw material. You shape it.</p>
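<p>Each decision maps to a couple of <code>gh</code> commands. A dry-run sketch that echoes the commands instead of executing them; the <code>curate</code> helper is mine, while the label names come from the pipeline:</p>

```bash
# Echo the gh commands each curation decision would run (dry run).
curate() {
  action="$1"; issue="$2"
  case "$action" in
    approve) echo "gh issue edit $issue --add-label stage:curated --remove-label stage:mined" ;;
    skip)    echo "gh issue edit $issue --add-label stage:skipped; gh issue close $issue" ;;
  esac
}

curate approve 12
curate skip 15
```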
<h2 id="closed-issues-as-content-sources">Closed Issues as Content Sources</h2>
<p>The workflow also scans recently closed issues across your repos:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>gh issue list --repo <span style="color:#e6db74">&#34;</span>$REPO<span style="color:#e6db74">&#34;</span> --state closed <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span>  --json number,title,closedAt,labels <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span>  --jq <span style="color:#e6db74">&#34;.[] | select(.closedAt &gt; \&#34;</span>$SINCE<span style="color:#e6db74">\&#34;) | ...&#34;</span>
</span></span></code></pre></div><p>Closed issues often represent shipped features with richer context than a commit message. The workflow creates content queue items for those too, with the same deduplication and labeling.</p>
<h2 id="setting-it-up-in-your-repos">Setting It Up in Your Repos</h2>
<p>You need three things:</p>
<ol>
<li><strong>A personal access token</strong> (<code>CROSS_REPO_TOKEN</code>) with <code>repo</code> scope, stored as a repository secret</li>
<li><strong>The workflow file</strong> at <code>.github/workflows/content-mine.yml</code> in whichever repo you want to host the content queue</li>
<li><strong>The labels</strong> created in that repo: <code>content-queue</code>, <code>stage:mined</code>, <code>stage:curated</code>, <code>stage:ready</code>, <code>stage:distributed</code>, <code>stage:skipped</code></li>
</ol>
<p>Create the labels first:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#66d9ef">for</span> label in content-queue stage:mined stage:curated stage:ready stage:distributed stage:skipped; <span style="color:#66d9ef">do</span>
</span></span><span style="display:flex;"><span>  gh label create <span style="color:#e6db74">&#34;</span>$label<span style="color:#e6db74">&#34;</span> --repo your-org/your-repo
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">done</span>
</span></span></code></pre></div><p>Then add the workflow, update the <code>REPOS</code> list with your repos, and trigger it manually to verify.</p>
<h2 id="what-this-doesnt-do">What This Doesn&rsquo;t Do</h2>
<p>This pipeline handles discovery, not writing. It won&rsquo;t draft a blog post or compose a tweet. Those are separate steps that happen after curation, when you know the angle and audience for each piece.</p>
<p>It also won&rsquo;t decide what&rsquo;s worth posting. That&rsquo;s the point. Automated mining with human curation gives you a reliable queue without losing editorial control.</p>
<p>Baamaapii</p>
]]></content:encoded></item></channel></rss>