Design Decisions: Building a Modern Technical Blog
Table of Contents
- 1. Introduction
- 2. Framework Selection
- 3. Content Management: Org-Mode to HTML
- 4. Styling Architecture
- 5. Interactive Features
- 6. SEO Optimization
- 7. Analytics and Tracking
- 8. Newsletter Integration
- 9. Security Considerations
- 10. Accessibility
- 11. Performance Optimizations
- 12. Versioning and Releases
- 13. Conclusion
- 15. tldr
1. Introduction
Building a personal blog might seem like a solved problem in 2026, but I wanted something different. Not just another static site generator output, but an interactive reading experience that reflects how I think about and organize technical content. This post documents the design decisions that shaped chiply.dev, from the choice of frameworks to security considerations.
What started as a simple blog evolved into a platform featuring 3D knowledge graphs, interactive charts, recursive link previews, and a sophisticated content management system built on Emacs org-mode. Along the way, I made dozens of architectural decisions, each with trade-offs worth examining.
2. Framework Selection
2.1. Why SvelteKit?
Starting a technical blog from scratch in 2026, I needed a framework that could support far more than static content. The vision included 3D visualizations, interactive charts, server-side API endpoints for GitHub integration and search, and a reading experience that felt more like an application than a document. Static site generators like Hugo excelled at content but couldn't support the interactivity I wanted. React-based solutions like Next.js carried virtual DOM overhead that seemed wasteful for a content-heavy site. Astro was compelling for its island architecture, but I needed deeper component interactivity than islands easily provide.
The task was to find a framework that compiled to minimal client-side JavaScript, provided explicit and predictable reactivity, and colocated frontend and backend code without the complexity of a separate API layer.
After evaluating Next.js, Astro, and Hugo against these requirements, I chose SvelteKit 2 with Svelte 5. The result is a framework that eliminates virtual DOM overhead entirely, provides explicit reactivity through runes, and lets me write API endpoints alongside the components that consume them — all while shipping less JavaScript to the client than any React-based alternative.
2.1.1. Svelte 5 Runes
Svelte 5 introduced "runes" - a new reactivity system that makes state management explicit and predictable:
// Reactive state declaration
let count = $state(0);
// Derived values (computed properties)
let doubled = $derived(count * 2);
// Component props with destructuring
let { title, author } = $props();
This approach eliminates the "magic" of Svelte 4's implicit reactivity while remaining concise. Unlike React's useState hook, runes don't force you to reason about closures or stale-closure bugs.
2.1.2. Compiled Output
Svelte compiles components to vanilla JavaScript at build time, eliminating the runtime overhead of virtual DOM diffing. For a content-heavy blog with interactive visualizations, this results in:
- Smaller bundle sizes (no framework runtime shipped to clients)
- Faster initial page loads
- Better performance on mobile devices
2.1.3. Full-Stack Capabilities
SvelteKit provides file-based routing with integrated API endpoints:
src/routes/
├── +page.svelte # Home page
├── +layout.svelte # Root layout
├── [post]/ # Dynamic blog routes
│ ├── +page.svelte # Post layout
│ └── +page.server.ts # Server-side data loading
└── api/
├── commits/ # GitHub API proxy
├── subscribe/ # Newsletter endpoint
└── preview-proxy/ # Link preview service
This colocation of frontend and backend code simplifies development and deployment.
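As a hedged sketch (the route path and payload are hypothetical, not from the actual codebase), an endpoint file exports a function named after the HTTP method it handles and returns a standard Response:

```javascript
// Hypothetical src/routes/api/hello/+server.js — in the real file this
// function would be exported (export async function GET(event) {...})
// and would receive SvelteKit's RequestEvent; both are omitted here.
async function GET() {
  const payload = { message: 'hello from the server' };
  return new Response(JSON.stringify(payload), {
    headers: { 'Content-Type': 'application/json' }
  });
}
```

A component on the same route can then `fetch('/api/hello')` with no separate API service to deploy.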
2.2. Build Tooling: Vite
With a project featuring heavy interactive components and frequent iteration, slow build tools would have been a serious drag on development velocity. Traditional bundlers like Webpack require full rebuilds on changes, and the lag compounds when you're tweaking 3D graph parameters or CSS transitions and need instant visual feedback.
I needed a build tool that provided near-instant feedback during development while still producing optimized production bundles with proper code splitting. Vite 7 powers the development experience with:
- Hot Module Replacement (HMR) that updates in milliseconds
- Native ES modules during development (no bundling required)
- Optimized production builds with code splitting
- Built-in TypeScript support
The result is a development server that starts instantly and updates faster than I can switch windows.
2.3. Deployment: Vercel
The blog needed a hosting platform that could handle both static prerendered pages and dynamic API endpoints (GitHub commit fetching, newsletter subscriptions, link preview proxying) without managing separate infrastructure. Self-hosting would mean configuring a server, managing SSL certificates, and dealing with scaling — all distractions from writing content.
The task was to find a platform with native SvelteKit support, global edge distribution for API routes, and zero-config deployments from Git pushes. I chose Vercel because:
- Native SvelteKit support: the @sveltejs/adapter-vercel adapter handles all configuration
- Edge functions: API routes run close to users globally
- Automatic previews: Every PR gets a preview deployment
- Analytics integration: Built-in performance monitoring
The vercel.json configuration enables aggressive caching:
{
  "headers": [
    {
      "source": "/fonts/(.*)",
      "headers": [
        { "key": "Cache-Control", "value": "public, max-age=31536000, immutable" }
      ]
    }
  ]
}
3. Content Management: Org-Mode to HTML
3.1. Why Org-Mode?
Writing technical blog posts in Markdown quickly exposed its limitations. Code examples couldn't be executed or verified within the document, so they'd drift out of sync with the prose. Markdown's flat heading structure made reorganizing long posts cumbersome. And for literate programming posts — where the code is the content — Markdown had no concept of code tangling or noweb references.
I needed an authoring system that could execute code blocks inline, tangle source files from the document, support deep hierarchical organization, and export clean HTML. Rather than using Markdown or a CMS, I write all content in Emacs org-mode. The result is that every code example in a post can be verified at authoring time, documents restructure with a few keystrokes, and the same org file can produce both a blog post and a working program.
3.1.1. Literate Programming
Org-mode excels at mixing prose with executable code. Code blocks can be evaluated, and their results embedded in the document:
#+BEGIN_SRC python :results output
import pandas as pd
df = pd.read_csv("data.csv")
print(df.describe())
#+END_SRC
For a technical blog, this means code examples are always tested and accurate.
3.1.2. Hierarchical Organization
Org-mode's outline structure maps naturally to blog post sections. I can collapse, rearrange, and navigate large documents efficiently. The heading hierarchy (*, **, ***, etc.) exports cleanly to HTML with proper semantic structure.
3.1.3. Export Flexibility
Org-mode's export system (ox) produces clean HTML with customizable options:
#+OPTIONS: toc:t num:t H:6 html-postamble:nil
#+PROPERTY: header-args :eval never-export
These options control table of contents generation, section numbering, heading depth, and code block behavior.
3.2. The Compilation Pipeline
With content authored in org-mode but served by a SvelteKit application, I needed a bridge between the two worlds. Org-mode's HTML export produces standalone documents, but SvelteKit needs to extract metadata (title, author, date, description) for SEO and navigation, and the content needs to integrate with the blog's component system for features like table of contents and tag extraction.
The task was to build a pipeline that preserves org-mode's authoring power while producing content that SvelteKit can load, parse, and enhance with interactive features. The org-to-HTML pipeline works as follows:
- Authoring: Write content in org/*.org files
- Export: Emacs exports to HTML via C-c C-e h h
- Storage: HTML files live in src/routes/[post]/
- Loading: SvelteKit loads HTML server-side and extracts metadata
- Rendering: Client-side components parse and enhance the HTML
3.2.1. Metadata Extraction
The server-side loader (+page.server.ts) extracts metadata from HTML:
// Extract title from <title> tag
const titleMatch = html.match(/<title>(.*?)<\/title>/);
// Extract author from meta tag
const authorMatch = html.match(/<meta name="author" content="(.*?)"/);
// Extract date from HTML comment
const dateMatch = html.match(/<!-- (\d{4}-\d{2}-\d{2})/);
// Extract first paragraph as description
const descMatch = html.match(/<p[^>]*>([^<]{50,160})/);
This approach keeps all content in org-mode while enabling rich SEO metadata.
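Combined, the matches above might form a single helper along these lines (the function name and fallback values are illustrative, not the actual loader code):

```javascript
// Hypothetical combination of the regex extractions into one function.
function extractMetadata(html) {
  const title = html.match(/<title>(.*?)<\/title>/)?.[1] ?? 'Untitled';
  const author = html.match(/<meta name="author" content="(.*?)"/)?.[1] ?? null;
  const date = html.match(/<!-- (\d{4}-\d{2}-\d{2})/)?.[1] ?? null;
  // The first reasonably long paragraph doubles as the SEO description
  const description = html.match(/<p[^>]*>([^<]{50,160})/)?.[1] ?? '';
  return { title, author, date, description };
}
```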
3.3. Full-Text Search Indexing
With blog posts growing in length and number, readers needed a way to find specific content across all posts. But Algolia's record size limits (10KB per record) meant I couldn't simply index entire posts as single documents. Additionally, search results that link to an entire post aren't particularly helpful when the reader wants a specific section.
The solution was to chunk content by heading, creating one searchable record per section; search results then link directly to the relevant section with a highlight animation. A set of Python scripts performs the chunking for Algolia:
# extract_html.py (simplified)
from bs4 import BeautifulSoup

def extract_sections(html_content):
    soup = BeautifulSoup(html_content, 'html.parser')
    sections = []
    for heading in soup.find_all(['h2', 'h3', 'h4', 'h5', 'h6']):
        section_id = heading.get('id', '')
        section_title = heading.get_text()
        content = get_section_content(heading)  # author's helper: text until next heading
        sections.append({
            'anchor': section_id,
            'sectionTitle': section_title,
            'content': content[:5000]  # Max 5KB per record
        })
    return sections
Each heading becomes a searchable record, enabling jump-to-section from search results.
4. Styling Architecture
4.1. Design Tokens with Open Props
A content-heavy blog with interactive visualizations needed consistent spacing, colors, and typography without bloating the CSS bundle. The primary concern was keeping the styling layer lightweight — every kilobyte of CSS is render-blocking, and a blog should load fast on any connection.
The task was to find a system that provides design consistency (spacing scales, color palettes, easing curves) without the overhead of utility-class frameworks or the tooling complexity of preprocessors. Rather than Tailwind CSS, I chose Open Props — a CSS custom properties library providing design tokens at just ~14KB. The result is semantic variable names like --text-muted instead of opaque utilities like text-gray-400, direct CSS control without fighting framework abstractions, and a bundle that's a fraction of even Tailwind's JIT output:
@import "open-props/style";
/* Semantic variable mapping */
:root {
--bg-primary: #fffefc; /* Warm cream */
--text-primary: var(--gray-8);
--link-color: var(--indigo-7);
--size-spacing: var(--size-4); /* Consistent spacing */
}
4.1.1. Why Not Tailwind?
Tailwind is excellent for rapid prototyping, but I had specific reasons to avoid it:
- Readability: Long class lists obscure HTML structure
- Semantic naming: I prefer --text-muted over text-gray-400
- Bundle size: Open Props is lighter (~14KB vs Tailwind's JIT output)
- Customization: Direct CSS control without fighting abstractions
4.2. Theme System
Dark mode is table stakes for a modern developer-focused blog. Readers coding late at night expect a site to respect their OS preference, and developers in particular notice when a blog blinds them with a white page at 2 AM. Beyond just supporting dark mode, the system needed to handle the notoriously tricky "flash of wrong theme" problem — where server-rendered HTML briefly shows the wrong theme before JavaScript hydrates.
The task was to implement light, dark, and system (OS-following) modes with zero visual flash on page load, persisted user preferences, and clean CSS that doesn't require duplicating every style rule. The blog supports three theme modes: light, dark, and system (follows OS preference).
4.2.1. Implementation Strategy
/* System preference (default) */
@media (prefers-color-scheme: dark) {
  :root:not(.theme-light) {
    --bg-primary: var(--gray-9);
    --text-primary: var(--gray-1);
  }
}

/* Manual override classes */
:root.theme-dark {
  --bg-primary: var(--gray-9);
  --text-primary: var(--gray-1);
}

:root.theme-light {
  --bg-primary: #fffefc;
  --text-primary: var(--gray-8);
}
The CSS cascade ensures manual selection overrides system preference.
4.2.2. Flash Prevention
To prevent a flash of wrong theme on page load, an inline script in app.html runs before rendering:
<script>
(function() {
const theme = localStorage.getItem('theme');
if (theme === 'dark') {
document.documentElement.classList.add('theme-dark');
} else if (theme === 'light') {
document.documentElement.classList.add('theme-light');
}
})();
</script>
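The decision the script makes can also be written as a pure function (a sketch for clarity; the real inline script applies the class directly, and the function name is hypothetical):

```javascript
// Map the persisted preference to the class applied on <html>.
// Returning null means "system": the prefers-color-scheme media query wins.
function themeClassFor(storedTheme) {
  if (storedTheme === 'dark') return 'theme-dark';
  if (storedTheme === 'light') return 'theme-light';
  return null;
}
```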
4.3. Typography: Terminus Font
A technical blog's typography sets its entire visual tone. System fonts feel generic, and popular choices like Fira Code or JetBrains Mono appear on every other developer blog. I wanted a font that reinforced the terminal-inspired aesthetic while being genuinely readable for long technical prose — not just code blocks.
I chose the Terminus monospace font for its:
- Readability: Designed for long coding sessions
- Character distinction: Clear differentiation between similar characters (0/O, 1/l/I)
- Aesthetic: Technical, terminal-inspired appearance matching the blog's theme
Self-hosting with preloading ensures fast font delivery:
<link rel="preload" href="/fonts/TerminusTTF-4.49.3.woff2"
as="font" type="font/woff2" crossorigin>
5. Interactive Features
5.1. 3D Knowledge Graph (DAG3D)
Traditional blog navigation — chronological lists, tag clouds, category pages — fails to represent how technical topics actually interconnect. A post about database optimization relates to performance profiling, which connects to observability, which ties back to system design. These relationships are inherently graph-shaped, not list-shaped. Readers browsing a flat list miss connections that could lead them to exactly the content they need.
The task was threefold: improve content discoverability by surfacing topic relationships, create a visual portfolio piece that demonstrates frontend engineering capability, and provide a homepage that's genuinely interesting rather than a static list of links. The result is an interactive 3D force-directed graph on the homepage showing relationships between blog posts, where nodes represent posts and edges represent shared concepts.
5.1.1. Technology Stack
- 3d-force-graph: High-level library wrapping Three.js and d3-force-3d
- three-spritetext: Renders text labels as 3D sprites
- WebGL: Hardware-accelerated rendering
5.1.2. Performance Optimizations
// Lazy loading with IntersectionObserver
const observer = new IntersectionObserver(
(entries) => {
if (entries[0].isIntersecting) {
initializeGraph();
observer.disconnect();
}
},
{ rootMargin: '100px' }
);
// Pause animation when tab is hidden
document.addEventListener('visibilitychange', () => {
if (document.hidden) {
graph.pauseAnimation();
} else {
graph.resumeAnimation();
}
});
5.1.3. User Interaction
The graph supports:
- Auto-rotation: Continuous slow rotation for visual interest
- Drag: Stops rotation, allows free camera movement
- Click: Navigates to the clicked post
- Reset: Returns to default view with smooth animation
5.2. Plotly Chart Integration
Static images of charts in technical blog posts are a missed opportunity. Readers can't zoom into dense scatter plots, rotate 3D visualizations, or hover over data points to see exact values. But embedding interactive charts introduces UX problems — chart drag gestures conflict with page scrolling, and accidentally interacting with a chart while reading is frustrating.
I needed interactive charts that stay out of the way during normal reading but become fully interactive on demand, with the ability to expand to fullscreen for detailed exploration. Interactive charts are defined in JSON and rendered with Plotly.js:
{
  "data": [{
    "type": "scatter3d",
    "x": [1, 2, 3],
    "y": [4, 5, 6],
    "z": [7, 8, 9],
    "mode": "markers"
  }],
  "layout": {
    "title": "3D Scatter Plot"
  }
}
5.2.1. Lock/Unlock Mechanism
Charts start "locked" to prevent accidental interaction while scrolling:
- Charts render with pointer-events: none
- An overlay displays "Click to interact"
- Clicking enables the chart (pointer-events: auto)
- Pressing Escape re-locks the chart
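The steps above reduce to a small state machine; a hedged sketch (the class name is hypothetical and DOM wiring is omitted):

```javascript
// Hypothetical ChartLock: tracks whether pointer events reach the chart.
// The real component would toggle the CSS pointer-events property from this state.
class ChartLock {
  constructor() {
    this.locked = true; // charts start locked to protect scrolling
  }
  unlock() {            // user clicked the "Click to interact" overlay
    this.locked = false;
  }
  handleKey(key) {      // Escape re-locks the chart
    if (key === 'Escape') this.locked = true;
  }
  pointerEvents() {     // value for the chart container's CSS property
    return this.locked ? 'none' : 'auto';
  }
}
```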
5.2.2. Modal Expansion
Charts can expand to fullscreen modals with:
- Full toolbar access
- Animation frame preservation
- Enhanced legend positioning
- 95vw × 90vh dimensions
5.3. DevPulse: Commit Activity Grid
A portfolio blog should demonstrate that the author is actively building, not just publishing static content. Visitors landing on the homepage should immediately see evidence of consistent coding activity — it builds credibility and shows the site is maintained. GitHub's contribution graph is effective at communicating this at a glance, but it's buried on a profile page most visitors won't find.
The task was to build a visible indicator of development activity directly on the homepage, with more flexibility than GitHub's single-scale yearly view. The result is DevPulse, a commit-activity grid inspired by GitHub's contribution graph.
5.3.1. Multi-Scale Timeline
Five different time scales provide different perspectives:
- Days: 7-column grid (weekdays)
- Weeks: 52 columns (one year)
- Months: 12 columns (J-D)
- Quarters: 4 columns (Q1-Q4)
- Years: 10 columns (decade view)
5.3.2. Data Pipeline
// API endpoint fetches from GitHub
const response = await fetch(
`https://api.github.com/repos/${owner}/${repo}/commits`,
{
headers: {
Authorization: `token ${GITHUB_TOKEN}`,
Accept: 'application/vnd.github.v3+json'
}
}
);
// Transform to activity grid
const commits = await response.json();
const activityMap = aggregateByTimePeriod(commits, scale);
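aggregateByTimePeriod itself isn't shown in the post, so here is a hedged sketch under two assumptions: commits follow GitHub's commit.author.date response shape, and only the 'days' and 'months' scales are implemented.

```javascript
// Sketch: count commits per bucket keyed by a date-prefix of the ISO string.
function aggregateByTimePeriod(commits, scale) {
  const counts = {};
  for (const c of commits) {
    const iso = c.commit.author.date;                  // e.g. "2026-01-21T09:15:00Z"
    const key = scale === 'months' ? iso.slice(0, 7)   // "2026-01"
                                   : iso.slice(0, 10); // "2026-01-21"
    counts[key] = (counts[key] || 0) + 1;
  }
  return counts;
}
```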
5.4. Recursive Link Previews
Technical blog posts are densely linked — to other posts, documentation, external resources. Every click takes the reader away from their current context, and the mental cost of deciding "is this link worth following?" disrupts reading flow. Readers either ignore links entirely (missing valuable context) or click them and lose their place in the original article.
The task was to let readers preview linked content without navigating away, maintaining their reading context while still providing access to referenced material. The result is hover-triggered preview popups with support for nested previews up to 10 levels deep — a reader can preview a link, then preview a link within that preview, following a chain of references without ever leaving the page.
5.4.1. Architecture
User hovers link → 300ms delay → Fetch preview
↓
Internal link? → Clone article content
↓
External link? → Fetch via proxy
↓
Display in popup with sanitized HTML
5.4.2. Proxy Server
External previews route through /api/preview-proxy which:
- Fetches the external page
- Sanitizes HTML with DOMPurify
- Rewrites relative URLs to absolute
- Hides modals, cookie banners, chat widgets
- Returns safe, displayable content
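The URL-rewriting step can lean entirely on the standard URL constructor; a sketch (the helper name is hypothetical):

```javascript
// Resolve a possibly-relative href against the external page being previewed.
function absolutize(href, pageUrl) {
  try {
    return new URL(href, pageUrl).toString();
  } catch {
    return href; // leave unparseable values untouched
  }
}
```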
5.4.3. Performance
- 300ms hover delay prevents accidental triggers
- Grace period keeps popup open when moving between link and popup
- 10-second timeout prevents stuck loading states
- Cross-origin iframe sandboxing for security
5.5. Full-Text Search with Algolia
As the number of posts grew, the table of contents and knowledge graph became insufficient for finding specific content. A reader who remembers reading about "WebGL performance" but can't recall which post it was in needs instant, typo-tolerant full-text search across all content.
Building search from scratch (inverted indices, ranking algorithms, typo tolerance) would be a massive undertaking for marginal benefit. The task was to integrate a hosted search service that provides sub-50ms results with section-level granularity, keyboard-driven UX, and minimal client-side code. Search is powered by Algolia with InstantSearch.js:
const searchClient = algoliasearch(APP_ID, API_KEY);
instantsearch({
indexName: 'posts',
searchClient,
searchFunction(helper) {
if (helper.state.query) {
helper.search();
}
}
});
5.5.2. Jump to Section
Search results link to specific sections with highlight animation:
.search-highlight-target {
animation: search-highlight 2s ease-out;
}
@keyframes search-highlight {
0% { background-color: rgba(138, 106, 170, 0.3); }
100% { background-color: transparent; }
}
6. SEO Optimization
Writing quality technical content is pointless if search engines can't find, understand, or properly display it. A SvelteKit blog with client-side rendering and dynamic content loading presents specific SEO challenges — crawlers may not execute JavaScript, social media link previews need pre-rendered metadata, and search engines need structured data to understand content relationships.
The task was to ensure every page is fully crawlable with rich metadata, appears correctly when shared on social media, and provides search engines with structured data about authorship, dates, and content type — all while keeping the content authoring workflow in org-mode.
6.2. Structured Data (JSON-LD)
Blog posts include schema.org structured data:
{
  "@context": "https://schema.org",
  "@type": "BlogPosting",
  "headline": "Design Decisions: Building a Modern Technical Blog",
  "author": {
    "@type": "Person",
    "name": "Charlie Holland"
  },
  "datePublished": "2026-01-21",
  "mainEntityOfPage": {
    "@type": "WebPage",
    "@id": "https://chiply.dev/post-design-decisions"
  }
}
This helps search engines understand content relationships.
6.3. Sitemap and RSS
Both are generated at build time by scanning the src/routes/[post]/ directory:
// sitemap.xml/+server.ts
export const GET: RequestHandler = async () => {
  const posts = await discoverPosts();
  const sitemap = `<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
${posts.map(post => `
  <url>
    <loc>https://chiply.dev/${post.slug}</loc>
    <lastmod>${post.date}</lastmod>
    <priority>0.8</priority>
  </url>
`).join('')}
</urlset>`;
  return new Response(sitemap, {
    headers: { 'Content-Type': 'application/xml' }
  });
};
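The RSS endpoint isn't shown in the post, but it would follow the same shape; a hedged sketch (the buildRss name and channel metadata are illustrative):

```javascript
// Sketch of an rss.xml body builder; post objects are assumed to expose
// slug, title, and date, matching the sitemap code above.
function buildRss(posts) {
  const items = posts.map(post => `
    <item>
      <title>${post.title}</title>
      <link>https://chiply.dev/${post.slug}</link>
      <pubDate>${post.date}</pubDate>
    </item>`).join('');
  return `<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>chiply.dev</title>
    <link>https://chiply.dev</link>${items}
  </channel>
</rss>`;
}
```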
6.4. Prerendering
All pages are prerendered at build time:
export const prerender = true;
export const entries: EntryGenerator = async () => {
const posts = await discoverPosts();
return posts.map(post => ({ post: post.slug }));
};
This ensures search engines receive fully-rendered HTML.
7. Analytics and Tracking
Publishing technical content into the void without any feedback loop felt unsatisfying. I was curious about basic questions: do people actually find and read these posts? Do they finish long articles or drop off midway? Which topics get traction? Standard page view counters answer the first question but tell you nothing about reading behavior.
The task was to understand traffic patterns and reader engagement by layering lightweight analytics that answer increasingly specific questions — from basic page views to scroll depth and session recordings — without invasive tracking or degraded page performance.
7.1. Vercel Analytics
The first layer is Vercel's built-in analytics, requiring just two lines of code to track page views and Web Vitals:
import { inject } from '@vercel/analytics';
inject();
7.2. Speed Insights
Real User Monitoring (RUM) captures Core Web Vitals:
- LCP (Largest Contentful Paint)
- INP (Interaction to Next Paint, which replaced FID as a Core Web Vital)
- CLS (Cumulative Layout Shift)
7.3. Custom Engagement Tracking
Page views alone don't distinguish between a reader who bounced after 3 seconds and one who spent 20 minutes reading every section. Standard analytics tools track navigation but not engagement depth. I wanted to know: do readers who start a 5000-word post actually finish it?
I built custom engagement tracking to answer these questions about reading behavior:
// Track scroll depth milestones
const milestones = [25, 50, 75, 100];
const handleScroll = debounce(() => {
const scrollPercent = (scrollTop / scrollHeight) * 100;
milestones.forEach(milestone => {
if (scrollPercent >= milestone && !reached[milestone]) {
reached[milestone] = true;
trackEvent('scroll_milestone', { depth: milestone });
}
});
}, 100);
Metrics tracked include:
- Scroll depth (25%, 50%, 75%, 100%)
- Time on page (active time, excluding hidden tabs)
- Read completion (>90% scroll depth)
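The milestone check above reduces to a pure function, which makes the logic easy to test in isolation; a sketch (names hypothetical):

```javascript
// Given the current scroll percentage and the milestones already reached,
// return the newly crossed milestones; the handler would fire one
// scroll_milestone event per entry and then mark it reached.
function newMilestones(scrollPercent, reached, milestones = [25, 50, 75, 100]) {
  return milestones.filter(m => scrollPercent >= m && !reached.has(m));
}
```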
7.4. Microsoft Clarity
Clarity provides heatmaps and session recordings for UX analysis:
// Secure initialization with validation
const clarityId = import.meta.env.VITE_CLARITY_ID;
if (clarityId && /^[a-zA-Z0-9]+$/.test(clarityId)) {
const script = document.createElement('script');
script.src = `https://www.clarity.ms/tag/${clarityId}`;
script.async = true;
document.head.appendChild(script);
}
9. Security Considerations
Professional security habits demanded proper hardening even for a personal blog. The site isn't just serving static HTML — it fetches and renders external content through the link preview proxy, loads org-mode-exported HTML into the DOM, accepts user input through newsletter subscriptions, and runs API endpoints that proxy to external services. Each of these is a potential XSS, injection, or data exfiltration vector.
The task was to implement defense-in-depth: multiple independent security layers so that if any single defense fails, others still protect the site. This meant sanitizing all rendered HTML, restricting what resources the browser can load, validating all inputs, and hardening HTTP headers against common attack patterns.
9.1. HTML Sanitization
The most critical defense layer, since the site dynamically renders HTML from multiple sources (org-mode exports, external link previews). All user-facing HTML is sanitized with DOMPurify using strict allowlists:
import DOMPurify from 'dompurify';
const ALLOWED_TAGS = [
'article', 'section', 'nav', 'header', 'footer',
'h1', 'h2', 'h3', 'h4', 'h5', 'h6',
'p', 'ul', 'ol', 'li', 'a', 'strong', 'em',
'pre', 'code', 'blockquote', 'table', 'tr', 'td', 'th',
'img', 'figure', 'figcaption', 'svg', 'path'
];
const ALLOWED_ATTR = [
'id', 'class', 'href', 'src', 'alt', 'title',
'aria-label', 'aria-hidden', 'role',
'data-*', 'width', 'height'
];
export const sanitizeHtml = (html: string) => {
return DOMPurify.sanitize(html, {
ALLOWED_TAGS,
ALLOWED_ATTR,
ALLOWED_URI_REGEXP: /^(?:https?|mailto|tel):/i
});
};
9.2. Content Security Policy
The CSP header restricts resource loading:
// hooks.server.ts
const csp = [
"default-src 'self'",
"script-src 'self' 'unsafe-inline' 'unsafe-eval' cdn.jsdelivr.net cdnjs.cloudflare.com",
"style-src 'self' 'unsafe-inline' cdn.jsdelivr.net",
"img-src 'self' data: blob: https:",
"font-src 'self' data: cdn.jsdelivr.net",
"connect-src 'self' api.github.com *.algolia.net",
"frame-src 'self'",
"object-src 'none'",
"base-uri 'self'",
"upgrade-insecure-requests"
].join('; ');
response.headers.set('Content-Security-Policy', csp);
9.3. Security Headers
Additional headers prevent common attacks:
response.headers.set('X-Frame-Options', 'SAMEORIGIN');
response.headers.set('X-Content-Type-Options', 'nosniff');
response.headers.set('Referrer-Policy', 'strict-origin-when-cross-origin');
response.headers.set('Permissions-Policy', 'geolocation=(), microphone=(), camera=()');
response.headers.set('Strict-Transport-Security', 'max-age=31536000; includeSubDomains');
9.4. Input Validation
All API inputs are validated:
// Email validation (RFC 5321 compliant)
const isValidEmail = (email: string): boolean => {
if (typeof email !== 'string') return false;
if (email.length > 254) return false;
const [local, domain] = email.split('@');
if (!local || !domain) return false;
if (local.length > 64) return false;
if (/\.\./.test(email)) return false;
return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);
};
// URL validation
const isValidUrl = (url: string): boolean => {
try {
const parsed = new URL(url);
return ['http:', 'https:'].includes(parsed.protocol);
} catch {
return false;
}
};
10. Accessibility
A technical blog should be readable by everyone, including developers using screen readers, keyboard-only navigation, or high-contrast modes. The interactive features (3D graph, charts, search modals) introduced specific accessibility challenges — custom interactive widgets don't get keyboard support or screen reader announcements for free.
The task was to maintain WCAG compliance across both the org-mode-exported static content and the custom interactive Svelte components, ensuring every feature works without a mouse and communicates its state to assistive technology.
10.1. Semantic HTML
The foundation of accessibility is semantic HTML, and org-mode's export helps here by default — headings, paragraphs, lists, and tables all use correct elements. Org-mode exports clean semantic HTML:
<article>
<header>
<h1>Post Title</h1>
<time datetime="2026-01-21">January 21, 2026</time>
</header>
<section id="introduction">
<h2>Introduction</h2>
<p>Content...</p>
</section>
</article>
10.2. ARIA Labels
Interactive elements include ARIA attributes:
<button
aria-label="Toggle theme"
aria-pressed={isDark}
onclick={toggleTheme}
>
<i class="fa-solid fa-moon" aria-hidden="true"></i>
</button>
<dialog
role="dialog"
aria-modal="true"
aria-labelledby="modal-title"
>
<h2 id="modal-title">Search</h2>
</dialog>
10.4. Focus Management
Modals trap and restore focus:
let previouslyFocusedElement: HTMLElement | null = null;
function openModal() {
previouslyFocusedElement = document.activeElement as HTMLElement;
modalElement.focus();
}
function closeModal() {
previouslyFocusedElement?.focus();
previouslyFocusedElement = null;
}
10.5. Skip Link
A skip link allows keyboard users to bypass navigation:
<a href="#main-content" class="skip-link"> Skip to main content </a>
11. Performance Optimizations
The blog includes heavy dependencies — Three.js for the 3D graph, Plotly.js for interactive charts, Algolia for search, and multiple analytics scripts. Loading all of these eagerly on every page would produce a massive initial bundle, degrading Core Web Vitals and making the site sluggish on mobile devices or slow connections. A reader who just wants to read a blog post shouldn't download a WebGL renderer.
The task was to ensure fast initial page loads regardless of which features a particular page uses, deferring expensive resources until they're actually needed while maintaining a smooth experience when they do load.
11.1. Lazy Loading
Heavy components load on demand, triggered only when they enter (or approach) the viewport:
// IntersectionObserver for viewport-triggered loading
const observer = new IntersectionObserver(
(entries) => {
entries.forEach(entry => {
if (entry.isIntersecting) {
loadComponent();
observer.unobserve(entry.target);
}
});
},
{ rootMargin: '100px' }
);
11.2. Code Splitting
Dynamic imports split the bundle:
// Only load Algolia when search opens
const loadSearch = async () => {
const { default: algoliasearch } = await import('algoliasearch');
const { default: instantsearch } = await import('instantsearch.js');
// Initialize search...
};
11.3. Caching Strategy
Different resources have different cache lifetimes:
| Resource Type | Browser Cache | CDN Cache |
|---|---|---|
| Fonts | 1 year | 1 year |
| Static assets | 1 year | 1 year |
| API responses | 5 minutes | 1 hour |
| HTML pages | 0 | 1 hour |
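As a sketch, the table could map onto vercel.json rules like these (the /api path pattern and exact values are illustrative; only the font rule is confirmed earlier in the post):

```json
{
  "headers": [
    {
      "source": "/fonts/(.*)",
      "headers": [
        { "key": "Cache-Control", "value": "public, max-age=31536000, immutable" }
      ]
    },
    {
      "source": "/api/(.*)",
      "headers": [
        { "key": "Cache-Control", "value": "public, max-age=300, s-maxage=3600" }
      ]
    }
  ]
}
```

Here max-age governs the browser cache and s-maxage the CDN cache, matching the 5-minute/1-hour split in the table.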
11.4. Image Optimization
Images are optimized with:
- WebP format where supported
- Lazy loading via loading="lazy"
- Appropriate sizing with srcset
- Placeholder aspect ratios to prevent CLS
12. Versioning and Releases
As the project grew with frequent commits — new features, bug fixes, content additions — the question of "what changed and when" became harder to answer. Without structured versioning, there's no way to communicate the significance of changes (is this a breaking change? a new feature? a patch?) or generate meaningful changelogs for anyone following the project.
Setting up proper release engineering early, before the project accumulated hundreds of commits, would prevent future pain. The task was to automate version bumping and changelog generation from commit history, requiring only disciplined commit messages rather than manual bookkeeping.
12.1. Semantic Versioning
The project uses SemVer with automated releases, where the version bump is determined entirely by commit message prefixes:
| Commit Type | Version Bump |
|---|---|
| `fix:` | PATCH (0.0.x) |
| `feat:` | MINOR (0.x.0) |
| `BREAKING CHANGE:` | MAJOR (x.0.0) |
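This mapping amounts to a small classifier over commit messages. The sketch below is a simplification of what release tooling actually does (real conventional-commit parsers also handle scopes, bodies, and footers):

```javascript
// Classify a conventional commit message into a SemVer bump.
// Simplified sketch: real tooling also parses commit bodies and footers.
function bumpFor(message) {
  if (message.includes('BREAKING CHANGE:') || /^\w+(\(.+\))?!:/.test(message)) {
    return 'major'; // breaking changes, including the `feat!:` shorthand
  }
  if (/^feat(\(.+\))?:/.test(message)) return 'minor';
  if (/^fix(\(.+\))?:/.test(message)) return 'patch';
  return 'none'; // chores, docs, etc. trigger no release
}

console.log(bumpFor('feat(search): add Algolia chunking')); // "minor"
console.log(bumpFor('fix: theme flash on load'));           // "patch"
```

The highest bump across all commits since the last release determines the next version number.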
12.2. Release Please
Google's release-please automates version management:
- Conventional commits trigger Release PRs
- PRs include changelog updates
- Merging creates GitHub releases and tags
- The `package.json` version is updated automatically
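A minimal GitHub Actions workflow wiring this up might look like the following sketch (assuming the `node` release type; the project's actual workflow may differ):

```yaml
# .github/workflows/release-please.yml (illustrative sketch)
name: release-please
on:
  push:
    branches: [main]
permissions:
  contents: write       # create releases and tags
  pull-requests: write  # open and update the Release PR
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: googleapis/release-please-action@v4
        with:
          release-type: node  # bumps the version field in package.json
```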
12.3. Changelog Generation
Changelogs are auto-generated from commit messages:
```markdown
## [0.1.0] - 2026-01-21

### Features
- Add share button with social media platforms
- Add 3D knowledge graph visualization

### Bug Fixes
- Fix theme flash on page load
- Correct chart modal focus management
```
13. Conclusion
Building chiply.dev has been an exercise in thoughtful engineering. Every decision — from Svelte's compiled output to org-mode's literate programming — serves the goal of creating an engaging, performant, and accessible reading experience.
The codebase reflects my belief that personal projects should be laboratories for exploring ideas. The recursive link previews might be over-engineered, but they taught me about iframe security policies. The 3D knowledge graph might be unnecessary, but it forced me to learn WebGL performance optimization.
If you're building your own technical blog, I hope this post provides useful starting points. Feel free to explore the source code on GitHub, and don't hesitate to reach out with questions or suggestions.
14. References
15. tldr
This post details the architectural decisions behind chiply.dev, a modern technical blog built with SvelteKit 2 and Svelte 5's new runes system for explicit reactivity and compiled performance. The content pipeline uses Emacs org-mode for literate programming capabilities, exporting to HTML that gets processed server-side for metadata extraction and chunked into Algolia search records by heading.
The styling leverages Open Props CSS custom properties instead of Tailwind for semantic naming and lighter bundles, with a sophisticated theme system preventing flash-of-wrong-theme on load. The interactive features include a 3D knowledge graph built with Three.js and WebGL, Plotly charts with lock/unlock mechanisms, and the DevPulse commit activity visualization showing GitHub contributions across multiple time scales.
Recursive link previews support 10 levels of nesting, fetching content through a sanitizing proxy server for external sites. The SEO strategy includes comprehensive meta tags, structured data with JSON-LD, and full prerendering at build time. Analytics combine Vercel's built-in monitoring with custom engagement tracking for scroll depth and read completion, plus Microsoft Clarity for heatmaps.
Newsletter subscriptions use Buttondown's API with double opt-in for GDPR compliance. Security measures include DOMPurify HTML sanitization, strict Content Security Policy headers, and comprehensive input validation. The site maintains WCAG compliance through semantic HTML, ARIA labels, and full keyboard navigation support.
Performance optimizations include lazy loading, code splitting with dynamic imports, and aggressive caching strategies differentiated by resource type. The release process uses semantic versioning with Google's release-please automating changelog generation from conventional commits.