now available at: twitter.christinecorbettmoran.com
Disclaimer: this post was written with AI assistance. I appreciate that y’all want bespoke human prose. At the same time, an AI did the heavy lifting on this project, and I want to share the prompt and the joy so you can be inspired and enabled to easily reproduce it. Bespoke human prose would be a blocker to that. I’ve moved on to other projects.
Twitter is dead. Long live the archive.
When I downloaded my Twitter data in November 2022, it was because I intended to leave the platform. I did, and then the archive sat untouched for three years. Today I asked my AI agent to do something with it, and a few hours later I had a fully searchable vintage Twitter timeline displaying 12 years of my online life. Live at: twitter.christinecorbettmoran.com
If you want to skip the story and do this yourself, here’s the prompt. Hand it to any capable AI agent (Claude Code, Codex, etc.) along with your downloaded Twitter archive:
I have a Twitter archive downloaded from Twitter (the standard data export). The archive is at [path]. It contains tweets.js, tweets_media/, and other files.
Please:
- Parse tweets.js into clean JSON, separating originals from retweets
- Detect threads (self-reply chains)
- Map tweet IDs to local media files
- Build a static web app that displays the tweets in a vintage ~2012-2015 Twitter UI style
- Include: infinite scroll, search, year filter, sort (newest/oldest/popular), media display, thread modals
- Split the data into chunks for lazy-loading (keep initial load under 1MB)
- Run a security audit to confirm no DMs or PII are included
- Use vanilla HTML/CSS/JS — no frameworks, no build tools
That’s it. Review the output, customize the bio/avatar/links, deploy to any static host, done. The rest of this post is what happened when I ran that pipeline myself — the decisions, the tradeoffs, and what actually took effort.
What You’re Starting With
Twitter’s data export gives you a folder containing tweets.js (all your tweets as JavaScript objects), a tweets_media/ directory with attached images and videos, direct-messages.js (which we explicitly exclude), and assorted other JS files for likes, followers, and ad data that we ignore.
The raw tweet data is a mess. It’s wrapped in a JavaScript variable assignment, dates are in multiple formats, media references use Twitter CDN URLs that may be dead, and there’s no thread detection.
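To make the cleanup concrete, here is a minimal sketch of the first two pipeline steps: stripping the JavaScript wrapper from tweets.js, separating originals from retweets, and detecting self-reply threads. It assumes the export's usual shape (a `window.YTD.tweets.part0 = [...]` assignment, and fields like `id_str`, `full_text`, and `in_reply_to_status_id_str`); the function names are illustrative, not the exact scripts the agent wrote.

```python
import json
from pathlib import Path

def load_tweets(archive_path):
    """Strip the JavaScript variable assignment from tweets.js and parse the
    JSON array that follows it."""
    raw = Path(archive_path, "data", "tweets.js").read_text(encoding="utf-8")
    payload = raw[raw.index("["):]  # drop everything before the JSON array
    return [entry["tweet"] for entry in json.loads(payload)]

def split_originals(tweets):
    """Separate original tweets from retweets (retweet text starts with 'RT @')."""
    originals = [t for t in tweets if not t["full_text"].startswith("RT @")]
    retweets = [t for t in tweets if t["full_text"].startswith("RT @")]
    return originals, retweets

def detect_threads(tweets, my_user_id):
    """Find self-reply chains: tweets that reply to one of my own tweets."""
    by_id = {t["id_str"]: t for t in tweets}
    threads = {}
    for t in tweets:
        parent_id = t.get("in_reply_to_status_id_str")
        if parent_id in by_id and t.get("in_reply_to_user_id_str") == my_user_id:
            threads.setdefault(parent_id, []).append(t["id_str"])
    return threads
```

The same `id_str` map used for thread detection also drives the tweet-ID-to-local-media mapping, since the export names files in tweets_media/ by tweet ID.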
The Tools
The AI agent (OpenClaw running Claude) wrote all the Python processing scripts, built the entire web app, designed the vintage UI, ran the security audit, refactored for lazy-loading, and handled the Cloudflare deployment. It spawned sub-agents for the heavier refactoring work — splitting the 22MB monolith into lazy-loaded chunks was a big enough change that it made sense to hand off.
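The chunking refactor the sub-agents handled can be sketched in a few lines: greedily pack tweets into JSON files under a size budget and write an index the front end can lazy-load from. The file names and ~900KB budget here are illustrative assumptions, not the exact layout the agent produced.

```python
import json
from pathlib import Path

def write_chunks(tweets, out_dir, max_bytes=900_000):
    """Split a tweet list into JSON chunk files, each under the size budget,
    plus an index.json listing the chunks for lazy loading."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    chunks, current, size = [], [], 2  # 2 bytes for the enclosing "[]"
    for tweet in tweets:
        encoded = len(json.dumps(tweet).encode("utf-8")) + 1  # +1 for the comma
        if current and size + encoded > max_bytes:
            chunks.append(current)
            current, size = [], 2
        current.append(tweet)
        size += encoded
    if current:
        chunks.append(current)

    index = []
    for i, chunk in enumerate(chunks):
        name = f"chunk_{i:03d}.json"
        (out / name).write_text(json.dumps(chunk), encoding="utf-8")
        index.append({"file": name, "count": len(chunk)})
    (out / "index.json").write_text(json.dumps(index), encoding="utf-8")
    return index
```

The page then fetches index.json on load and pulls in chunk files as you scroll, which is what keeps the initial payload under 1MB instead of shipping the 22MB monolith.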
The stack:
- Python 3 for data processing
- Vanilla HTML/CSS/JS (no React, no Vue, no npm, no webpack, no build step)
- No database, no server-side rendering, no API endpoints
- Cloudflare Pages for hosting (free tier)
The whole thing is files serving files. You can open index.html from a file:// URL and it nearly works (CORS blocks the JSON fetches, so you need any static server).
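Since the only blocker on file:// is CORS, any static server works for local preview; one option ships with the Python already used for processing (the `site/` directory name is an assumption, substitute your output folder):

```shell
# serve the built site locally; --directory requires Python >= 3.7
python3 -m http.server 8000 --directory site/
```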
Why Bother?
Twelve years of tweets are a weird, accidental diary. South Pole research updates next to bad puns. Conference live-threads next to late-night debugging rants. Job changes, moves, friendships forming in public replies.
Twitter made all of this ephemeral by design. The archive makes it permanent. And if you’re going to preserve it, you might as well make it look like it did when you wrote it, back when the timeline was chronological and the whole thing fit in your head.
The archive ZIP is sitting in your downloads folder. It takes one prompt and a couple of hours.