Pour kettle and let steep the gods of tea. I built NewsBlur and Turn Touch.
1345 stories
·
868 followers

Highway 1 In Big Sur Is Officially Open After 2024 Landslide — Here’s What To Know

1 Comment

Big Sur is one of the most beautiful and iconic parts of California — Highway 1 curves along the Big Sur coastline, featuring stunning ocean views from the dramatic bluffs. Unfortunately, the road is prone to landslides, and a major slide in February 2024 closed a key portion of the highway through Big Sur. Caltrans has announced that the iconic highway is officially open again, ahead of schedule: the road's planned reopening date was March 30, which puts the repairs about 90 days ahead of schedule.

“This reopening will bring much-needed relief to small businesses and families in Big Sur and the surrounding communities who have shown remarkable resilience and strength. I’m grateful to Caltrans crews for delivering Highway 1 ahead of schedule—deploying remotely controlled heavy equipment, stabilizing slopes with thousands of steel reinforcements drilled up to 60 feet deep, and using cutting-edge monitoring systems to overcome extraordinary geological challenges while keeping workers safe. This vital corridor is the gateway to California’s coast and the lifeblood of the Big Sur economy—and today it’s restored,” said Governor Gavin Newsom of the reopening.

The February 2024 slide that caused the closure — known as Regent's Landslide — shut down a 6.8-mile stretch of the road. Highway 1 through Big Sur is especially prone to slides: a 75-mile section of the route is one of the most landslide-prone areas in the Western United States, with some 1,500 landslides on record. Repairing the road was difficult for crews, requiring daily drone monitoring and remotely controlled machinery.

Big Sur is a popular day trip from San Francisco, offering iconic coastal views and many delightful stops along the drive. There are beautiful hikes, like a hidden valley filled with calla lilies, and many lovely hotels. We also recommend stopping by the historic restaurant Nepenthe, which features a deck with an incredible view of the coast.

You can learn more about the reopening here.

The post Highway 1 In Big Sur Is Officially Open After 2024 Landslide — Here’s What To Know appeared first on Secret San Francisco.

Read the whole story
samuel
7 hours ago
reply
Just in time for my SF to LA birthday roadtrip in February!
Cambridge, Massachusetts

Custom icons for folders and feeds

2 Comments and 4 Shares

I have a lot of folders. Over the years I’ve organized my feeds into categories like News, Tech, Cooking, and Comics. But when I’m scanning my feed list, they all look the same—just folder icons with text. I wanted a way to make certain folders stand out at a glance, especially the ones I check most often.

That’s why I built custom icons for both folders and feeds. You can now personalize any folder or feed with an emoji, a preset icon in any color, or even upload your own image.

How it works

Right-click on any folder or feed in your feed list and select “Folder settings” or “Site settings”. You’ll see a new “Folder Icon” or “Feed Icon” tab where you can customize the icon.

There are three ways to set a custom icon:

Preset icons: Pick from over 240 icons (a mix of outline and filled styles) and colorize them with any of 84 colors organized by hue. Want a red heart for your favorites folder? A blue code bracket for programming feeds? It’s all there.

Emoji: Choose from 180 emojis organized by category. A basketball for sports feeds, a fork and knife for cooking, a newspaper for news—you get the idea.

Upload your own: Have a specific image in mind? Upload any image and it will be automatically resized to fit perfectly in your feed list.

Great for feeds without icons

Many feeds don’t have favicons, or they have generic RSS icons that all look the same. Custom feed icons let you give these feeds distinctive icons so you can spot them instantly. I’ve been using this to add icons to older blogs and newsletters that never bothered setting up a proper favicon.

Custom icons are available now on the web for all NewsBlur users. Folders and feeds both support the same icon options: emoji, preset icons with colors, or uploaded images.

If you have feedback or ideas for additional icon options, please share them on the NewsBlur forum.

Read the whole story
fxer
7 days ago
reply
Awesome, will definitely start using this. Out of curiosity is there a reason to only support a curated subset of emoji instead of full Unicode? With the image upload someone could just add any emoji they wanted anyway, but curious why not out of the box.
Bend, Oregon
samuel
7 days ago
Just too many to show. I curated it down but I'd gladly add more if requested. Feel free to submit a PR, you can ask Claude Code to do it with this prompt: "Add these/a bunch of emoji to the custom icons dialog" and then submit the PR. Easy!
samuel
8 days ago
reply
This is such a fun feature!
Cambridge, Massachusetts
samuel
8 days ago
Coming soon to both iOS and Android. PRs are in, just need to merge and deploy
egoexpress
8 days ago
Pretty cool, looks great!

Sam Rose explains how LLMs work with a visual essay

1 Comment and 3 Shares

Sam Rose explains how LLMs work with a visual essay

Sam Rose is one of my favorite authors of explorable interactive explanations - here's his previous collection.

Sam joined ngrok in September as a developer educator. Here's his first big visual explainer for them, ostensibly about how prompt caching works but it quickly expands to cover tokenization, embeddings, and the basics of the transformer architecture.

The result is one of the clearest and most accessible introductions to LLM internals I've seen anywhere.

[Animation: starts in tokens mode with an array of 75, 305, 24, 887 - clicking "embeddings" animates those into a 2D array, showing each one to be composed of three floating point numbers.]
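The tokens-to-embeddings step the animation shows can be sketched as a plain table lookup. This is a toy illustration only: the token ids come from the animation, but the three-dimensional vectors are made up, and real models use vocabularies of roughly 100k tokens and vectors with thousands of dimensions.

```javascript
// Toy illustration of the embeddings step: each token id maps to a small
// vector of floating point numbers. The vectors here are invented.
const embeddingTable = new Map([
  [75,  [0.12, -0.40, 0.33]],
  [305, [0.91,  0.08, -0.27]],
  [24,  [-0.55, 0.61, 0.14]],
  [887, [0.03, -0.88, 0.72]],
]);

// Turn a 1D array of token ids into a 2D array of vectors via lookup,
// which is exactly what the animation visualizes.
function embed(tokenIds) {
  return tokenIds.map((id) => embeddingTable.get(id));
}

console.log(embed([75, 305, 24, 887]));
```

In a trained model these vectors are learned parameters; the lookup itself really is this simple.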

Tags: ai, explorables, generative-ai, llms, sam-rose, tokenization

Read the whole story
samuel
27 days ago
reply
This is a fantastic visual essay. Great if you know it, even better if you've wanted to know the basic architecture of a transformer.
Cambridge, Massachusetts
acdha
16 days ago
reply
Washington, DC

The Infinite Loop of One LLM Talking to Another

1 Comment

This is very funny, but also a good indication of just how far away these things are from actual intelligence. First, a reasonable human being would never get caught in a loop like this. Second, only humans can not only recognize what’s going on here, but also see the humor in it.

Read the whole story
samuel
29 days ago
reply
Well that video just updated a whole lot of my priors.
Cambridge, Massachusetts

I ported JustHTML from Python to JavaScript with Codex CLI and GPT-5.2 in 4.5 hours

2 Comments

I wrote about JustHTML yesterday - Emil Stenström's project to build a new standards compliant HTML5 parser in pure Python code using coding agents running against the comprehensive html5lib-tests testing library. Last night, purely out of curiosity, I decided to try porting JustHTML from Python to JavaScript with the least amount of effort possible, using Codex CLI and GPT-5.2. It worked beyond my expectations.

TL;DR

I built simonw/justjshtml, a dependency-free HTML5 parsing library in JavaScript which passes 9,200 tests from the html5lib-tests suite and imitates the API design of Emil's JustHTML library.

It took two initial prompts and a few tiny follow-ups. GPT-5.2 running in Codex CLI ran uninterrupted for several hours, burned through 1,464,295 input tokens, 97,122,176 cached input tokens and 625,563 output tokens and ended up producing 9,000 lines of fully tested JavaScript across 43 commits.

Time elapsed from project idea to finished library: about 4 hours, during which I also bought and decorated a Christmas tree with family and watched the latest Knives Out movie.

Some background

One of the most important contributions of the HTML5 specification ten years ago was the way it precisely specified how invalid HTML should be parsed. The world is full of invalid documents and having a specification that covers those means browsers can treat them in the same way - there's no more "undefined behavior" to worry about when building parsing software.

Unsurprisingly, those invalid parsing rules are pretty complex! The free online book Idiosyncrasies of the HTML parser by Simon Pieters is an excellent deep dive into this topic, in particular Chapter 3. The HTML parser.
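One of the better-known recovery rules is that a `<p>` start tag implicitly closes any `<p>` that is already open, so `<p>one<p>two` yields two paragraphs. The toy sketch below illustrates just that one rule; it is nothing like a real spec-compliant tree builder, which handles hundreds of such cases.

```javascript
// Toy sketch of ONE HTML5 recovery rule: a new <p> start tag implicitly
// closes an already-open <p>. Heavily simplified, not a real parser.
function parseParagraphs(html) {
  const paragraphs = [];
  let current = null;
  // Split into <p>/</p> tags and text; fine for this toy input only.
  for (const part of html.split(/(<\/?p>)/)) {
    if (part === "<p>") {
      if (current !== null) paragraphs.push(current); // implicit close
      current = "";
    } else if (part === "</p>") {
      if (current !== null) paragraphs.push(current);
      current = null;
    } else if (current !== null && part.length > 0) {
      current += part;
    }
  }
  if (current !== null) paragraphs.push(current); // close at end of input
  return paragraphs;
}

console.log(parseParagraphs("<p>one<p>two")); // two paragraphs, per the spec
```

The HTML5 spec defines recovery like this for every malformed input, which is why two conforming parsers always agree.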

The Python html5lib project started the html5lib-tests repository with a set of implementation-independent tests. These have since become the gold standard for interoperability testing of HTML5 parsers and are used by projects such as Servo, which relied on them to help build html5ever, a "high-performance browser-grade HTML5 parser" written in Rust.
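The tokenizer portion of html5lib-tests is essentially JSON files pairing input strings with expected token streams, roughly in the spirit of the sketch below. This is an illustration, not a verbatim copy of the on-disk format, and the toy tokenizer here only handles the single happy-path case.

```javascript
// A simplified sample in the spirit of the html5lib-tests tokenizer suite:
// each case pairs an input string with the tokens a conforming tokenizer
// must emit. (See the repo for the exact on-disk JSON format.)
const sampleTests = [
  {
    description: "start tag, character data, end tag",
    input: "<b>hi</b>",
    output: [["StartTag", "b", {}], ["Character", "hi"], ["EndTag", "b"]],
  },
];

// A deliberately naive tokenizer that only handles this happy path. A real
// HTML5 tokenizer is a large state machine defined by the spec.
function toyTokenize(html) {
  const tokens = [];
  for (const m of html.matchAll(/<\/([a-z]+)>|<([a-z]+)>|([^<]+)/g)) {
    if (m[1]) tokens.push(["EndTag", m[1]]);
    else if (m[2]) tokens.push(["StartTag", m[2], {}]);
    else tokens.push(["Character", m[3]]);
  }
  return tokens;
}

// The harness just compares emitted tokens against the expected output,
// which is what makes the suite implementation-independent.
function runTests(tokenize, tests) {
  return tests.filter(
    (t) => JSON.stringify(tokenize(t.input)) === JSON.stringify(t.output)
  ).length;
}

console.log(`${runTests(toyTokenize, sampleTests)}/${sampleTests.length} passed`);
```

A data-driven suite like this is what lets a coding agent check its own work after every change.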

Emil Stenström's JustHTML project is a pure-Python implementation of an HTML5 parser that passes the full html5lib-tests suite. Emil spent a couple of months working on this as a side project, deliberately picking a problem with a comprehensive existing test suite to see how far he could get with coding agents.

At one point he had the agents rewrite it based on a close inspection of the Rust html5ever library. I don't know how much of this was direct translation versus inspiration - his project has 1,215 commits total so it appears to have included a huge amount of iteration, not just a straight port.

My project is a straight port. I instructed Codex CLI to build a JavaScript version of Emil's Python code.

The process in detail

I started with a bit of mise en place. I checked out two repos and created an empty third directory:

cd ~/dev
git clone https://github.com/EmilStenstrom/justhtml
git clone https://github.com/html5lib/html5lib-tests
mkdir justjshtml
cd justjshtml

Then I started Codex CLI for GPT-5.2 like this:

codex --yolo -m gpt-5.2

That --yolo flag is a shortcut for --dangerously-bypass-approvals-and-sandbox, which is every bit as dangerous as it sounds.

My first prompt told Codex to inspect the existing code and use it to build a specification for the new JavaScript library:

We are going to create a JavaScript port of ~/dev/justhtml - an HTML parsing library that passes the full ~/dev/html5lib-tests test suite. It is going to have a similar API to the Python library but in JavaScript. It will have no dependencies other than raw JavaScript, hence it will work great in the browser and node.js and other environments. Start by reading ~/dev/justhtml and designing the user-facing API for the new library - create a spec.md containing your plan.

I reviewed the spec, which included a set of proposed milestones, and told it to add another:

Add an early step to the roadmap that involves an initial version that parses a simple example document that is valid and returns the right results. Then add and commit the spec.md file.

Here's the resulting spec.md file. My request for that initial version became "Milestone 0.5" which looked like this:

Milestone 0.5 — End-to-end smoke parse (single valid document)

  • Implement the smallest end-to-end slice so the public API is real early:
    • new JustHTML("<html><head></head><body><p>Hello</p></body></html>") returns a tree with the expected tag structure and text nodes.
    • doc.toText() returns "Hello" and doc.errors is empty for this valid input.
  • Add scripts/smoke.js (no deps) that runs the example and asserts the expected structure/output.
  • Gate: node scripts/smoke.js passes.
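A smoke script along the lines the milestone describes might look like the sketch below. The `JustHTML` class here is a throwaway stand-in (it just strips tags) so the sketch is self-contained; the real library builds a full tree per the HTML5 spec.

```javascript
// Sketch of a scripts/smoke.js-style check. JustHTML below is a tiny
// stand-in, NOT the real library: toText() just strips tags with a regex,
// and errors is always empty, enough to show the shape of the gate.
class JustHTML {
  constructor(html) {
    this.html = html;
    this.errors = []; // the real parser records spec-defined parse errors
  }
  toText() {
    return this.html.replace(/<[^>]+>/g, "");
  }
}

const doc = new JustHTML("<html><head></head><body><p>Hello</p></body></html>");
if (doc.toText() !== "Hello") throw new Error("smoke test failed: toText");
if (doc.errors.length !== 0) throw new Error("smoke test failed: errors");
console.log("smoke test passed");
```

Having a single end-to-end check like this early means every later milestone builds on an API that already works.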

Then I told it:

Implement Milestone 0.5

And off it went. The resulting code appeared to work so I said:

Configure GitHub Actions test.yml to run that on every commit, then commit and push

At this point I ran git init, created a private GitHub repository for the project, and set up the local directory to push to that remote.

Here's that initial push. Then I told it:

OK do the rest, commit and push often

And that was almost it!

I set my laptop to not fall asleep and left it to its own devices while we went off to buy a Christmas tree. The "commit and push often" instruction meant I could monitor its progress on my phone by refreshing the commit log on GitHub.

I was running this against my $20/month ChatGPT Plus account, which has a five hour token allowance window for Codex CLI. That ran out at 6:35pm and Codex paused, so I waited until the reset point at 7:14pm and typed:

Keep going

At 9:30pm it declared itself done with the following summary message:

• What’s Done

  - html5lib-tests now run in CI and pass: tokenizer 6810/6810, tree 1770/1782
    (12 skipped: #script-on), serializer 230/230, encoding 82/83 (1 skipped:
    scripted) (.github/workflows/test.yml:1).
  - Public API expanded: stream() (src/stream.js:1), CSS selectors query()/
    matches() (src/selector.js:1), and toMarkdown() (src/markdown.js:1), all
    exported via src/index.js:1.
  - Added no-deps unit scripts for selectors/stream/markdown (scripts/run-
    selector-tests.js:1, scripts/run-stream-tests.js:1, scripts/run-markdown-
    tests.js:1) and wired them into CI.
  - Fixed TreeBuilder so <br> no longer closes <p> (src/treebuilder.js:647).

As a finishing touch, I had it add a playground interface so I could try out the new library in my browser. I prompted:

Add a playground.html in the top level folder which loads the necessary ES modules from ./src/... and implements the exact same functionality as seen on https://tools.simonwillison.net/justhtml but using the JavaScript library instead of Pyodide

It fetched my existing playground page using curl and built a new playground.html file that loaded the new JavaScript code instead. This worked perfectly.

I enabled GitHub Pages for my still-private repo which meant I could access the new playground at this URL:

https://simonw.github.io/justjshtml/playground.html

[Screenshot: the JustJSHTML Playground, subtitled "A dependency-free JavaScript HTML5 parser", with a green "Ready" badge for the JavaScript environment, an HTML input area, playground mode buttons for CSS Selector Query, Pretty Print HTML, Tree Structure, Stream Events, Extract Text, and To Markdown, an optional CSS selector field, and an output panel showing the converted markdown.]

All it needed now was some documentation:

Add a comprehensive README with full usage instructions including attribution plus how this was built plus how to use it in HTML plus how to use it in Node.js

You can read the result here.

We are now at eight prompts total, running for just over four hours, and I've decorated for Christmas and watched Wake Up Dead Man on Netflix.

According to Codex CLI:

Token usage: total=2,089,858 input=1,464,295 (+ 97,122,176 cached) output=625,563 (reasoning 437,010)

My llm-prices.com calculator estimates that at $29.41 if I was paying for those tokens at API prices, but they were included in my $20/month ChatGPT Plus subscription so the actual extra cost to me was zero.
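The estimate above is just the usage numbers multiplied by per-million-token rates, with cached input billed at a discount. A back-of-envelope version is sketched below; the rates are illustrative placeholders, not OpenAI's actual GPT-5.2 pricing, so plug in current numbers from the provider's pricing page.

```javascript
// Back-of-envelope API cost calculator. The per-million-token rates used
// below are ILLUSTRATIVE PLACEHOLDERS, not real GPT-5.2 pricing.
function estimateCost(usage, ratesPerMillion) {
  const m = 1_000_000;
  return (
    (usage.input / m) * ratesPerMillion.input +
    (usage.cachedInput / m) * ratesPerMillion.cachedInput +
    (usage.output / m) * ratesPerMillion.output
  );
}

// The token counts reported by Codex CLI for this run:
const usage = { input: 1_464_295, cachedInput: 97_122_176, output: 625_563 };
// Hypothetical $/1M rates, chosen only to show the shape of the math:
const rates = { input: 1.0, cachedInput: 0.1, output: 8.0 };
console.log(`$${estimateCost(usage, rates).toFixed(2)}`);
```

Note how the cached input dominates the token count but not the bill, since providers charge cached tokens at a steep discount.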

What can we learn from this?

I'm sharing this project because I think it demonstrates a bunch of interesting things about the state of LLMs in December 2025.

  • Frontier LLMs really can perform complex, multi-hour tasks with hundreds of tool calls and minimal supervision. I used GPT-5.2 for this but I have no reason to believe that Claude Opus 4.5 or Gemini 3 Pro would not be able to achieve the same thing - the only reason I haven't tried is that I don't want to burn another 4 hours of time and several million tokens on more runs.
  • If you can reduce a problem to a robust test suite you can set a coding agent loop loose on it with a high degree of confidence that it will eventually succeed. I called this designing the agentic loop a few months ago. I think it's the key skill to unlocking the potential of LLMs for complex tasks.
  • Porting entire open source libraries from one language to another via a coding agent works extremely well.
  • Code is so cheap it's practically free. Code that works continues to carry a cost, but that cost has plummeted now that coding agents can check their work as they go.
  • We haven't even begun to unpick the etiquette and ethics around this style of development. Is it responsible and appropriate to churn out a direct port of a library like this in a few hours while watching a movie? What would it take for code built like this to be trusted in production?

I'll end with some open questions:

  • Does this library violate the copyright of either the Rust library or the Python one?
  • Even if this is legal, is it ethical to build a library in this way?
  • Does this format of development hurt the open source ecosystem?
  • Is it responsible to publish software libraries built in this way?
  • How much better would this library be if an expert team hand crafted it over the course of several months?

Tags: html, javascript, python, ai, generative-ai, llms, ai-assisted-programming, gpt-5, codex-cli

Read the whole story
samuel
31 days ago
reply
Well this is blowing my mind. Love the waypoint of the minimal test case.
Cambridge, Massachusetts
1 public comment
denismm
31 days ago
reply
Interesting that he runs the entire experiment and then says “is this legal? Is this ethical?”
cpm
30 days ago
The way he is doing it I think it's legal and ethical. He is doing it all in public and letting anyone see/use the result. I'm sure others will be doing similar things in less ethical and maybe less legal ways.

'Tis the season for bright lights and big energy bills. Here are some ways to keep it budget-friendly.

1 Comment
A photo of Shawn Keeler's over-the-top holiday display at his Putnam Valley home.

Shawn Keeler has been going all out on Christmas decorations for decades.

Holiday decorations can zap a lot of energy, but there are ways to decorate and save on power bills.

Read the whole story
samuel
33 days ago
reply
Not sure I buy the 10% parasitic leakage from plugged-in but turned-off lights/inflatables. It’s way lower than that, and that paragraph encourages paranoid behavior.
Cambridge, Massachusetts