ASCII characters are not pixels: a deep dive into ASCII rendering (https://alexharri.com)

440 points by alexharri about 6 hours ago | 57 comments

stephantul about 1 hour ago |

Amazing post! I hadn't thought this through much, but since you are normalizing the vectors and calculating the Euclidean distance, you will get the same results using a simple matmul: for normalized vectors, the squared Euclidean distance is a linear transform of the cosine distance (||a - b||^2 = 2 * (1 - cos θ) for unit vectors).

Since you are just interested in the ranking, not the actual distance, you could also consider skipping the sqrt. This gives the same ranking, but will be a little faster.
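A minimal sketch of what this amounts to, with hypothetical helper names (not code from the article): for unit-length vectors, ranking by squared Euclidean distance (no sqrt) and ranking by descending dot product produce the same ordering, so the comparison reduces to dot products over the glyph vectors.

  // Sketch only: for unit vectors, ||a - b||^2 = 2 - 2 * dot(a, b), so ranking
  // by squared Euclidean distance (no sqrt) and ranking by descending dot
  // product give the same result.
  const dot = (a, b) => a.reduce((sum, ai, i) => sum + ai * b[i], 0);

  function rankGlyphs(samplingVector, glyphVectors) {
    return glyphVectors
      .map((glyph, index) => ({ index, score: dot(samplingVector, glyph) }))
      .sort((a, b) => b.score - a.score) // highest similarity first
      .map((entry) => entry.index);
  }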

snackbroken about 1 hour ago |

> I don’t believe I’ve ever seen shape utilized in generated ASCII art, and I think that’s because it’s not really obvious how to consider shape when building an ASCII renderer.

Acerola worked a bit on this in 2024[1], using edge detection to layer correctly oriented |/-\ characters over the usual brightness-only pass (roughly sketched below). I think each technique has cases where it looks better than the other.

[1] https://www.youtube.com/watch?v=gg40RWiaHRY
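
A rough sketch of that edge-pass idea (not Acerola's actual code; the sampling function, threshold, and angle-to-character mapping are illustrative assumptions): compute a Sobel gradient per cell and, if the edge is strong enough, pick an orientation character instead of the brightness-ramp character.

  // Sketch only. `luma(x, y)` is an assumed grayscale sampler in [0, 1].
  // Returns an orientation character for strong edges, or null to fall back
  // to the brightness-only pass.
  function edgeCharacter(luma, x, y, threshold = 0.3) {
    // Sobel kernels over the 3x3 neighborhood around (x, y).
    const gx =
      -luma(x - 1, y - 1) + luma(x + 1, y - 1)
      - 2 * luma(x - 1, y) + 2 * luma(x + 1, y)
      - luma(x - 1, y + 1) + luma(x + 1, y + 1);
    const gy =
      -luma(x - 1, y - 1) - 2 * luma(x, y - 1) - luma(x + 1, y - 1)
      + luma(x - 1, y + 1) + 2 * luma(x, y + 1) + luma(x + 1, y + 1);

    if (Math.hypot(gx, gy) < threshold) return null; // weak edge

    // The character should run along the edge, i.e. perpendicular to the
    // gradient. Swap "/" and "\\" if your y-axis points down.
    const edgeAngle = (Math.atan2(gy, gx) + Math.PI / 2 + Math.PI) % Math.PI;
    const bucket = Math.round(edgeAngle / (Math.PI / 4)) % 4;
    return ["-", "/", "|", "\\"][bucket];
  }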

sph about 6 hours ago |

With every example I thought "yeah, this is cool, but I can see there's room for improvement" — and lo! the author satisfied my curiosity and improved his technique further.

Bravo, beautiful article! The rest of this blog is at this same level of depth, worth a sub: https://alexharri.com/blog

echoangle about 2 hours ago |

Very cool effect!

> It may seem odd or arbitrary to use circles instead of just splitting the cell into two rectangles, but using circles will give us more flexibility later on.

I still don’t really understand why the inner part of the rectangle can’t just be split into a 2x3 grid. Did I miss the explanation?

wonger_ about 4 hours ago |

Great breakdown and visuals. Most ASCII filters do not account for glyph shape.

It reminds me of how chafa uses an 8x8 bitmap for each glyph: https://github.com/hpjansson/chafa/blob/master/chafa/interna...

There are a lot of nitty-gritty concerns I haven't dug into: how to make it fast, how to handle colorspaces, or, like the author mentions, how to exaggerate contrast for certain scenes. But I think 99% of the time it will be hard to beat chafa. Such a good library.

EDIT - a gallery of (Unicode-heavy) examples, in case you haven't seen chafa yet: https://hpjansson.org/chafa/gallery/

crazygringo about 2 hours ago |

> I don’t believe I’ve ever seen shape utilized in generated ASCII art, and I think that’s because it’s not really obvious how to consider shape when building an ASCII renderer.

Not to take away from this truly amazing write-up (wow), but there's at least one generator that uses shape:

https://meatfighter.com/ascii-silhouettify/

See particularly the image right above where it says "Note how the algorithm selects the largest characters that fit within the outlines of each colored region."

There's also a description at the bottom of how its algorithm works, if anyone wants to compare.

dboon about 2 hours ago |

Fantastic article! I wrote an ASCII renderer to show a 3D Claude for my Claude Wrapped[1], and instead of supersampling I just decided to raymarch the whole thing. SDFs give you a smoother result than even supersampling, but of course your scene has to be represented with distance functions and combinations thereof, whereas your method is generally applicable.
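
A minimal sketch of that raymarching idea (not the commenter's code; the single-sphere SDF and brightness falloff are illustrative assumptions): each character cell derives its brightness by sphere-tracing a ray against a distance function rather than sampling a rasterized image.

  // Sketch only: sphere-trace a ray against a sphere SDF and return a
  // brightness value (falls off with distance, 0 on a miss).
  const sphereSdf = (p, radius) => Math.hypot(p[0], p[1], p[2]) - radius;

  function marchBrightness(origin, dir) {
    let t = 0;
    for (let step = 0; step < 64; step++) {
      const p = [origin[0] + dir[0] * t, origin[1] + dir[1] * t, origin[2] + dir[2] * t];
      const d = sphereSdf(p, 1.0);
      if (d < 1e-3) return Math.max(0, 1 - t / 10); // hit: nearer is brighter
      t += d; // safe to advance by the distance to the nearest surface
      if (t > 20) break;
    }
    return 0; // ray escaped the scene
  }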

Taking into account the shape of different ASCII characters is brilliant, though!

[1]: https://spader.zone/wrapped/

AgentMatt about 3 hours ago |

Great article!

I think there's a small problem with intermediate values in this code snippet:

  const maxValue = Math.max(...samplingVector)

  samplingVector = samplingVector.map((value) => {
    value = x / maxValue; // Normalize
    value = Math.pow(x, exponent);
    value = x * maxValue; // Denormalize
    return value;
  })
Replace x by value.
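
For reference, the snippet with that fix applied:

  const maxValue = Math.max(...samplingVector)

  samplingVector = samplingVector.map((value) => {
    value = value / maxValue;          // Normalize
    value = Math.pow(value, exponent); // Increase contrast
    value = value * maxValue;          // Denormalize
    return value;
  })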

mwillis about 1 hour ago |

Fantastic technique and deep dive. I will say, I was hoping to see an improved implementation of the Cognition cube array as the payoff at the end. The whole thing reminded me of the blogger/designer who, years ago, showed YouTube how to render a better favicon by using subpixel color contrast, and then IIRC they implemented the improvement. Some detail here: https://web.archive.org/web/20110930003551/http://typophile....

CarVac about 4 hours ago |

The contrast enhancement seems simpler to perform with an unsharp mask in the continuous image.

It probably has a different looking result, though.
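
A minimal sketch of what that could look like on a grayscale buffer (assumed to be a flat array of values in [0, 1]; the 3x3 box blur and `amount` are illustrative choices, not from the article):

  // Sketch only: sharpened = original + amount * (original - blurred).
  function boxBlur3x3(img, w, h) {
    const out = new Array(img.length).fill(0);
    for (let y = 0; y < h; y++) {
      for (let x = 0; x < w; x++) {
        let sum = 0;
        let count = 0;
        for (let dy = -1; dy <= 1; dy++) {
          for (let dx = -1; dx <= 1; dx++) {
            const nx = x + dx, ny = y + dy;
            if (nx >= 0 && nx < w && ny >= 0 && ny < h) {
              sum += img[ny * w + nx];
              count++;
            }
          }
        }
        out[y * w + x] = sum / count;
      }
    }
    return out;
  }

  function unsharpMask(img, w, h, amount = 1.0) {
    const blurred = boxBlur3x3(img, w, h);
    return img.map((value, i) => {
      const detail = value - blurred[i]; // high frequencies the blur removed
      return Math.min(1, Math.max(0, value + amount * detail));
    });
  }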

jrmg about 3 hours ago |

This is amazing all round - in concept, writing, and coding (both the idea and the blog post about it).

I feel confident stating that - unless fed something comprehensive like this post as input, and perhaps not even then - an LLM could not do something novel and complex like this, and will not be able to for some time, if ever. I’d love to read about someone proving me wrong on that.

symisc_devel about 4 hours ago |

There is already a C library that does real-time ASCII rendering using decision trees:

GitHub: https://github.com/symisc/ascii_art/blob/master/README.md
Docs: https://pixlab.io/art

estimator7292 about 1 hour ago |

Those 3D interactive animations are the smoothest 3D rendering I've ever seen in a mobile browser. I'm impressed

chrisra about 5 hours ago |

> To increase the contrast of our sampling vector, we might raise each component of the vector to the power of some exponent.

How do you arrive at that? It's presented like it's a natural conclusion, but if I was trying to adjust contrast... I don't see the connection.

mark-r about 1 hour ago |

This is something I've wanted to do for 50 years, but never found the time or motivation. Well done!

eerikkivistik about 3 hours ago |

It reminds me quite a bit of collision engines for 2D physics/games. Could probably find some additional clever optimisations for the lookup/overlap (better than kd-trees) if you dive into those. Not that it matters too much. Very cool.

nickdothutton about 5 hours ago |

What a great post. There is an element of ASCII rendering in a pet project of mine, and I'm definitely going to try to integrate this work. From great constraints comes great creativity.

Sesse__ about 3 hours ago |

I did something very similar to this (searching for similar characters across the grid, including some fuzzy matching for nearby pixels) around 1996. I wonder if I still have the code? It was exceedingly slow; think minutes per frame on the Pentiums of the time.

maxglute about 1 hour ago |

Mesmerizing, the i, ! shading is unreasonably effective.

shiandow about 4 hours ago |

I'm not sure if this exponent is actually enhancing contrast or just fixing the gamma.

nathaah3 about 6 hours ago |

that was so brilliant! i loved it! thanks for putting it out :)

Jyaif about 6 hours ago |

It's important to note that the approach described focuses on giving fast results, not the best results.

Simply trying every character, considering its entire bitmap, and keeping the character that minimizes the distance to the target gives better results, at the cost of more CPU.
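
A sketch of that brute-force approach, with hypothetical names (the target cell and each glyph bitmap are assumed to be same-sized grayscale arrays, and `glyphBitmaps` a Map from character to bitmap):

  // Sketch only: try every glyph bitmap against the target cell and keep the
  // one with the smallest sum of squared differences.
  function bestCharacter(cell, glyphBitmaps) {
    let best = " ";
    let bestError = Infinity;
    for (const [char, bitmap] of glyphBitmaps) {
      let error = 0;
      for (let i = 0; i < cell.length; i++) {
        const diff = cell[i] - bitmap[i];
        error += diff * diff;
      }
      if (error < bestError) {
        bestError = error;
        best = char;
      }
    }
    return best;
  }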

This is a well-known problem because early computers with monitors could only display characters.

At some point we were able to define custom character bitmaps, but not enough custom characters to cover the entire screen, so the problem became more complex: which new characters do you create to reproduce an image optimally?

And separately we could choose the foreground/background color of individual characters, which opened up more possibilities.

lysace about 1 hour ago |

Seems like stellar work. Kudos.

I am, however, struck by the (from an outsider's POV) highly niche-specific terminology used in the title.

"ASCII rendering".

Yes, I know what ASCII is. I understand text rendering in sometimes painful detail. This was something else.

Yes, it's a niche and niches have their own terminologies that may or may not make sense in a broader context.

The HN guidelines say "Otherwise please use the original title, unless it is misleading or linkbait; don't editorialize."

I'm not sure what the best course of action is here - perhaps nothing. I keep bumping into this issue at HN, though. Basically, the titles very often don't include the context/niche.

adam_patarino about 5 hours ago |

Tell me someone has turned this into a library we can use

steve1977 about 3 hours ago |

Thanks! This article put a genuine smile on my face; I can still discover some interesting stuff on the Internet beyond AI slop.

zdimension about 4 hours ago |

Well-written post. Very interesting, especially the interactive widgets.

blauditore about 4 hours ago |

Nice! Now add colors and we can finally play Doom on the command line.

More seriously, with colors (probably not trivial, as it adds another dimension) and some select Unicode characters, this could produce really fancy renderings in consoles!

chrisra about 4 hours ago |

Next up: proportional fonts and font weights?