Hacker news

Reading across books with Claude Code (https://pieterma.es)

133 points by gmays 1 day ago | 39 comments

Ronsenshi about 20 hours ago |

For me this looks like a great way to build connections between books in order to create a recommendation engine, something better than what Goodreads & Co. provide. Something actually useful.

The cost of indexing via a third-party API is extremely high, however. This might work out well with an open-source model and a cluster of Raspberry Pis for indexing a large library?
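The recommendation idea above can be sketched with a simple topic-overlap score. This is a hypothetical illustration, not the article's method: the book titles and topic sets are invented, and Jaccard similarity stands in for whatever connection metric a real system would use.

```python
# Hypothetical sketch: recommend books by overlap of extracted topic labels.
# Titles and topic sets below are made up for illustration.

def jaccard(a: set, b: set) -> float:
    """Similarity as shared topics over total distinct topics."""
    return len(a & b) / len(a | b) if a | b else 0.0

library = {
    "Thinking, Fast and Slow": {"cognition", "bias", "decision-making"},
    "Nudge": {"bias", "decision-making", "policy"},
    "Sapiens": {"history", "anthropology", "cognition"},
}

def recommend(title: str, top_n: int = 2):
    """Rank the other books by topic overlap with the given one."""
    src = library[title]
    scored = [(other, jaccard(src, topics))
              for other, topics in library.items() if other != title]
    return sorted(scored, key=lambda t: -t[1])[:top_n]
```

In a real pipeline the topic sets would come from an indexing pass (LLM-extracted or otherwise), and the scoring would likely use embeddings rather than exact label matching.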

nubskr about 14 hours ago |

I've been using Claude Code for my research notes and had the same realization: it's less about perfecting prompts and more about building tools so it can surprise you. The moment I stopped treating it like a function and started treating it like a coworker who reads at 1,000 wpm, everything clicked.

rbbydotdev about 5 hours ago |

I had a similar toy project: attempting to build custom day trips from guidebooks. I immediately ran into limitations naïvely chunking paragraphs into a RAG. For my next attempt I'm going to try using an LLM to extract "entities" like holidays/places/history and store them in a graph DB, coupled with vectors and the original source text or index references (page + column).

Still experimental and way outside my expertise, would love to hear anyone with ideas or experience with this kind of problem
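The graph-plus-source-reference idea described above might look something like the following. This is a minimal sketch under stated assumptions: `extract_entities()` would normally be an LLM call returning typed entities, but here it is a trivial keyword matcher so the example runs offline, and the entity list and passages are invented.

```python
# Sketch: extract typed entities from passages and index them in a graph
# keyed by entity, with (page, column, text) source references.
# KNOWN stands in for an LLM extraction step; entries are hypothetical.

KNOWN = {"Kyoto": "place", "Gion Matsuri": "holiday", "Edo period": "history"}

def extract_entities(text: str):
    # In practice: prompt an LLM to return typed entities as JSON.
    return [(name, kind) for name, kind in KNOWN.items() if name in text]

graph = {}  # entity -> {"type": ..., "sources": [(page, column, text), ...]}

def index_passage(text: str, page: int, column: int):
    """Attach each entity found in the passage to its graph node."""
    for name, kind in extract_entities(text):
        node = graph.setdefault(name, {"type": kind, "sources": []})
        node["sources"].append((page, column, text))

index_passage("Gion Matsuri fills the streets of Kyoto each July.", 12, 1)
index_passage("Kyoto's temples date to the Edo period.", 13, 2)
```

Keeping the `(page, column)` reference alongside each node means answers can always be traced back to the printed source, which is exactly what naïve paragraph chunking loses.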

jszymborski 1 day ago |

This is all interesting, but I find myself most interested in how the topic tree is created. It seems super useful for lots of things. Can anyone point me to something similar with details?

EDIT: Whoops, I found more details at the very end of the article.

zkmon about 9 hours ago |

I used AI to accelerate my reading of a book recently. This is an interesting use case, but it's the same as racing to the destination instead of enjoying the journey.

It kills the tone, pace, and expressions of the author. It is pretty much the same as having an assistant summarize the whole book for you, if that's what you want. It misses the entire experience delivered by the author.

ebiester about 23 hours ago |

I did a similar thing with productivity books early last year but never released it because it wasn't high enough quality. I keep meaning to get back to that project, but it had a much more rigid hypothesis in mind: getting this kind of classification is pretty difficult, and getting high value from it even more so.

doytch about 22 hours ago |

The mental model I had of this was actually at the paragraph or page level, rather than at the word level like the post demos. I think it'd be really interesting if, while reading a take on a concept in one book, you could immediately fan out and either read different ways of presenting the same information/argument, or counters to it.
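The paragraph-level fan-out described above can be sketched as a similarity search over passages. This is a toy illustration: the passages are invented, and a bag-of-words cosine stands in for the real embeddings a production system would use.

```python
# Toy sketch of paragraph-level "fan-out": given a passage, find passages
# in other books covering the same concept. Bag-of-words cosine stands in
# for real embeddings; all passages below are invented.
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = (sqrt(sum(v * v for v in a.values()))
            * sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

passages = [
    ("Book A", "compound interest grows savings exponentially over time"),
    ("Book B", "interest that compounds makes money grow faster over time"),
    ("Book C", "a good night of sleep improves memory consolidation"),
]

def fan_out(query: str, top_n: int = 2):
    """Rank passages by word-overlap similarity to the query passage."""
    q = Counter(query.lower().split())
    scored = [(book, cosine(q, Counter(text.lower().split())))
              for book, text in passages]
    return sorted(scored, key=lambda t: -t[1])[:top_n]
```

With real embeddings in place of word counts, the same ranking step would surface restatements and counterarguments that share no vocabulary at all.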

voidhorse 1 day ago |

This was posted before and there were many good criticisms raised in the comments thread.

I'd just reiterate two general points of critique:

1. The point of establishing connections between texts is semantic, and terms can have vastly different meanings depending on the sphere of discourse in which they occur. The really novel connections probably won't be found by an LLM, since the way LLMs function is quite literally to uncover what isn't novel.

2. Part of the point in making these connections is the process that acts on the human being making the connections. Handing it all off to an LLM is no better than blindly trusting authority figures. If you want to use LLMs as generators of possible starting points or things to look at and verify and research yourself, that seems totally fine.

skeptrune 1 day ago |

I really like the idea of the topic tree. That intuitively resonates.

lloydatkinson about 10 hours ago |

How can anyone even trust crap like this? It was only a few days ago Claude and ChatGPT hallucinated a bunch of stuff from actual docs I sent them links to. When asked about it, they just apologised.

kylehotchkiss 1 day ago |

In several years, IMO the most interesting people are going to be the ones still actually reading paper books and not trying to shove everything into an LLM.
