The mental model I had of this was actually at the paragraph or page level, rather than at the word level that the post demos. I think it'd be really interesting if, while reading one book's take on a concept, you could immediately fan out and either read different ways of presenting the same information/argument, or counters to it.
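Something like that paragraph-level fan-out seems doable with off-the-shelf embeddings. A minimal sketch, assuming paragraphs are already extracted from each book and sentence-transformers is installed (the model choice and function names are mine, not anything from the post):

    # Embed paragraphs and surface the nearest ones from other books.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")

    def fan_out(query_paragraph, corpus, top_k=5):
        # corpus: list of (book_title, paragraph_text) pairs, built elsewhere
        texts = [text for _, text in corpus]
        # Normalized embeddings make the dot product equal cosine similarity
        embs = model.encode([query_paragraph] + texts, normalize_embeddings=True)
        query_vec, corpus_vecs = embs[0], embs[1:]
        scores = corpus_vecs @ query_vec
        best = np.argsort(scores)[::-1][:top_k]
        return [(corpus[i][0], corpus[i][1], float(scores[i])) for i in best]

The same scores would handle the "different presentations" case; ranking for counters/disagreement rather than similarity is a much harder problem.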
This is all interesting, but I find myself most interested in how the topic tree is created. It seems super useful for lots of things. Can anyone point me to something similar with details?
EDIT: Whoops, I found more details at the very end of the article.
I did a similar thing with productivity books early last year, but never released it because it wasn't high enough quality. I keep meaning to get back to that project, but I had a much more rigid hypothesis in mind: getting this kind of classification right is pretty difficult, and getting high value out of it is even harder.
This was posted before, and many good criticisms were raised in the comment thread.
I'd just reiterate two general points of critique:
1. The point of establishing connections between texts is semantic, and terms can have vastly different meanings depending on the sphere of discourse in which they occur. Because of the way LLMs work, the really novel connections probably won't be found by an LLM, since the way they function is quite literally to uncover what isn't novel.
2. Part of the point of making these connections is the effect the process has on the human being who makes them. Handing it all off to an LLM is no better than blindly trusting authority figures. If you want to use LLMs as generators of possible starting points, or of things to look at and verify and research yourself, that seems totally fine.
In several years, IMO the most interesting people are going to be the ones still actually reading paper books and not trying to shove everything into an LLM.
I don't think the two circles in the Venn diagram of those people and everyone else are as separate as you imagine.
I'm a Literature major and avid reader, but projects like this are still incredibly exciting to me. I salivate at the thought of new kinds of literary analysis that AI is going to open up.
I agree that we should be reading books with our eyes and that feeding a book into an LLM doesn't constitute reading it and confers few of the same benefits.
But this thing isn't (so far as I can tell) even slightly proposing that we feed books into an LLM instead of reading them. It looks to me more like a discovery mechanism: you run this thing, it shows you some possible links between books, and maybe you think "hmm, that little snippet seems well written" or "well, I enjoyed book X, let's give book Y a try" or whatever.
I don't think it would work particularly well for me; I'd want longer excerpts to get a sense of whether a book is interesting, and "contains a fragment that has some semantic connection with a fragment of a book I liked" doesn't feel like enough of a recommendation. Maybe it is indeed a huge waste of time. But if it is, it isn't because it's encouraging people to substitute LLM use for reading.
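FWIW, the longer-excerpt version of that discovery loop seems straightforward to sketch: expand each matched fragment with its neighbors before showing it. (This assumes pre-computed, L2-normalized fragment embeddings; every name and the threshold here are illustrative, not the project's actual code.)

    # Find cross-book fragment pairs with similar embeddings, returning
    # each match padded with neighboring fragments for a longer excerpt.
    import numpy as np

    def link_books(frags_a, embs_a, frags_b, embs_b, threshold=0.6, context=1):
        # embs_* are L2-normalized arrays, one row per fragment,
        # so the matrix product gives pairwise cosine similarities
        sims = embs_a @ embs_b.T
        links = []
        for i, j in zip(*np.where(sims >= threshold)):
            # Pad each match with `context` neighboring fragments
            excerpt_a = " ".join(frags_a[max(0, i - context): i + context + 1])
            excerpt_b = " ".join(frags_b[max(0, j - context): j + context + 1])
            links.append((float(sims[i, j]), excerpt_a, excerpt_b))
        # Strongest links first
        return sorted(links, reverse=True)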
I need a name for people who dismiss an entirely new and revolutionary class of technology without even trying it, so much so that they'll not even read about any new ideas that involve it.
I'm not entirely sure that's a fair association. The Luddites weren't against technology in general; they were fighting for their livelihoods. There very well could be a fresh Luddite movement centered around the use of AI tools, but I don't think "Luddite" is the right term in this specific case.
"Don't be curmudgeonly. Thoughtful criticism is fine, but please don't be rigidly or generically negative."
https://news.ycombinator.com/newsguidelines.html