LangExtract: Python library for extracting structured data from language models (github.com)
162 points by simonpure 8 days ago | 14 comments
simonw | 4 days ago
I implemented a similar pattern in my LLM tool and Python library back in February: https://simonwillison.net/2025/Feb/28/llm-schemas/
My version works with Pydantic models or JSON schema in Python code, or with JSON schema or a weird DSL I invented on the command-line.
Result: https://gist.github.com/simonw/f8143836cae0f058f059e1b8fc2d9...
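A minimal sketch of the Python side of that, based on the linked post (the model name and schema fields here are illustrative, not taken from the gist):

    import json

    import llm
    from pydantic import BaseModel


    class Dog(BaseModel):
        name: str
        age: int


    # Pass a Pydantic model (or a JSON schema dict) via schema=;
    # the response text comes back as JSON matching that shape.
    model = llm.get_model("gpt-4o-mini")
    response = model.prompt("Invent a nice dog", schema=Dog)
    dog = json.loads(response.text())
    print(dog)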
constantinum | 5 days ago
There is also Unstract (open source), which helps with structured data extraction. Key differences:
1. Unstract has a pre-processing (OCR) layer that converts documents into LLM-readable formats, which helps improve accuracy and control costs.
2. Unstract also connects to your existing data sources, making it an out-of-the-box ETL tool.
wodenokoto | 4 days ago
It’s not extracting data _from_ the model; it’s using the model to extract structured data from the input.
ttul | 4 days ago
The use case that immediately comes to mind is analysis of legal documents. Lawyers spend a lot of time going through piles of contracts during due diligence for any kind of investment or acquisition transaction, painstakingly identifying concepts that need to be addressed in various ways. LLMs are decent at doing this kind of work, but error-prone (as are humans, by the way). Having a way to visualize the results could be helpful in speeding up review of the LLM’s work.
albert_e | 3 days ago
For complex business documents, one approach was to use Named Entity Recognition to identify all entities and use them to build a knowledge graph, serving as a complementary repository of knowledge (in addition to the vector embeddings of semantic chunks) to aid RAG workflows. Does this proposed approach complement that, or supersede the need for NER / a knowledge graph? Just wondering aloud. Appreciate any insights here.
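For reference, a rough sketch of that NER-to-knowledge-graph step using spaCy and networkx (the library choices and the sentence co-occurrence heuristic are illustrative, not necessarily what the commenter used):

    import itertools

    import networkx as nx
    import spacy

    # Illustrative: spaCy's small English model provides the NER pass.
    nlp = spacy.load("en_core_web_sm")

    text = (
        "Acme Corp signed a supply agreement with Globex in 2021. "
        "Jane Doe, Acme's CFO, negotiated the terms in London."
    )

    doc = nlp(text)
    graph = nx.Graph()

    # Each recognized entity becomes a node, tagged with its entity type.
    for ent in doc.ents:
        graph.add_node(ent.text, label=ent.label_)

    # Naive edge heuristic: link entities that co-occur in the same sentence.
    for sent in doc.sents:
        names = [e.text for e in sent.ents]
        for a, b in itertools.combinations(names, 2):
            graph.add_edge(a, b, sentence=sent.text)

    print(graph.nodes(data=True))
    print(graph.edges(data=True))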
hm-nah | 4 days ago
Oly Chit! This is a BIG deal! Sub-page citations… in-context RAG… built-in HTML UI… this is like the holy grail of deterministic text extraction. I’m trying this ASAP Rocky.
Noumenon72 | 4 days ago
In the example, if `extraction_class` can be any string, how does it know that "relationship" implies it should have attributes "character_1" and "character_2" when your example data didn't?
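A hedged sketch of the kind of few-shot example that would pin those attributes down, following the example format in the project README (the attribute names come from the question above; the exact API may differ):

    import langextract as lx

    prompt = "Extract characters and the relationships between them."

    # Showing a "relationship" extraction with character_1/character_2
    # attributes in the few-shot example is what tells the model to emit
    # that shape for new text.
    examples = [
        lx.data.ExampleData(
            text="ROMEO loves JULIET.",
            extractions=[
                lx.data.Extraction(
                    extraction_class="character",
                    extraction_text="ROMEO",
                ),
                lx.data.Extraction(
                    extraction_class="relationship",
                    extraction_text="loves",
                    attributes={"character_1": "ROMEO", "character_2": "JULIET"},
                ),
            ],
        )
    ]

    result = lx.extract(
        text_or_documents="BENVOLIO admires MERCUTIO.",
        prompt_description=prompt,
        examples=examples,
        model_id="gemini-2.5-flash",
    )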
andrewrn | 4 days ago
You could use this to generate character graphs from big novels. Make an app that allows you to input a page number so the model only extracts characters you've encountered thus far.
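A rough sketch of that idea, assuming ~300 words per page and the extraction classes from the sketch above (the page-size constant, the result shape, and the graph-building step are all illustrative):

    import networkx as nx

    WORDS_PER_PAGE = 300  # illustrative assumption for mapping a page number to text

    def text_up_to_page(full_text: str, page: int) -> str:
        """Return only the text the reader has seen so far."""
        words = full_text.split()
        return " ".join(words[: page * WORDS_PER_PAGE])

    def build_character_graph(extractions) -> nx.Graph:
        """Turn character/relationship extractions into a graph.

        Assumes each extraction has extraction_class, extraction_text and
        attributes fields, as in the sketch above.
        """
        graph = nx.Graph()
        for ex in extractions:
            if ex.extraction_class == "character":
                graph.add_node(ex.extraction_text)
            elif ex.extraction_class == "relationship":
                attrs = ex.attributes or {}
                a, b = attrs.get("character_1"), attrs.get("character_2")
                if a and b:
                    graph.add_edge(a, b, kind=ex.extraction_text)
        return graph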
ramkumarkb | 4 days ago
Does this work with open-source LLMs like Qwen3, or with other OpenAI-compatible LLM APIs?
brokensegue | 4 days ago
wiring this to Wikidata would be great