normie3000 5 hours ago

Thanks for noting this - for a moment I was excited.

xml 3 hours ago | parent | next [-]

You can still be excited! GLM-OCR, a relatively small OCR model (2.5 GB unquantized) that runs on CPU with good quality, was released recently. I've been using it to digitize various hand-written notes and all my shopping receipts this week.

https://github.com/zai-org/GLM-OCR

(Shameless plug: I also maintain a simplified version of GLM-OCR without dependency on the transformers library, which makes it much easier to install: https://github.com/99991/Simple-GLM-OCR/)
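For anyone who wants to try it, here's roughly what the usage looks like. This is only a sketch, not copied from either repo: I'm assuming the model is published on Hugging Face under the same name and follows the standard transformers image-text-to-text interface, so check the READMEs above for the actual invocation.

```python
# Hedged sketch: model id and exact interface are assumptions, not taken
# from the linked repos. Requires a recent transformers release.
from PIL import Image
from transformers import AutoProcessor, AutoModelForImageTextToText

model_id = "zai-org/GLM-OCR"  # assumed Hugging Face model id
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForImageTextToText.from_pretrained(model_id, trust_remote_code=True)

image = Image.open("receipt.jpg")
messages = [{"role": "user",
             "content": [{"type": "image", "image": image},
                         {"type": "text", "text": "Transcribe this receipt."}]}]

# Build inputs with the processor's chat template and generate the transcription.
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=512)
print(processor.batch_decode(output_ids[:, inputs["input_ids"].shape[1]:],
                             skip_special_tokens=True)[0])
```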

mrweasel 4 hours ago | parent | prev [-]

When people mention the number of lines of code, I've started to become suspicious. More often than not it's X lines of code calling a massive library that loads a large model, either locally or remotely. Next we'll be spinning up your entire company infrastructure in two lines of code, only to be presented with a thin wrapper around a Terraform shell script.

I do agree with the use of "serverless" though. I feel like we agreed long ago that serverless just means you're not spinning up a physical or virtual server, but simply asking some cloud infrastructure to run your code without having to care about how it's run.

goodmythical an hour ago | parent | next [-]

>implement RSA with this one simple line of python!
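The joke isn't far off: textbook RSA encryption really is one line of Python, which is exactly why the line count tells you nothing. A toy sketch with made-up demo numbers (no padding, no real key generation, absolutely not secure):

```python
# Toy textbook RSA: the "one simple line" is pow(m, e, n), but everything
# that makes RSA usable in practice (key generation, padding, constant-time
# arithmetic) lives elsewhere. The numbers are tiny and purely illustrative.
p, q = 61, 53                       # small demo primes
n = p * q                           # modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+ modular inverse)

m = 42                              # "message" encoded as an integer < n
c = pow(m, e, n)                    # the one-liner: encryption
assert pow(c, d, n) == m            # decryption recovers the message
```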

locknitpicker an hour ago | parent | prev [-]

> When people mention the number of lines of code, I've started to become suspicious.

A low LoC count is a telltale sign that the project adds little to no value: it effectively signals that the project integrates third-party services and/or modules and does a little plumbing to tie things together.