woadwarrior01 | 7 days ago:

> If you looking for privacy there is only 1 app in the whole wide internet right now, HugstonOne

That's a tall claim. I've been selling a macOS and iOS private LLM app on the App Store for over two years now, one that:

a) is fully native (not Electron.js)
b) is not a llama.cpp / MLX wrapper
c) is fully sandboxed (none of Jan, Ollama, or LM Studio are)

I will not promote. Quite shameless of you to shill your Electron.js-based llama.cpp wrapper here.
threecheese | 6 days ago:

Purchased, to show my support (and to play around, ofc).
trilogic | 7 days ago: [flagged]
rovr138 | 7 days ago:

Since they won't promote, here's the link: https://apps.apple.com/us/app/private-llm-local-ai-chat/id64...

> I accept every challenge to prove that HugstonOne is worth the claim.

I expect your review.
trilogic | 7 days ago: [flagged]
rovr138 | 7 days ago:

I did? None of your points talk about privacy, which was your original argument. I'll remind you:

> If you looking for privacy there is only 1 app in the whole wide internet right now, HugstonOne (I challenge everyone to find another local GUI with that privacy).

Heck, if you look at the original comment, it clearly states it's macOS and iOS native:

> I've been selling a macOS and iOS private LLM app on the App Store for over two years now, one that:
> a) is fully native (not Electron.js)
> b) is not a llama.cpp / MLX wrapper
> c) is fully sandboxed (none of Jan, Ollama, or LM Studio are)

How do you expect it to be both native and cross-platform? Isn't HugstonOne Windows-only? So, what are your privacy arguments? Don't move the goalposts.
trilogic | 7 days ago:

I am honest: I don't know how your app works, privacy-wise. I can't even try it. You are free (like, literally, free) to try mine. How users on macOS/iOS own their data is unknown to me. I didn't want to make a point on that, as I already have big tech against me, and didn't want to hijack the thread further. Now, for real: I wish to meet more people like you. I admire your professional way of arguing, and I really wish you all the best :)
yjftsjthsd-h | 7 days ago:

> 7 It is only for mac for god sake, you need to pay to breath.

And HugstonOne is for Windows; what of it?
imiric | 7 days ago:

You mean this[1]? It's not open source, has no license, runs on Windows only, and requires an activation code to use. Also, the privacy policy on their website is missing[2]. Anyone remotely concerned about privacy wouldn't come near this thing.

Ah, you're the author. No wonder you're shilling for it.

[1]: https://github.com/Mainframework/HugstonOne
[2]: https://hugston.com/privacy
trilogic | 7 days ago:

The app has a licence, well visible when you install it. The rest is written on the website and on GitHub. As for requiring an activation code: of course. It is made for ethical research purposes, so yes, I am distributing it responsibly. And you can see how it works in the videos on the YouTube channel. But the most important point is that you can easily try it behind a firewall and see that it does not leak bytes like all the rest do. That's what I call privacy: it has a button that cuts all connections. You can say what you want, but that's it, that's all.
do_not_redeem | 7 days ago:

> But the most important point is that you can easily try it behind a firewall and see that it does not leak bytes like all the rest do.

Great to hear! Since you care so much about privacy, how can I get an activation code without sending any bytes over a network or revealing my email address?
riquito | 7 days ago:

Closed source, without third-party independent review, and people should just trust you? As if your app couldn't start sending data away in a month, or attempt to detect monitoring software, to name a couple of possibilities.
kgeist | 7 days ago:

> I challenge everyone to find another local GUI with that privacy

Llama.cpp's built-in web UI.
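For what it's worth, a minimal sketch of talking to it, assuming a llama-server instance already running with its defaults (127.0.0.1:8080) and the Python requests package installed:

    import requests

    # Query a llama-server running locally; the request goes to the
    # loopback interface and never leaves the machine.
    resp = requests.post(
        "http://127.0.0.1:8080/completion",
        json={"prompt": "Hello", "n_predict": 16},
    )
    print(resp.json()["content"])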
trilogic | 7 days ago:

This is from the Open WebUI docs: "Once saved, Open WebUI will begin using your local Llama.cpp server as a backend!" So you see: Llama server, not CLI. That's a big flag there. I repeat: no app in the whole world takes privacy as seriously as HugstonOne. This is not advertisement; I am just making a point.
kgeist | 7 days ago:

I'm not sure what you're talking about. Llama.cpp is an inference server which runs LLMs locally. It has a built-in web UI. You can't get more private than the inference server itself.

I tried downloading your app, and it's a whopping 500 MB. What takes up the most disk space? The llama-server binary with the built-in web UI is only a couple of MBs.
trilogic | 7 days ago:

With all respect, you don't seem to understand much about how privacy works. Llama-server works over HTTP. And yes, the app is a bit heavy, as it loads LLM models using the llama.cpp CLI and multimodal binaries, which are themselves quite heavy; the DLLs for CPU/GPU alone are huge (just the one for the NVIDIA GPU is 500 MB, if I'm not wrong).
kgeist | 7 days ago:

Unless you expose random ports on the local machine to the Internet, running apps on localhost is pretty safe. Llama-server's UI stores conversations in the browser's localStorage, so they're not retrievable even if you expose your port. To me, downloading 500 MB from some random site feels far less safe :)

> the app is a bit heavy, as it loads LLM models using the llama.cpp CLI

So it adds the unnecessary overhead of reloading all the weights into VRAM on each message? On some larger models that can take up to a minute. Or do you somehow stream input/output from an attached CLI process without restarting it?
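(Hypothetically, the latter would look something like the sketch below; llama-cli and its flags are real llama.cpp pieces, but the whole thing is an illustration, not a claim about how HugstonOne actually works:)

    import subprocess

    # Hypothetical sketch: keep one llama.cpp CLI process alive and feed it
    # prompts over stdin, instead of re-spawning it (and reloading the
    # weights into VRAM) for every message.
    proc = subprocess.Popen(
        ["llama-cli", "-m", "model.gguf", "--interactive"],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        text=True,
    )
    proc.stdin.write("Hello\n")
    proc.stdin.flush()
    print(proc.stdout.readline())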
rcakebread | 7 days ago:

Says the guy with a link to a broken privacy policy on his website.
trilogic | 7 days ago:

I accept criticism, and I thank you for it. It will be fixed ASAP.
giantrobot | 7 days ago:

> With all respect, you don't seem to understand much about how privacy works. Llama-server works over HTTP.

What in the world are you trying to say here? llama.cpp can run completely locally, and web access can be limited to localhost only (see the sketch below). That's entirely private and offline (after downloading a model). I can't tell if you're spreading FUD about llama.cpp or are just generally misinformed about how it works. You certainly have some motivated reasoning trying to promote your app, which makes your replies seem very disingenuous.
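A sketch of launching it that way, assuming llama-server and a GGUF model are on hand (the model file name is a placeholder):

    import subprocess

    # Start llama.cpp's server bound to the loopback interface only; the
    # bundled web UI is then reachable at http://127.0.0.1:8080 from this
    # machine and nowhere else.
    subprocess.run([
        "llama-server", "-m", "model.gguf",
        "--host", "127.0.0.1", "--port", "8080",
    ])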
trilogic | 7 days ago:

I am not here to teach cybersecurity, TCP/IP protocols, or ML. HTTP = HyperText Transfer Protocol, the standard protocol for transferring data over the web. CLI = Command-Line Interface. Try again after endless nights of informatics work, please.
kgeist | 7 days ago:

HTTP can be 100% local without involving the web.
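A sketch of the point, assuming port 8080 is free on your machine: a socket bound to 127.0.0.1 only accepts connections from the same machine, so HTTP served over it never touches the network.

    import socket

    # A listener bound to the loopback address (as llama-server is when
    # started with --host 127.0.0.1) is unreachable from other machines;
    # its traffic never reaches a physical network interface.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 8080))
    srv.listen()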
giantrobot | 6 days ago:

Wow. I honestly can't tell if you're trying to troll people or are sitting on top of some Dunning-Kruger peak. Shine on, you delusional diamond.