| ▲ | everdrive 5 hours ago |
| It's getting to the point where a user needs at minimum two browsers. One to allow all this horrendous client checking so that crucial services work, and another browser to attempt to prevent tracking users across the web. Nick, I understand the practical realities regarding why you'd need to try to tamp down on some bot traffic, but do you see a world where users are not forced to choose between privacy and functionality? |
|
| ▲ | cruffle_duffle a minute ago | parent | next [-] |
| There is also the browser I use to get Claude to route around people blocking its webfetch. Both Playwright and chrome-mcp. |
|
| ▲ | mememememememo 4 hours ago | parent | prev | next [-] |
| Local models for privacy. You want to go to the world's best hotel? You are gonna be on their CCTV. Staying at home is crappier but private. Unfortunately, for the first time Moore's law isn't helping (e.g. give a poor person an old laptop with Linux installed and they'll be fine). They can do all that and be fine, except there's no LLM. |
| |
| ▲ | karlgkk 3 hours ago | parent | next [-] | | > You want to go to the world's best hotel? You are gonna be on their CCTV. ironically, in high end hotels, there's often a lot less cctv. not none. just less. rich people enjoy privacy | | |
| ▲ | Barbing 3 hours ago | parent [-] | | So they're not just hidden better? Does make sense. Well, I can use the world's best safety deposit box without being on CCTV while I pass secrets in and out of it, right? Just not for free. Bummer, this sounds like it is about to turn into a Monero ad (“let us pay privately”) |
| |
| ▲ | nozzlegear 18 minutes ago | parent | prev [-] | | > Staying at home is crappier but private. Doesn't make sense; my home is far preferable to a hotel |
|
|
| ▲ | 0x3f 5 hours ago | parent | prev | next [-] |
| Meet me in a cafe and I will sign a JWT saying you're not a bot. You can submit this to whoever will accept it. |
| |
| ▲ | magicseth 5 hours ago | parent | next [-] | | If Apple approves it, I've got a solution: a keyboard that attests to your humanity https://typed.by/magicseth/2451#2NyGLfAQxmqRiAOTlaX7ma3G4d1o... | | |
| ▲ | mzajc 5 hours ago | parent | next [-] | | Brilliant! Just the thing we want: more hardware attestation, more deanonymization, less user control, all diligently orchestrated in a repository where the only contributor is Anthropic Claude [0]. Comes complete with a misaligned ASCII diagram in the README to show how much effort the humans behind it put in! Yes, even their "humanifesto" is LLM output, and is written almost exclusively in the "it's not X <emdash> it's Y" style. [0]: https://github.com/magicseth/keywitness/graphs/contributors | | |
| ▲ | delish 4 hours ago | parent | next [-] | | Those are all situationally-valid criticisms, but I've long thought the ability to have smartphones' cameras cryptographically sign photos is good when available. The use case is demonstrating a photo wasn't doctored, and that it came from a device associated with e.g. a journalist, who maintains a public key. Of course, it should be optional. | | |
| ▲ | magicseth 3 hours ago | parent [-] | | Yes! That's what I'm getting at. This protocol optionally allows you to sign with your private key, but you don't have to for the protocol to provide utility. It could just be enough to say "if you trust magicseth's binary and apple, then this was typed one letter at a time" There's nothing stopping folks from typing a message an LLM wrote one at a time, but the idea of increasing the human cost of sending messages is an interesting one, or at least I thought :-( | | |
| |
| ▲ | magicseth 3 hours ago | parent | prev | next [-] | | Hi! I want anonymity! I also want to be able to prove what level of effort has been put into something. I think there's room for both. This is an encrypted proof that I wrote something on a keyboard that tracks fingers. The protocol allows you to optionally sign it with your identity, but that isn't strictly required. It is an attempt at putting something into the conversation more than just "OSS is broken because there are too many slop PRs." What if OSS required a human to attest that they actually looked at the code they're submitting? This tool could help with that. Yes, LLMs were used heavily in the production of this prototype! It doesn't change the goal of the experiment, or its potential utility! Do you see any potential area in your world where some piece of this is valuable? | |
| ▲ | Arainach 4 hours ago | parent | prev [-] | | > Yes, even their "humanifesto" is LLM output, and is written almost exclusively in the "it's not X <emdash> it's Y" style. ....no. There's not a single occurrence of that. https://keywitness.io/manifesto There are six emdashes on that page. NONE of them are "it's not X it's Y". > Emails, messages, essays, code reviews, love letters — all suspect. > We believe this can be solved — not by detecting AI, but by proving humanity. > KeyWitness captures cryptographic proof at the point of input — the keyboard. > When you seal a message, the keyboard builds a W3C Verifiable Credential — a self-contained proof that can be verified by anyone, anywhere, without trusting us or any central authority. > That's an alphabet of 774 symbols — each carrying log2(774) ≈ 9.6 bits. 27 emoji for 256 bits. > They're a declaration: this message was written by a person — one of the diverse, imperfect, irreplaceable humans who still choose to type their own words. Clarifications: 4 Continuation from a list: 1 Could just be a comma: 1 "It's not X -- it's Y": 0. If you're going to make lazy commentary about good writing being AI, please at least be sure that you're reading the content and saying accurate things. | |
| ▲ | magicseth 3 hours ago | parent | next [-] | | It is largely written by iteration with an LLM! No need to speculate or analyze em dashes :-) The emoji idea was mine. I like it :-) unfortunately it doesn't work in places like HN that strip out emoji. So I had to make a base64 encoding option. The goal was to create an effective encryption key for the url hash (so it doesn't get sent to the server). And encoding skin tone with human emojis allows a super dense bit/visual character encoding that ALSO is a cute reference to the humans I'm trying to center with this project! | |
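The dense encoding described above boils down to base conversion: treat the 256-bit key as one big integer and re-express it in base 774 (the alphabet size quoted from the manifesto), which needs ceil(256 / log2(774)) = 27 symbols. A minimal sketch under that assumption; it emits digit indices rather than actual emoji, and is not KeyWitness's real code:

```python
import math
import secrets

BASE = 774  # alphabet size quoted from the manifesto (~9.6 bits per symbol)

def encode_key(key: bytes, base: int = BASE) -> list:
    """Re-express the key as fixed-length base-`base` digits (one digit = one symbol)."""
    length = math.ceil(len(key) * 8 / math.log2(base))
    n = int.from_bytes(key, "big")
    digits = []
    for _ in range(length):
        n, d = divmod(n, base)
        digits.append(d)
    return digits[::-1]  # most significant digit first

def decode_key(digits: list, key_len: int, base: int = BASE) -> bytes:
    """Inverse of encode_key."""
    n = 0
    for d in digits:
        n = n * base + d
    return n.to_bytes(key_len, "big")

key = secrets.token_bytes(32)          # a 256-bit URL-hash key
digits = encode_key(key)
assert len(digits) == 27               # ceil(256 / log2(774)) == 27 symbols
assert decode_key(digits, 32) == key   # lossless round trip
```

Mapping each digit index to a distinct symbol (e.g. emoji plus skin-tone variants) would then yield the 27-character strings described above.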
| ▲ | josephg 4 hours ago | parent | prev | next [-] | | > We believe this can be solved — not by detecting AI, but by proving humanity “It's not X -- it's Y": 1 | |
| ▲ | dandellion 4 hours ago | parent | prev | next [-] | | It's either a bot, or someone who writes exactly like a bot. I don't care which it is, both go to the discard pile. | | |
| ▲ | arrowsmith an hour ago | parent | next [-] | | It’s a product for people who need help telling whether text was written by AI. Maybe they deliberately write it like that, to filter out people who aren’t the target market? | |
| ▲ | magicseth 3 hours ago | parent | prev [-] | | phew! |
| |
| ▲ | arrowsmith an hour ago | parent | prev | next [-] | | From their “how it works” page: > The server stores an encrypted blob it can't decrypt. We couldn't read your messages even if we wanted to. That's not a policy — it's math. If you can’t tell that this is AI slop then maybe KeyWitness does solve a real problem after all. | |
| ▲ | Velocifyer 4 hours ago | parent | prev [-] | | <redacted because my friend posted it but accidentally used my account> | |
| ▲ | magicseth 3 hours ago | parent [-] | | Oh you think it's stupid? It was an attempt to encode an encryption key that isn't sent to the server in a way that is minimally invasive. The skin-tone emojis allow pretty high byte density, and also are cute! Sorry it doesn't meet your needs. There is irony in having an AI-generated humanifesto. Could it be intentional? hmm? Is there no irony in deriding a project for being potentially LLM generated, when its goal is to aid people in differentiating?
:shrug: |
|
|
| |
| ▲ | arrowsmith an hour ago | parent | prev | next [-] | | You’re getting a negative reaction from others but I share this feedback in good faith: I don’t understand what problem your product is supposed to solve. Yeah I guess the cryptographic stuff sounds vaguely impressive although it’s been a long time since I had to think about cryptography in detail. But what is this _for_? I’m going to buy an expensive keyboard so that I can send messages to someone and they’ll know it’s really me – but it has to be someone who a) doesn’t trust me or any of our existing communication channels and b) cares enough to verify using this weird software? Oh and it’s important they know I sent it from a particular device out of the many I could be using? Who is that person? What would I be sending them? What is the scenario where we would both need this? Also the server can’t read the message but the decryption key is in the URL? So anyone with the URL can still read it? Then why even bother encrypting it? Maybe this is one of those cases where I’m so far outside your target market that it was never supposed to make sense to me but I feel like I’m missing something here. Or maybe you need to work on your elevator pitch. Just sharing my honest reaction. | |
| ▲ | Terretta 3 hours ago | parent | prev | next [-] | | The first widely distributed and open source version of this typist timing validation idea I saw (and incorporated into my own software at the time) was released by Michael Crichton as part of a password 2nd-factor checker (1st factor a known phrase or even your name, the 2nd factor being your idiosyncratic typing pattern) in Creative Computing magazine that printed the code. Original here: https://archive.org/details/sim_creative-computing_1984-06_1... | |
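The 2nd-factor idea described above amounts to comparing inter-keystroke intervals against an enrolled profile. A toy sketch with an invented distance metric and threshold, not the original Creative Computing code:

```python
def intervals(timestamps: list) -> list:
    """Inter-keystroke gaps (seconds) from a list of key-down timestamps."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def cadence_match(profile: list, attempt: list, tolerance: float = 0.08) -> bool:
    """True if the attempt's mean per-key timing deviation stays under `tolerance` seconds."""
    p, a = intervals(profile), intervals(attempt)
    if len(p) != len(a):
        return False  # different number of keystrokes: not the same passphrase
    deviation = sum(abs(x - y) for x, y in zip(p, a)) / len(p)
    return deviation < tolerance

enrolled    = [0.00, 0.18, 0.35, 0.61, 0.80]  # timestamps captured at enrollment
same_typist = [0.00, 0.17, 0.36, 0.60, 0.82]
one_handed  = [0.00, 0.40, 0.95, 1.30, 1.90]  # the injured-finger failure mode critics point out
assert cadence_match(enrolled, same_typist)
assert not cadence_match(enrolled, one_handed)
```

Real systems use richer features (dwell time, pressure), but the failure modes raised downthread apply regardless of the metric.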
| ▲ | scoofy 5 hours ago | parent | prev | next [-] | | Somewhere there is someone 3D printing a keyboard cover that an llm can type with. | | |
| ▲ | magicseth 3 hours ago | parent [-] | | I'm actually building a physical keyboard for those people who don't have iphones! Though given the reaction I'm seeing here, I probably won't share it with this audience :-P it has capacitive keys, a secure enclave, and a fingerprint sensor. |
| |
| ▲ | Velocifyer 4 hours ago | parent | prev | next [-] | | This does not prove anything, and it is only available to users with X.com accounts (you need an X.com account to download the app). | |
| ▲ | magicseth 3 hours ago | parent [-] | | Hi! You don't need an x.com account to download, that's just the easiest way to dm me. If you're actually interested, I can let you try it! The source is also available. It proves 1) that an apple device with a secure enclave signed it. 2) that my app signed it. If you trust the binary I've distributed is the same as the one on the app store, then it also proves:
3) that it was typed on my keyboard not using automation (though as others have mentioned, you could build a capacitive robot to type on it)
4) that the typer has the same private key as previous messages they've signed (if you have an out of band way to corroborate that's great too)
5) optionally, that the person whose biometrics are associated with the device approved it. There is also an optional voice-to-text mode that uses a 3D face mesh to attempt to verify the words were spoken live. Not every level of verification is required by the protocol, so you could attest that it was written on a keyboard, but not who wrote it (not yet implemented in the client app). The protocol doesn't require you to run my app; if you compile it yourself, you can create your own web of trust around you! | |
| ▲ | Velocifyer 3 hours ago | parent [-] | | >that an apple device with a secure enclave signed it. What Apple devices are supported? All I have is an iPhone 4 running an old iOS version (pre-iOS 7), which I will not update and which I don't think has a secure enclave, plus an M1 Mac mini, some Lightning EarPods, an Apple Thunderbolt Display, some USB-A chargers, and some old MacBooks. I saw something about Android (https://typed.by/manifesto#:~:text=Android,Integrity) on the website, but it mentioned Play Integrity, which I do not have because I use LineageOS for MicroG. I think that the concept is stupid because it would require somehow proving that the app is not modified (which is impractical) and that there is no stylus on a motor or fake screen (which is also impractical). I think that a better approach would be to form a Web of Trust where only people's (not just humans; this would include all animals and potentially aliens, but no clankers) certificates are signed, with an interface that is friendly to people who are not very into technology, and with some way to avoid revealing who your friends are, though this would still allow someone to get an attestation for their robot. |
|
| |
| ▲ | toss1 4 hours ago | parent | prev [-] | | Oh Gawd, not this idea again! This idea of capturing the timing of people's keystrokes to identify them, ensure it is them typing their passwords, or even using the timing itself as a password has been recurring every few years for at least three decades. It is always just as bad. Because there are so many cases where it completely fails. The first case is a minor injury to either hand — just put a fat bandage on one finger from a minor kitchen accident, and you'll be typing completely differently for a few days. Or, because I just walked into my office eating a juicy apple with one hand and I'm in a hurry typing my PW with my other hand because someone just called with an urgent issue I've got to fix, aaaaannnd, your software balks because I'm typing with a completely different cadence. The list of valid reasons for failure is endless wherein a person's usual solid patterns are good 90%+ of the time, but will hard fail the other 10% of the time. And the acceptable error rate would be 2-4 orders of magnitude less. It's a mystery how people go all the way to building software based on an idea that seems good but is actually bad, without thinking it through, or even checking how often it has been done before and failed? | | |
| ▲ | monocularvision 2 hours ago | parent | next [-] | | You might want to check out “How it Works” on the site as none of what you said applies: https://typed.by/how | | |
| ▲ | josefx 2 hours ago | parent [-] | | Then why does your link claim the following? > While you type, the keyboard quietly records how you type — the rhythm, the pauses between keys, where your finger lands, how hard you press. > Nobody types the same way. Your pattern is as unique as your handwriting. That's the signal. | | |
| ▲ | arrowsmith 36 minutes ago | parent [-] | | I’m sceptical about this idea but, to give it full credit, it’s a custom piece of hardware that would presumably be more accurate than previous software-only attempts. Maybe it will actually work this time, idk, although I still don’t really see the point. |
|
| |
| ▲ | magicseth 4 hours ago | parent | prev [-] | | That's not what this is. at all. |
|
| |
| ▲ | jagged-chisel 5 hours ago | parent | prev | next [-] | | Sounds like we’re bringing back the PGP key signing parties | | |
| ▲ | __MatrixMan__ 5 hours ago | parent | next [-] | | The sooner we do the better. | | |
| ▲ | hathawsh 4 hours ago | parent [-] | | I wonder what the PGP signing concept does to thwart people who want to profit and don't care about the public good. It seems like anyone who attends a signing party can sell their key to the highest bidder, leading to bots and spammers all over again. | | |
| ▲ | __MatrixMan__ 40 minutes ago | parent | next [-] | | In the flat trust model we currently use most places, it's on each person to block each spammer, bot, etc. The cost of creating a new bot account is low, so it's cheap for them to come back. On a web of trust, if you have a negative interaction with a bot, you revoke trust in one of the humans in the chain of trust that caused you to come in contact with that bot. You've now effectively blocked all bots they've ever made or ever will make... At least until they recycle their identity and come to another key signing party. Once you have the web in place, though (a series of "this key belongs to a human" attestations), you can layer metadata on top of it, like "this human is a skilled biologist" or "this human is a security expert". So if you use those attestations to determine what content you're exposed to, then a malicious human doesn't merely need to show up at a key signing party to bootstrap a new identity; they also have to rebuild their reputation to a point where you or somebody you trust becomes interested in their content again. Nothing can be done to prevent bad people from burning their identities for profit, but we can collectively make it not economical to do so by practicing some trust hygiene. Key signing establishes a graph upon which more effective trust management becomes possible. On its own it is likely insufficient. |
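The revocation mechanics described above can be modeled as reachability in a vouching graph: trust flows outward from your own key through signed attestations, so cutting one edge drops everything reachable only through it. A minimal sketch under that assumption (not PGP's actual trust model; all names hypothetical):

```python
from collections import defaultdict

class TrustWeb:
    """Toy web of trust: identities vouch for others; trust is reachability from you."""

    def __init__(self, root: str):
        self.root = root
        self.vouches = defaultdict(set)  # signer -> identities they have vouched for

    def vouch(self, signer: str, subject: str) -> None:
        self.vouches[signer].add(subject)

    def revoke(self, signer: str, subject: str) -> None:
        self.vouches[signer].discard(subject)

    def trusted(self) -> set:
        """Everything reachable from the root through unrevoked vouches."""
        seen, stack = {self.root}, [self.root]
        while stack:
            for nxt in self.vouches[stack.pop()]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen

web = TrustWeb("me")
web.vouch("me", "alice")
web.vouch("alice", "spamfarm")           # alice vouched for a bad actor
web.vouch("spamfarm", "bot1")
web.vouch("spamfarm", "bot2")
assert {"bot1", "bot2"} <= web.trusted()
web.revoke("alice", "spamfarm")          # one revocation...
assert web.trusted() == {"me", "alice"}  # ...drops the farm and every bot it vouched for
```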
| ▲ | 0x3f 4 hours ago | parent | prev [-] | | You can never prevent things like this, but you can make it expensive enough to effectively solve the problem for almost all use cases. |
|
| |
| ▲ | zar1048576 2 hours ago | parent | prev [-] | | Definitely miss those! |
| |
| ▲ | tshaddox 4 hours ago | parent | prev | next [-] | | Doesn’t really make sense, because any service can just say “you must paste your human-attestation JWT here to use this service” and plenty of people will. | | |
| ▲ | 0x3f 4 hours ago | parent [-] | | You can just decay your trust level based on the `iat` value. That way people will need to keep buying me coffee. I can optionally chide them for giving out their token. If you're engaging with the idea seriously, I suppose we'd need to build a reputation or trust network or something. Although if you're talking about replay attacks specifically, there are other crypto based solutions for that. | | |
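The `iat`-based decay mentioned above is easy to sketch: verify the token's signature, read its issued-at claim, and discount trust exponentially with age. This illustration hand-rolls a minimal HMAC-signed, JWT-shaped token rather than using a JWT library, and the 90-day half-life is a made-up policy:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"cafe-signer-demo-key"  # hypothetical attester's signing key
HALF_LIFE = 90 * 24 * 3600       # trust halves every 90 days (made-up policy)

def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_attestation(subject: str, now=None) -> str:
    """Mint a minimal HMAC-signed, JWT-shaped token carrying an `iat` claim."""
    now = time.time() if now is None else now
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps({"sub": subject, "iat": int(now)}).encode())
    sig = _b64url(hmac.new(SECRET, f"{header}.{payload}".encode(), hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def trust_level(token: str, now=None) -> float:
    """Return 0.0 for a bad signature, else trust decayed exponentially from `iat`."""
    now = time.time() if now is None else now
    header, payload, sig = token.split(".")
    expected = _b64url(hmac.new(SECRET, f"{header}.{payload}".encode(), hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return 0.0
    claims = json.loads(base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))
    age = max(0.0, now - claims["iat"])
    return 0.5 ** (age / HALF_LIFE)

fresh = sign_attestation("alice")
assert trust_level(fresh) > 0.99                    # just signed: near-full trust
stale = sign_attestation("bob", now=time.time() - HALF_LIFE)
assert abs(trust_level(stale) - 0.5) < 0.01         # one half-life old: ~50% trust
```

A real deployment would use asymmetric signatures so verifiers don't hold the signing secret, but the decay logic is the same.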
| ▲ | tshaddox an hour ago | parent | next [-] | | My point is that there probably is no way in principle to distinguish between a human user utilizing automation on their own behalf in good faith (e.g. RSS readers) and bad faith automations. | |
| ▲ | magicseth 3 hours ago | parent | prev [-] | | I am engaging with this seriously! I don't know if there will be any real solution. But I think it's worth exploring. |
|
| |
| ▲ | 5 hours ago | parent | prev [-] | | [deleted] |
|
|
| ▲ | kevin_thibedeau 4 hours ago | parent | prev | next [-] |
| I've been doing that for years. Cloudflare is slowly breaking more and more of the web. |
|
| ▲ | atoav 3 hours ago | parent | prev | next [-] |
| What if I run a website and OpenAI produces bot traffic? Do they also consider it abuse when they do it? |
|
| ▲ | madrox 4 hours ago | parent | prev | next [-] |
| I am not Nick, but there are a few ways that world happens: the free tier goes away and what people pay for more correctly reflects what they use; this all becomes cheap enough that it doesn't matter; or we come up with an end-to-end method of determining that usage is triggered by a person. Another way is to just do better isolation as a user. That's probably your best shot without hoping these companies change policies. |
|
| ▲ | gruez 5 hours ago | parent | prev | next [-] |
| >It's getting to the point where a user needs at minimum two browsers. One to allow all this horrendous client checking so that crucial services work, and another browser to attempt to prevent tracking users across the web. What are you talking about? It works fine with Firefox with RFP and a VPN enabled, which is already more paranoid than the average configuration. There are definitely sites where this configuration would get blocked, but ChatGPT isn't one of them, so you're barking up the wrong tree here. |
| |
| ▲ | scared_together 5 minutes ago | parent [-] | | Is your interlocutor barking up the wrong tree, or are you missing the forest for the trees? According to the OP: > The program checks 55 properties spanning three layers: your browser (GPU, screen, fonts), the Cloudflare network (your city, your IP, your region from edge headers), and the ChatGPT React application itself (__reactRouterContext, loaderData, clientBootstrap). I guess Firefox VPN will hide the IP at least. But what about the other data, is it faked by RFP? Because if not, the so-called privacy offered by this configuration is outdated. You might be fingerprinted by OpenAI right now, as “that guy with all the Firefox anti-fingerprinting stuff enabled, even though it breaks other sites”. |
|
|
| ▲ | SV_BubbleTime 5 hours ago | parent | prev [-] |
| Firefox multicontainers are pretty cool. But it’s an advanced process that most people wouldn’t do or do correctly. |
| |
| ▲ | Sabinus 4 hours ago | parent | next [-] | | I love the containers too. My current use case is to keep my YouTube account separate from my Google one. Google doesn't need all that behavioural data in one place. It's a pity Firefox doesn't get the praise it deserves half as much as it cops criticism. | |
| ▲ | halJordan 4 hours ago | parent | prev | next [-] | | It is absolutely not an advanced process. It's clicking a gui. It's not advanced thinking to understand profiles. It's a basic ability to hold multiple things in your mind at once. Telling people that's difficult only increases the societal problem that being ignorant is ok. | | |
| ▲ | docjay 3 hours ago | parent [-] | | “Difficult” is a relative term. They were saying it was a difficult concept for them, not you. In order to save their ego, people often phrase those events to be inclusive of the reader; it doesn’t feel as bad if you imagine everyone else would struggle too. Pay attention and you’ll notice yourself doing it too. “Ignorant” is also infinite - you’re ignorant of MANY things as well, and I’m sure you would struggle with things I can do with ease. For example, understanding the meaning behind what’s being said so I know not to brow-beat someone over it. | | |
| ▲ | SV_BubbleTime an hour ago | parent [-] | | Mostly right; it’s not that it was difficult for me. It’s that normal people are never going to do it. I’m almost endlessly surprised by the probably-autistic-spectrum responses to tech things from people with no idea how things seem to other people. |
|
| |
| ▲ | Imustaskforhelp 5 hours ago | parent | prev [-] | | The possibilities with Firefox multi-containers and automation scripts are truly endless. It's also possible to make Firefox route each container through a different proxy, which could even be running locally and connecting out to multiple different VPNs. I haven't tried doing that, but it's certainly possible. It's sort of like running several different browsers, each with a completely fresh identity and sometimes a different IP, within the convenience of one. It's really underrated. I don't use the IP part of this, but I use multi-containers quite a lot on Zen, and they are a core part of how I browse the web; there are many cool things which can be done (and have been done) with them. |
|