| ▲ | MyNameIsNickT 4 hours ago |
| Hey! I'm Nick, and I work on Integrity at OpenAI. These checks are part of how we protect our first-party products from abuse like bots, scraping, fraud, and other attempts to misuse the platform. A big reason we invest in this is that we want to keep free and logged-out access available for more users. My team’s goal is to help make sure the limited GPU resources are going to real users. We also keep a very close eye on the user impact. We monitor things like page load time, time to first token, and payload size, with a focus on reducing the overhead of these protections. For the majority of people, the impact is negligible, and only a very small percentage may see a slight delay from extra checks. We also continuously evaluate precision so we can minimize false positives while still making abuse meaningfully harder. |
|
| ▲ | Imnimo 3 hours ago | parent | next [-] |
| It's interesting to me that OpenAI considers scraping to be a form of abuse. |
| |
| ▲ | nikitaga an hour ago | parent | next [-] | | Scraping static content from a website at near-zero marginal cost to its server, vs scraping an expensive LLM service provided for free, are different things. The former relies on fairly controversial ideas about copyright and fair use to qualify as abuse, whereas the latter is direct financial damage – by your own direct competitors no less. It's fun to poke at a seeming hypocrisy of the big bad, but the similarity in this case is quite superficial. | | |
| ▲ | not2b 9 minutes ago | parent | next [-] | | I understand why OpenAI is trying to reduce its costs, but the claim that AI crawlers aren't creating very significant load simply isn't true, especially for those crawlers that ignore robots.txt and hide their identities. This is direct financial damage, and it's particularly hard on nonprofit sites that have been around a long time. | |
| ▲ | bakugo an hour ago | parent | prev | next [-] | | The cost is so marginal that many, many websites have been forced to add cloudflare captchas or PoW checks before letting anyone access them, because the server would slow to a crawl from 1000 scrapers hitting it at once otherwise. | |
| ▲ | nslsm an hour ago | parent | prev | next [-] | | The issue is that there are so many awful webmasters that have websites that take hundreds of milliseconds to generate and are brought down by a couple requests a second. | | |
| ▲ | bakugo an hour ago | parent [-] | | OpenAI must be the most awful webmasters of all, then, to need such sophisticated protections. |
| |
| ▲ | AtlasBarfed 20 minutes ago | parent | prev | next [-] | | Because you say it is? I obviously disagree. I mean, on top of this we are talking about not-open OpenAI. | |
| ▲ | karlshea 20 minutes ago | parent | prev [-] | | I don’t know what world you live in but it’s not this one. |
| |
| ▲ | Aurornis an hour ago | parent | prev | next [-] | | I interpreted scraping in the context of this: > we want to keep free and logged-out access available for more users I have no doubt that many people see the free ChatGPT access as a convenient target for browser automation to get their own free ChatGPT pseudo-API. | |
| ▲ | ProofHouse 2 hours ago | parent | prev | next [-] | | The irony is thick | |
| ▲ | sabedevops 3 hours ago | parent | prev | next [-] | | Seriously. The hypocrisy is staggering! | |
| ▲ | zer00eyz 2 hours ago | parent | prev [-] | | " Integrity at OpenAI .. protect ... abuse like bots, scraping, fraud " Did you mean to use the word hypocrisy? If not, I'm happy to have said it. I just want to note that it is well documented how good the support is for actual malware... |
|
|
| ▲ | everdrive 4 hours ago | parent | prev | next [-] |
| It's getting to the point where a user needs at minimum two browsers. One to allow all this horrendous client checking so that crucial services work, and another browser to attempt to prevent tracking users across the web. Nick, I understand the practical realities regarding why you'd need to try to tamp down on some bot traffic, but do you see a world where users are not forced to choose between privacy and functionality? |
| |
| ▲ | mememememememo 3 hours ago | parent | next [-] | | Local models for privacy. You want to go to the world's best hotel? You are gonna be on their CCTV. Staying at home is crappier but private. Unfortunately, for the first time Moore's law isn't helping (e.g. give a poor person an old laptop, install Linux, and they'll be fine). They can do that and all is good, except no LLM. | | |
| ▲ | karlgkk 2 hours ago | parent [-] | | > You want to go to the world's best hotel? You are gonna be on their CCTV. ironically, in high end hotels, there's often a lot less cctv. not none. just less. rich people enjoy privacy | | |
| ▲ | Barbing an hour ago | parent [-] | | So they’re not just hidden better? Does make sense. Well, I can use the world’s best safety deposit box without being on CCTV while I pass secrets in and out of it, right? Just not for free. Bummer, this sounds like it is about to turn into a Monero ad (“let us pay privately”) |
|
| |
| ▲ | 0x3f 3 hours ago | parent | prev | next [-] | | Meet me in a cafe and I will sign a JWT saying you're not a bot. You can submit this to whoever will accept it. | | |
| ▲ | magicseth 3 hours ago | parent | next [-] | | If Apple approves it, I've got a solution: a keyboard that attests to your humanity https://typed.by/magicseth/2451#2NyGLfAQxmqRiAOTlaX7ma3G4d1o... | | |
| ▲ | mzajc 3 hours ago | parent | next [-] | | Brilliant! Just the thing we want: more hardware attestation, more deanonymization, less user control, all diligently orchestrated in a repository where the only contributor is Anthropic Claude [0]. Comes complete with a misaligned ASCII diagram in the README to show how much effort the humans behind it put in! Yes, even their "humanifesto" is LLM output, and is written almost exclusively in the "it's not X <emdash> it's Y" style. [0]: https://github.com/magicseth/keywitness/graphs/contributors | | |
| ▲ | delish 3 hours ago | parent | next [-] | | Those are all situationally-valid criticisms, but I've long thought the ability to have smartphones' cameras cryptographically sign photos is good when available. The use case is demonstrating a photo wasn't doctored, and that it came from a device associated with e.g. a journalist, who maintains a public key. Of course, it should be optional. | | |
| ▲ | magicseth 2 hours ago | parent [-] | | Yes! That's what I'm getting at. This protocol optionally allows you to sign with your private key, but you don't have to for the protocol to provide utility. It could just be enough to say "if you trust magicseth's binary and apple, then this was typed one letter at a time" There's nothing stopping folks from typing a message an LLM wrote one at a time, but the idea of increasing the human cost of sending messages is an interesting one, or at least I thought :-( |
| |
| ▲ | magicseth 2 hours ago | parent | prev | next [-] | | Hi! I want anonymity! I also want to be able to prove what level of effort has been put into something. I think there's room for both. This is an encrypted proof that I wrote something on a keyboard that tracks fingers. The protocol allows you to optionally sign it with your identity, but that isn't strictly required. It is an attempt at putting something into the conversation more than just "OSS is broken because there are too many slop PRs." What if OSS required a human to attest that they actually looked at the code they're submitting? This tool could help with that. Yes, LLMs were used greatly in the production of this prototype! It doesn't change the goal of the experiment, or its potential utility! Do you see any potential area in your world where some piece of this is valuable? | |
| ▲ | Arainach 3 hours ago | parent | prev [-] | | > Yes, even their "humanifesto" is LLM output, and is written almost exclusively in the "it's not X <emdash> it's Y" style. ....no. There's not a single occurrence of that. https://keywitness.io/manifesto There are six emdashes on that page. NONE of them are "it's not X, it's Y". > Emails, messages, essays, code reviews, love letters — all suspect. > We believe this can be solved — not by detecting AI, but by proving humanity. > KeyWitness captures cryptographic proof at the point of input — the keyboard. > When you seal a message, the keyboard builds a W3C Verifiable Credential — a self-contained proof that can be verified by anyone, anywhere, without trusting us or any central authority. > That's an alphabet of 774 symbols — each carrying log2(774) ≈ 9.6 bits. 27 emoji for 256 bits. > They're a declaration: this message was written by a person — one of the diverse, imperfect, irreplaceable humans who still choose to type their own words. Clarifications: 4 Continuation from a list: 1 Could just be a comma: 1 "It's not X -- it's Y": 0. If you're going to make lazy commentary about good writing being AI, please at least be sure that you're reading the content and saying accurate things. | | |
| ▲ | magicseth 2 hours ago | parent | next [-] | | It is largely written by iteration with an LLM! No need to speculate or analyze em dashes :-) The emoji idea was mine. I like it :-) Unfortunately, it doesn't work in places like HN that strip out emoji, so I had to make a base64 encoding option. The goal was to create an effective encryption key for the URL hash (so it doesn't get sent to the server). And encoding skin tone with human emojis allows a super dense bit/visual character encoding that ALSO is a cute reference to the humans I'm trying to center with this project! | |
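[Editor's note: for anyone unfamiliar with the trick being described — content after `#` in a URL is never sent in the HTTP request, so a key placed there stays client-side. A rough sketch of the base64 variant (illustrative function names, not the actual typed.by code):]

```typescript
// URL fragments (#...) never leave the browser, so a key encoded into the
// fragment is visible to the page's JS but not to the server.
// Encode raw key bytes as base64url for a compact, link-safe representation.
function keyToFragment(key: Uint8Array): string {
  let binary = "";
  for (const b of key) binary += String.fromCharCode(b);
  // btoa is available in browsers and modern Node; make the output URL-safe
  return btoa(binary).replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");
}

function fragmentToKey(fragment: string): Uint8Array {
  const b64 = fragment.replace(/-/g, "+").replace(/_/g, "/");
  const binary = atob(b64); // forgiving base64: omitted padding is accepted
  return Uint8Array.from(binary, (c) => c.charCodeAt(0));
}
```

In the browser you would read the key back from `location.hash`; the request line and headers sent to the server contain only the path before the `#`.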
| ▲ | josephg 2 hours ago | parent | prev | next [-] | | > We believe this can be solved — not by detecting AI, but by proving humanity “It's not X -- it's Y": 1 | |
| ▲ | dandellion 2 hours ago | parent | prev | next [-] | | It's either a bot, or someone who writes exactly like a bot. I don't care which it is, both go to the discard pile. | | | |
| ▲ | Velocifyer 2 hours ago | parent | prev [-] | | <redacted because my friend posted it but accidentaly used my account> | | |
| ▲ | magicseth 2 hours ago | parent [-] | | Oh you think it's stupid? It was an attempt to encode an encryption key that isn't sent to the server in a way that is minimally invasive. The skin tone emojis allow pretty high byte density, and also are cute! Sorry it doesn't meet your needs. There is irony in having an AI-generated humanifesto. Could it be intentional? hmm? Is there no irony in deriding a project for being potentially LLM generated, when its goal is to aid people in differentiating?
:shrug: |
|
|
| |
| ▲ | Terretta an hour ago | parent | prev | next [-] | | The first widely distributed, open source version of this typist timing validation idea I saw (and incorporated into my own software at the time) was released by Michael Crichton in Creative Computing magazine, which printed the code: a password second-factor checker, where the first factor was a known phrase (or even your name) and the second factor was your idiosyncratic typing pattern. Original here: https://archive.org/details/sim_creative-computing_1984-06_1... | |
| ▲ | scoofy 3 hours ago | parent | prev | next [-] | | Somewhere there is someone 3D printing a keyboard cover that an llm can type with. | | |
| ▲ | magicseth 2 hours ago | parent [-] | | I'm actually building a physical keyboard for those people who don't have iPhones! Though given the reaction I'm seeing here, I probably won't share it with this audience :-P It has capacitive keys, a secure enclave, and a fingerprint sensor. |
| |
| ▲ | Velocifyer 2 hours ago | parent | prev | next [-] | | This does not prove anything, and it is only available to users with X.com accounts (you need an X.com account to download the app). | |
| ▲ | magicseth 2 hours ago | parent [-] | | Hi! You don't need an x.com account to download, that's just the easiest way to dm me. If you're actually interested, I can let you try it! The source is also available. It proves 1) that an apple device with a secure enclave signed it. 2) that my app signed it. If you trust the binary I've distributed is the same as the one on the app store, then it also proves:
3) that it was typed on my keyboard not using automation (though as others have mentioned, you could build a capacitive robot to type on it)
4) that the typer has the same private key as previous messages they've signed (if you have an out of band way to corroborate that's great too)
5) optionally, that the person whose biometrics are associated with the device approved it. There is also an optional voice-to-text mode that uses a 3D face mesh to attempt to verify the words were spoken live. Not every level of verification is required by the protocol, so you could attest that it was written on a keyboard, but not who wrote it (not yet implemented in the client app). The protocol doesn't require you to run my app; if you compile it yourself, you can create your own web of trust around you! | | |
| ▲ | Velocifyer an hour ago | parent [-] | | >that an apple device with a secure enclave signed it. What Apple devices are supported? All I have is an iPhone 4 running an old iOS version (pre iOS 7) (which I will not update and I don't think has a secure enclave) and an M1 Mac mini and some Lightning EarPods and an Apple Thunderbolt Display and some USB-A chargers and some old MacBooks. I saw something about Android (https://typed.by/manifesto#:~:text=Android,Integrity) on the website, but it mentioned Play Integrity, which I do not have because I use LineageOS for MicroG. I think that the concept is stupid because it would require somehow proving that the app is not modified (which is impractical) and that there is no stylus on a motor or fake screen (which is also impractical). I think that a better approach would be to form a Web of Trust where only people's (not just humans; this would include all animals and potentially aliens, but no clankers) certificates are signed, with an interface that is friendly to people who are not very into technology, and with some way to avoid revealing who your friends are; but this would still allow someone to get an attestation for their robot. |
|
| |
| ▲ | toss1 2 hours ago | parent | prev [-] | | Oh Gawd, not this idea again! This idea of capturing the timing of people's keystrokes to identify them, ensure it is them typing their passwords, or even using the timing itself as a password has been recurring every few years for at least three decades. It is always just as bad. Because there are so many cases where it completely fails. The first case is a minor injury to either hand — just put a fat bandage on one finger from a minor kitchen accident, and you'll be typing completely differently for a few days. Or, because I just walked into my office eating a juicy apple with one hand and I'm in a hurry typing my PW with my other hand because someone just called with an urgent issue I've got to fix, aaaaannnd, your software balks because I'm typing with a completely different cadence. The list of valid reasons for failure is endless wherein a person's usual solid patterns are good 90%+ of the time, but will hard fail the other 10% of the time. And the acceptable error rate would be 2-4 orders of magnitude less. It's a mystery how people go all the way to building software based on an idea that seems good but is actually bad, without thinking it through, or even checking how often it has been done before and failed? | | |
| ▲ | monocularvision 38 minutes ago | parent | next [-] | | You might want to check out “How it Works” on the site as none of what you said applies: https://typed.by/how | | |
| ▲ | josefx 19 minutes ago | parent [-] | | Then why does your link claim the following? > While you type, the keyboard quietly records how you type — the rhythm, the pauses between keys, where your finger lands, how hard you press. > Nobody types the same way. Your pattern is as unique as your handwriting. That's the signal. |
| |
| ▲ | magicseth 2 hours ago | parent | prev [-] | | That's not what this is. at all. |
|
| |
| ▲ | jagged-chisel 3 hours ago | parent | prev | next [-] | | Sounds like we’re bringing back the PGP key signing parties | | |
| ▲ | zar1048576 20 minutes ago | parent | next [-] | | Definitely miss those! | |
| ▲ | __MatrixMan__ 3 hours ago | parent | prev [-] | | The sooner we do the better. | | |
| ▲ | hathawsh 2 hours ago | parent [-] | | I wonder what the PGP signing concept does to thwart people who want to profit and don't care about the public good. It seems like anyone who attends a signing party can sell their key to the highest bidder, leading to bots and spammers all over again. | | |
| ▲ | 0x3f 2 hours ago | parent [-] | | You can never prevent things like this, but you can make it expensive enough to effectively solve the problem for almost all use cases. |
|
|
| |
| ▲ | tshaddox 3 hours ago | parent | prev | next [-] | | Doesn’t really make sense, because any service can just say “you must paste your human-attestation JWT here to use this service” and plenty of people will. | | |
| ▲ | 0x3f 3 hours ago | parent [-] | | You can just decay your trust level based on the `iat` value. That way people will need to keep buying me coffee. I can optionally chide them for giving out their token. If you're engaging with the idea seriously, I suppose we'd need to build a reputation or trust network or something. Although if you're talking about replay attacks specifically, there are other crypto based solutions for that. | | |
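[Editor's note: the `iat`-based decay idea is simple enough to sketch. This is illustrative only — the half-life, threshold, and function names are made up, and it assumes the JWT signature has already been verified by other means:]

```typescript
// Trust starts at 1.0 when the token is issued (iat, in Unix seconds)
// and halves every HALF_LIFE_DAYS, so stale attestations fade out.
const HALF_LIFE_DAYS = 30;

function trustLevel(iatSeconds: number, nowSeconds: number = Date.now() / 1000): number {
  const ageDays = Math.max(0, nowSeconds - iatSeconds) / 86_400;
  return Math.pow(0.5, ageDays / HALF_LIFE_DAYS);
}

// Accept the token only while decayed trust clears a threshold,
// forcing periodic re-attestation (i.e. another coffee).
function isAcceptable(iatSeconds: number, threshold = 0.25): boolean {
  return trustLevel(iatSeconds) >= threshold;
}
```

With these numbers, a token would stop being accepted roughly 60 days after issuance (two half-lives takes trust to 0.25).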
| ▲ | magicseth an hour ago | parent [-] | | I am engaging with this seriously! I don't know if there will be any real solution. But I think it's worth exploring. |
|
| |
| ▲ | 3 hours ago | parent | prev [-] | | [deleted] |
| |
| ▲ | kevin_thibedeau 2 hours ago | parent | prev | next [-] | | I've been doing that for years. Cloudflare is slowly breaking more and more of the web. | |
| ▲ | madrox 3 hours ago | parent | prev | next [-] | | I am not Nick, but there are a few ways that world happens: the free tier goes away and what people pay for more correctly reflects what they use, this all becomes cheap enough that it doesn't matter, or we come up with an end-to-end method of determining that usage is triggered by a person. Another way is to just do better isolation as a user. That's probably your best shot without hoping these companies change policies. |
| ▲ | atoav 2 hours ago | parent | prev | next [-] | | What if I run a website and OpenAI produces bot traffic? Do they also consider it abuse when they do it? | |
| ▲ | gruez 3 hours ago | parent | prev | next [-] | | >It's getting to the point where a user needs at minimum two browsers. One to allow all this horrendous client checking so that crucial services work, and another browser to attempt to prevent tracking users across the web. What are you talking about? It works fine with Firefox with RFP and a VPN enabled, which is already more paranoid than the average configuration. There are definitely sites where this configuration would get blocked, but ChatGPT isn't one of them, so you're barking up the wrong tree here. |
| ▲ | SV_BubbleTime 3 hours ago | parent | prev [-] | | Firefox multicontainers are pretty cool. But it’s an advanced process that most people wouldn’t do or do correctly. | | |
| ▲ | Sabinus 3 hours ago | parent | next [-] | | I love the containers too. My current use case is to keep my YouTube account separate from my Google one. Google doesn't need all that behavioural data in one place. It's a pity Firefox doesn't get the praise it deserves half as much as it cops criticism. | |
| ▲ | halJordan 2 hours ago | parent | prev | next [-] | | It is absolutely not an advanced process. It's clicking a GUI. It's not advanced thinking to understand profiles. It's a basic ability to hold multiple things in your mind at once. Telling people that's difficult only increases the societal problem that being ignorant is OK. | |
| ▲ | docjay 2 hours ago | parent [-] | | “Difficult” is a relative term. They were saying it was a difficult concept for them, not you. In order to save their ego, people often phrase those events to be inclusive of the reader; it doesn’t feel as bad if you imagine everyone else would struggle too. Pay attention and you’ll notice yourself doing it too. “Ignorant” is also infinite - you’re ignorant of MANY things as well, and I’m sure you would struggle with things I can do with ease. For example, understanding the meaning behind what’s being said so I know not to brow-beat someone over it. |
| |
| ▲ | Imustaskforhelp 3 hours ago | parent | prev [-] | | The possibilities with Firefox multi containers and automation scripts are truly endless. It's also possible to make Firefox route each container through a different proxy, which could even be running locally and connecting to multiple different VPNs. I haven't tried doing that, but it's certainly possible. In effect you can run different browsers, with completely new identities and sometimes new IPs, within the convenience of one. It's really underrated. I don't use the IP part of this, but I use multi containers quite a lot on Zen; they are a core part of how I browse the web, and there are many cool things which can be done/have been done with them. |
|
|
|
| ▲ | halflife 4 hours ago | parent | prev | next [-] |
Don’t know if it’s related to the article, but the chat UI performance becomes absolutely horrendous in long chats. Typing in the chat box is slow, rendering lags, and sometimes it gets stuck altogether. I have a research chat that I have to think twice before messaging because the performance is so bad. Running on iPhone 16 Safari, and MacBook Pro M3 Chrome. |
| |
| ▲ | bschwindHN 7 minutes ago | parent | next [-] | | Almost certainly running some sort of O(n^2) algorithm on the chat text every key press. Or maybe just insane hierarchies of HTML. Either way, pretty wild that you can have billions of dollars at your disposal, your interface is almost purely text, and still manage to be a fuckup at displaying it without performance problems. | |
| ▲ | DenisM 3 hours ago | parent | prev | next [-] | | In the good old days Netflix had "Dynamic HTML" code that would take a DOM element which scrolled out of the viewport and move it to the position where it was about to be scrolled in from the other end. Hence the number of DOM elements stayed constant no matter how far you scroll, and the only thing that grows is the Y coordinate. They did it because a lot of devices running Netflix (TVs, DVD players, etc.) were underpowered and Netflix was not keen on writing separate applications. They did, however, invest in a browser engine that had HW acceleration not just for video playback but also for moving DOM elements. Basically, sprites. The lost art of writing efficient code... | |
| ▲ | zdragnar 3 hours ago | parent | next [-] | | > Hence he number of DOM elements stayed constant no matter how far you scroll and the only thing that grows is the Y coordinate. This is generally called virtual scrolling, and it is not only an option in many common table libraries, but there are plenty of standalone implementations and other libraries (lists and things) that offer it. The technique certainly didn't originate with Netflix. | | |
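[Editor's note: the windowing technique both comments describe fits in a few lines. This is an illustrative sketch, not Netflix's or any library's actual code; the row height and overscan constants are assumptions, and real implementations also handle variable-height rows:]

```typescript
// Given the scroll position and viewport size, compute which slice of a long
// fixed-height list actually needs DOM nodes. Everything else is represented
// by a single spacer element so the scrollbar geometry stays correct.
const ROW_HEIGHT = 32; // px, assumed uniform
const OVERSCAN = 5;    // extra rows above/below to avoid flicker while scrolling

function visibleRange(scrollTop: number, viewportHeight: number, totalRows: number) {
  const first = Math.max(0, Math.floor(scrollTop / ROW_HEIGHT) - OVERSCAN);
  const last = Math.min(totalRows, Math.ceil((scrollTop + viewportHeight) / ROW_HEIGHT) + OVERSCAN);
  return {
    first,
    last,                                 // exclusive upper bound
    offsetY: first * ROW_HEIGHT,          // translateY applied to the rendered slice
    totalHeight: totalRows * ROW_HEIGHT,  // height of the spacer element
  };
}
```

On every scroll event you re-render only rows `[first, last)`, so a 10,000-row list never holds more than a few dozen DOM nodes at once.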
| ▲ | tmpz22 2 hours ago | parent | next [-] | | It's been about three years, but infinite scroll is nuanced depending on the content that needs to be displayed. It's a tough nut to crack and can require a lot of maintenance to keep stable. None of which ChatGPT can handle, presumably. | |
| ▲ | dotancohen 2 hours ago | parent | prev [-] | | And yet ChatGPT does not use it. GP was mentioning that a solution to the problem exists, not that Netflix specifically invented it. Your quip that the technique is not specific to Netflix bolsters the argument that OpenAI should code that in. | | |
| ▲ | jasonfarnon 2 hours ago | parent | next [-] | | I'm ignorant of the tech here. But I have noticed that ctrl-F search doesn't work for me on these longer chats. Which is what made me think they were doing something like virtual scrolling. I can't understand how the UI can get so slow if a bunch of the page is being swapped out. | | |
| ▲ | dotancohen an hour ago | parent [-] | | Ctrl-A for select all doesn't work either. I actually wondered how they broke that. |
| |
| ▲ | BoorishBears 2 hours ago | parent | prev [-] | | They didn't actually name the solution: the solution is virtualization. They described Netflix's implementation, but if someone actually wanted to follow up on this (even for their own personal interest), Dynamic HTML would not get you there, while virtualization would across all the places it's used: mobile, desktop, web, etc. |
|
| |
| ▲ | groundzeros2015 2 hours ago | parent | prev [-] | | This is how every scrolling list has been implemented since the 80s. We actually lost knowledge about how to build UI in the move to web | | |
| ▲ | bloomca an hour ago | parent [-] | | The biggest issue is that there is no native component support for that. So everyone implements their own, and it is both brittle and introduces issues like: - "ctrl + f" search stops working as expected
- the scrollbar has the wrong dimensions
- sometimes the content might jump (a common web issue overall) The reason we lost it is that the web supports wildly different types of layouts, so it is really hard to optimize the same way it is possible in native apps (they are much less flexible overall). | |
| ▲ | TeMPOraL 23 minutes ago | parent [-] | | Right. This is one of my favorite examples of how badly bloated the web is, and how full of stupid decisions. Virtual scrolling means you're maintaining a window into content, not actually showing full content. Web browsers are perfectly fine showing tens of thousands of lines of text, or rows in a table, so if you need virtual scrolling for less, something already went badly wrong, and the product is likely to be a toy, not a tool (working definition: can it handle a realistic amount of data people would use for productive work - i.e. 10k rows, not 10 rows). |
|
|
| |
| ▲ | stacktraceyo 4 hours ago | parent | prev | next [-] | | Same. It’s wild how bad it can get with just like a normal longer running conversation | |
| ▲ | moffkalast 3 hours ago | parent | prev [-] | | Yeah just had this earlier today, I had to write my response in vscode and paste it in, there were literal seconds of lag for typing each character. Typical bloated React. | | |
| ▲ | scq 3 hours ago | parent [-] | | Just because a web application uses React and is slow, it does not follow that it is slow because of React. It's perfectly possible to write fast or slow web applications in React, same as any other framework. Linear is one of the snappiest web applications I've ever used, and it is written in React. | | |
| ▲ | brigandish 2 hours ago | parent [-] | | Does not, in the seeming absence of other snappy examples and the overwhelming evidence of many, many slow React apps, the exception prove the rule? | | |
| ▲ | scq 2 hours ago | parent [-] | | There are plenty of snappy examples. Off the top of my head: Discord, Netflix, Signal Desktop, WhatsApp Web. |
|
|
|
|
|
| ▲ | sebmellen 4 hours ago | parent | prev | next [-] |
| Great to hear from a first-party source. I'm a Pro subscriber and my team spends well over two thousand dollars per month on OpenAI subscriptions. However, even when I'm logged in with my Pro account, if I'm using a VPN provider like Mullvad, I often have trouble using the chat interface or I get timeout errors. Is this to be expected? I would presume that if I'm authenticated and paying, VPN use wouldn't be a worry. It would be nice to be able to use the tool whether or not I'm on a VPN. |
| |
| ▲ | JumpCrisscross an hour ago | parent [-] | | > even when I'm logged in with my Pro account, if I'm using a VPN provider like Mullvad, I often have trouble using the chat interface or I get timeout errors Heard from a founder who recently switched his company to Claude due to OpenAI's lagginess; it's absolutely an OpenAI problem, not an AI problem in general. |
|
|
| ▲ | driverdan an hour ago | parent | prev | next [-] |
| Brand new account with 2 comments in this thread. How can we be sure you're not a bot deployed to defend OpenAI? Please run Cloudflare's privacy invasive tool and share all the values it generates here so we can determine if you're a real person. |
|
| ▲ | seba_dos1 4 hours ago | parent | prev | next [-] |
| Hi! It's all perfectly understandable - after all, we use things like Anubis to protect our services from OpenAI and similar actors and keep them available to the real users for exactly the same reasons. |
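[Editor's note: for readers who haven't seen how tools like Anubis work under the hood — the gist is a hashcash-style proof of work: the client burns CPU finding a nonce whose hash meets a difficulty target, and the server verifies it with a single hash. A toy sketch, not Anubis's actual challenge format:]

```typescript
import { createHash } from "crypto";

// Count leading zero bits of a hash digest.
function leadingZeroBits(digest: Uint8Array): number {
  let bits = 0;
  for (const byte of digest) {
    if (byte === 0) { bits += 8; continue; }
    bits += Math.clz32(byte) - 24; // clz32 counts from bit 31; a byte occupies bits 7..0
    break;
  }
  return bits;
}

// Client side: brute-force a nonce meeting the difficulty target.
// Each extra bit of difficulty doubles the expected work.
function solve(challenge: string, difficulty: number): number {
  for (let nonce = 0; ; nonce++) {
    const digest = createHash("sha256").update(`${challenge}:${nonce}`).digest();
    if (leadingZeroBits(digest) >= difficulty) return nonce;
  }
}

// Server side: a single hash verifies the claimed work.
function verify(challenge: string, nonce: number, difficulty: number): boolean {
  const digest = createHash("sha256").update(`${challenge}:${nonce}`).digest();
  return leadingZeroBits(digest) >= difficulty;
}
```

The asymmetry is the point: a legitimate visitor pays the cost once per challenge, while a scraper fleet pays it on every request.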
|
| ▲ | noosphr 4 hours ago | parent | prev | next [-] |
| >These checks are part of how we protect our first-party products from abuse like bots, scraping, fraud, and other attempts to misuse the platform. Can you share these mitigations so we can mitigate against you? |
| |
| ▲ | 0x3f 3 hours ago | parent | next [-] | | It's just Cloudflare. Bypassing it is a whole industry. | | |
| ▲ | zenethian 2 hours ago | parent [-] | | I read the comment as “use it to mitigate against OpenAI bots scraping the web” and not to mitigate Cloudflare. | | |
| ▲ | 0x3f 2 hours ago | parent [-] | | Well it's the same answer isn't it... use Cloudflare. And hope OpenAI doesn't have a backroom scraping deal with them, which they might. |
|
| |
| ▲ | dawnerd 3 hours ago | parent | prev [-] | | Flaresolverr is one way. Isn’t perfect but bypasses a lot. |
|
|
| ▲ | c0_0p_ 4 hours ago | parent | prev | next [-] |
| Can't have those bots or scrapers running amok can we... |
|
| ▲ | the_gipsy 3 hours ago | parent | prev | next [-] |
But is the title true? Is typing specifically blocked, or does it just block submitting the text? I ask because I have seen huge variations in load time. Sometimes I had to wait seconds before being able to type. Nowadays it seems better, though. |
|
| ▲ | mehov 4 hours ago | parent | prev | next [-] |
| > because we want to keep free and logged-out access But don't you run these checks on logged-in users too? |
| |
| ▲ | MyNameIsNickT 4 hours ago | parent [-] | | Yep, on logged-in users too. The reason is basically the same: we want scarce compute going to real people, not attackers. Being logged in is one useful signal, but it doesn’t fully prevent automation, account abuse, or other malicious traffic, so we apply protections in both cases. | | |
| ▲ | angoragoats 3 hours ago | parent | next [-] | | Nothing you do can fully prevent automation. Someone who wants to automate requests badly enough will be able to do it, especially when the “protections” are as easy to decrypt and analyze as the OP proved. Meanwhile, the rest of us (well, not me, because I don’t use your garbage product, but lots of others do) have to suffer and have our compute resources used up in the name of “protection.” | | |
| ▲ | 3form 3 hours ago | parent | next [-] | | Yeah, that's it. Also, it is a bit amusing to me - "We want to prevent automation", says the employee of Let's Automate Inc. | |
| ▲ | geetee 3 hours ago | parent | prev [-] | | [flagged] |
| |
| ▲ | jorvi 2 hours ago | parent | prev | next [-] | | I'm glad you guys at least went with Cloudflare. LMArena went with Google's reCAPTCHA, which is plain evil. It'll often gaslight you and pretend you failed a captcha as simple as identifying fire hydrants. Another lovely trick is asking you to identify bridges or buses, when in actuality it also wants you to identify viaducts or semi-trucks. | |
| ▲ | salawat 2 hours ago | parent | prev [-] | | More like "We want your money, but don't want to provide service." Are you sure OpenAI isn't morphing into a finance/insurance company? | | |
| ▲ | pixl97 2 hours ago | parent [-] | | While OAI is one of the more hypocritical of the bunch, it is not uncommon for paid services to have some limitations in their terms of service. Like going into a store and buying stuff: it doesn't mean a free-for-all where you do whatever you want. | |
| ▲ | zamadatix 40 minutes ago | parent [-] | | Limitations on the ChatGPT subscription should have to do with the usage limits of the tier you paid for (and I don't think anyone has a problem with that). If I'm within the limits of the requests I paid for, then it's usage rather than abuse. "Abuse" checks should only come into play when someone tries to leverage the free tier. It reminds me of those cable companies that sell "unlimited" plans and then claim customers who use more than X GB/month are abusing the service, rather than just stating the real limits, because "unlimited" sounds better in marketing. |
|
|
|
|
|
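The explicit per-tier limit zamadatix describes above is usually implemented as a token bucket: you get the capacity you paid for, it refills at a fixed rate, and anything within it is usage, not abuse. A minimal sketch follows; the class and parameter names are illustrative, not anything OpenAI actually runs.

```python
import time

class TokenBucket:
    """Transparent rate limit: up to `capacity` requests at once,
    refilled continuously at `rate` tokens per second."""

    def __init__(self, capacity: float, rate: float):
        self.capacity = capacity
        self.rate = rate
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

bucket = TokenBucket(capacity=3, rate=0.5)
print([bucket.allow() for _ in range(5)])  # burst of 3 passes, the rest are throttled
```

The appeal of this design for the "just say the real limits" argument is that the limit is a number you can publish, rather than a behavioral judgment call.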
| ▲ | pdntspa 3 hours ago | parent | prev | next [-] |
| Y'all just salty that DeepSeek et al are training their LLMs on yours |
|
| ▲ | myHNAccount123 3 hours ago | parent | prev | next [-] |
| Can you fix the resizing text box issue on Safari when a new line is inserted? When your question wraps to a new line, Safari locks up for a few seconds, and it's really annoying. You can test this by pasting text, too. |
|
| ▲ | tomalbrc 11 minutes ago | parent | prev | next [-] |
| Fake Account |
|
| ▲ | huertouisj 3 hours ago | parent | prev | next [-] |
| Sometimes I paste giant texts (think summarization) into the ChatGPT (paid) webapp, and I've noticed that the CPU fans spin up for about 5 seconds afterward, as if the text is "processed" client-side somehow. This is before hitting "submit" to send the prompt to the model. I assumed it was maybe some tokenization going on client-side, but now I realize it might be some proof of work related to prompt length? |
|
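The behavior described above is consistent with a hashcash-style proof of work, though that's pure speculation about what ChatGPT actually does. The sketch below only illustrates the general technique; in particular, the length-based difficulty scaling is a hypothetical, not OpenAI's scheme.

```python
import hashlib

def solve(payload: bytes, bits: int) -> int:
    """Search for a nonce such that SHA-256(payload || nonce)
    falls below a target with `bits` leading zero bits."""
    target = 1 << (256 - bits)
    nonce = 0
    while int.from_bytes(hashlib.sha256(payload + nonce.to_bytes(8, "big")).digest(),
                         "big") >= target:
        nonce += 1
    return nonce

def check(payload: bytes, bits: int, nonce: int) -> bool:
    """Verification is a single hash, no search required."""
    digest = hashlib.sha256(payload + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - bits))

prompt = b"x" * 50_000                    # a large pasted text
bits = 8 + min(8, len(prompt) // 25_000)  # hypothetical: difficulty grows with size
nonce = solve(prompt, bits)
assert check(prompt, bits, nonce)
```

Solving costs roughly 2^bits hash evaluations while checking costs one; that asymmetry is what makes bulk scraping expensive without noticeably taxing a single interactive user, and it would explain a few seconds of fan noise on a very large paste.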
| ▲ | JumpCrisscross 2 hours ago | parent | prev | next [-] |
| > we want to keep free and logged-out access available for more users How does this comport with OpenAI's new B2B-first strategy? > We also keep a very close eye on the user impact Are paid or logged-in users also penalised? |
|
| ▲ | dev1ycan 3 hours ago | parent | prev | next [-] |
| "abuse like bots, scraping, fraud, and other attempts to misuse the platform" This has to be a joke, right? |
| |
| ▲ | pera 3 hours ago | parent | next [-] | | I really can't tell for sure (new user posting a ridiculously hypocritical corporate message on a Sunday), but if GP actually works for OpenAI, the lack of self-awareness is seriously striking. | | | |
| ▲ | 3 hours ago | parent | prev [-] | | [deleted] |
|
|
| ▲ | vkou 3 hours ago | parent | prev | next [-] |
| > Hey! I'm Nick, and I work on Integrity at OpenAI. These checks are part of how we protect our first-party products from abuse like bots, scraping, fraud, and other attempts to misuse the platform. How can first-party products protect themselves from abuse by OpenAI's bots and scraping? |
| |
| ▲ | mystraline 2 hours ago | parent [-] | | This is a completely in-scope question. How do we defend against your scraping, OpenAI? I don't want any of my content scraped or seen by you all. Frankly, fuck you all for thinking my content is owned by you. | | |
|
|
| ▲ | 0dayman 3 hours ago | parent | prev | next [-] |
| Hi Nick, your software is a horrendous encroachment on users' privacy, and its subpar quality is obvious to those of us who know what we're working with. We don't use your product here. |
|
| ▲ | rglullis 3 hours ago | parent | prev | next [-] |
| I shouldn't be giving ideas to your boss, but I bet he would be interested in making ChatGPT available only to paying customers, or free for those who get their eyes scanned by The Orb. Give 30 days of raised limits and we're all set to live in the dystopia he wants. |
|
| ▲ | piskov 4 hours ago | parent | prev | next [-] |
| Tangential question: are there ChatGPT app devs on X? There are a few from the Codex team, but I couldn't find people from "ordinary" ChatGPT. Also, if you could pass this along: it takes 5 taps to change thinking effort on iOS, and there's no way at all (as in completely hidden) on macOS. If I were to guess, it seems you were trying to lower token usage :-). Why the setting is only nicely available on web and Windows is beyond me. |
|
| ▲ | andrepd 4 hours ago | parent | prev | next [-] |
| > OpenAI: These checks are part of how we protect products from abuse like bots, scraping, and other attempts to misuse the platform. This would be fucking HILARIOUS if it wasn't so tragic. |
| |
|
| ▲ | user3939382 4 hours ago | parent | prev | next [-] |
| Have you given any thought to what we trade when big tech elects one corporation as the gatekeeper for vast swaths of the Internet? |
|
| ▲ | crest 2 hours ago | parent | prev | next [-] |
| Then make sure they only target the free tier! |
|
| ▲ | quotemstr 3 hours ago | parent | prev | next [-] |
| We really need ZKPs of humanity |
| |
| ▲ | ctoth 3 hours ago | parent [-] | | No, we really don't. We don't need worldcoin, we don't need papers, please. We just don't. "Prove your humanity/age/other properties" with this mechanism quickly goes places you do not want it to go. | | |
| ▲ | Muromec 2 hours ago | parent | next [-] | | > quickly goes places you do not want it to go. Which places? | |
| ▲ | quotemstr 3 hours ago | parent | prev [-] | | No, it doesn't go places we "do not want it to go". What part of zero knowledge doesn't make sense? How precisely does a free, unlinkable, multi-vendor, open-source cryptographic attestation of recent humanity create something terrible? It would behoove people to engage with the substance of attestation proposals. It's lazy to state that any verification scheme whatsoever is equivalent to a panopticon: dystopia as a thought-terminating cliché. We really do have the technology now to attest biographical details in such a way that whoever attests to a fact about you can't learn the use to which you put that attestation, and in such a way that the person who verifies your attestation can see it's genuine without learning anything about you except the one bit of information you disclose. And no, such a ZK scheme does not instantly turn into some megacorp extracting monopoly rents from some kind of internet participation toll booth. Why would this outcome be inevitable? We have plenty of examples of fair and open ecosystems. It's just lazy to assert right out of the gate that any attestation scheme is going to be captured. So, please, can we stop casting every scheme for verifying facts about actors as the East German villain in a Cold War movie? We're talking about something totally different. | | |
| ▲ | ctoth 3 hours ago | parent | next [-] | | The ZK part isn't the problem. The "attestation of recent humanity" part is. Who attests? What happens when someone can't get attested? You've been to the doctor recently, right? Given them your SSN? Every identity system ever built was going to be scoped || voluntary. None of them stayed that way. Once you have the identity mechanism, "Oh it's zero knowledge! So let's use it for your age! Have you ever been convicted?" which leads to "mandated by employers" which leads to... We've seen this goddamn movie before. Let's just skip it this time? Please? | |
| ▲ | dzikimarian 3 hours ago | parent | prev [-] | | The part where FAANG does the usual Embrace, Extend, Extinguish, the masses don't care or understand, and we end up with yet another "sign in with..." that isn't open source nor zero-knowledge in practice and monetizes your every move. And probably at least one of the vendors has a massive leak that reveals a half-assed or even deliberately flawed implementation. |
|
|
|
|
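For the curious, the unlinkability quotemstr describes is old cryptography: Chaum-style blind signatures (an ancestor of modern anonymous-credential schemes such as Privacy Pass) let an issuer sign a credential without ever seeing it, and let anyone verify the signature without learning who obtained it. A toy sketch of the algebra with textbook RSA numbers; real deployments use large keys and hashed, padded messages, so this shows only the mechanism.

```python
from math import gcd

# Toy textbook RSA keypair (never use sizes like this in practice).
p, q = 61, 53
n, e, d = p * q, 17, 2753  # e*d ≡ 1 (mod (p-1)*(q-1))

def blind(m, r):    return (m * pow(r, e, n)) % n   # user hides m with factor r
def sign(blinded):  return pow(blinded, d, n)       # issuer signs without seeing m
def unblind(s, r):  return (s * pow(r, -1, n)) % n  # user strips r, recovering m^d
def verify(m, sig): return pow(sig, e, n) == m % n  # any verifier can check

m = 42   # the credential ("one recent human"), encoded as a number
r = 919  # random blinding factor, must be coprime to n
assert gcd(r, n) == 1

sig = unblind(sign(blind(m, r)), r)
assert verify(m, sig)  # valid signature on m, yet the issuer never saw m or sig
```

The issuer only ever sees `blind(m, r)`, which is statistically unrelated to `m`, so it cannot later link the presented credential back to the issuance event; that is the property the "zero knowledge" argument above rests on, separate from the policy question of who gets to attest in the first place.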
| ▲ | thegreatpeter 3 hours ago | parent | prev | next [-] |
| You’re doing God's work, sir, thank you! |
|
| ▲ | nickphx 3 hours ago | parent | prev | next [-] |
| The irony of your statement is hilarious, disappointing, and infuriating. |
|