| ▲ | Notion leaks email addresses of all editors of any public page (twitter.com) |
| 87 points by Tiberium 2 hours ago | 16 comments |
| |
|
| ▲ | Tiberium 3 minutes ago | parent | next [-] |
| Apparently this is even officially documented at https://www.notion.com/help/public-pages-and-web-publishing#... buried in a note: > Note: When you publish a Notion page to the web, the webpage’s metadata may include the names, profile photos, and email addresses associated with any Notion users that have contributed to the page. |
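The documented behavior above means a public page's served source can embed contributor records. As a minimal sketch of how such exposure is visible to any visitor (the embedded payload and field names here are invented for illustration, not Notion's actual schema), scanning a page's raw text for email-shaped strings would surface them:

```python
import re

# Hypothetical snippet of server-rendered state embedded in a public
# page's HTML; the field names are illustrative, not Notion's schema.
embedded_state = """
{"users": [
  {"name": "Alice Example", "email": "alice@example.com"},
  {"name": "Bob Example",   "email": "bob@example.com"}
]}
"""

# Any email-shaped string in the page source is readable by every visitor.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def exposed_emails(page_text: str) -> list[str]:
    """Return unique email-like strings found in raw page text."""
    return sorted(set(EMAIL_RE.findall(page_text)))

print(exposed_emails(embedded_state))
# → ['alice@example.com', 'bob@example.com']
```

The point is that "metadata" in the served page is not private: no API access or authentication is needed to read it.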
|
| ▲ | RomanPushkin 8 minutes ago | parent | prev | next [-] |
| This has been an issue for at least 5 years. I remember a dude from HN deanonymizing me around 5 years ago by looking at my Notion page. |
|
| ▲ | hohithere 2 minutes ago | parent | prev | next [-] |
| Any self hosted solution? |
|
| ▲ | DropDead an hour ago | parent | prev | next [-] |
| Big companies need to start caring more about the security and privacy of their users and employees |
| |
| ▲ | fnoef 20 minutes ago | parent | next [-] | | Nah. They care about profits only, the sooner the better, so everyone can cash out and move to their next “venture” | | | |
| ▲ | bitmasher9 43 minutes ago | parent | prev | next [-] | | I think we’ll start seeing consulting agencies advertise how many vulnerabilities they can resolve per million tokens, and engineering teams feeling pressure to merge this generated code. We’ll also see more token-heavy services like Dependabot, SonarQube, etc. that specialize in providing security-related PR reviews and codebase audits. This is one of the spaces where a small team could build something that quickly pulls great ARR numbers. | | |
| ▲ | contractlens_hn 34 minutes ago | parent | next [-] | | The same vertical-specialist logic applies in legal tech. Law firms are drowning in contract review — NDAs, MSAs, leases — and generic AI gives them vague answers with no accountability. The teams winning there aren't building 'AI for lawyers', they're building AI that cites every answer to a specific clause and pins professional liability to the output. That's a very different product than a chatbot. | |
| ▲ | delecti 17 minutes ago | parent | prev [-] | | Does SonarQube use LLMs these days? It always seemed like a bloated, Goodhart's-law-inviting waste of time, so hearing that doesn't surprise me at all. |
| |
| ▲ | estimator7292 42 minutes ago | parent | prev [-] | | The problem is that they don't "need" to. There are no consequences for not caring, and no incentive to care. We need laws and a competent government to force these companies to care by levying significant fines or jail time for executives depending on severity. Not fines like 0.00002 cents per exposed customer — existential fines like 1% of annual revenue for each exposed customer. If you fuck up bad enough, your company burns to the ground and your CEO goes to jail type consequences. | |
| ▲ | rafram 31 minutes ago | parent | next [-] | | This kind of response went out of fashion after Enron. Burning an entire company to the ground (in that case Arthur Andersen) and putting thousands out of work because of the misdeeds of a few - even if they were due to companywide culture problems - turned out to be disproportionate, wasteful, and cruel. | | |
| ▲ | knome 6 minutes ago | parent [-] | | the answer to that is a functional social safety net for the innocent employees to land in, not allowing companies to violate the law with impunity. |
| |
| ▲ | amelius 25 minutes ago | parent | prev [-] | | If the government wants me to take copyright and IP laws seriously, then they need to take my personal information seriously too. |
|
|
|
| ▲ | amazingamazing 38 minutes ago | parent | prev | next [-] |
| I've been toying with an architecture where the data for each user is actually stored with that user and only materialized on demand, so that many data leaks would yield little, since the server doesn't actually store most of the user data. I mention this because these sorts of leaks are inevitable as long as people are fallible; I feel the correct solution is to not store user data to begin with. Some problems I've identified:
1. Suppose you have x users and y groups, each of which requires some subset of x. Joining the data on demand can become expensive, O(x*y).
2. The main usefulness of such an architecture is that the data itself is stored with the user, but as group sizes y increase, a single user's data being offline makes aggregate use cases more difficult. This would lend itself to replicating the data server-side, but that would defeat the purpose.
3. Assuming the previous two are solved, which is very difficult to say the least, how do you secure the data for the user so that someone who knows about this architecture can't just go to the clients and trivially scrape all of the data (per user)?
4. How do you allow for these features without allowing people to modify their data in ways you don't want to allow? Encryption?
A concrete example of this would be if HN had it so that each user kept a SQLite database storing all of the posts made by that user. The HN server would then go and fetch the data from each of the posters to show the regular page. Presumably, if a given user's data is inaccessible, their data would be omitted. |
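The HN example in the comment above can be sketched roughly as follows. This is a toy model under stated assumptions: `UserStore` stands in for a client-side per-user store (e.g. a SQLite file on the user's machine), and all names are invented for illustration. It shows the on-demand join and the omission of offline users' data (problem 2):

```python
class UserStore:
    """Stands in for a client-side store, e.g. a per-user SQLite file."""

    def __init__(self, posts):
        self.posts = posts
        self.online = True  # toggled to simulate an unreachable user

    def fetch_posts(self):
        if not self.online:
            raise ConnectionError("user store unreachable")
        return list(self.posts)


def materialize_page(user_ids, stores):
    """Join per-user data on demand; offline users are silently omitted."""
    page = []
    for uid in user_ids:
        try:
            page.extend((uid, post) for post in stores[uid].fetch_posts())
        except ConnectionError:
            continue  # problem 2: a user's data disappears when they're offline
    return page


stores = {
    "alice": UserStore(["hello"]),
    "bob": UserStore(["world"]),
}
stores["bob"].online = False  # simulate bob's store being unreachable

print(materialize_page(["alice", "bob"], stores))
# → [('alice', 'hello')]
```

Note how the O(x*y) cost from problem 1 shows up directly: every page render loops over every relevant user's store, rather than hitting one server-side table.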
| |
| ▲ | yellow_postit 30 minutes ago | parent [-] | | I’ve always liked this idea but I think it eventually ends back up with essentially our current system. Users have multiple devices so you quickly get to needing a sync service. Once that gets complex enough, then people will outsource to a third party and then we are back to a FB/Google/Apple sign in and data mgmt world. |
|
|
| ▲ | VladVladikoff 14 minutes ago | parent | prev [-] |
| The tweet is only a few words; you really need an LLM to write that for you??? |