| ▲ | Aurornis 6 hours ago |
| Meta is on the extreme other end of this. The article opens with how they're now using AI to monitor how everyone uses their computers. It's still insane to me that Meta thought this would be a good idea, or that employees would be comfortable with it even though they claim it's only used for anonymous AI training. |
|
| ▲ | loeg 6 hours ago | parent | next [-] |
> using AI to monitor how everyone uses their computers

It's the other way around -- they're monitoring the computers to train AI.
| ▲ | stasomatic 5 hours ago | parent | next [-] |
Could this be a vector to poison the AI? I am not one for sabotage -- bad karma, all in all -- but not everyone is like that, and if someone knows their days at ACME are numbered, the sirens start singing.
| ▲ | sterlind 6 hours ago | parent | prev [-] |
probably both, to be fair. Meta may know that their employees will put up with it, given how depressing the job market is right now, but unhappy, cynical, resentful employees do not produce good software and innovations. there's a real financial cost to treating devs like cage-raised livestock.
| ▲ | loeg 6 hours ago | parent [-] |
It's unclear how you would use LLMs to monitor clicks. Unless you just mean they're authoring the monitoring software with LLM assistance (which is probably right).
| ▲ | saratogacx 3 hours ago | parent [-] |
The LLM generates context based on what's on the screen and associates it with the action taken by the user. It's less "point in time" and more "charting the flow." For example: the page content is a PR with open comments, and the next action is to focus on the first comment; when a new PR with no open comments is shown, the approve/push button is the next action. That starts a reinforcement loop.
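The data-collection scheme saratogacx describes can be sketched as building (screen context, next action) training pairs -- the kind of dataset you would later fit a next-action model on. Everything here is hypothetical (the `ScreenEvent` type, the example contexts and actions); nothing is known about Meta's actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class ScreenEvent:
    context: str  # hypothetical LLM-generated summary of what is on screen
    action: str   # the action the user took while that screen was shown

def build_training_pairs(events):
    """Pair each screen context with the user's next action.

    A model trained on these pairs predicts the likely next action for a
    new screen -- the "charting the flow" idea from the comment above.
    """
    return [(e.context, e.action) for e in events]

# Hypothetical captured session, mirroring the PR-review example:
session = [
    ScreenEvent("PR page with open comments", "focus first comment"),
    ScreenEvent("PR page with no open comments", "click approve/push"),
]

pairs = build_training_pairs(session)
```

Each pair is one supervised example; the "reinforcement loop" would come from feeding the model's suggestions back to users and logging whether they follow them.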
|
| ▲ | jimbokun an hour ago | parent | prev [-] |
| If they were competently evil they would have just done it quietly. |