recursive | 9 hours ago

> It'll be okay.

Could there ever exist anything that wouldn't be okay? What's the difference between something that will be okay and something that won't? I'm guessing the things that will be okay are the things that might pose an obstacle for AI "progress".
|
throwawaysoxjje | 8 hours ago

> I'm pretty sure the company you work for owns your work chat, and that what you say on company slack constitutes business information.

That's not a valid argument. The company itself would still need to consent.
|
bandrami | 5 hours ago

In general it's the companies that are showing reluctance, much more than their employees. There's still a morass of unanswered security, privacy, and legal questions about LLM use in general, not to mention the huge unknown of total lifecycle costs.
|
ethbr1 | 5 hours ago

It's amazing how every reply failed to realize that you (and the post) were talking about (a) enterprise Slack usage and (b) AI use by the company itself.
darth_avocado | 4 hours ago

I operate with the assumption that the company can access my private DMs on enterprise Slack if it wants to. Even so, users are still allowed to be concerned about whether the company is going to use that information for AI use cases. I'd prefer that all AI stay away from my private DMs.
|
|
troupo | 9 hours ago

> I'm pretty sure the company you work for owns your work chat, and that what you say on company slack constitutes business information.

It does. And a lot of this information is highly sensitive. Imagine my company's surprise if Slack shamelessly just opened up its data moat to AI.

> There are a lot of things people don't consent to. Being born.

Demagoguery and non sequiturs are not arguments. But I guess that's what passes for "arguments" among AI maximalists.
| > I'm pretty sure the company you work for owns your work chat, and that what you say on company slack constitutes business information. It does. And a lot of this information is highly sensitive. Imagine my company's surprise if Slack would not be shameful and would just open up its data moat to AI. > There are a lot of things people don't consent to. Being born. Demagoguery and non sequiturs are not arguments. But I guess that's what passes for "arguments" for AI maximalists. |