| ▲ | wyre 5 days ago |
| I think its ability to consume information is one of the scarier aspects of AI. The NSA, other governments, and multinational corporations have years of our individual browsing and consumption patterns. What happens when AI is analyzing all of that information exponentially faster than any human could, and communicating with relevant parties for their own benefit, to predict or manipulate behavior, build psychological profiles, identify vulnerabilities, etc.? It's incredibly amusing to me reading some people's comments here critical of AI that, if you didn't know any better, might make you think AI is a worthless technology. |
|
| ▲ | munificent 5 days ago | parent | next [-] |
| > if you didn't know any better, might make you think that AI is a worthless technology. "Worthless" is ambiguous in this sentence. I think people understand that AI isn't useless, in that it at least to some degree does the things it is intended to do. At the same time, it might be valueless in that a world without it is preferable to some. Landmines are not useless, but they are valueless. Opinions differ as to what degree generative AI is like landmines in terms of externalities. |
| |
| ▲ | bigfudge 4 days ago | parent [-] | | This hides a lot of local detail, though. Something might be valueless by your definition because on aggregate it does enough harm to balance out the good, but still have great value in specific contexts. Even landmines might look quite useful in eastern Ukraine at the moment. More important is to remember that the impact of technology is not inevitable or predetermined. We can have (some) agency over how technologies are used. The impact of AI is likely to be much more negative than it could be because of the tech bro oligopoly emerging in the US. But that isn't because of 'human nature' or something inevitable or baked into the tech — it's because of local, historical factors in the US right now. | | |
| ▲ | munificent 4 days ago | parent [-] | | I agree that externalities are complex and situational, and that the value proposition of a piece of technology is most certainly not uniformly distributed. > The impact of AI is likely to be much more negative than it could be because of the tech bro oligopoly emerging in the US. There is circularity here, because the tech bro oligarchy will certainly be empowered and enriched by AI as well. |
|
|
|
| ▲ | seg_lol 5 days ago | parent | prev | next [-] |
| Jimmy Carr (comedian) https://www.youtube.com/watch?v=jaYOskvlq18 thinks that AI's ability to be a surveillance savant is one of the biggest risks that people aren't thinking enough about. |
| |
| ▲ | fcantournet 5 days ago | parent | next [-] | | It is literally the ONE thing that every AI critic has been talking about for years. Several things can be true at the same time: it's possible for the wild claims of great efficiency gains and transformative (for good) power of AI to be overblown (for the sake of stock prices) AND for AI applied to surveillance to be a terrifying prospect. Surveillance AI doesn't need to be correct to be terrifying. | | |
| ▲ | autoexec 5 days ago | parent [-] | | I hear far more concerns about putting people out of work, the environmental impact, or even copyright issues than the ways AI will be used to control people. I wish every critic of AI was putting this issue out there anywhere near as often as other concerns. |
| |
| ▲ | Libidinalecon 4 days ago | parent | prev [-] | | The real issue to me is surveillance combined with job loss. The easy solution becomes a giant bull market in law enforcement jobs. |
|
|
| ▲ | macNchz 5 days ago | parent | prev | next [-] |
| All hype and thought experiments about superintelligence and open questions about creativity and learning and IP aside, this is the area that gives me the biggest pause. We've effectively created a panopticon in recent years—there are cameras absolutely everywhere. Despite that, though, the effort to actually do something with all of those feeds has provided a sort of natural barrier to overreach: it'd be effectively impossible to have people constantly watching all of the millions of camera feeds available in a modern city and flagging things, but AI certainly could. Right now the compute for that is a barrier, but it would surprise me if we don't see cameras (which currently offer a variety of fairly basic computer vision "AI" alerting features for motion and object detection) coming with free-text prompts to trigger alerts. "Alert me if you see a red Nissan drive past the house.", "Alert me if you see a neighbor letting his dog poop in my yard.", "Alert the police if you see crime taking place [default on, opt out required]." |
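The prompt-to-alert idea above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's API: the matcher below is a trivial keyword stand-in for the vision-language model a real camera would query per frame, and all rule names and prompts are made up.

```python
# Sketch of free-text alert rules for a camera feed. A real device would
# send each frame to a vision-language model and ask whether the prompt
# is satisfied; here a naive keyword match stands in for that call.
from dataclasses import dataclass


@dataclass
class AlertRule:
    prompt: str           # free-text condition, e.g. "red nissan"
    recipient: str        # who gets notified
    enabled: bool = True  # the worrying part: a rule could ship default-on


def evaluate_frame(frame_labels: set[str], rules: list[AlertRule]) -> list[str]:
    """Return alert messages for rules whose prompt matches the frame.

    Stand-in matcher: a rule fires if every word of its prompt appears
    in the frame's label set. A real system would ask a VLM instead.
    """
    alerts = []
    for rule in rules:
        if rule.enabled and all(w in frame_labels for w in rule.prompt.split()):
            alerts.append(f"notify {rule.recipient}: {rule.prompt}")
    return alerts


rules = [
    AlertRule("red nissan", "homeowner"),
    AlertRule("dog yard", "homeowner"),
    AlertRule("crime", "police"),  # imagine this one shipping enabled by default
]

print(evaluate_frame({"red", "nissan", "street"}, rules))
# -> ['notify homeowner: red nissan']
```

The point isn't the matching logic, which is trivial here; it's that once the per-frame question is answered by a model rather than a human, adding a new surveillance rule costs one sentence of text.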
| |
| ▲ | nostrademons 5 days ago | parent [-] | | The prompt becomes the bottleneck, along with the precision of the AI. You can only tell it to do what you know how to express. That makes it useless for preventing new and different types of crimes (or dissidents) but fairly effective for preventing the known types of crimes (or dissent) at scale. | | |
| ▲ | autoexec 5 days ago | parent | next [-] | | > You can only tell it to do what you know how to express. Unfortunately that doesn't prevent much of anything, since our language is extremely expressive and AI will just make up anything it needs to in order to fill in the blanks. When it comes to things like oppressing a group of people, or maximizing profits by dynamically setting your prices using a guess of each individual's income, you can accomplish what you set out to do even with huge margins of error. We already know that people will use this technology who simply will not care if they price out or imprison a few people who shouldn't have been, so long as massive numbers of their enemies are locked up or their profits continue to climb. | |
| ▲ | macNchz 5 days ago | parent | prev | next [-] | | I think it's a significantly lower barrier than employing people to watch feeds nonstop or review every instance of motion or person-detection. It would be pretty straightforward for the camera maker to test and evaluate a handful of presets that ship with the cameras, and the current state of vision models is already pretty excellent at identifying things in a nuanced and flexible way in images and video. | |
| ▲ | robotresearcher 5 days ago | parent | prev [-] | | You can ask it to report anything that looks different recently. That'll catch some new things without needing to understand them in advance. | | |
| ▲ | nostrademons 5 days ago | parent [-] | | You'll get inundated with outliers then. Everything looks different if you don't specify the baseline and tolerance. |
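The baseline-and-tolerance point can be made concrete with a minimal sketch: flag a frame's activity score as anomalous only when it deviates from a rolling baseline by more than k standard deviations. All names, window sizes, and thresholds here are illustrative, not from any real camera product.

```python
# Minimal "looks different from recently" detector: keep a rolling window
# of per-frame scores as the baseline, flag scores more than k standard
# deviations away from it. Without the baseline and tolerance, every
# frame is an outlier relative to nothing.
from collections import deque
from statistics import mean, stdev


class AnomalyDetector:
    def __init__(self, window: int = 50, k: float = 3.0):
        self.history = deque(maxlen=window)  # rolling baseline of recent scores
        self.k = k                           # tolerance, in standard deviations

    def is_anomalous(self, score: float) -> bool:
        anomalous = False
        if len(self.history) >= 10:  # need some baseline before judging
            mu, sigma = mean(self.history), stdev(self.history)
            anomalous = abs(score - mu) > self.k * max(sigma, 1e-9)
        self.history.append(score)
        return anomalous


det = AnomalyDetector()
for s in [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 0.9, 1.0]:
    det.is_anomalous(s)           # build up the baseline
print(det.is_anomalous(9.0))      # far outside tolerance -> True
print(det.is_anomalous(1.0))      # ordinary frame -> False
```

Note the flood-of-outliers problem shows up directly in the two knobs: shrink `k` or the window and ordinary frames start firing; and once the 9.0 spike enters the history, it widens the tolerance for everything after it.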
|
|
|
|
| ▲ | heavyset_go 5 days ago | parent | prev | next [-] |
| We didn't need generative AI for this; standard ML techniques from 10 years ago were already doing this, and are cheaper. |
| |
| ▲ | handoflixue 4 days ago | parent | next [-] | | Can you point to the consumer project / open-source program I could have run 10 years ago, for free, to do any one of the tasks listed in the article, much less all of them? Remember, it also needs a UI that works without needing any programming skill, just by asking plain-English questions. I've been looking for something like this for the last 20 years, and this is the first time I've seen anything that can actually produce intelligent answers about the giant trove of personal documents I've got floating around. | |
| ▲ | lukeschlather 5 days ago | parent | prev [-] | | They're only cheaper if you ignore the cost of training a one-off model. If I'm only ever running the model 10,000 times to do some task it probably doesn't matter if I'm using ChatGPT or a more efficient custom model, and in fact ChatGPT is probably cheaper and easier. |
|
|
| ▲ | concinds 5 days ago | parent | prev | next [-] |
| For decades people tried to correlate gait to personality and behavior. Then, DNA, with IQ and all sorts of things. Now they're trying it with barely-noticeable facial features, again with personality traits. But the research is still crap bordering on woo, and barely predictive at all. It's at least plausible that we are sufficiently complex that, even with tons of NSA and corporate data and extremely sophisticated models, you still wouldn't be able to predict someone's behavior with much accuracy. |
| |
| ▲ | nowittyusername 5 days ago | parent | next [-] | | There doesn't need to be a correlation between some data structure and its effects for people to implement some sort of feature. There only needs to be enough stupid people in powerful positions who believe in some sort of correlational trend AND for the data gathering task to be trivially cheap enough for them to implement said things. And there's no shortage of that going around. That's why these technologies are dangerous. Stupid people with powerful and cheap tools to wield. Kind of like what we saw with the first wave of Facebook algorithms being used against its users to maximize attention to the detriment of everything else. | | |
| ▲ | edot 5 days ago | parent [-] | | Yes, exactly. “Well, the AI said to go arrest that guy, and I’ve been hearing for years that AI is super smart, so that must be the right thing to do.” |
| |
| ▲ | kazinator 5 days ago | parent | prev | next [-] | | Any tech for predicting people's behavior will likely mature sooner in predicting the behavior of crowds of people than of one individual. (They seem like related problems where the latter is much harder.) The easier one is where the $$$ incentives lie, e.g. if you correctly predict how masses of people are going to buy stocks, you're rich. | |
| ▲ | BloondAndDoom 5 days ago | parent | prev [-] | | It's not about predicting, it's more about controlling and shutting down whoever doesn't agree with you. We already know the majority of governments spy on and sabotage activists. Now imagine you can query for "extreme environmentalists who live in X" and whatever further filtering is needed. |
|
|
| ▲ | BloondAndDoom 5 days ago | parent | prev | next [-] |
| This. Even worse, I was thinking about my ChatGPT account. I was doing some research on a topic the government would deem "dangerous", and I'm also an immigrant. ChatGPT can be one of the best profiling tools, and imagine combining it with Google etc., which we know has been done before, so why not again, or maybe it's already going on. We are not that far away from border police reviewing my government-issued "suspiciousness level" and just auto-rejecting. I think this is the beginning of the end of privacy against government. There will be a new tech movement (among hackers) focusing on e2e, local AI and all forms of disconnected private computing, but the general population is absolutely doomed. |
|
| ▲ | grafmax 5 days ago | parent | prev | next [-] |
| Even if it is worthless, it will still be used for these things - because of the sense of confidence it instills in the kinds of people undertaking these sorts of activities. |
|
| ▲ | jonahrd 5 days ago | parent | prev | next [-] |
| this became extremely apparent to me watching Adam Curtis's "Russia 1985-1999: TraumaZone" series. The series documents what it was like to live in the USSR during the fall of communism and (cheekily added) democracy. It was released in Oct 2022, meaning it was written and edited just before the AI curve really hit hard. But so much of the takeaway is that it was "impossible" for a top-down government to actually process all of what was happening within the system it created, and to respond appropriately and in a timely manner, thus creating problems like food shortages, corrupt industries, etc. So many of the problems were traced to the monolithic information-processing buildings owned by the state. But honestly... with modern LLMs all the way up the chain? I could envision a system like this working much more smoothly (while still being incredibly invasive and eroding most people's fundamental rights). And without massive food and labour shortages, where would the energy for change come from? |
| |
| ▲ | wongarsu 5 days ago | parent | next [-] | | A planned economy is certainly a lot more viable now than it was in 1950, let alone 1920. The Soviet Union was in many ways just a century too early. But a major failing of the Soviet economic system was that there simply wasn't good data to make decisions, because at every layer people had the means and incentive to make their data look better than it really was. If you just add AI and modern technology to the system they had, it still wouldn't work, because wrong data leads to wrong conclusions. The real game changer would be industrial IoT, comprehensive tracking with QR codes, etc. And even then you'd have to do a lot of work to make sure factories don't mislabel their goods. | | |
| ▲ | wrs 5 days ago | parent | next [-] | | That is, assuming leadership wants good data, as opposed to data that makes them look good, or validates their world model. Certainly in recent history, agencies tasked with providing accurate data are routinely told not to (e.g., the BLS commissioner firing, or the Iraq WMD reports). | |
| ▲ | hylaride 5 days ago | parent | prev [-] | | > A planned economy is certainly a lot more viable now than it was in 1950, let alone 1920. The Soviet Union was in many ways just a century too early. If the economy were otherwise stagnant, maybe. But top-down planning just cannot take into account all the multitudes of inputs to plan at anywhere near the scale that communist countries did. Bureaucrats are never going to be incentivized anywhere near the level that private decision-making can be. Businesses (within a legal/regulatory framework) can "just do" things if they make economic sense via a relatively simple price signal. A top-down planner can never fully take that into account, and governments should only intervene in specific national-interest situations (e.g. in a shortage environment, legally directing an important precursor medicine ingredient to medical companies instead of other uses). The Soviet Union decided that defence was priority number one and shoved an enormous amount of national resources into it. In the west, the US government encouraged development that also spilled over into the civilian sector and vice-versa. > But a major failing of the Soviet economic system was that there simply wasn't good data to make decisions, because at every layer people had the means and incentive to make their data look better than it really was. It wasn't just data that was the problem, but also quality control, having to plan far, far ahead due to bureaucracy in the supply chain, not being able to get spare parts because wear and tear wasn't properly planned for, etc. There's an old saying even in private business that if you create and measure people on a metric, they'll game or over-concentrate on said metric. The USSR often pumped out large numbers of various widgets, but quality would often be poor (the stories of submarine and nuclear power plant manufacturers having to repeatedly deal with and replace bad inputs were a massive source of waste). |
| |
| ▲ | delaminator 5 days ago | parent | prev [-] | | What you're describing is called The Fourth Industrial Revolution in Klaus Schwab's book. Factory machines transmitting their current rate of production all the way up to an international government which, being all-knowing, can help you regulate your production based on current and forecasted worldwide consumption. And your machines being flexible enough to reconfigure to produce something else. Stores doing the same with their sales, and Central Bank Digital Currency tying it all together. |
|
|
| ▲ | teleforce 5 days ago | parent | prev | next [-] |
| > AI is a worthless technology You are making another extreme claim about AI, in contrast to the other extreme claim that it is a worthless technology. I think you are spreading FUD about our poor AI, not unlike Hinton, but it's okay for him since he's biased for a reason. Your conjecture makes it look like the fictional Minority Report movie at best and the Terminator movies scenario at worst, which I think is a bit extreme. |
|
| ▲ | Liquix 5 days ago | parent | prev [-] |
| > What happens when AI is analyzing all of that information... They run simulations against N million personality models, accurately predicting the outcome of any news story/event/stimulus. They use this power to shape national and global events to their own ends. This is what privacy and digital sovereignty advocates have been warning the public about for over a decade, to no avail. |
| |
| ▲ | seg_lol 5 days ago | parent [-] | | This is much worse than overt authoritarian control, because the controlled aren't even aware of it. | | |
| ▲ | wordpad 5 days ago | parent | next [-] | | I don't know how viable it is. Even for AI, there are just too many intermingled variables when it comes to human behavior. All the money in the world has been invested in trying to do it with stock markets, and they still can't do better than average. | |
| ▲ | hvb2 5 days ago | parent | prev [-] | | You should watch The Matrix, especially the first three. |
|
|