emidln 5 hours ago

My 2025 Mazda Miata has a CAN-connected Telematics Control Unit (TCU) that sends a bunch of data to Mazda on ignition off. Among this data is acceleration and velocity data, along with sampled coordinates of where you've been. It is also used as a gateway for the Mazda app to start your car, query your vehicle's tire pressure, etc. It is claimed that you can opt out of this by calling Mazda and being persistent.

The CAN traffic is unencrypted. It was pretty easy to MITM this module with a cheap ARM Linux board and a CAN transceiver, which let me write a two-way filter that blocks the telemetry traffic without raising any DTCs (that I observed) and can be turned on/off by the user. I preferred this approach to completely disconnecting the module (which is noticeable via errors at the diagnostic port) or trying to Faraday-cage or disable the antennas on the TCU so it can't send/receive remotely. I can also turn the module off, or remove it entirely before I sell the car.
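
For the curious, here is a minimal sketch of the kind of two-way filter described, assuming Linux SocketCAN with two interfaces (can0 toward the vehicle bus, can1 toward the TCU). The blocked arbitration IDs are placeholders; the real ones would come from logging your own car's traffic.

```python
# Sketch of a two-way CAN filter sitting between the TCU and the rest of
# the bus. Assumes Linux SocketCAN with two interfaces: can0 (vehicle bus)
# and can1 (TCU side). The IDs in BLOCKED_IDS are hypothetical.
import socket
import struct

CAN_FRAME_FMT = "=IB3x8s"     # struct can_frame: id, DLC, 3 pad bytes, 8 data bytes
CAN_EFF_MASK = 0x1FFFFFFF     # strips the extended-frame flag bits from can_id
BLOCKED_IDS = {0x7E8, 0x123}  # placeholder IDs for the telemetry frames to drop

def should_forward(arb_id: int, blocked=BLOCKED_IDS) -> bool:
    """Pass everything except frames whose arbitration ID is blocklisted."""
    return (arb_id & CAN_EFF_MASK) not in blocked

def open_can(iface: str) -> socket.socket:
    s = socket.socket(socket.AF_CAN, socket.SOCK_RAW, socket.CAN_RAW)
    s.bind((iface,))
    return s

def bridge(src: socket.socket, dst: socket.socket) -> None:
    """Forward one frame from src to dst if it passes the filter."""
    frame = src.recv(16)
    arb_id, _dlc, _data = struct.unpack(CAN_FRAME_FMT, frame)
    if should_forward(arb_id):
        dst.send(frame)

def run_filter() -> None:
    bus, tcu = open_can("can0"), open_can("can1")
    while True:                # a real tool would select() over both sockets
        bridge(tcu, bus)       # outbound: drop telemetry frames
        bridge(bus, tcu)       # inbound: could filter remote commands too
```

A production version would multiplex both sockets with select() and handle CAN FD's larger frames, but the core idea is just selective forwarding.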

I fear the next Miata will use encrypted CAN, as most other cars have moved to, and that even with my expertise I won't be able to get the latest safety features in new cars without surrendering what little privacy I've been able to claw back.

wormslayer666 an hour ago | parent | next [-]

I opted to try the "beg the manufacturer to turn off the panopticon" approach[1]. The first time I got 2 hours of elevator music before hanging up; the second time I went through 3 levels of customer support before they claimed it was done (3 days later). Might have to steal your approach to verify that, though...

[1] https://www.mazdausa.com/site/privacy-connectedservices

nja 3 hours ago | parent | prev | next [-]

Have you posted any writeups or other information about how you built this? I'm eyeing a Mazda as my next car (I've never owned a car newer than 2014, and aside from that one, nothing newer than 2006, but family safety needs may push me toward a newer car soon), and telemetry seems like one of the few downsides to an otherwise good carmaker. Would be very interested to learn more!

M95D 4 hours ago | parent | prev | next [-]

> The CAN traffic is unencrypted. It was pretty easy to MITM this module with a cheap arm Linux board

And you didn't poison their databases and statistics with fake data?? OMG, I'm thinking of buying one of these cars just for this opportunity! (No, I'm not.)

emidln 4 hours ago | parent | next [-]

I suspect this data is made "anonymous" and sold to insurance companies and miscellaneous data brokers. If it's linked to my insurance company, I don't want to jack up my rates. Further, I've thus far avoided a CFAA conviction and I'd like to keep it that way.

andrei_says_ 2 hours ago | parent | next [-]

As anonymous as the number of Miatas in your neighborhood parked in your driveway allows.

mindslight 3 hours ago | parent | prev [-]

It would be an extremely totalitarian dynamic to be prosecuted under the CFAA for modifying a device you own, based on part of it having been (nonconsensually!) programmed by a third party to upload data to their own server. You own the device, so anything you do within that device is authorized. And the code that uploads the data is authorized to do so because it was put there by the same company that owns [controls] the servers themselves.

I do know that the CFAA essentially gets interpreted to mean whatever the corpos want it to mean - it's basically an anti-witch law - so it's best to steer clear. And this goes double with the current overtly pay-to-play regime. But just saying.

(Awesome description btw! I really wish I could find a buying guide detailing, for many makes/models of cars, how well they can be unshackled from digital authoritarianism. A Miata is not the type of vehicle I am in the market for, which is unfortunate for several reasons.)

emidln 3 hours ago | parent | next [-]

If you can be prosecuted for guessing URLs, you can be prosecuted for sending garbage data in a way you know will be uploaded to a remote system.

vkou a minute ago | parent | next [-]

You think criminalizing guessing URLs is unreasonable.

What about guessing passwords? Should someone be prosecuted for just trying to brute-force them until one works?

mindslight 2 hours ago | parent | prev [-]

As a strictly logical assertion, I do not agree. Guessing URLs is crafting new types of interactions with a server. The built in surveillance uploader is still only accessing the server in the way it has already been explicitly authorized. Trying to tie some nebulous TOS to a situation that the manufacturer has deliberately created reeks of the same type of website-TOS shenanigans courts have (actually!) struck down.

As a pragmatic matter, I do completely understand where you're coming from (my second paragraph). In a sense, if one can get to the point of being convicted they have been kind of fortunate - it means they didn't kill themselves under the crushing pressure of a team of federal persecutors whose day job is making your life miserable.

monerozcash 13 minutes ago | parent [-]

>(A) knowingly causes the transmission of a program, information, code, or command, and as a result of such conduct, intentionally causes damage without authorization, to a protected computer;

If your goal is to deliberately "poison" their data as suggested before, it's kind of obvious that you are knowingly causing the transmission of information in an effort to intentionally cause damage to a protected computer without authorization to cause such damage.

>Trying to tie some nebulous TOS to a situation that the manufacturer has deliberately created reeks of the same type of website-TOS shenanigans courts have (actually!) struck down.

This has very little to do with the TOS though, unless the TOS specifically states that you are in fact allowed to deliberately damage their systems.

And no, causing damage to a computer does not refer to hackers turning computers into bombs, but rather specifically to situations like this one.

monerozcash 2 hours ago | parent | prev | next [-]

Prosecuting someone for deliberately injecting garbage data into another person's system hardly seems totalitarian.

> You own the device, so anything you do within that device is authorized

You're very clearly describing a situation where at least some of the things you're doing aren't happening on your own device.

>I do know that the CFAA essentially gets interpreted to mean whatever the corpos want it to mean - it's basically an anti-witch law

FWIW this is simply not true. The essence of the CFAA is "do not deliberately do anything bad to computers that belong to other people".

The supreme court even recently tightened the definition of "unauthorized access" to ensure that you can't play silly games with terms of service and the CFAA. https://www.supremecourt.gov/opinions/20pdf/19-783_k53l.pdf

elzbardico an hour ago | parent [-]

My device. I generate whatever the fuck data I want. If you log it, kiss my ass.

monerozcash 23 minutes ago | parent [-]

Sure, I have the same attitude when it comes to the government telling me that I'm not allowed to use drugs. Doesn't mean I'm in the clear from a legal point of view.

However, it's worth clarifying that the important detail isn't generating the data, but sending it. Particularly the clearly stated malicious intent of "poisoning" their data.

This seems like exactly what the lawmakers writing CFAA sought to criminalize, and is frankly much better justified than perhaps the bulk of things they tend to come up with.

>(A) knowingly causes the transmission of a program, information, code, or command, and as a result of such conduct, intentionally causes damage without authorization, to a protected computer;

Doesn't seem exactly unfair to me, even if facing federal charges over silly vandalism is perhaps a bit much. Of course, you'd realistically be facing a fine.

elzbardico an hour ago | parent | prev [-]

Oh man. Logging insane average speeds and ludicrous acceleration during rush hour. Deliciously tempting idea.

tehjoker 35 minutes ago | parent [-]

A data scientist will simply filter out impossible data when conducting an analysis.
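
To illustrate the point, here is a toy plausibility filter of the sort a cleaning pipeline might apply before analysis. The thresholds and field names are made up.

```python
# Sketch of the cleaning step described above: obviously impossible
# telemetry gets dropped before analysis. Thresholds are illustrative;
# a real pipeline would also cross-check samples against each other
# (e.g. speed implied by successive GPS fixes).
MAX_SPEED_MPH = 220   # beyond any production Miata
MAX_ACCEL_G = 1.5     # beyond street-tire grip

def plausible(sample: dict) -> bool:
    return (0 <= sample["speed_mph"] <= MAX_SPEED_MPH
            and abs(sample["accel_g"]) <= MAX_ACCEL_G)

samples = [
    {"speed_mph": 31, "accel_g": 0.2},   # ordinary driving, kept
    {"speed_mph": 400, "accel_g": 0.1},  # "poisoned" sample, dropped
    {"speed_mph": 55, "accel_g": 9.0},   # "poisoned" sample, dropped
]
clean = [s for s in samples if plausible(s)]
```

Only values that are merely implausible, rather than impossible, would survive such a filter, which is why subtle poisoning is harder to detect than ludicrous speeds.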

CamperBob2 4 hours ago | parent | prev [-]

> I fear the next version of Miata will be an encrypted CAN like most other cars have moved to

As I understand it, they're required to do that now if they want to sell in the EU. They emphatically do not want anyone tinkering with their cars.

bri3d 4 hours ago | parent [-]

They don’t want people modifying ADAS systems, mostly, and the main requirement is SecOC (Secure Onboard Communication), which is cryptographic authentication; the message itself is still plaintext. Basically, they don’t want third-party modifications to be able to randomly send the “steer left” message to the steering rack, for example.
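
A minimal sketch of the authenticate-but-don't-encrypt pattern described here, assuming the AUTOSAR-style scheme of appending a truncated MAC and freshness value to a plaintext payload. Real SecOC uses AES-CMAC; HMAC-SHA256 stands in here, and the key, data ID, and truncation lengths are illustrative.

```python
# SecOC-style frame protection sketch: the payload stays plaintext on the
# bus, but a truncated MAC over (data ID, freshness counter, payload) is
# appended so receivers can reject unauthenticated or replayed frames.
import hmac
import hashlib

KEY = bytes(16)    # in a real ECU this comes from secure key storage
MAC_BITS = 24      # SecOC commonly truncates the MAC heavily (e.g. 24 bits)

def secoc_protect(data_id: int, freshness: int, payload: bytes) -> bytes:
    msg = data_id.to_bytes(2, "big") + freshness.to_bytes(4, "big") + payload
    mac = hmac.new(KEY, msg, hashlib.sha256).digest()
    # frame = plaintext payload + truncated freshness + truncated MAC
    return payload + freshness.to_bytes(4, "big")[-1:] + mac[: MAC_BITS // 8]

def secoc_verify(data_id: int, expected_freshness: int, frame: bytes):
    payload, fresh_lsb, mac = frame[:-4], frame[-4], frame[-3:]
    if fresh_lsb != expected_freshness & 0xFF:
        return None    # stale or replayed frame
    msg = (data_id.to_bytes(2, "big")
           + expected_freshness.to_bytes(4, "big") + payload)
    good = hmac.new(KEY, msg, hashlib.sha256).digest()[: MAC_BITS // 8]
    return payload if hmac.compare_digest(mac, good) else None
```

Note that anyone on the bus can still read the payload; without the key they just can't forge or replay frames that receivers will accept.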

rconti an hour ago | parent | next [-]

The ADAS systems mandated in Europe are insanely intrusive. I had a few rental cars in Europe this summer and wanted to send them off a cliff. (And I'm not an auto-tech luddite; I've had modern cars in the US with autopilot-type systems, lane keep, blind spot warning, rear traffic assist radar, forward collision warning, etc. IMO rear traffic assist/FCW/AEB tend to work really well, autopilot pretty well, and lane keep and blind spot warning are silly gimmicks at best.)

Bring on the full self-driving cars, or let me drive my own car. This human-in-the-loop middle state is maddening. Either we're supervising our "self-driving, but not really" cars, where the car does all of the work but we still have to be 100% aware and ready to take over the instant anything gets hard (which we know from studies humans are TERRIBLE at)... or we're actively _driving_ the car, but not really. The steering feel goes in and out as the car subtly corrects for you, so you can't trust your own human senses. Typically 40% brake pedal pressure gets you 40% braking force, unless you lift off the throttle and hop to the brakes quickly, in which case it decides that your 40% pedal pressure actually means you want 80% braking force. Again, you can't trust your human senses. The same input gets different outputs depending on the foggy decisions of some computer. Add to that the beeping and ping-ponging and flashing lights in the cluster.

It's like Clippy all over again. They've decided that if one warning is good and helpful, constant alerts are MORE good and MORE helpful. Not a thought has been given to alert fatigue or the consequences of this mixed human-in-the-loop mode.

hdgvhicv 36 minutes ago | parent [-]

“Lane keep” yanks the wheel dangerously because it incorrectly detects the lane, or because you don't indicate to pass a pothole on an empty road (and indicating there would itself be confusing to other road users).

Forward collision warning has misfired on me twice in the last 3 years.

The main issue is that so many cars have broken “auto-dipping” headlights which don't dip, or matrix headlights which don't pick out other cars.

This automation shit should stop, but it won’t.

Parking beepers are reasonable: they simply come on occasionally and don't actually interfere when they go wrong. The rest of it just makes things far worse at scale.

RealityVoid 27 minutes ago | parent | prev | next [-]

I integrated SecOC on some ECUs at work. I hate myself for it. I frigging hate what they're doing with this. I think it's going to make cars less repairable and less modifiable. It's a horrible, horrible, stupid initiative in the name of "cybersecurity".

bri3d 15 minutes ago | parent [-]

I understand notionally where they were going, but it all sort of went off the deep end somewhere along the line. The concern that someone buying a "mileage blocker" or some other shady device off AliExpress might be vulnerable to that device steering their car into a wall is actually quite valid, but of course the solution is some overcomplicated AUTOSAR nightmare that doesn't solve key provisioning in a way that keeps modules replaceable.

CamperBob2 4 hours ago | parent | prev [-]

Yes, and to do that, CAN must be encrypted. The idea isn't just to secure it from hackers. The idea is to secure it from owners.

bri3d 4 hours ago | parent [-]

> SecOC, which is cryptographic authentication but the message is still plaintext

CamperBob2 4 hours ago | parent [-]

Oh, OK, that's better. I can see what my car is doing, I just can't do anything about it.