▲ palmotea 5 hours ago
> The method takes advantage of normal network communication between connected devices and the router. These devices regularly send feedback signals within the network, known as beamforming feedback information (BFI), which are transmitted without encryption and can be read by anyone within range.

> By collecting this data, images of people can be generated from multiple perspectives, allowing individuals to be identified. Once the machine learning model has been trained, the identification process takes only a few seconds.

> In a study with 197 participants, the team could infer the identity of persons with almost 100% accuracy – independently of the perspective or their gait.

So what's the resolution of these images, and what's visible/invisible to them? Does it pick up your clothes? Your flesh? Or mostly your bones?
▲ mahrain 5 hours ago
What happens is that a large body of water (pun intended) has the ability to absorb and reflect wifi signals as it moves through the room. For this you need to generate traffic and measure, for instance, the RSSI or CSI (basically, signal strength) of the packets. If you increase the frequency you can detect smaller movements, such as arms moving vs. a whole body, or exclude pets if you reduce sensitivity.

It works well for detecting presence and movement in a defined space, but ideally requires you to cross the path between two mains-powered devices, such as light bulbs or wifi mesh points. Passing a cafe doesn't seem too likely.

If you want to do advanced sensing, i.e. trying to identify a person, I would postulate you need to saturate a space with high-frequency wifi traffic and ideally placed mesh points, and let the algorithm first train on identifying people by a certain signature (combination of size/weight, movement/gait, breath/chest movements).

Source: I worked on such technologies while at Signify (variants of this power the Philips/Wiz "SpaceSense" feature). More here: https://www.theverge.com/2022/9/16/23355255/signify-wiz-spac...
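The basic presence-detection part of this is simple enough to sketch. Here's a toy version (all names and thresholds are made up for illustration; real products use per-subcarrier CSI and trained models, not a bare RSSI variance threshold): a moving body churns the multipath channel, so RSSI variance over a short window rises above the quiet-room baseline.

```python
import statistics
from collections import deque

class PresenceDetector:
    """Toy motion detector from per-packet RSSI readings.

    A moving body perturbs the multipath channel, so the standard
    deviation of RSSI over a short window rises above the level seen
    in an empty room. Threshold and window size are illustrative.
    """
    def __init__(self, window=50, threshold_db=2.0):
        self.window = deque(maxlen=window)   # recent RSSI samples in dBm
        self.threshold_db = threshold_db     # std-dev above which we call "motion"

    def update(self, rssi_dbm: float) -> bool:
        self.window.append(rssi_dbm)
        if len(self.window) < self.window.maxlen:
            return False                     # not enough samples yet
        return statistics.stdev(self.window) > self.threshold_db

# Simulated trace: a flat signal (empty room), then a jittery one (someone walking)
det = PresenceDetector(window=20, threshold_db=1.0)
quiet = [-60.0 + 0.1 * (i % 3) for i in range(40)]          # ~0.08 dB std-dev
moving = [-60.0 + ((i * 7) % 10 - 5) for i in range(40)]    # ~3 dB std-dev
quiet_flags = [det.update(r) for r in quiet]
moving_flags = [det.update(r) for r in moving]
print(any(quiet_flags), any(moving_flags))
```

The "exclude pets by reducing sensitivity" knob mentioned above corresponds roughly to raising `threshold_db`: a smaller body perturbs the channel less.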
▲ brk 5 hours ago
Resolution and positional accuracy are very poor. It's more like an 'approximate bag of water' detector. Gait analysis is complete fiction, especially with a non-visual approach like this.
▲ mhitza 5 hours ago
From the paper linked by jbotz:

> The results for CSI can also be found in Figure 3. We find that we can identify individuals based on their normal walking style using CSI with high accuracy, here 82.4% ± 0.62.

If you're a person of interest you could be monitored: your walking pattern internalized by the model, then followed through buildings. That's my intuition about the practical applications, and the level of detail.
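To make "internalized by the model" concrete, here's a toy identification sketch. Everything here is hypothetical (a made-up per-subcarrier mean/std featurization and a plain nearest-centroid matcher, nothing like the paper's actual pipeline); it only shows the shape of the problem: turn a walk's CSI window into a feature vector, then match it against enrolled people.

```python
import math

def csi_features(frames):
    """Collapse a window of CSI amplitude frames (one list per packet,
    one value per subcarrier) into a fixed-length feature vector of
    per-subcarrier means and standard deviations. Hypothetical
    featurization for illustration only."""
    n = len(frames)
    k = len(frames[0])
    means = [sum(f[j] for f in frames) / n for j in range(k)]
    stds = [math.sqrt(sum((f[j] - means[j]) ** 2 for f in frames) / n)
            for j in range(k)]
    return means + stds

def nearest_centroid(train, sample):
    """train: {person: [feature_vector, ...]}. Returns the person whose
    centroid is closest (Euclidean) to the probe sample."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    best, best_d = None, float("inf")
    for person, vecs in train.items():
        centroid = [sum(col) / len(vecs) for col in zip(*vecs)]
        d = dist(centroid, sample)
        if d < best_d:
            best, best_d = person, d
    return best

# Toy data: two "people" whose walks modulate two subcarriers differently
alice_walks = [[[1.0 + 0.1 * i, 0.5] for i in range(10)] for _ in range(3)]
bob_walks   = [[[0.5, 1.0 + 0.3 * i] for i in range(10)] for _ in range(3)]
train = {"alice": [csi_features(w) for w in alice_walks],
         "bob":   [csi_features(w) for w in bob_walks]}
probe = csi_features([[1.0 + 0.1 * i, 0.5] for i in range(10)])
print(nearest_centroid(train, probe))  # matches "alice"
```

The tracking scenario above is just this loop run continuously: every CSI window captured near a sensor gets featurized and scored against the enrolled signature.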
▲ ghostly_s 3 hours ago
> So what's the resolution of these images, and what's visible/invisible to them?

The researchers never claimed to generate "images"; that's editorializing by this publication. The pipeline just generates a confidence value for correlating one capture from the same sensor setup with another.

[Sidenote: did ACM really go "Open Access" but gate PDF download behind the paid tier? Or is the download link just very well hidden in their crappy PDF viewer?]
▲ lukeschlather 5 hours ago
It's at least possible to record heart rate with wifi, so that suggests a broad variety of biometrics can be recorded.