amelius (3 days ago):
Using an LLM doesn't mean it has to take the final decision. You can also use it as a warning system.

  Mawr (3 days ago):
  False negatives are a huge issue when designing safety systems. It is not the case that "more warnings = more better".

  stnikolauswagne (3 days ago):
  Is there any indication that current warning systems are insufficient in any way that would be improved by LLM involvement?

    captainbland (3 days ago):
    Well, they don't attract nearly as much investment in the current market; I think that might be the problem people really want to solve.

    vidarh (3 days ago):
    We won't know that until someone has actually investigated how an LLM would do in those scenarios.

      stnikolauswagne (3 days ago):
      That sounds like a solution looking for a problem, though. I see plenty of arguments against throwing critical safety systems that are in charge of people's lives into an LLM "just in case the result is better than what the current battle-hardened systems already provide".

        amelius (3 days ago):
        Nobody can be against just collecting the data and letting people experiment with it.

          stnikolauswagne (3 days ago):
          Are all those security systems actually open right now? Because that sounds like an absolute security nightmare if so.

            amelius (3 days ago):
            Can you give an example scenario?

              stnikolauswagne (3 days ago):
              To properly test an LLM-based emergency system against the current as-is system, there needs to be a way of verifying whether an LLM-detected emergency is classed as an emergency as-is. If this information were publicly available, it could let bad actors do things like stress-test the EMP tolerance of the current systems, or probe what level of malware infiltration gets detected.

KaiserPro (3 days ago):
General LLMs, I would say, are uniquely bad at this sort of thing. I mean, if you have a stable plane, then it'll do alright, as it'll mostly fly straight and level (assuming correct trim). Reacting to turbulence, however, the sampling rate would probably be too slow, so you'd end up with oscillations. For recognising that you're in a shit situation, yeah, it'll probably do that fine, but it won't be able to give the correct control inputs at the right time.

  stnikolauswagne (3 days ago):
  > For recognising that you're in a shit situation, yeah, it'll probably do that fine, but won't be able to give the correct control inputs at the right time.
  Even that I'm not sure of. I know relatively little about aviation safety, but I can imagine there are all kinds of 0.0000000001% corner cases that no plane has ever encountered that still need some sort of reaction. Who knows how easily an LLM can distinguish those from the 0.000000001% corner cases that no plane has ever encountered that are completely fine and can be ignored.

    KaiserPro (3 days ago):
    I agree with your intuition. There are lots of corner cases, but there are also a fucktonne of checklists: https://www.aviationhunt.com/boeing-737-normal-checklists/ (this is just a small "normal" one). For loads of situations there are checklists, and that's something an LLM can probably do very well. However, as far as I know, the checklist volume scales with how "airline-y" the plane is: for a one-seater, the checklist is small and only handles a few things; for a 777, it's a binder.
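
[Editor's note: the sampling-rate concern above — a feedback loop that reacts too slowly starts to oscillate instead of damping disturbances — can be illustrated with a toy sketch. The plant model (x' = u), the gain, and the timesteps below are invented for illustration; this is generic sampled-control behaviour, not anything aviation-specific.]

```python
def simulate(gain, dt, steps=40, x0=1.0):
    # Sample-and-hold proportional control of the toy plant x' = u,
    # with u = -gain * x, discretized by Euler: x <- x * (1 - gain * dt).
    # This is stable only while dt < 2 / gain; sample more slowly than
    # that and each "correction" overshoots, flipping the sign of x and
    # growing instead of decaying.
    x = x0
    trace = [x]
    for _ in range(steps):
        u = -gain * x      # controller reads the state once per period
        x = x + dt * u     # plant integrates the held input
        trace.append(x)
    return trace

fast = simulate(gain=1.0, dt=0.1)   # frequent updates: error decays
slow = simulate(gain=1.0, dt=2.5)   # slow updates: oscillates and diverges
```

With dt=0.1 the state shrinks by a factor 0.9 each step; with dt=2.5 it is multiplied by -1.5 each step, so the sign alternates and the magnitude blows up — the same controller, ruined purely by a slow sample rate.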