I’m an aerospace engineer at a research institute. I’ve published some work using Whisper and GPT-4 to perform real-time monitoring of air traffic control (ATC) audio communications. The goal is to detect dangerous anomalies and to monitor ATC performance. It classified 9 out of 10 real-world ATC audio examples correctly. The paper is called: “Aircraft Anomaly Detection using Large Language Models: An Air Traffic Control Application”.
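For the curious, the two-stage loop is roughly: Whisper transcribes an audio chunk, then GPT-4 judges the transcript. Here is a minimal sketch of the judging side; the prompt wording, the OK/ANOMALY reply format, and the example transcript are my illustrative assumptions here, not the paper’s actual implementation (the real version would feed `build_messages()` into a GPT-4 chat-completion call):

```python
# Stage 2 of the monitoring loop: package a Whisper transcript into a
# chat request, then parse the model's one-line verdict.

SYSTEM_PROMPT = (
    "You are monitoring air traffic control communications. "
    "Given a transcript, reply with exactly one line: "
    "'OK' or 'ANOMALY: <one-sentence reason>'."
)

def build_messages(transcript: str) -> list[dict]:
    """Package a Whisper transcript into a chat request for the judge model."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"ATC transcript:\n{transcript}"},
    ]

def parse_verdict(reply: str) -> tuple[bool, str]:
    """Map the model's one-line reply to (is_anomaly, reason)."""
    reply = reply.strip()
    if reply.upper().startswith("ANOMALY"):
        _, _, reason = reply.partition(":")
        return True, reason.strip()
    return False, ""

# In the real pipeline, `reply` comes back from GPT-4; here we just
# exercise the parser on a hypothetical anomalous reply.
print(parse_verdict("ANOMALY: two aircraft cleared onto the same runway"))
# → (True, 'two aircraft cleared onto the same runway')
```

Keeping the reply format to a single constrained line is what makes the 9-of-10 style evaluation easy to score automatically.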
In addition, I’ve linked GPT-4 to aircraft propulsion design software for aircraft conceptual design and sizing. It works well on small projects, but it is limited by the context window: a condensed version of the user manual has to be fed into every chat, driving up token costs, and then all the project files need to be included as well. When it works, it works well! But it still struggles with spatial reasoning enough that I would not trust it to design an aircraft engine yet.
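The budgeting problem looks roughly like this in practice: every chat carries the condensed manual plus as many project files as fit, and the rest get dropped. A minimal sketch, where the 4-chars-per-token estimate, the budget numbers, and the file names are all illustrative assumptions (a real version would use a proper tokenizer):

```python
# Pack the condensed manual plus as many project files as fit into
# a fixed token budget; files that don't fit are dropped.

def estimate_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token for English."""
    return max(1, len(text) // 4)

def assemble_context(manual: str, files: dict[str, str], budget: int) -> str:
    """Pack the manual plus project files, most-relevant first, under budget."""
    parts = [manual]
    used = estimate_tokens(manual)
    for name, body in files.items():  # assumed ordered most- to least-relevant
        cost = estimate_tokens(body)
        if used + cost > budget:
            break  # remaining files don't fit this chat's window
        parts.append(f"--- {name} ---\n{body}")
        used += cost
    return "\n\n".join(parts)

# Hypothetical project: a small file fits, a large one gets dropped.
manual = "Condensed user manual: " + "x" * 400
files = {"wing.py": "y" * 200, "engine.py": "z" * 2000}
ctx = assemble_context(manual, files, budget=200)
print("wing.py" in ctx, "engine.py" in ctx)  # → True False
```

This is exactly why larger context windows matter here: the manual is fixed overhead paid on every single chat, so the budget left for actual project files shrinks fast.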
I’ve also investigated automating satellite component ground testing, but the model struggled with multi-step tasks and big-picture thinking. I think the best use case for this application is still regular pair programming with ChatGPT or Copilot.
Overall, I haven’t found good ways to enable autonomous behavior, and I keep coming back to “chatbot” as the best solution to many problems. This is unfortunate, as I want to make use of this intelligence, but without strong guardrails and constant human oversight, I haven’t been able to harness it very well yet. Aerospace engineering tasks require a lot of complex decision making, so designing rigid prompt structures is difficult. Additionally, what I think are aerospace problems end up being 90% software engineering problems. The nice part is that GPT-4 is pretty solidly at intern level for many tasks. The trick is finding tasks that (1) require intelligence and (2) can tolerate failure. I call them intern problems. I’m looking forward to GPT-5 and larger context windows so I can promote it from intern to junior engineer.