[Image: Air traffic controllers in a modern tower environment]

Introduction

Over the past few decades, Artificial Intelligence (AI) has emerged as a transformative force, revolutionising industries across the globe. In the aviation sector, AI has made significant strides, propelling the industry towards a new era of innovation, efficiency, and safety. From autonomous flight systems to intelligent maintenance solutions, AI is reshaping the way aircraft operate, navigate, and communicate.

Undeniably, AI has gained a new level of fidelity and sophistication in recent years, as demonstrated by the previous paragraph, which was written entirely by ChatGPT. Although the remainder of this blog is written by me, it illustrates the advancements AI can bring to the field of aviation. In theory, more technologies can be introduced to this human-centric domain as it becomes harder to discern between natural and artificial forms of intelligence. The use of AI in aviation can then be harmonised with human working procedures and behaviours, assisting operational controllers by reducing staff workload and accommodating increased traffic levels whilst simultaneously mitigating safety risks. The shift of Research & Development focus in the industry towards AI-based solutions is not only anticipated, but well underway: at EUROCONTROL alone, projects utilising AI have increased from 21 in 2019 to over 50 in 2023 [1].

However, AI faces unique challenges when coupled with this safety-critical industry. At present there is no established regulatory framework, because the rapid advancement of AI is outpacing governance. This has implications for the development and validation of new solutions that need a route to implementation. Think is striving to drive discussion and progress on the matter, tackling the issues faced in the industry while governing bodies implement their planned regulations.

 

Current Issues

The industry is inherently a human-centric domain because human operators, such as pilots and air traffic controllers, play a critical role in ensuring safe operations. Complex decision-making and problem-solving are required in intricate, dynamic environments, and these are skills that machines have so far been unable to replicate. However, this landscape is changing. Leaps are being made in aviation technology that are only made possible by credible AI solutions, which provide pivotal support in the decision-making process. Yet the lack of a rigorous framework and of general understanding has been inhibiting AI's full potential, as we, as industry stakeholders, are currently unable to determine how far to allow AI to make decisions for us in live operations.

 

Here at Think, we have considerable human factors expertise drawn from multidisciplinary backgrounds. Because of this, we are well practised in identifying pitfalls in the harmonisation between technical and human system components. Coming from a psychological perspective, here are some of the key concerns when combining humans with AI.

In a nutshell, the primary concern with artificially intelligent systems since the growth of deep learning neural networks and generative models is explainability. By this, we mean the ability to understand and interpret the decisions made by technologies. For a system to be understandable, the end-user should be able to follow a linear path of input, process, and output. This is not a straightforward process for AI, as inputs are typically multifaceted, drawing from many sources such as live data and historic datasets of multiple variables. AI typically utilises complex algorithms to then process this data before providing an output. This process can be very difficult, if not impossible, for the end-user to interpret during operations – creating a black box effect. Lack of explainability can create several inherent problems around trust, autonomy, and accountability.
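
To make the black-box effect described above a little more concrete, here is a minimal sketch using synthetic data and an invented ‘conflict alert’ example; the feature names and model are assumptions, not any operational system. It shows one common, partial route to explainability: surfacing which inputs most influence the model’s output, so the end-user at least sees what is driving a decision even if the full calculation remains opaque.

```python
# Minimal sketch: synthetic data and illustrative feature names only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical inputs an alerting tool might draw on.
feature_names = ["closure_rate_kt", "vertical_sep_ft", "time_to_cpa_s", "wind_gust_kt"]
X = rng.normal(size=(1000, len(feature_names)))
# Toy ground truth: an "alert" driven mostly by the first three features.
y = (X[:, 0] - 0.5 * X[:, 1] - X[:, 2] > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Global feature importances give the end-user a coarse view of what drives
# the model's decisions: a first step towards explainability, not a full one.
for name, importance in sorted(zip(feature_names, model.feature_importances_),
                               key=lambda p: -p[1]):
    print(f"{name:>18}: {importance:.2f}")
```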

Low trust arises as an issue when we consider how trust is formed. There are different variations of trust, such as dispositional trust (one’s innate likelihood of trusting something) and situational trust (where one’s trust is limited to specific scenarios, such as tasks where failure carries no detriment), but one of the most prominent is learned trust: the human behaviour of observing, assessing, and evaluating a new entity. Because AI often suffers from a lack of explainability, the end-user cannot verify the reasoning behind its actions and so cannot build trust in the system. Alternatively, one may rely on dispositional trust and trust a system too much, under the stereotype that computer calculations are far superior to those of humans, and disregard one’s own judgement.

Autonomy is an issue with AI in aviation due to concerns regarding the ability of AI systems to operate independently and make critical decisions without human intervention. While autonomy can bring efficiency and enhance capabilities, it raises questions about the system’s reliability, robustness, and adaptability in unpredictable situations, as well as challenges related to safety, ethics, and accountability. Striking the right balance between autonomous AI decision-making and human oversight is crucial to ensure the safe and responsible integration of AI in aviation operations.

With clouded perceptions of trust and autonomy in relation to AI systems, accountability also comes into question. There is no clear delineation of who is responsible for a given task, human or machine. Clear lines of responsibility are required; until governance is introduced on the matter, accountability will continue to be ambiguously assigned, differing from machine to machine and leaving the responsibilities of the end-user ill-defined.

 

Validation & Deployment

The issues we face broaden when the validation and deployment of AI in aviation need to be managed. ATM solutions follow standardised validation frameworks to ensure new concepts are safe, efficient, and effective before they are implemented. Traditional validation frameworks are also designed to be applicable to a wide breadth of concepts. The European Operational Concept Validation Methodology (E-OCVM) is a widely used example of this. As the E-OCVM was co-created by the Managing Director here at Think, Conor Mullan, in 2005 [2], we are well-versed in the characteristics of validation under this framework. While the industry appreciates the changing landscape of emerging technologies and iterates on these frameworks to accommodate change, they typically do not capture the unique, dynamic characteristics and considerations of AI.

Typically, the assessment of human performance, operational performance, and safety are the key drivers for determining the success of a new solution. While these remain important, they are not necessarily the only focus for AI. These systems are often able to attain strong performance metrics as a result of complex calculations (e.g., in a reward-based system), all the while maintaining safety standards as human error is reduced. Yet this does not always account for human-machine interaction or the long-term effects of intelligent assistance (e.g., system reliance, reduced situational awareness). Moreover, traditional frameworks are often designed for deterministic systems with well-defined rules and procedures. AI, on the other hand, may rely on machine learning algorithms which adapt and learn from data, a dynamic and evolving nature that traditional frameworks overlook.
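
To make the point about reward-based metrics more concrete, here is a minimal sketch in which every name and number is an illustrative assumption rather than a real validation result. It shows how a metric-driven reward can rate an AI-assisted run as a clear success while the human-factors signals mentioned above, such as growing reliance on the system, never enter the calculation.

```python
# Minimal sketch: all names and figures below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ScenarioOutcome:
    movements_per_hour: float      # operational performance
    separation_infringements: int  # safety metric
    controller_interventions: int  # proxy for workload/reliance, rarely rewarded

def reward(outcome: ScenarioOutcome) -> float:
    # A typical metric-driven reward: throughput minus a heavy safety penalty.
    # Controller workload and situational awareness do not appear at all.
    return outcome.movements_per_hour - 50.0 * outcome.separation_infringements

baseline = ScenarioOutcome(movements_per_hour=38, separation_infringements=0,
                           controller_interventions=4)
ai_assisted = ScenarioOutcome(movements_per_hour=44, separation_infringements=0,
                              controller_interventions=0)

# The AI-assisted run "wins" on the reward, yet the drop in interventions may
# signal growing reliance on the system, which this score never captures.
print(reward(baseline), reward(ai_assisted))
```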

These concerns, coupled with the inherent lack of explainability, make it difficult to define the scope of the validation process and to determine the appropriate validation criteria required to advance solutions towards implementation, not to mention the ethical and safety concerns they create. Even technologically inclined validation frameworks, such as Technology Readiness Levels, do not address the unique challenges and requirements of machine learning and deep learning algorithms compared to traditional technologies. Data quality, data bias, interpretability and algorithmic robustness are a few examples of the distinct considerations that need to be built into the validation of AI solutions.
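
As a purely illustrative example of the kind of check such criteria might add, the sketch below stratifies model accuracy across traffic conditions using an invented results table. The dataframe, column names and threshold are assumptions for the sake of the example, not part of any standard framework.

```python
# Minimal sketch of a data-bias/robustness check; the data and threshold are invented.
import pandas as pd

# Hypothetical validation log: one row per model prediction.
results = pd.DataFrame({
    "traffic_level": ["low", "low", "medium", "medium", "high", "high", "high"],
    "correct":       [1,     1,     1,        0,        1,      0,      0],
})

# Accuracy per traffic level exposes uneven performance that a single
# headline accuracy figure would hide.
per_condition = results.groupby("traffic_level")["correct"].mean()
print(per_condition)

# A simple robustness criterion: flag any condition that falls well below
# the overall accuracy (the 0.15 margin is chosen purely for illustration).
overall = results["correct"].mean()
flagged = per_condition[per_condition < overall - 0.15]
print("Conditions needing further validation:", list(flagged.index))
```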

When practitioners have a deep-rooted understanding of validation methodologies, as we do here at Think, it is possible to discern the pitfalls that AI poses to the robustness of a methodology and take steps to accommodate them. Nonetheless, departing from a jointly accepted methodology can undermine confidence in the findings. We therefore strive to stay agile on emerging concepts and technologies so that we can provide trustworthy evidence and insights.

Validation frameworks specific to AI, such as the ‘AI Readiness Level’ or the ‘Maturity Model for AI’, provide a good basis for how validation efforts can fully consider intelligent solutions heading for implementation. Nonetheless, a thorough validation process that incorporates the specific needs of the ATM industry is needed. This requires collaboration between industry experts, regulators, and developers to ensure that AI-based solutions meet safety, efficiency, and performance requirements. In the meantime, it should be a common goal for practitioners in the industry to dismantle their preconceptions of validation and ensure consideration is given to the distinct issues that concepts utilising AI pose for the industry.

 

Future Regulation

On 21st April 2021, the European Commission proposed the AI Act [3], which follows a risk-based approach that categorises the restrictions and requirements for technologies (unacceptable risk, high risk, transparency risk, minimal or no risk). As it currently stands, the regulation is intended to be ‘horizontal’ in format [1], applied across all domains, which means it will not provide specific authority over the use of AI in the aviation industry. However, industry specificities are expected to mature over time. The UK is demonstrating similar initiatives in the regulation of AI with the ‘AI Action Plan’ [4]. Unlike the European Commission, the UK Government plans to adopt a less centralised approach, allowing different regulators to take a tailored approach that better reflects the broad range of uses of AI across sectors. Yet as with most legal matters, this is, and will continue to be, a slow process.

The likes of the European Aviation Safety Agency (EASA), the European Organization for Civil Aviation Equipment (EUROCAE), and Single European Sky ATM Research (SESAR) are advancing guidelines on the creation and use of AI [5] [6], alongside other key stakeholders in aviation, to deliver clarity and ensure we as an industry take full advantage of what AI has to offer. This includes AI guidance for basic levels of automation from which the industry can draw advice to ensure all factors of concern are considered in new intelligent concepts. This guidance is expected to grow into a catalogue encompassing all intelligent systems, up to those completing autonomous tasks without supervision. Additionally, EASA plans to introduce certification for AI technologies to provide stakeholders with the necessary trust and guidance on the use of each system on an individual basis.

Whilst this may appear to restrict the current exploration of new technologies, it begins to address stakeholder concerns over the trust, autonomy and accountability of AI, which will ultimately drive progression.

 

Conclusion

The aviation industry is demonstrating a major shift in initiatives to aid human performance and accommodate traffic demands rising above the pre-COVID ceiling. As its fidelity increases, AI is proving to be a suitable means of reaching this goal, yet the guidance needed to ensure these solutions are safe for implementation is currently lacking. Here at Think, we are constantly striving to keep our thinking at the leading edge of aviation so we can provide quality services to our customers.

We have extended our validation expertise to complex AI concepts. This has included extensive involvement in SESAR 2020 Wave 2 solutions that apply machine learning to arrival and departure separation management [7] [8] [9], and validation planning for artificial air traffic controller ‘agents’ that use multiple reward-based algorithms on a probabilistic digital twin of UK airspace [10]. We also recently attended the FlyAI forum, hosted at EUROCONTROL headquarters in Brussels, to speak with industry leaders and the future minds of the field. As the field grows, we at Think are keen to continue diversifying our expertise in AI’s many applications and to help progress these technologies through regulatory assurance and into implementation.

Author: Alfie Fuller, Analyst, Think Research

 


References

[1] Fly AI Forum, EUROCONTROL, April 2023

[2] European Operational Concept Validation Methodology, Version 1, EUROCONTROL, June 2005

[3] Artificial Intelligence Act, European Commission, April 2021

[4] AI Action Plan, UK Government, Department for Digital, Culture, Media & Sport, July 2022

[5] EASA AI Roadmap, 2.0, EASA, May 2023

[6] First usable guidance for Level 1 machine learning applications, Issue 1, EASA, May 2023

[7] Dynamic pairwise separations for arrivals (D-PWS arrivals), PJ.02-W2-14.7, SESAR JU

[8] Dynamic Pairwise Wake Separations for departures based on wake risk monitoring, PJ.02-W2-14.9a, SESAR JU

[9] Dynamic pairwise runway separations based on ground-computed arrival ROT (D-PWS-AROT), PJ.02-W2-14.10, SESAR JU

[10] Project Bluebird, Alan Turing Institute