EASA invites input on second edition of Artificial Intelligence Concept Paper, addressing new concepts

The European Union Aviation Safety Agency (EASA) has released the proposed Issue 2 of its Concept Paper on Artificial Intelligence (AI), covering Level 1 and Level 2 AI applications in aviation. Industry feedback is invited within the next 10 weeks.

This AI Concept Paper provides a framework for the development and deployment of safety-related machine learning applications, aiming to ensure an adequate level of AI assurance and human oversight. It gives initial shape to the concept of ‘human-AI teaming’ for Level 2 AI applications, while building further on the concepts of learning assurance, AI explainability and ethics-based assessment developed in Issue 1 of the Concept Paper, which covered Level 1 (human assistance) applications.

This new revision of the EASA AI Concept Paper builds on and further refines the Level 1 AI guidance (applications providing human augmentation or assistance) published in December 2021, which introduced the foundational concepts of ‘learning assurance’, ‘AI explainability’ and ‘ethics-based assessment’.

The proposed Issue 2 of the EASA AI Concept Paper is another important step in the EASA AI Roadmap towards the safe and responsible adoption of machine learning in the aviation industry.

Machine learning has gained popularity in recent years, with applications ranging from predictive maintenance to image and speech recognition. In the aviation industry, machine learning has the potential to improve safety and efficiency, support sustainable aviation, enhance the passenger experience and reduce costs.

However, the adoption of machine learning in the aviation industry presents unique challenges when it comes to ensuring the safety of operations. In this context, the new revision of the EASA AI Concept Paper provides guidance for the development and deployment of Level 1 and Level 2 AI-based systems for safety-related applications.

Level 2 AI applications are driven by the novel concept of ‘human-AI teaming’ (HAT), which paves the way for the deployment of AI-based systems capable of automatic decision-making under the oversight of a human end user. Such applications trigger the need for novel human factors guidance and design principles to ensure safe human-AI interaction (HAII).

Please use the comment-response document (CRD) to provide feedback to ai@easa.europa.eu.

For more information:

www.easa.europa.eu
