
OPINION: Why critical systems are needed to ensure safety in urban air mobility


Will Keegan is the CTO of Lynx Software Technologies.

Artificial intelligence (AI) is a major buzz term, and interest in it is high: a recent Gartner study found that 48% of enterprise CIOs have already deployed, or plan to deploy this year, AI and machine learning technologies. That interest, however, is at odds with AI’s maturity. For some industries (e.g. customer experience with chatbots), the cost of getting an answer wrong is low enough to justify AI experimentation and deployment. But when organizations run mission-critical AI applications — where the cost of one mistake can be death — AI maturity is a must, and accuracy and safety become the key requirements.

Rushing safety engineering processes, building with new technology that regulators are still grappling with, and trying to generate ROI on aircraft with historical production life cycles of 30 years is not a model of success. For industries like automotive and aerospace, consumer confidence in system safety is essential before this market can grow.

My company has partnered with several Level 4 autonomy platform developers, and we see a common design barrier when organizations create safety nets to mitigate single points of failure in critical functions. The preferred approach to redundancy is to replicate functions on independent sets of hardware (usually three sets, implementing triple modular redundancy).

Size, weight, power, and budget issues aside, replicating functions on identical hardware components can lead to common-mode failures, where the redundant components fail together because they share the same internal design flaw. Safety authorities therefore expect to see redundancy implemented with dissimilar hardware.
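To make the voting idea concrete, here is a minimal sketch of a triple-modular-redundancy majority voter in C. It assumes each redundant lane reports a computed value plus a self-test status; all names and types are illustrative rather than taken from any certified codebase.

```c
#include <stdint.h>
#include <stdbool.h>

/* One redundant lane's output: a value plus a health flag
   (hypothetical structure, for illustration only). */
typedef struct {
    uint32_t value;  /* result computed by this lane     */
    bool     valid;  /* lane self-test / watchdog status */
} lane_output_t;

/* Majority vote across three lanes: writes the agreed value and
   returns true when at least two healthy lanes match, masking a
   single faulty lane; returns false when no majority exists. */
bool tmr_vote(const lane_output_t lanes[3], uint32_t *voted)
{
    for (int i = 0; i < 3; i++) {
        for (int j = i + 1; j < 3; j++) {
            if (lanes[i].valid && lanes[j].valid &&
                lanes[i].value == lanes[j].value) {
                *voted = lanes[i].value;
                return true;
            }
        }
    }
    return false; /* no majority: caller must take fail-safe action */
}
```

Note that such a voter masks a single lane failure, but if all three lanes run on identical hardware with the same design flaw, they can produce the same wrong value and the vote still passes — exactly the common-mode risk that dissimilar hardware addresses.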

The adoption of dynamic architectures is hotly debated in the critical-systems community. Safety-critical systems have generally been built around static methods, and the goal of safety analysis is to examine a system’s behavior to ensure that every behavior is predictable and safe for its environment.

Static systems allow straightforward analysis of system behavior, since system functionality and parameters are fixed in advance and exposed to human and automated static analysis. Letting the fundamental properties of the system change dynamically creates significant analytical hurdles.

The debate around adopting dynamic capabilities centers on the notion that a system can modify its behavior to adapt to unpredictable scenarios during flight. “Limp home mode” is a capability that gains a lot from a dynamic architecture: when a major system failure occurs (e.g. a bird gets caught in a propeller), the remaining parts of the system intelligently redistribute the required functions among the available resources to retain enough functionality to protect human life.
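As a rough illustration of what such redistribution could look like, here is a hypothetical C sketch that sheds lower-criticality functions until the surviving compute capacity can carry the rest. The function table, capacity units, and greedy policy are assumptions made for illustration, not a description of any fielded system.

```c
#include <stdbool.h>
#include <stddef.h>

/* A system function competing for the remaining compute capacity
   (all fields hypothetical). */
typedef struct {
    const char *name;
    int load;      /* capacity units the function consumes */
    bool active;   /* set by the reallocator                */
} function_t;

/* "Limp home" reallocation after a resource failure: the table is
   assumed pre-sorted by descending criticality (flight control
   first, cabin comfort last), and functions are activated in that
   order until the remaining capacity is exhausted. */
void limp_home_reallocate(function_t *funcs, size_t n, int capacity)
{
    int used = 0;
    for (size_t i = 0; i < n; i++) {
        if (used + funcs[i].load <= capacity) {
            funcs[i].active = true;
            used += funcs[i].load;
        } else {
            funcs[i].active = false; /* shed lower-criticality work */
        }
    }
}
```

The analytical hurdle described above follows directly: the safe outcome now depends on which failure occurred and which functions were shed, so the space of behaviors to analyze grows with every dynamic degree of freedom.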

AI is needed because, without human oversight, computers have to decide how to control the machine at multiple levels, including mission-critical ones. The permutations of variables that can affect the state of the system are vast; model-based system control and risk analysis are essential to achieving Level 5 autonomy safely. However, there are hundreds of nuanced artificial neural network designs, all with trade-offs, while in three decades safety standards have come to support only a few programming languages (C, C++, Ada) with a sufficiently solid body of knowledge, clear usage guidance, and a mature ecosystem of tool vendors.

Obviously, the vast world of neural networks needs to be mapped, unpacked, and guided against the goals and principles stated in DO-178C DAL A and ISO 26262 ASIL D. The FAA publication TC-16/4, “Verification of Adaptive Systems,” addresses the issues particularly well. However, we still do not have solid usage guidelines or development process standards for artificial neural networks.

The basis of advanced safety-system analysis in the automotive industry is a massive model that maps passenger relationships to vehicle interfaces and maps vehicle features onto functions, which in turn translate into software distributed across computing components. These models will become much more complex as they take on the dynamics of autonomous platforms. The big questions to start asking of these models now are: a) what is sufficient, and b) what is accurate?

Obviously, we need more than certification alone. How can system validation happen for complex systems when those overseeing it lack knowledge of technical complexities, such as kernel design and memory controllers, that are essential to enforcing architectural properties? Component-level vendors are typically not involved in system validation; instead, they are asked to develop products according to strict documentation, coding, and testing processes, and to provide evidence that they did.

However, a valid concern is whether such evidence can meaningfully demonstrate that a component’s intended behavior is consistent with the system integrator’s intentions.

In the automotive industry, aggressive claims were made about when Level 5 autonomous platforms (no driver, no steering wheel, no environmental limitations) would arrive. The reality has been quite different. The avionics industry is understandably more conservative. I like the framework the European Aviation Safety Agency published last year, which focuses on AI applications that provide “assistance to humans.”

The key elements of this relate to building a “trustworthiness analysis” of the artificial intelligence block, based on:

  • Learning assurance: covering the transition from programming to learning, since existing development assurance methods are not suited to AI/ML learning processes
  • Explainability: providing understandable information on how an AI/ML application arrives at its results
  • Safety risk mitigation: since it is not always possible to open the “AI black box” to the extent necessary, providing guidelines on how the residual safety risk can be addressed to deal with the inherent uncertainty (a sketch of one such mitigation follows this list)
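As one concrete, deliberately simplified illustration of that third element, the C sketch below shows a runtime-assurance pattern: a small, fully analyzable monitor bounds the ML output to a verified envelope and substitutes a conventionally developed fallback command otherwise. The limits, names, and fallback scheme are hypothetical and are not taken from the EASA material.

```c
/* Hypothetical safe envelope for a pitch command, in degrees.
   In a real system these bounds would come from verified
   flight-envelope analysis, not constants chosen here. */
#define PITCH_CMD_MIN (-10.0f)
#define PITCH_CMD_MAX  (15.0f)

/* Runtime monitor: pass the ML-produced command through only when
   it lies inside the envelope; otherwise return the command from a
   conventional (non-ML, independently assured) controller. The ML
   component stays a black box, but its effect on the aircraft is
   bounded by simple, reviewable logic. */
float monitored_pitch_cmd(float ml_cmd, float fallback_cmd)
{
    if (ml_cmd >= PITCH_CMD_MIN && ml_cmd <= PITCH_CMD_MAX) {
        return ml_cmd;
    }
    return fallback_cmd; /* mitigation: bounded, predictable behavior */
}
```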

From this, and from the conversations we’ve had with customers, pragmatism seems to be the word that best describes the industry’s approach. Just as lane-departure detection is becoming relatively common in new vehicles, we will first see AI used in applications where the human remains in control. An example would be a vision-based system that assists in-flight refueling procedures. These use cases, important but peripheral to core system functionality, are great places to build trust in the technology.

From there, we will see AI deployed in increasingly challenging systems with “pass to human operation” waivers. Some analysts have suggested that we may never reach the point of fully autonomous vehicles on our streets. I do think we will reach the milestone of fully autonomous vehicles in the sky, and the “crawl, walk, run” path the industry is currently on is exactly the right one to make that a reality.