Aerospace currently uses automation in flight-management systems, autopilot systems, aircraft protection functions, and similar applications. Considering some of the future applications of autonomy (remote sensing, war fighting, telecommunications, air taxis, package delivery, cargo transport, emergency first response), what are the technical challenges in hardware, software, interoperability, machine learning, adaptive control, and object recognition? What are the issues of integration into the airspace? What are the regulatory challenges, including the need to validate, standardize, and certify? Discussion will address how industry gets from automation to autonomy and what standards will be needed to support technology development and certification.
Should industry engineer the human out of the system? What steps are required to get there, and how could standardization help? Traditional human-in-the-loop design considerations are shifting to accommodate human-on-the-loop (oversight and monitoring) operations. Future designs of fully autonomous systems may take humans out of the loop entirely, and already have to varying extents in military, space, and deep-submarine systems. How can these systems augment human capabilities, for instance in air-traffic management? Discussions will explore the role of standards in defining the current and future roles of humans in autonomous and semi-autonomous systems, and how those roles will shape system design.
As we move toward autonomy of flight and gradually rely on artificial intelligence (AI), processes will be governed less and less by deterministic algorithms, thereby increasing the discretionary power of machines. As self-organizing, self-reproducing neural networks replace deterministic algorithms, adequate controls will be needed, including an understanding of the “ethics of AI systems.”
It is imperative that those who know the industry best are the ones who contribute to determining how AI controls are shaped. Stakeholders will need to bring their expertise together to explore the current state of AI, reach agreement on common terminology, and gain clarity on the expected AI development stages over time and the typical characteristics of each progressive level of AI performance. The panel will discuss how AI can be safely introduced into our systems, how the technology can be systematically developed in a safe environment, and how industry standards will help support this complex new field.
As the aerospace community builds more sophisticated systems, how can we ensure that intelligent automated or autonomous systems attain the level of dependability needed to satisfy safety, reliability, and operational requirements and to build trust? As we build in digital backbones, connectivity, preventive maintenance, decision making, and other complex advancements, trust in the system becomes increasingly critical. This panel will focus on how trust is being considered as our systems become more reliant on big data, AI, and digitization spanning the entire aircraft and spacecraft life cycles.
Businesses and governments are placing increasing emphasis on the use of modeling to make design decisions about systems. The digital enterprise, also known as the digital tapestry, model-based engineering, digital twin, or digital thread, is very much “in the news” and a focus of activities. How much do we trust the fidelity of these models, especially when critical decisions about the design of a system rely on them? How well do models talk to each other? As the systems engineering heuristic states, “The greatest leverage in system architecting is at the interfaces; the greatest dangers are also at the interfaces” (Rechtin, 1991; Raymond, 1989). Discussions will explore the potential need for standards to manage the interfaces between models, and for standards governing the development and calibration of models.