
Search Results

Viewing 1 to 15 of 15
Technical Paper

A Discrete-Event Simulation of the NASA Fuel Production Plant on Mars

2017-09-19
2017-01-2017
The National Aeronautics and Space Administration (NASA) is preparing for a manned mission to Mars to test the sustainment of a human presence on the planet. This research explores the requirements and feasibility of autonomously producing fuel on Mars for a return trip to Earth. As part of NASA’s initiative for a manned trip to Mars, our team’s work creates and analyzes the allocation of resources necessary to deploy a fuel station on this foreign soil. Previous research has addressed concerns with a number of individual components of this mission, such as the power required for the fuel station and tools; however, the interactions between these components, and their effects on the overall requirements for the fuel station, are still unknown to NASA. By creating a baseline discrete-event simulation model in a simulation software environment, the research team has been able to simulate the fuel production process on Mars.
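As a rough illustration of what a baseline discrete-event model of this kind looks like, the sketch below uses the SimPy library to simulate a single reactor filling a methane tank. All rates, capacities, and durations are invented placeholders, not figures from the paper.

```python
# Minimal discrete-event sketch of a Mars fuel production loop using SimPy.
# All rates, capacities, and durations are hypothetical placeholders.
import simpy

PRODUCTION_HOURS = 24 * 30  # simulate 30 days of operation

def sabatier_reactor(env, methane_tank, rate_kg_per_h=0.5, cycle_h=1.0):
    """Produce methane in fixed cycles, assuming power is always available."""
    while True:
        yield env.timeout(cycle_h)                      # one production cycle
        yield methane_tank.put(rate_kg_per_h * cycle_h)  # deposit output

def monitor(env, methane_tank, every_h=24.0):
    """Log the tank level once per simulated day."""
    while True:
        yield env.timeout(every_h)
        print(f"t={env.now:7.1f} h  methane={methane_tank.level:8.1f} kg")

env = simpy.Environment()
tank = simpy.Container(env, capacity=10_000, init=0)  # kg of CH4
env.process(sabatier_reactor(env, tank))
env.process(monitor(env, tank))
env.run(until=PRODUCTION_HOURS)
```

A full model of the kind the paper describes would add more component processes (power plant, electrolyzer, storage transfers) contending for shared resources in the same environment.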
Technical Paper

A Distributed Environment for Analysis of Events Related to Range Safety

2004-11-02
2004-01-3095
This paper features a distributed environment and the steps taken to incorporate the Virtual Range model into the Virtual Test Bed (VTB) infrastructure. The VTB is a prototype of a virtual engineering environment for studying the operations of current and future space vehicles, spaceports, and ranges. The High-Level Architecture (HLA) provides the main integration environment. The VTB/HLA implementation described here represents the different systems that interact in the simulation of a Space Shuttle liftoff. An example implementation demonstrates the collaboration of a simplified version of the Space Shuttle Simulation Model with a simulation of the Launch Scrub Evaluation Model.
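For readers unfamiliar with the HLA pattern the VTB builds on, the following conceptual sketch shows federates exchanging attribute updates through a broker. A plain in-memory class stands in for a real RTI (Run-Time Infrastructure); all class and method names here are illustrative, not the VTB's actual interfaces.

```python
# Conceptual sketch of HLA-style publish/subscribe between federates.
# A toy in-memory broker stands in for a real RTI; names are illustrative.
from collections import defaultdict

class MiniRTI:
    """Routes attribute updates for an object class to its subscribers."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, object_class, federate):
        self.subscribers[object_class].append(federate)

    def update_attributes(self, sender, object_class, attributes):
        for federate in self.subscribers[object_class]:
            if federate is not sender:
                federate.reflect_attributes(object_class, attributes)

class Federate:
    def __init__(self, name, rti):
        self.name, self.rti = name, rti

    def reflect_attributes(self, object_class, attributes):
        print(f"{self.name} received {object_class}: {attributes}")

rti = MiniRTI()
shuttle_sim = Federate("ShuttleSim", rti)
scrub_model = Federate("ScrubEvaluator", rti)
rti.subscribe("Vehicle.State", scrub_model)
rti.update_attributes(shuttle_sim, "Vehicle.State", {"countdown_s": 60, "go": True})
```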
Technical Paper

A Distributed Environment for Spaceports

2004-11-02
2004-01-3094
This paper describes the development of a distributed environment for spaceport simulation modeling, called the Virtual Test Bed (VTB). The VTB results from applying the High-Level Architecture (HLA) together with integration frameworks based on software agents and XML. A distributed environment is needed because of the diverse models required to represent a spaceport. This paper provides two case studies: one on translating a model from its native environment, and the other on integrating real-time weather data.
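The XML side of such an integration framework can be pictured with a short sketch: a model's parameters are described in XML and parsed into a configuration before being handed to the simulation environment. The schema below is invented for illustration and is not the VTB's actual format.

```python
# Sketch of XML-based model integration: a model's inputs are described in
# XML and parsed into a plain dict. The schema is invented for illustration.
import xml.etree.ElementTree as ET

MODEL_XML = """
<model name="WeatherFeed">
  <parameter key="station" value="KSC-39A"/>
  <parameter key="update_interval_s" value="300"/>
</model>
"""

root = ET.fromstring(MODEL_XML)
config = {p.get("key"): p.get("value") for p in root.findall("parameter")}
print(root.get("name"), config)
# -> WeatherFeed {'station': 'KSC-39A', 'update_interval_s': '300'}
```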
Technical Paper

A Distributed Simulation of a Martian Fuel Production Facility

2017-09-19
2017-01-2022
The future of human exploration in the solar system is contingent on the ability to exploit resources in situ to produce mission consumables. Specifically, it has become clear that the success of a manned mission to Mars will likely depend on fuel components created on the Martian surface. While several architectures for an unmanned fuel production surface facility on Mars exist in theory, a simulation of the performance and operation of these architectures has not been created. In this paper, the framework describing a simulation of one such architecture is defined. Within this architecture, each component of the base is implemented as a state machine that can communicate with other base elements as well as with a supervisor. An environment supervisor is also created, which governs low-level aspects of the simulation such as movement and resource distribution, in addition to higher-level aspects such as location selection with respect to operations-specific behavior.
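A minimal sketch of the state-machine-plus-supervisor pattern described here might look as follows; the states, component names, and notification interface are assumptions for illustration, not the paper's actual design.

```python
# Sketch of base components as state machines reporting to a supervisor.
# States, names, and the notify() interface are illustrative assumptions.
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    WORKING = auto()
    FAULT = auto()

class Supervisor:
    def notify(self, component, old, new):
        print(f"{component.name}: {old.name} -> {new.name}")

class Component:
    def __init__(self, name, supervisor):
        self.name, self.supervisor = name, supervisor
        self.state = State.IDLE

    def transition(self, new_state):
        old, self.state = self.state, new_state
        self.supervisor.notify(self, old, new_state)  # report every change

sup = Supervisor()
reactor = Component("SabatierReactor", sup)
reactor.transition(State.WORKING)
reactor.transition(State.FAULT)
```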
Journal Article

A Methodology on Guiding Effectiveness-Focused Training of the Weapon Operator Using Big Data and VC Simulations

2017-09-19
2017-01-2018
Operator training with a weapon in a real-world environment is risky, expensive, time-consuming, and restricted to the given environment. In addition, governments are under intense scrutiny to provide security, yet they must also strive for efficiency and reduced spending. In other words, they must do more with less. Virtual simulation is usually employed to overcome these limitations. Since the operator is trained to maximize weapon effectiveness, effectiveness-focused training can be completed in an economical manner. Unfortunately, the training is conducted in limited scenarios, without objective levels of the training factors an individual operator needs to optimize weapon effectiveness; thus, the training is not as effective as it could be. To overcome this problem, we suggest a methodology for guiding effectiveness-focused training of the weapon operator through usability assessments, big data, and Virtual and Constructive (VC) simulations.
Journal Article

An Architecture for Monitoring and Anomaly Detection for Space Systems

2013-09-17
2013-01-2090
Complex aerospace engineering systems require innovative methods for performance monitoring and anomaly detection. The interface of a real-time data stream to a system for analysis, pattern recognition, and anomaly detection can require distributed system architectures and sophisticated custom programming. This paper presents a case study of a simplified interface between Programmable Logic Controller (PLC) real-time data output, signal processing, cloud computing, and tablet systems. The discussed approach consists of three parts: first, the connectivity of real-time data from PLCs to the signal processing algorithms, using standard communication technologies; second, the interface of legacy routines, such as NASA's Inductive Monitoring System (IMS), with a hybrid signal processing system; third, the connectivity and interaction of the signal processing system with wireless, distributed tablet devices (iPhone/iPad) in a hybrid system configuration using cloud computing.
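To make the second part concrete, the sketch below shows a simplified, IMS-inspired anomaly check: learn an envelope from nominal sensor vectors, then flag live vectors that fall outside it. This is a conceptual stand-in, not NASA's IMS code; the data and margin are made up.

```python
# Simplified, IMS-inspired anomaly check: learn bounds from nominal
# PLC-style sensor vectors, then flag samples outside an expanded envelope.
# Conceptual stand-in only; thresholds and data are made up.
import numpy as np

def learn_envelope(nominal, margin=0.1):
    """Per-channel min/max bounds from nominal data, padded by a margin."""
    lo, hi = nominal.min(axis=0), nominal.max(axis=0)
    pad = margin * (hi - lo)
    return lo - pad, hi + pad

def is_anomalous(sample, lo, hi):
    return bool(np.any(sample < lo) | np.any(sample > hi))

nominal = np.array([[20.1, 1.02], [20.4, 1.05], [19.8, 0.99]])  # temp, pressure
lo, hi = learn_envelope(nominal)
print(is_anomalous(np.array([20.2, 1.03]), lo, hi))  # False: inside envelope
print(is_anomalous(np.array([25.0, 1.40]), lo, hi))  # True: outside envelope
```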
Journal Article

Building Multiple Resolution Modeling Systems Using the High-Level Architecture

2019-09-16
2019-01-1917
The modeling and simulation pyramid in defense states it clearly: multi-level modeling and simulation are required. Models and simulations are often classified by the US Department of Defense into four levels: campaign, mission, engagement, and engineering. Campaign simulation models are applied for evaluation; mission-level simulations are used to experiment with the integration of several macro agents; engagement simulations support the development of engineered systems; and engineering-level simulation models rest on a solid foundation in structural physics and components. Models operating at one level must be able to interact with models at another level. Therefore, the “silver bullet” is clear: a comprehensive framework for Multiple Resolution Modeling (MRM) is needed. In this paper, we discuss our research on how to construct MRM environments.
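One core MRM operation, aggregating engagement-level entities into a mission-level unit so models at different levels can interact, can be sketched as follows; the entity types and fields are illustrative assumptions, not the paper's framework.

```python
# Sketch of aggregating engagement-level entities into a mission-level unit.
# Entity types and fields are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Aircraft:          # engagement-level entity
    x: float
    y: float
    fuel: float

@dataclass
class Squadron:          # mission-level aggregate
    x: float
    y: float
    fuel: float
    count: int

def aggregate(aircraft):
    n = len(aircraft)
    return Squadron(
        x=sum(a.x for a in aircraft) / n,    # centroid position
        y=sum(a.y for a in aircraft) / n,
        fuel=sum(a.fuel for a in aircraft),  # pooled resource
        count=n,
    )

flight = [Aircraft(0, 0, 5.0), Aircraft(2, 2, 4.0)]
print(aggregate(flight))  # Squadron(x=1.0, y=1.0, fuel=9.0, count=2)
```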
Technical Paper

Development of the Multi-Resolution Modeling Environment through Aircraft Scenarios

2018-10-30
2018-01-1923
Multi-Resolution Modeling (MRM) is one of the key technologies for building complex, large-scale simulations from legacy simulators. MRM has been developed continuously, especially in military fields, where it plays a crucial role in describing the battlefield and gathering the desired information efficiently by linking various levels of resolution. The simulation models interact across different local and/or wide area networks using the High Level Architecture (HLA), regardless of their operating systems and hardware. The HLA is a standard architecture developed by the US Department of Defense (DoD) to create interoperability among different types of simulators. MRM implementations therefore depend heavily on interoperability and composability. This paper summarizes the definition of MRM-related terminology and proposes a basic form of MRM system using Commercial Off-The-Shelf (COTS) simulators and the HLA.
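The complementary operation to the aggregation sketched above, disaggregating a mission-level unit when higher resolution is needed, can be pictured as follows. The even split is a deliberate simplification; a real MRM system must keep repeated aggregate/disaggregate transitions consistent.

```python
# Sketch of disaggregating a mission-level aggregate into engagement-level
# entities. The even split is a deliberate simplification; fields are
# illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Squadron:          # mission-level aggregate
    x: float
    y: float
    fuel: float
    count: int

def disaggregate(sq, spacing=1.0):
    """Split the aggregate into individual (x, y, fuel) entity tuples."""
    per_fuel = sq.fuel / sq.count
    return [(sq.x + i * spacing, sq.y, per_fuel) for i in range(sq.count)]

print(disaggregate(Squadron(x=1.0, y=1.0, fuel=9.0, count=2)))
# -> [(1.0, 1.0, 4.5), (2.0, 1.0, 4.5)]
```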
Journal Article

Modeling Space Operations Systems Using SysML as to Enable Anomaly Detection

2015-09-15
2015-01-2388
Although a multitude of anomaly detection and fault isolation programs can be found in the research literature, there does not appear to be any published work on architectural templates that could take advantage of multiple programs and integrate them into the desired systems. More specifically, there is an absence of a methodological process for generating anomaly detection and fault isolation designs, either to embed within new system concepts or to supplement existing schemes. This paper introduces a new approach based on systems engineering and the Systems Modeling Language (SysML). Preliminary concepts of the proposed approach are explained, and a case study is presented.
Journal Article

Simulation and Systems Engineering: Lessons Learned

2019-03-19
2019-01-1331
Aerospace projects live a long time. Around the turn of the century, NASA first began to discuss multi-decadal projects with respect to the tools, methods, infrastructure, and culture necessary to successfully establish outposts and bases on the Moon as well as in adjacent space. Pilot projects were completed, capabilities were developed, and solutions were shared across the Agency. A decade later, the Mars discussion was multi-generational, with planning milestones 50 years in the future. The 1970s Requirements Document and the 1990s System Model are nowhere near suitable for the planning, development, integration, and operation of multi-national, highly complex, incredibly expensive development efforts planned to outlast not only the careers of the developers but those of their children as well. Simulation in its different forms has become very important for these multi-decadal projects. The challenge will be to devise formats and views that can stand the test of time.
Technical Paper

Stitching The Digital Thread, Creating The Product Digital Quilt

2023-03-07
2023-01-1016
The making of a quilt is an interesting process. Historically, a quilt is a canvas of work made from old pieces of cloth cut into squares, or whatever shapes make a nice connected pattern, and then stitched together. The quilt could be made of random pieces that are not related to each other. More commonly in recent years, a quilt is made of different patches that are connected and laid out in a special way to tell a story. Not only does it portray a story assembled in a certain sequence, it also stitches the pieces into a complete narrative: a story one can understand just by looking at the quilt spread out and unfolded. Much like the making of a quilt with a story to tell, a Product Digital Quilt tells the story of a product. The Product Digital Quilt replaces the conventional way of telling a product story, the traditional method of serially connecting multiple product life cycle silos together.
Journal Article

The Semantic Web and Space Operations

2011-10-18
2011-01-2506
In this paper, we introduce the use of ontologies to implement the information developed and organized by resource planning tools into standard project management documents covering integrated cost, resource modeling and analysis, and visualization. The basic upper ontology used for NASA Space Operations is explained, and the results obtained are discussed. This ontology-centered approach seeks tighter connections between software, hardware, and systems engineering.
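A minimal sketch of the ontology-centered idea, using the rdflib library: project resources become RDF triples so that cost and scheduling data share one vocabulary. The namespace, classes, and properties below are invented examples, not the paper's actual NASA Space Operations ontology.

```python
# Sketch of representing project management resources as RDF triples.
# Namespace, classes, and properties are invented examples.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

OPS = Namespace("http://example.org/space-ops#")
g = Graph()
g.bind("ops", OPS)

g.add((OPS.Task, RDF.type, RDFS.Class))                        # define a class
g.add((OPS.padRefurbishment, RDF.type, OPS.Task))              # an instance
g.add((OPS.padRefurbishment, OPS.estimatedCostUSD, Literal(1_500_000)))
g.add((OPS.padRefurbishment, OPS.requiresResource, OPS.craneCrew))

print(g.serialize(format="turtle"))  # one vocabulary, queryable with SPARQL
```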
Journal Article

Utilizing Discrete Event Simulation for Schedule Analysis: Processes and Lessons Learned from NASA's GOPD Integrated Timeline Model

2015-09-15
2015-01-2397
In planning, simulation models create microcosms: small universes that operate based on assumed principles. While this can be powerful, the information a model can provide is limited by the assumptions made and its designed operation. When performing schedule planning and analysis, modelers are often provided with timelines representing project tasks, their relationships, and estimates of durations, resource requirements, and so on. These timelines can be created with programs such as Microsoft Excel or Microsoft Project. Such timelines have several important attributes: they represent a nominal flow (meaning they do not represent stochastic processes), and they are not necessarily governed by dates or subject to a calendar. Attributes such as these become important in project planning, since timelines often serve as the basis for creating schedules.
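The nominal-versus-stochastic distinction can be illustrated with a short sketch that re-runs a fixed-precedence timeline with sampled durations; the tasks and triangular-distribution bounds are invented examples, not the GOPD model.

```python
# Sketch: turn a nominal timeline (fixed durations, fixed precedence) into a
# stochastic run by sampling durations. Tasks and bounds are invented.
import random

TASKS = [  # (name, predecessors, (min_h, mode_h, max_h))
    ("stack_vehicle", [], (40, 48, 72)),
    ("integrated_test", ["stack_vehicle"], (20, 24, 40)),
    ("rollout", ["integrated_test"], (8, 10, 16)),
]

def run_once():
    finish = {}
    for name, preds, (lo, mode, hi) in TASKS:  # list is topologically ordered
        start = max((finish[p] for p in preds), default=0.0)
        finish[name] = start + random.triangular(lo, hi, mode)
    return finish["rollout"]

totals = sorted(run_once() for _ in range(1000))
print(f"median completion: {totals[500]:.1f} h, "
      f"90th percentile: {totals[900]:.1f} h")
```

Re-running the nominal flow this way shows how far a realized schedule can drift from the deterministic timeline that planners start from.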
Journal Article

Utilizing Team Productivity Models in the Selection of Space Exploration Teams

2013-09-17
2013-01-2082
The term “productivity” has all too often become a buzzword, ultimately diminishing its perceived importance. However, productivity is the major concern of any team, and it must therefore be defined to gain an appropriate understanding of how a system is actually working. Here, productivity means the level of contribution to the throughput of a system, as defined in the Theory of Constraints. In the field of space exploration, the throughput is the number of mission milestones accomplished, as well as potential survival during extreme events (due to failures or other unplanned occurrences). For a time, tasks were accomplished by expert individuals (e.g., an astronaut), but recently team structures have become the norm. It is clear that with increased mission complexity, “no single entity can have complete knowledge of or the abilities to handle all matters” [10].
Journal Article

Weapon Combat Effectiveness Analytics Using Big Data and Simulations: A Literature Review

2019-03-19
2019-01-1365
Weapon Combat Effectiveness (WCE) analytics is expensive, time-consuming, and dangerous in the real world, because data must be created from real operations involving many people and weapons in the actual environment. Modeling and Simulation (M&S) techniques are widely used to overcome these limitations. Although the era of big data has arrived and achieved a great deal of success in a variety of fields, most WCE research using Defense Modeling and Simulation (DM&S) techniques has relied on many assumptions and limited scenarios, without the help of big data technologies. Furthermore, WCE analytics using previous methodologies cannot avoid biased results. This paper reviews and combines the basic knowledge for a new WCE analytics methodology that uses big data and M&S to overcome these bias problems, and then provides a general overview of WCE, DM&S, and big data.