Studying Simulations with Distributed Cognition
Author: Jonas Rybing
Publisher: Linköping University Electronic Press
ISBN: 9176853489
Category :
Languages : en
Pages : 115
Book Description
Simulations are frequently used techniques for training, performance assessment, and prediction of future outcomes. In this thesis, the term “human-centered simulation” refers to any simulation in which humans and human cognition are integral to the simulation’s function and purpose (e.g., simulation-based training). A general problem for human-centered simulations is to capture the cognitive processes and activities of the target situation (i.e., the real-world task) and recreate them accurately in the simulation. The prevalent view within the simulation research community is that cognition consists of the internal, decontextualized computational processes of individuals. However, contemporary theories of cognition emphasize the importance of the external environment, the use of tools, and social and cultural factors in cognitive practice. Consequently, there is a need for research on how such contemporary perspectives can be used to describe human-centered simulations, re-interpret theoretical constructs of such simulations, and direct how simulations should be modeled, designed, and evaluated. This thesis adopts distributed cognition as a framework for studying human-centered simulations. Training and assessment of emergency medical management in a Swedish context using the Emergo Train System (ETS) simulator was adopted as a case study. ETS simulations were studied and analyzed using the distributed cognition for teamwork (DiCoT) methodology with the goal of understanding, evaluating, and testing the validity of the ETS simulator. Moreover, to explore distributed cognition as a basis for simulator design, a digital re-design of ETS (DIGEMERGO) was developed based on the DiCoT analysis. The aim of the DIGEMERGO system was to retain the core distributed cognitive features of ETS, to increase validity and outcome reliability, and to provide a digital platform for emergency medical studies. DIGEMERGO was evaluated in three separate studies: first, a usefulness, usability, and face-validation study that involved subject-matter experts; second, a comparative validation study using an expert-novice group comparison; and finally, a transfer-of-training study based on self-efficacy and management performance. Overall, the results showed that DIGEMERGO was perceived as a useful, immersive, and promising simulator – with mixed evidence for validity – that demonstrated increased general self-efficacy and management performance following simulation exercises. This thesis demonstrates that distributed cognition, using DiCoT, is a useful framework for understanding, designing, and evaluating simulated environments. In addition, the thesis conceptualizes and re-interprets central constructs of human-centered simulation in terms of distributed cognition. In doing so, the thesis shows how distributed cognitive processes relate to the validity, fidelity, functionality, and usefulness of human-centered simulations. This thesis thus provides a new understanding of human-centered simulations that is grounded in distributed cognition theory.
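The second DIGEMERGO study validates the simulator by comparing expert and novice groups: if the simulator measures what it claims to, domain experts should outperform novices on it. Below is a minimal sketch of how such an expert-novice comparison might be analyzed; the scores, group sizes, and the choice of a one-sided Mann-Whitney U test are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(seed=1)
# Hypothetical management-performance scores (0-100) for each group.
experts = rng.normal(loc=78, scale=8, size=15)
novices = rng.normal(loc=65, scale=10, size=15)

# If the simulator is construct-valid, experts should score higher;
# a one-sided Mann-Whitney U test checks exactly that.
stat, p = mannwhitneyu(experts, novices, alternative="greater")
print(f"U = {stat:.1f}, one-sided p = {p:.4f}")
```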
Cognition in the Wild
Author: Edwin Hutchins
Publisher: MIT Press
ISBN: 0262581469
Category : Psychology
Languages : en
Pages : 403
Book Description
Edwin Hutchins combines his background as an anthropologist and an open ocean racing sailor and navigator in this account of how anthropological methods can be combined with cognitive theory to produce a new reading of cognitive science. His theoretical insights are grounded in an extended analysis of ship navigation—its computational basis, its historical roots, its social organization, and the details of its implementation in actual practice aboard large ships. The result is an unusual interdisciplinary approach to cognition in culturally constituted activities outside the laboratory—"in the wild." Hutchins examines a set of phenomena that have fallen through the cracks between the established disciplines of psychology and anthropology, bringing to light a new set of relationships between culture and cognition. The standard view is that culture affects the cognition of individuals. Hutchins argues instead that cultural activity systems have cognitive properties of their own that are different from the cognitive properties of the individuals who participate in them. Each action for bringing a large naval vessel into port, for example, is informed by culture: the navigation team can be seen as a cognitive and computational system. Introducing Navy life and work on the bridge, Hutchins makes a clear distinction between the cognitive properties of an individual and the cognitive properties of a system. In striking contrast to the usual laboratory tasks of research in cognitive science, he applies the principal metaphor of cognitive science—cognition as computation (adopting David Marr's paradigm)—to the navigation task. After comparing modern Western navigation with the method practiced in Micronesia, Hutchins explores the computational and cognitive properties of systems that are larger than an individual. He then turns to an analysis of learning or change in the organization of cognitive systems at several scales. Hutchins's conclusion illustrates the costs of ignoring the cultural nature of cognition, pointing to the ways in which contemporary cognitive science can be transformed by new meanings and interpretations. A Bradford Book
Distributed cognition in learning and behavioral change – based on human and artificial intelligence
Author: Dietrich Albert
Publisher: Frontiers Media SA
ISBN: 283254231X
Category : Science
Languages : en
Pages : 140
Book Description
Foundations and Theoretical Perspectives of Distributed Team Cognition
Author: Michael McNeese
Publisher: CRC Press
ISBN: 042986177X
Category : Computers
Languages : en
Pages : 249
Book Description
The interwoven streams of team cognition and distributed cognition, fermenting together, have yielded new nuances of exploration that remain relevant for a theoretical understanding of team phenomena. Foundations and Theoretical Perspectives of Distributed Team Cognition looks at fundamentals, theoretical concepts, and how theory informs perspectives of thinking for distributed team cognition. The chapters yield a broad understanding of the nature of diverse thinking and insights into technologies, foundations, and theoretical perspectives of distributed team cognition. Features:
- Generates historical patterns and significance that compose developmental trajectories
- Explains multiple perspectives that incorporate an interdisciplinary understanding that specifies diverse theories
- Identifies and develops particular challenges resident within team simulation studies and then illustrates research frameworks
- Highlights and reviews how team simulations are used to produce dynamic experimental results
- Investigates and studies research variables within distributed team cognition
Health Care Comes Home
Author: National Research Council
Publisher: National Academies Press
ISBN: 0309212405
Category : Medical
Languages : en
Pages : 202
Book Description
In the United States, health care devices, technologies, and practices are rapidly moving into the home. The factors driving this migration include the costs of health care, the growing numbers of older adults, the increasing prevalence of chronic conditions and diseases and improved survival rates for people with those conditions and diseases, and a wide range of technological innovations. The health care that results varies considerably in its safety, effectiveness, and efficiency, as well as in its quality and cost. Health Care Comes Home reviews the state of current knowledge and practice about many aspects of health care in residential settings and explores the short- and long-term effects of emerging trends and technologies. By evaluating existing systems, the book identifies design problems and imbalances between technological system demands and the capabilities of users. Health Care Comes Home recommends critical steps to improve health care in the home. The book's recommendations cover the regulation of health care technologies, proper training and preparation for people who provide in-home care, and how existing housing can be modified and new accessible housing can be better designed for residential health care. The book also identifies knowledge gaps in the field and how these can be addressed through research and development initiatives. Health Care Comes Home lays the foundation for the integration of human health factors with the design and implementation of home health care devices, technologies, and practices. The book describes ways in which the Agency for Healthcare Research and Quality (AHRQ), the U.S. Food and Drug Administration (FDA), and federal housing agencies can collaborate to improve the quality of health care at home. It is also a valuable resource for residential health care providers and caregivers.
Distributed Moving Base Driving Simulators
Author: Anders Andersson
Publisher: Linköping University Electronic Press
ISBN: 9176850900
Category :
Languages : en
Pages : 60
Book Description
Development of new functionality and smart systems for different types of vehicles is accelerating with the advent of emerging technologies such as connected and autonomous vehicles. To ensure that these new systems and functions work as intended, flexible and credible evaluation tools are necessary. One example of such a tool is a driving simulator, which can be used for testing new and existing vehicle concepts and driver support systems. When a driver in a driving simulator operates it the same way as in actual traffic, the simulator provides a realistic evaluation of whatever is under investigation. Two advantages of a driving simulator are that (1) the same situation can be repeated several times over a short period of time, and (2) driver reactions can be studied during dangerous situations that could result in serious injuries if they occurred in the real world. An important component of a driving simulator is the vehicle model, i.e., the model that describes how the vehicle reacts to its surroundings and driver inputs. To increase simulator realism or computational performance, the vehicle model can be divided into subsystems that run on different computers connected in a network. A subsystem can also be replaced with hardware using so-called hardware-in-the-loop simulation, and can then be connected to the rest of the vehicle model through a specified interface. The technique of dividing a model into smaller subsystems running on separate nodes that communicate through a network is called distributed simulation. This thesis investigates whether and how a distributed simulator design might facilitate the maintenance and new development required for a driving simulator to keep up with the increasing pace of vehicle development. For this purpose, three different distributed simulator solutions have been designed, built, and analyzed with the aim of constructing distributed simulators, including external hardware, where the simulation achieves the same degree of realism as a traditional driving simulator. One of these simulator solutions has been used to create a parameterized powertrain model that can be configured to represent any of a number of different vehicles. Furthermore, the driver's driving task is combined with the powertrain model to monitor deviations. After the powertrain model was created, subsystems from a simulator solution and the powertrain model were transferred to a Modelica environment. The goal is to create a framework for requirement testing that guarantees sufficient realism, including for distributed driving simulations. The results show that the distributed simulators we have developed work well overall, with satisfactory performance. It is important to manage the vehicle model and how it is connected to a distributed system. In the distributed driveline simulator setup, the network delays were so small that they could be ignored, i.e., they did not affect the driving experience. However, if the delays are gradually increased, a driver in the distributed simulator will change their behavior. The impact of communication latency on a distributed simulator also depends on the application: different uses of the simulator, i.e., different simulator studies, will place different demands on it. We believe that many simulator studies could be performed using a distributed setup. One issue is how modifications to the system affect the vehicle model and the desired behavior.
This leads to the need for a methodology for managing model requirements. To detect model deviations in the simulator environment, a monitoring aid has been implemented that notifies test managers when a model behaves unexpectedly or is driven outside its validated region. Since the availability of distributed laboratory equipment can be limited, the possibility of using Modelica (an equation-based, object-oriented modeling language) for simulating subsystems is also examined. The Modelica implementation has also been extended with requirements management, and a framework is proposed for automatically evaluating the model in a tool.
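The monitoring aid described above can be pictured as a per-step check of simulator signals against the model's validated envelope. The sketch below is a minimal illustration of that idea in Python; the signal names, bounds, and limits are invented for the example and are not taken from the thesis.

```python
from dataclasses import dataclass

@dataclass
class ValidatedRegion:
    """Hypothetical validated envelope for a vehicle model."""
    max_speed_mps: float = 40.0      # validated up to ~144 km/h
    max_steer_rate_rps: float = 1.5  # max steering rate, rad/s

def check_sample(region: ValidatedRegion, speed_mps: float,
                 steer_rate_rps: float) -> list[str]:
    """Check one simulation step; return warnings (empty list = in range)."""
    warnings = []
    if speed_mps > region.max_speed_mps:
        warnings.append(
            f"speed {speed_mps:.1f} m/s exceeds validated "
            f"{region.max_speed_mps:.1f} m/s")
    if abs(steer_rate_rps) > region.max_steer_rate_rps:
        warnings.append(
            f"steering rate {steer_rate_rps:.2f} rad/s outside validated "
            f"±{region.max_steer_rate_rps:.2f} rad/s")
    return warnings

# Usage: call once per simulation step; notify the test manager on any hit.
region = ValidatedRegion()
for msg in check_sample(region, speed_mps=43.2, steer_rate_rps=0.4):
    print("MONITOR:", msg)
```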
Simulation Training through the Lens of Experience and Activity Analysis
Author: Simon Flandin
Publisher: Springer Nature
ISBN: 303089567X
Category : Education
Languages : en
Pages : 310
Book Description
This book offers various ways in which analyzing professional experience and activity in simulation training makes it possible to describe practice-based learning affordances and processes. Research has been conducted in various simulation programs in the domains of healthcare, victim rescue, and population protection, involving healthcare workers, firemen, policemen, servicemen, and civil security leaders. "Work-as-done" (or "training-as-done") in simulation has been analyzed with ergonomics, occupational psychology, and vocational training approaches. The authors describe and discuss theoretical, methodological, and/or practical issues related to practitioner experience and activity in simulation training. The book also provides evidence on the conditions under which lived experience in simulation can foster or hinder learning, and derives appropriate orientations for simulation design and implementation.
Applied Interdisciplinary Theory in Health Informatics
Author: P. Scott
Publisher: IOS Press
ISBN: 1614999910
Category : Medical
Languages : en
Pages : 242
Book Description
The American Medical Informatics Association (AMIA) defines the term biomedical informatics (BMI) as “the interdisciplinary field that studies and pursues the effective uses of biomedical data, information, and knowledge for scientific inquiry, problem solving and decision making, motivated by efforts to improve human health.” This book, Applied Interdisciplinary Theory in Health Informatics: A Knowledge Base for Practitioners, explores the theories that have been applied in health informatics and the differences they have made. The editors, all proponents of evidence-based health informatics, came together within the European Federation for Medical Informatics (EFMI) Working Group on Health IT Evaluation and the International Medical Informatics Association (IMIA) Working Group on Technology Assessment and Quality Development. The purpose of the book, which has a foreword by Charles Friedman, is to advance the agenda of evidence-based health informatics by emphasizing theory-informed work aimed at enriching the understanding of this uniquely complex field. The book takes the AMIA definition as particularly helpful in its articulation of the three foundational domains of health informatics – health science, information science, and social science – and their various overlaps, and this model has been used to structure the content of the book around the major subject areas. The book discusses some of the most important and commonly used theories relevant to health informatics, and constitutes a first iteration of a consolidated knowledge base that will advance the science of the field.
System-Level Design of GPU-Based Embedded Systems
Author: Arian Maghazeh
Publisher: Linköping University Electronic Press
ISBN: 9176851753
Category :
Languages : en
Pages : 81
Book Description
Modern embedded systems deploy several hardware accelerators, in a heterogeneous manner, to deliver high-performance computing. Among such devices, graphics processing units (GPUs) have earned a prominent position by virtue of their immense computing power. However, a system design that relies on the sheer throughput of GPUs is often incapable of satisfying the strict power- and time-related constraints faced by embedded systems. This thesis presents several system-level software techniques to optimize the design of GPU-based embedded systems for various graphics and non-graphics applications. Compared to conventional application-level optimizations, the system-wide view of our proposed techniques brings several advantages: first, it allows the limitations and requirements of the various system parts to be fully incorporated in the design process; second, it can unveil optimization opportunities by exposing the information flow between the processing components; and third, the techniques are generally applicable to a wide range of applications with similar characteristics. In addition, multiple system-level techniques can be combined with each other, or with application-level techniques, to further improve performance. We begin by studying some of the unique attributes of GPU-based embedded systems and discussing several factors that distinguish the design of these systems from that of conventional high-end GPU-based systems. We then proceed to develop two techniques that address, from different perspectives, an important challenge in the design of GPU-based embedded systems. The challenge arises from the fact that GPUs require a large amount of workload to be present at runtime in order to deliver high throughput. However, for some embedded applications, collecting large batches of input data requires an unacceptable waiting time, prompting a trade-off between throughput and latency. We also develop an optimization technique for GPU-based applications that addresses the memory bottleneck by utilizing the GPU L2 cache to shorten data access time. Moreover, in the area of graphics applications, and in particular with a focus on mobile games, we propose a power management scheme that reduces GPU power consumption by dynamically adjusting the display resolution while considering the user's visual perception at various resolutions. We also discuss the collective impact of the proposed techniques in tackling the design challenges of emerging complex systems. The proposed techniques are assessed through real-life experiments on GPU-based hardware platforms, which demonstrate the superior performance of our approaches compared to state-of-the-art techniques.
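The throughput-latency trade-off described above can be made concrete with a simple back-of-the-envelope model: a larger batch amortizes the fixed kernel cost (raising throughput) but forces early-arriving inputs to wait longer for the batch to fill (raising latency). The sketch below illustrates this; the arrival rate and kernel cost figures are invented for the example, not measurements from the thesis.

```python
# Back-of-the-envelope model of the GPU batching trade-off.
# All numbers below are illustrative assumptions, not thesis measurements.
ARRIVAL_RATE = 1000.0       # input items arriving per second
KERNEL_FIXED = 0.002        # fixed kernel launch/setup cost per batch (s)
KERNEL_PER_ITEM = 0.00005   # marginal GPU time per batched item (s)

def latency_and_throughput(batch_size: int) -> tuple[float, float]:
    """Worst-case latency (s) and GPU throughput (items/s) for one batch size."""
    wait = batch_size / ARRIVAL_RATE                  # time to fill the batch
    compute = KERNEL_FIXED + KERNEL_PER_ITEM * batch_size
    latency = wait + compute                          # first arrival waits longest
    throughput = batch_size / compute                 # amortized processing rate
    return latency, throughput

for b in (1, 16, 128, 1024):
    lat, thr = latency_and_throughput(b)
    print(f"batch={b:5d}  latency={lat * 1e3:8.2f} ms  "
          f"throughput={thr:10.0f} items/s")
```

Running this shows throughput climbing with batch size while worst-case latency grows almost linearly, which is exactly the trade-off the thesis's techniques aim to manage.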
Machine Learning-Based Bug Handling in Large-Scale Software Development
Author: Leif Jonsson
Publisher: Linköping University Electronic Press
ISBN: 9176853063
Category :
Languages : en
Pages : 149
Book Description
This thesis investigates the possibilities of automating parts of the bug handling process in large-scale software development organizations. The bug handling process is a large part of the mostly manual, and very costly, maintenance of software systems. Automating parts of this time-consuming and laborious process could save large amounts of time and effort otherwise wasted on dealing with bug reports. In this thesis we focus on two aspects of the bug handling process: bug assignment and fault localization. Bug assignment is the process of assigning a newly registered bug report to a design team or developer. Fault localization is the process of finding where in a software architecture the fault causing the bug report should be solved. The main reason these tasks are not automated is that they are considered hard to automate, requiring human expertise and creativity. This thesis examines the possibility of using machine learning techniques for automating at least parts of these processes. We call these automated techniques Automated Bug Assignment (ABA) and Automatic Fault Localization (AFL), respectively. We treat both of these problems as classification problems. In ABA, the classes are the design teams in the development organization. In AFL, the classes consist of the software components in the software architecture. We focus on a high-level fault localization that is suitable to integrate into the initial support flow of large software development organizations. The thesis consists of six papers that investigate different aspects of the AFL and ABA problems. The first two papers are empirical and exploratory in nature, examining the ABA problem using existing machine learning techniques but introducing ensembles into the ABA context. In the first paper we show that, as in many other contexts, ensembles such as the stacked generalizer (or stacking) improve classification accuracy compared to individual classifiers when evaluated using cross-fold validation. The second paper thoroughly explores many aspects of the ABA problem in the context of stacking, such as training set size, age of bug reports, and different types of evaluation. The second paper also expands upon the first in the number of industry bug reports considered: roughly 50,000, from two large-scale industrial software development contexts. It is still, as far as we are aware, the largest study on real industry data on this topic to date. The third and sixth papers are theoretical, improving inference in a now-classic machine learning technique for topic modeling called Latent Dirichlet Allocation (LDA). We show that, unlike the currently dominating approximate approaches, we can do parallel inference in the LDA model with a mathematically correct algorithm, without sacrificing efficiency or speed. The approaches are evaluated on standard research datasets, measuring various aspects such as sampling efficiency and execution time. Paper four, also theoretical, builds upon the LDA model and introduces a novel supervised Bayesian classification model that we call DOLDA. The DOLDA model deals with textual content as well as structured numeric and nominal inputs in the same model. The approach is evaluated on a new dataset extracted from IMDb, which contains both nominal and textual data. The model is evaluated in two ways: first, by accuracy, using cross-fold validation; and second, by comparing the simplicity of the final model with that of other approaches.
In paper five we empirically study the performance, in terms of prediction accuracy, of the DOLDA model applied to the AFL problem. The DOLDA model was designed with the AFL problem in mind, since the problem has exactly this structure: a mix of nominal and numeric inputs in combination with unstructured text. We show that our DOLDA model exhibits many desirable properties, among them interpretability, which the research community has identified as missing in current models for AFL.
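Since ABA is treated as text classification with design teams as the classes, the stacking approach from the first two papers can be sketched with off-the-shelf tooling. The example below is a minimal, hypothetical illustration using scikit-learn; the bug reports, team labels, and choice of base learners are invented placeholders, not the thesis's actual setup (which used roughly 50,000 industry reports).

```python
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy bug reports and design-team labels (two per class so cv=2 works).
reports = [
    "crash in media player after codec upgrade",
    "media service drops frames on seek",
    "login times out under heavy load",
    "backend queue overflows during peak traffic",
    "UI freezes when rotating the screen",
    "button labels truncated in settings view",
    "memory leak in network stack",
    "packet loss when switching access points",
]
teams = ["media", "media", "backend", "backend",
         "ui", "ui", "network", "network"]

base_learners = [
    ("nb", MultinomialNB()),
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
]
# Stacked generalizer: out-of-fold predictions from the base learners
# become the features for a logistic-regression meta-classifier.
model = make_pipeline(
    TfidfVectorizer(),
    StackingClassifier(estimators=base_learners,
                       final_estimator=LogisticRegression(),
                       cv=2),
)
model.fit(reports, teams)
print(model.predict(["timeout when logging in"]))  # expected: ['backend']
```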