Advanced search in Research products
The following results are related to the NEANIAS Space Research Community. Interested in more results? Visit OpenAIRE - Explore.
7,408 Research products, page 1 of 741

  • Authors: 
    Tadeja, Slawomir;
    Publisher: University of Cambridge
    Country: United Kingdom

    Recent advancements and technological breakthroughs in the development of so-called immersive interfaces, such as augmented (AR), mixed (MR), and virtual reality (VR), coupled with the growing mass-market adoption of such devices, have started to attract attention from academia and industry alike. Of these technologies, VR offers the most mature option in terms of both hardware and software, as well as the widest range of off-the-shelf offerings. VR is a term used interchangeably to denote both head-mounted displays (HMDs) and the fully immersive, bespoke 3D environments these devices transport their users to. With modern devices, developers can leverage a range of interaction modalities, including visual, audio, and even haptic feedback, in the creation of these virtual worlds. With such a rich interaction space, it is natural to think of VR as a well-suited environment for interactive visualisation and analytical reasoning over complex multidimensional data. Research in visual analytics (VA) combines these two themes; spanning the last decade and a half, it has produced a number of findings, including a range of new, advanced, and effective visualisation and analysis tools for ever more complex, noisier, and larger data sets. Furthermore, the extension of this research to immersive interfaces has spun off a new field of research: immersive analytics (IA). Immersive analytics leverages the potential of immersive interfaces to aid the user in swift and effective data analysis. Some of the most promising industrial application domains of such immersive interfaces are various branches of engineering, including aerospace design and civil engineering. The range of potential applications is vast and growing as new stakeholders adopt these immersive tools. However, the use of these technologies brings its own challenges.
    One such difficulty is the design of appropriate interaction techniques. There is no single optimal choice; rather, the choice depends on the available hardware, the user's prior experience, the task at hand, and the nature of the dataset. To this end, my PhD work has focused on designing and analysing various interactive, VR-based immersive systems for engineering visual analytics. One of the key elements of such an immersive system is the selection of an adequate interaction method. In a series of both qualitative and quantitative studies, I explored the potential of various interaction techniques to support the user in swift and effective data analysis. I investigated the feasibility of hand-held controllers, gaze-tracking, and hand-tracking input methods, used alone or in combination, in various challenging use cases and scenarios. For instance, I developed and verified the usability and effectiveness of the AeroVR system for aerospace design in VR. This research allowed me to trim the very large design space of such systems, which had not been sufficiently explored thus far. Building on this work, I designed, developed, and tested a system for digital twin assessment in aerospace that coupled gaze-tracking with hand-tracking, achieved via an additional sensor attached to the front of the VR headset, removing the need for the user to hold a controller. Analysis of the results of a qualitative study with domain experts allowed me to distill and propose design implications for developing similar systems. Furthermore, I worked towards an effective VR-based visualisation of complex, multidimensional abstract datasets, developing and evaluating an immersive version of the well-known Parallel Coordinates Plots visualisation technique (IPCP).
    The results of a series of qualitative user studies yielded a list of design suggestions for IPCP, as well as tentative evidence that IPCP can be an effective tool for multidimensional data analysis. Lastly, I also worked on the design, development, and verification of a system allowing its users to capture information while conducting engineering surveys in VR. Finally, conducting a meaningful evaluation of immersive analytics interfaces remains an open problem. Traditional A/B comparisons in controlled experiments are difficult and often infeasible, as the aim of immersive analytics is to provide users with new insights into their data rather than to optimise readily quantifiable factors. To this end, I developed a generative process for synthesising clustered datasets for VR analytics experiments, to be used in interface evaluation. I validated this approach by designing and carrying out two user studies. Statistical analysis of the gathered data revealed that the generative process did indeed produce datasets that can be used in experiments without the datasets themselves being the dominant contributor to variability between conditions. Funding: Engineering and Physical Sciences Research Council (EPSRC-1788814); Trinity Hall and Cambridge Commonwealth, European & International Trust; Cambridge Philosophical Society.
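
The abstract above mentions a generative process for synthesising clustered datasets for VR analytics experiments, but does not describe the procedure itself. As a rough illustration of the general idea only (not the thesis's actual algorithm, and with hypothetical parameter names), such a generator can be sketched in Python: draw a centre per cluster, then scatter Gaussian points around each centre, with the separation/spread ratio controlling how distinct the clusters are.

```python
import random

def synthesise_clusters(n_clusters=3, points_per_cluster=50, dims=3,
                        separation=10.0, spread=1.0, seed=42):
    # Draw one random centre per cluster, then scatter Gaussian points
    # around it. A fixed seed makes the dataset reproducible across
    # experimental conditions.
    rng = random.Random(seed)
    points, labels = [], []
    for c in range(n_clusters):
        centre = [rng.uniform(-separation, separation) for _ in range(dims)]
        for _ in range(points_per_cluster):
            points.append([rng.gauss(mu, spread) for mu in centre])
            labels.append(c)
    return points, labels

points, labels = synthesise_clusters()
```

Holding the generator's parameters fixed while varying only the seed is one way to obtain "equivalent" datasets whose differences should not dominate variability between conditions.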

  • Publication . Article . 2021
    Open Access Italian
    Authors: 
    Cristina Marino;
    Publisher: Politecnico di Torino, Associazione culturale progresso grafico
    Country: Italy
  • Publication . Other literature type . 2012
    Closed Access English
    Country: Croatia

    EuroVA 2012 is the third international workshop on Visual Analytics held in Europe, continuing the success of the two previous editions held in Bordeaux, France, on June 8th, 2010, and in Bergen, Norway, on May 31st, 2011. EuroVA 2012 takes place on June 4th and 5th, 2012, in Vienna, Austria, as a EuroVis 2012 workshop. As in previous years, the goal of the workshop is to promote and advance the combination and integration of visualization and analytics methods for problem solving in a variety of application domains. Visual Analytics is the science of analytical reasoning supported by interactive visual interfaces; it is an interdisciplinary science integrating techniques from visualization and computer graphics, statistics and mathematics, data management and knowledge representation, data analysis and machine learning, cognitive and perceptual sciences, and more. Topics of interest included, but were not limited to:
    • Visual representations and interaction techniques
    • Data management and knowledge representation
    • Data analysis and machine learning
    • Cognitive and perceptual aspects
    • Infrastructure and evaluation
    • Applications, as far as they are related to visual analytics
    The review process was supported by an international program committee (IPC) of 33 experts. Each paper was reviewed by three reviewers from this committee. Based on the reviews, the workshop co-chairs accepted 17 out of 31 submissions and organized the final program. The EuroVA program also features two invited talks, by Jean-Daniel Fekete, INRIA, France, and Helwig Hauser, University of Bergen, Norway. We would like to thank all members of the EuroVA IPC for their effort in the reviewing process, and the invited speakers for their presentations and their willingness to support this workshop.
EuroVA 2012 acknowledges the financial support of the Promise NoE European project, within the Seventh Framework Programme for Research of the European Commission, grant number 258191, and of Fraunhofer IGD. We also want to thank the Eurographics Association for their support.

  • Authors: 
    Kwon, Bum Chul; Lee, Jungsoo; Chung, Chaeyeon; Lee, Nyoungwoo; Choi, Ho-Jin; Choo, Jaegul;
    Publisher: The Eurographics Association

    Image classification models often learn to predict a class based on irrelevant co-occurrences between input features and an output class in the training data. We call these unwanted correlations "data biases," and the visual features causing them "bias factors." It is challenging to identify and mitigate biases automatically without human intervention. Therefore, we conducted a design study to find a human-in-the-loop solution. First, with three experts, we identified user tasks that capture the bias mitigation process for image classification models. Then, to support these tasks, we developed a visual analytics system called DASH that allows users to visually identify bias factors, to iteratively generate synthetic images using a state-of-the-art image-to-image translation model, and to supervise the model training process to improve classification accuracy. Our quantitative evaluation and a qualitative study with ten participants demonstrate the usefulness of DASH and provide lessons for future work.
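
To make the notion of a "data bias" concrete (this is a toy illustration, not DASH's actual method): a candidate bias factor is a visual feature whose presence rate differs sharply between classes, e.g. snow appearing in nearly every wolf image but no dog image. Counting such co-occurrence rates is a minimal first check:

```python
def cooccurrence_rates(features, labels):
    # For each (feature, class) pair, compute the fraction of that class's
    # images in which the feature is present. A large gap between classes
    # hints at a spurious correlation ("bias factor").
    classes = sorted(set(labels))
    feats = sorted({f for fs in features for f in fs})
    rates = {}
    for cls in classes:
        idx = [i for i, y in enumerate(labels) if y == cls]
        for f in feats:
            rates[(f, cls)] = sum(f in features[i] for i in idx) / len(idx)
    return rates

# Toy data: 'snow' co-occurs with every 'wolf' image but no 'dog' image.
features = [{"snow"}, {"snow"}, {"grass"}, {"grass"}, {"snow", "grass"}]
labels = ["wolf", "wolf", "dog", "dog", "wolf"]
rates = cooccurrence_rates(features, labels)
```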

  • Authors: 
    Berger, Philip; Schumann, Heidrun; Tominski, Christian;
    Publisher: The Eurographics Association

    The visual analysis of multivariate graphs increasingly involves not only exploring the data, but also editing them. Existing editing approaches for multivariate graphs support visual analytics workflows by facilitating a seamless switch between data exploration and editing. However, it remains difficult to comprehend performed editing operations in retrospect and to compare different editing results. Addressing these challenges, we propose a model describing what graph aspects can be edited and how. Based on this model, we develop a novel approach to visually track and understand data changes due to edit operations. To visualize the different graph states resulting from edits, we extend an existing graph visualization approach so that graph structure and the associated multivariate attributes can be represented together. Branching sequences of edits are visualized as a node-link tree layout where nodes represent graph states and edges visually encode the performed edit operations and the graph aspects they affect. Individual editing operations can be inspected by dynamically expanding edges to detail views on demand. In addition, we support the comparison of graph states through an interactive creation of attribute filters that can be applied to other states to highlight similarities.
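
The branching edit history described above has a simple underlying data structure: each edit derives a new graph state from its parent, so states form a tree and earlier states stay inspectable. A minimal sketch of that idea (hypothetical class and field names, not the authors' implementation):

```python
class GraphState:
    """One node in a branching edit history. Each edit operation derives a
    new state; parents are never mutated, so the full tree of states and
    the edit labels on its edges remain available for inspection."""

    def __init__(self, nodes, edges, parent=None, edit=None):
        self.nodes, self.edges = frozenset(nodes), frozenset(edges)
        self.parent, self.edit = parent, edit  # edit = label on incoming edge
        self.children = []
        if parent is not None:
            parent.children.append(self)

    def apply(self, edit, add_nodes=(), add_edges=(), del_edges=()):
        # Derive a new state; the edit name records what was changed.
        return GraphState(self.nodes | set(add_nodes),
                          (self.edges | set(add_edges)) - set(del_edges),
                          parent=self, edit=edit)

root = GraphState({"a", "b"}, {("a", "b")})
s1 = root.apply("add node c", add_nodes={"c"}, add_edges={("b", "c")})
s2 = root.apply("delete edge a-b", del_edges={("a", "b")})  # second branch
```

Because two edits applied to `root` create two children, the history is a tree rather than a linear undo stack, which is exactly what the node-link tree layout visualizes.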

  • Publication . Conference object . 2019
    Open Access English
    Authors: 
    Corvo, Alberto; Garcia Caballero, Humberto; Westenberg, Michel A.;
    Country: Netherlands

    The increasing quantity of data in biomedical informatics is leading towards better patient profiling and personalized medicine. Lab tests, medical images, and clinical data represent extraordinary sources for patient characterization. While retrospective studies focus on finding correlations in this sheer volume of data, potential new biomarkers are difficult to identify. A common approach is to observe patient mortality with respect to different clinical variables in what is called survival analysis. Kaplan-Meier plots, also known as survival curves, are generally used to examine patient survival in retrospective and prognostic studies. The plot is very intuitive and hence very popular in the medical domain for disclosing evidence of poor or good prognosis. However, Kaplan-Meier plots are mostly static, and exploration of the plotted cohorts can be performed only with additional analysis. There is a need to make survival plots interactive and to integrate potential prognostic data that may reveal correlations with disease progression. We introduce SurviVIS, a visual analytics approach for interactive survival analysis and data integration on Kaplan-Meier plots. We demonstrate our work on a melanoma dataset and outline a potential use case in precision imaging.
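
The Kaplan-Meier estimate underlying these plots is a product over observed event times: at each time t with d deaths among n patients still at risk, the survival probability is multiplied by (1 - d/n), and censored patients simply leave the risk set. A self-contained sketch (any survival library would compute the same quantity):

```python
from collections import Counter

def kaplan_meier(times, events):
    """Return the step points (t, S(t)) of the Kaplan-Meier estimate.
    times: follow-up durations; events: True if death observed at that
    time, False if the patient was censored (left the study alive)."""
    deaths = Counter(t for t, e in zip(times, events) if e)
    leavers = Counter(times)  # everyone exits the risk set at their time
    at_risk = len(times)
    survival, curve = 1.0, []
    for t in sorted(leavers):
        if deaths[t]:
            survival *= 1.0 - deaths[t] / at_risk
            curve.append((t, survival))
        at_risk -= leavers[t]  # deaths and censorings both leave
    return curve

# 5 patients: deaths at t=1, 2, 3; censored at t=2 and t=4.
curve = kaplan_meier([1, 2, 2, 3, 4], [True, True, False, True, False])
```

Making such a plot interactive, as SurviVIS proposes, amounts to recomputing this curve on the fly for each selected cohort.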

  • Open Access
    Authors: 
    Fiaux, Patrick O.;
    Publisher: Virginia Tech
    Country: United States

    Analysts must filter through an ever-growing amount of data to obtain information relevant to their investigations. Looking at every piece of information individually is in many cases not feasible; there is hence a growing need for new filtering tools and techniques to improve the analysis process for large datasets. We present MineVis, an analytics system that integrates biclustering algorithms and visual analytics tools in one seamless environment. The combination of biclusters and visual data glyphs in a visual analytics spatial environment enables a novel type of filtering. This design allows for rapid exploration and navigation across connected documents. Through a user study we conclude that our system has the potential to help analysts filter data by allowing them to i) form hypotheses before reading documents and subsequently ii) validate them by reading a reduced and focused set of documents.
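
The abstract does not specify which biclustering algorithms MineVis uses, but the core idea, i.e. finding a subset of documents that share a subset of terms, can be shown with a deliberately naive greedy sketch on a document-term incidence matrix (toy data, illustrative only):

```python
def greedy_bicluster(matrix, seed_row):
    """Grow a bicluster from one seed document: keep the terms that
    document uses, then collect every document containing all of those
    terms. matrix[i][j] is 1 if document i contains term j."""
    terms = {j for j, v in enumerate(matrix[seed_row]) if v}
    docs = [i for i, row in enumerate(matrix) if all(row[j] for j in terms)]
    return docs, sorted(terms)

# Hypothetical toy corpus: 4 documents x 4 terms.
matrix = [
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
    [1, 1, 0, 1],
]
docs, terms = greedy_bicluster(matrix, seed_row=0)
```

The resulting (documents, terms) block is exactly the kind of unit an analyst could use as a filter: read only the documents in the bicluster rather than the whole corpus.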

  • Open Access Spanish; Castilian
    Authors: 
    Saiz-Álvaro, Fátima Aurora;

    In this work, a study of data extraction and mining architectures applied to surface defects produced in steel rolling is carried out using Deep Learning techniques, together with the storage of the data in a Big Data architecture and its exploitation using Visual Analytics tools that enable agile decision making. For this purpose, data are acquired using computer vision, and experiments are carried out to configure the ideal acquisition of the data using neural networks. The results and accuracies obtained are compared with the current state of the art, surpassing it. An architecture is designed, according to production needs, to store the captured data; this architecture guarantees scalability, security, and speed. Finally, visualizations focused on the different roles in the production environment are developed to provide knowledge about the state of manufacturing and allow the process to be improved.

  • Open Access
    Authors: 
    Richard Khulusi; Stephanie Billib; Stefan Jänicke;
    Publisher: MDPI AG
    Country: Denmark

    Diaries are private documentations of people’s lives. They contain descriptions of events, thoughts, fears, and desires. While diaries are usually kept in private, published ones, such as the diary of Anne Frank, show that they bear the potential to give personal insight into events and into the emotional impact on their authors. We present a visualization tool that provides insight into the Bergen-Belsen memorial’s diary corpus, which consists of dozens of diaries written by concentration camp prisoners. We designed a calendar view that documents when authors wrote about concentration camp life. Different modes support quantitative and sentiment analyses, and we provide a solution for historians to create thematic concepts that can be used for searching and filtering for specific diary entries. The usage scenarios illustrate the importance of the tool for researchers and memorial visitors as well as for commemorating the Holocaust.
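The calendar view described above rests on a simple aggregation: counting, per calendar day, how many diary entries were written. A minimal sketch of that quantity (the dates and author identifiers below are invented for illustration, not taken from the Bergen-Belsen corpus):

```python
from collections import Counter
from datetime import date

def calendar_counts(entries):
    """Aggregate diary entries into writing activity per calendar day,
    the basic quantity behind a calendar heat-map view."""
    return Counter(day for _author, day in entries)

# Hypothetical entries: (author, date written).
entries = [
    ("author1", date(1944, 3, 1)),
    ("author2", date(1944, 3, 1)),
    ("author1", date(1944, 3, 5)),
]
activity = calendar_counts(entries)
```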

  • English
    Authors: 
    Ziegelbecker, Thomas;
    Publisher: TU Wien

    Today we produce and capture data at almost every step. In many cases this data is imperfect, due to various defects such as sensor variability, errors in measurement, or human error. Analysts and decision makers unknowingly base their decisions on such imperfect data, which often leads to poor decisions and high costs. One way to address this problem is to visualize data quality problems, making decision makers more aware of them. Although existing literature shows that data quality visualization improves decision-making, only little research has been conducted in the field of univariate and multivariate data quality visualization. The focus of this work is therefore on incorporating data quality visualization into the data exploration process; the main contribution is a novel approach for visualizing data quality problems of multivariate time-oriented data in both overview and detail. For this purpose, a particular domain problem from the drilling industry is used. The data is provided by multiple sensors that transmit time-stamped raw drilling data, which contains data quality problems such as missing values, invalid values, and outliers. In this work I examine existing data quality visualizations for multivariate time-oriented data.
    Based on this literature research I develop and discuss several design options, in overview and detail, for visualizing the data quality problems identified in combination with the domain problem. In a subsequent step I implement selected design approaches in a prototype and evaluate them in expert interview sessions. The results of these sessions are then reported and discussed, providing further rationales for the design choices made. In addition, the results provide arguments for specific interaction techniques (i.e., combined interactive views) and offer insights into the algorithms and technologies used. Overall, the results support conclusions about selecting data quality visualization approaches and suggest further research areas, such as aggregation algorithms for data quality problems.
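
Before the three quality problems named above (missing values, invalid values, outliers) can be visualized, each reading must be classified. One common scheme, sketched here with hypothetical thresholds rather than the thesis's actual rules, flags out-of-range values against a validity interval and scores outliers against the median absolute deviation, which is robust to the very outliers being hunted:

```python
import statistics

def flag_quality(values, valid_range=(0.0, 200.0), mad_thresh=3.5):
    """Classify each sensor reading as 'missing', 'invalid', 'outlier',
    or 'ok'. Statistics are computed only over present, in-range values."""
    clean = [v for v in values
             if v is not None and valid_range[0] <= v <= valid_range[1]]
    med = statistics.median(clean)
    mad = statistics.median([abs(v - med) for v in clean]) or 1e-9
    flags = []
    for v in values:
        if v is None:
            flags.append("missing")
        elif not valid_range[0] <= v <= valid_range[1]:
            flags.append("invalid")
        elif abs(v - med) / mad > mad_thresh:
            flags.append("outlier")
        else:
            flags.append("ok")
    return flags

# Toy drilling-sensor trace: a gap, a physically impossible value, a spike.
readings = [10.0, None, 500.0, 9.0, 11.0, 10.0, 10.0, 100.0]
flags = flag_quality(readings)
```

The per-reading flags are then what an overview visualization aggregates (e.g. counts per time window) and a detail view shows individually.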

Advanced search in Research products
Research products
arrow_drop_down
Searching FieldsTerms
Subject
arrow_drop_down
includes
arrow_drop_down
Include:
The following results are related to NEANIAS Space Research Community. Are you interested to view more results? Visit OpenAIRE - Explore.
7,408 Research products, page 1 of 741
  • Authors: 
    Tadeja, Slawomir;
    Publisher: University of Cambridge
    Country: United Kingdom

    Recent advancements and technological breakthroughs in the development of so-called immersive interfaces, such as augmented (AR), mixed (MR), and virtual reality (VR), coupled with the growing mass-market adoption of such devices has started to attract attention from academia and industry alike. Out of these technologies, VR offers the most mature option in terms of both hardware and software, as well as the best available range of different off-the-shelf offerings. VR is a term interchangeably used to denote both head-mounted displays (HMDs) and fully immersive, bespoke 3D environments which these devices transport their users to. With modern devices, developers can leverage a range of different interaction modalities, including visual, audio, and even haptic feedback, in the creation of these virtual worlds. With such a rich interaction space it is thus natural to think of VR as a well-suited environment for interactive visualisation and analytical reasoning of complex multidimensional data. Research in \textit{visual analytics} (VA) combines these two themes, spanning the last one and a half decades, and has revealed a number of research findings. This includes a range of new advanced and effective visualisation and analysis tools for even more complex, more noisy and larger data sets. Furthermore, the extension of this research and the use of immersive interfaces to facilitate visual analytics has spun-off a new field of research: \textit{immersive analytics} (IA). Immersive analytics leverages the potential bestowed by immersive interfaces to aid the user in swift and effective data analysis. Some of the most promising application domains of such immersive interfaces in the industry are various branches of engineering, including aerospace design and in civil engineering. The range of potential applications is vast and growing as new stakeholders are adopting these immersive tools. However, the use of these technologies brings its own challenges. 
One such difficulty is the design of appropriate interaction techniques. There is no optimal choice, instead such a choice varies depending on available hardware, the user���s prior experience, their task at hand, and the nature of the dataset. To this end, my PhD work has focused on designing and analysing various interactive, VR-based immersive systems for engineering visual analytics. One of the key elements of such an immersive system is the selection of an adequate interaction method. In a series of both qualitative and quantitative studies, I have explored the potential of various interaction techniques that can be used to support the user in swift and effective data analysis. Here, I have investigated the feasibility of using techniques such as hand-held controllers, gaze-tracking and hand-tracking input methods used solo or in combination in various challenging use cases and scenarios. For instance, I developed and verified the usability and effectiveness of the AeroVR system for aerospace design in VR. This research has allowed me to trim the very large design space of such systems that have been not sufficiently explored thus far. Moreover, building on top of this work, I have designed, developed, and tested a system for digital twin assessment in aerospace that coupled gaze-tracking and hand-tracking, achieved via an additional sensor attached to the front of the VR headset, with no need for the user to hold a controller. The analysis of the results obtained from a qualitative study with domain experts allowed me to distill and propose design implications when developing similar systems. Furthermore, I worked towards designing an effective VR-based visualisation of complex, multidimensional abstract datasets. Here, I developed and evaluated the immersive version of the well-known Parallel Coordinates Plots (IPCP) visualisation technique. 
The results of the series of qualitative user studies allowed me to obtain a list of design suggestions for IPCP, as well as provide tentative evidence that the IPCP can be an effective tool for multidimensional data analysis. Lastly, I also worked on the design, development, and verification of the system allowing its users to capture information in the context of conducting engineering surveys in VR. Furthermore, conducting a meaningful evaluation of immersive analytics interfaces remains an open problem. It is difficult and often not feasible to use traditional A/B comparisons in controlled experiments as the aim of immersive analytics is to provide its users with new insights into their data rather than focusing on more quantifying factors. To this end, I developed a generative process for synthesising clustered datasets for VR analytics experiments that can be used in the process of interface evaluation. I further validated this approach by designing and carrying out two user studies. The statistical analysis of the gathered data revealed that this generative process for synthesising clustered datasets did indeed result in datasets that can be used in experiments without the datasets themselves being the dominant contributor of the variability between conditions. Engineering and Physical Sciences Research Council (EPSRC-1788814); Trinity Hall and Cambridge Commonwealth, European & International Trust; Cambridge Philosophical Society

  • Publication . Article . 2021
    Open Access Italian
    Authors: 
    Cristina Marino;
    Publisher: Politecnico di Torino, Associazione culturale progresso grafico
    Country: Italy
  • Publication . Other literature type . 2012
    Closed Access English
    Country: Croatia

    EuroVA 2012 is the third international workshop on Visual Analytics held in Europe and aims at continuing the success of the two previous editions held in Bordeaux, France, on June 8th, 2010, and in Bergen, Norway, on May 31st, 2011. EuroVA 2012 takes place on June 4th and 5th, 2012, in Vienna, Austria, as a EuroVis 2012 Workshop. As in the previous years the goal of the workshop is to promote and advance the combination and integration of visualization and analytics methods for the purpose of problem solving in a variety of application domains. Visual Analytics is the science of analytical reasoning supported by interactive visual interfaces, which requires interdisciplinary science integrating techniques from visualization and computer graphics, statistics and mathematics, data management and knowledge representation, data analysis and machine learning, cognitive and perceptual sciences, and more. Topics of interest included, but were not limited to • Visual representations and interaction techniques • Data management and knowledge representation • Data analysis and machine learning • Cognitive and perceptual aspects • Infrastructure and evaluation • Applications, as far as they are related to visual analytics. The review process was supported by an international program committee (IPC) of 33 experts. Each paper was reviewed by three reviewers from this committee. Based on the reviews, the workshop co-chairs accepted 17 out of 31 submissions and organized the final program. The EuroVA program also features two invited talks by Jean-Daniel Fekete, INRIA, France, and Helwig Hauser, University of Bergen, Norway. We would like to thank all members of the EuroVA IPC for their effort in the reviewing process, and the invited speakers for their presentations and their willingness to support this workshop. 
EuroVA 2012 acknowledges the financial support of the Promise NoE European project, within the Seventh Framework Programme for Research of the European Commission, grant number 258191, and of Fraunhofer IGD. We also want to thank the Eurographics Association for their support.

  • Authors: 
    Kwon, Bum Chul; Lee, Jungsoo; Chung, Chaeyeon; Lee, Nyoungwoo; Choi, Ho-Jin; Choo, Jaegul;
    Publisher: The Eurographics Association

    Image classification models often learn to predict a class based on irrelevant co-occurrences between input features and an output class in training data. We call the unwanted correlations ''data biases,'' and the visual features causing data biases ''bias factors.'' It is challenging to identify and mitigate biases automatically without human intervention. Therefore, we conducted a design study to find a human-in-the-loop solution. First, we identified user tasks that capture the bias mitigation process for image classification models with three experts. Then, to support the tasks, we developed a visual analytics system called DASH that allows users to visually identify bias factors, to iteratively generate synthetic images using a state-of-the-art image-toimage translation model, and to supervise the model training process for improving the classification accuracy. Our quantitative evaluation and qualitative study with ten participants demonstrate the usefulness of DASH and provide lessons for future work. Bum Chul Kwon, Jungsoo Lee, Chaeyeon Chung, Nyoungwoo Lee, Ho-Jin Choi, and Jaegul Choo Visual Analysis and Machine Learning

  • Authors: 
    Berger, Philip; Schumann, Heidrun; Tominski, Christian;
    Publisher: The Eurographics Association

    The visual analysis of multivariate graphs increasingly involves not only exploring the data, but also editing them. Existing editing approaches for multivariate graphs support visual analytics workflows by facilitating a seamless switch between data exploration and editing. However, it remains difficult to comprehend performed editing operations in retrospect and to compare different editing results. Addressing these challenges, we propose a model describing what graph aspects can be edited and how. Based on this model, we develop a novel approach to visually track and understand data changes due to edit operations. To visualize the different graph states resulting from edits, we extend an existing graph visualization approach so that graph structure and the associated multivariate attributes can be represented together. Branching sequences of edits are visualized as a node-link tree layout where nodes represent graph states and edges visually encode the performed edit operations and the graph aspects they affect. Individual editing operations can be inspected by dynamically expanding edges to detail views on demand. In addition, we support the comparison of graph states through an interactive creation of attribute filters that can be applied to other states to highlight similarities. Philip Berger, Heidrun Schumann, and Christian Tominski Visual Analytics Techniques

  • Publication . Conference object . 2019
    Open Access English
    Authors: 
    Corvo, Alberto; Garcia Caballero, Humberto; Westenberg, Michel A.;
    Country: Netherlands

    The increasing quantity of data in biomedical informatics is leading towards better patient profiling and personalized medicine. Lab tests, medical images, and clinical data represent extraordinary sources for patient characterization. While retrospective studies focus on finding correlations in this sheer volume of data, potential new biomarkers are difficult to identify. A common approach is to observe patient mortality with respect to different clinical variables in what is called survival analysis. Kaplan-Meier plots, also known as survival curves, are generally used to examine patient survival in retrospective and prognostic studies. The plot is very intuitive and hence very popular in the medical domain for disclosing evidence of poor or good prognosis. However, Kaplan-Meier plots are mostly static, and data exploration of the plotted cohorts can be performed only with additional analysis. There is a need to make survival plots interactive and to integrate potential prognostic data that may reveal correlations with disease progression. We introduce SurviVIS, a visual analytics approach for interactive survival analysis and data integration on Kaplan-Meier plots. We demonstrate our work on a melanoma dataset and in the perspective of a potential use case in precision imaging. Alberto Corvò, Humberto Simon Garcia Caballero, and Michel A. Westenberg CCS Concepts: Applications --> Visual Analytics EuroVis Workshop on Visual Analytics (EuroVA) Applications of Visual Analytics
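    The Kaplan-Meier estimate underlying such plots steps down at each event time by the fraction of at-risk patients who died, while censored patients only leave the risk set. A minimal self-contained sketch (illustrative; not part of SurviVIS):

    ```python
    def kaplan_meier(subjects):
        """Kaplan-Meier estimator. subjects is a list of (time, event) pairs,
        event=True for death, False for censoring. Returns the step points
        of the survival curve as (time, survival probability) tuples."""
        subjects = sorted(subjects)        # process in time order
        survival = 1.0
        curve = [(0.0, 1.0)]
        n_at_risk = len(subjects)
        i = 0
        while i < len(subjects):
            t = subjects[i][0]
            deaths = removed = 0
            # group all subjects sharing the same time stamp
            while i < len(subjects) and subjects[i][0] == t:
                if subjects[i][1]:
                    deaths += 1
                removed += 1
                i += 1
            if deaths:
                survival *= 1 - deaths / n_at_risk
                curve.append((t, survival))
            n_at_risk -= removed           # deaths and censorings leave the risk set
        return curve

    # Toy cohort: deaths at t=1, 3, 4 and one censoring at t=2.
    print(kaplan_meier([(1, True), (2, False), (3, True), (4, True)]))
    # → [(0.0, 1.0), (1, 0.75), (3, 0.375), (4, 0.0)]
    ```

    Note how the censoring at t=2 produces no step but shrinks the denominator, so the death at t=3 removes half of the remaining survival probability.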

  • Open Access
    Authors: 
    Fiaux, Patrick O.;
    Publisher: Virginia Tech
    Country: United States

    Analysts must filter through an ever-growing amount of data to obtain information relevant to their investigations. Looking at every piece of information individually is in many cases not feasible; there is hence a growing need for new filtering tools and techniques to improve the analysis process for large datasets. We present MineVis — an analytics system that integrates biclustering algorithms and visual analytics tools in one seamless environment. The combination of biclusters and visual data glyphs in a visual analytics spatial environment enables a novel type of filtering. This design allows for rapid exploration and navigation across connected documents. Through a user study we conclude that our system has the potential to help analysts filter data by allowing them to i) form hypotheses before reading documents and subsequently ii) validate them by reading a reduced and focused set of documents. Master of Science
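    The filtering idea can be illustrated with a naive bicluster-style query over a document-term matrix: pick seed terms, keep only documents containing all of them, and report the other terms that the surviving documents share. This is a hypothetical sketch for intuition, not MineVis's actual biclustering algorithm.

    ```python
    def bicluster_filter(doc_terms, seed_terms):
        """Return (docs, shared): documents containing every seed term, and
        the full set of terms those documents have in common. A naive
        bicluster-style filter (illustrative; names are invented)."""
        docs = {d for d, terms in doc_terms.items() if seed_terms <= terms}
        shared = set.intersection(*(doc_terms[d] for d in docs)) if docs else set()
        return docs, shared

    corpus = {
        "doc1": {"attack", "airport", "chicago"},
        "doc2": {"attack", "airport", "boston"},
        "doc3": {"weather", "chicago"},
    }
    docs, shared = bicluster_filter(corpus, {"attack"})
    # docs == {"doc1", "doc2"}; shared == {"attack", "airport"}
    ```

    The shared terms ("airport" above) are the hypothesis-forming signal: the analyst sees what the reduced document set has in common before reading any document in full.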

  • Open Access Spanish; Castilian
    Authors: 
    Saiz-Álvaro, Fátima Aurora;

    This work studies data extraction and mining architectures applied to surface defects produced in steel rolling, using Deep Learning techniques, as well as the storage of the data in a Big Data architecture and its exploitation with Visual Analytics tools that enable agile decision-making. To this end, data are acquired using computer vision, and experiments are carried out to configure the ideal acquisition of the data using neural networks. The accuracies obtained are compared with, and surpass, the current state of the art. An architecture is designed according to production needs to store the captured data, guaranteeing scalability, security, and speed. Finally, visualizations targeted at different roles in the production environment are developed to provide knowledge about the state of manufacturing and to improve the process.

  • Open Access
    Authors: 
    Richard Khulusi; Stephanie Billib; Stefan Jänicke;
    Publisher: MDPI AG
    Country: Denmark

    Diaries are private documentations of people’s lives. They contain descriptions of events, thoughts, fears, and desires. While diaries are usually kept in private, published ones, such as the diary of Anne Frank, show that they bear the potential to give personal insight into events and into the emotional impact on their authors. We present a visualization tool that provides insight into the Bergen-Belsen memorial’s diary corpus, which consists of dozens of diaries written by concentration camp prisoners. We designed a calendar view that documents when authors wrote about concentration camp life. Different modes support quantitative and sentiment analyses, and we provide a solution for historians to create thematic concepts that can be used for searching and filtering for specific diary entries. The usage scenarios illustrate the importance of the tool for researchers and memorial visitors as well as for commemorating the Holocaust.

  • English
    Authors: 
    Ziegelbecker, Thomas;
    Publisher: TU Wien

    Today we produce and capture data at almost every step. In many cases, this data is imperfect due to various defects such as sensor variability, errors in measurement, or human error. Analysts and decision makers unknowingly base their decisions on such imperfect data, which often leads to poor decisions and high costs. One way to address this problem is to visualize data quality problems to make decision makers more aware of them. Although existing literature shows that data quality visualization improves decision-making, only little research has been conducted in the field of univariate and multivariate data quality visualization. Therefore, the focus of this work is on incorporating data quality visualization into the data exploration process, where the main contribution is a novel approach for visualizing data quality problems of multivariate time-oriented data in both overview and detail. For this purpose, a particular domain problem from the drilling industry is used. The data are provided by multiple sensors that transmit time-stamped raw drilling data containing data quality problems such as missing values, invalid values, and outliers. In this work I examine existing data quality visualizations for multivariate time-oriented data. Based on this literature research, I develop and discuss several design options, in overview and detail, for visualizing the data quality problems identified in combination with the domain problem. In a subsequent step, I implement selected design approaches in a prototype and evaluate them in expert interview sessions. The results of these sessions are reported and discussed, providing further rationale for the design choices made. In addition, the results offer arguments for specific interaction techniques (i.e., combined interactive views) as well as insights into the algorithms and technologies used. Overall, the results give conclusions for selecting data quality visualization approaches and suggest further research areas, such as aggregation algorithms for data quality problems.
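    The three data quality problem types named above (missing, invalid, outlier) can be flagged per sample before visualization. The sketch below is an assumed minimal classifier, not the thesis prototype: validity is a range check, and outliers use a robust modified z-score (median/MAD) with an illustrative threshold.

    ```python
    def quality_flags(series, valid_range, z_thresh=3.5):
        """Flag each sample as 'missing', 'invalid', 'outlier', or 'ok'.
        series may contain None for missing values; valid_range is an
        inclusive (lo, hi) plausibility range (illustrative sketch)."""
        valid = sorted(x for x in series if x is not None
                       and valid_range[0] <= x <= valid_range[1])

        def median(xs):  # xs must already be sorted
            n = len(xs)
            return xs[n // 2] if n % 2 else (xs[n // 2 - 1] + xs[n // 2]) / 2

        med = median(valid)
        mad = median(sorted(abs(x - med) for x in valid)) or 1.0
        flags = []
        for x in series:
            if x is None:
                flags.append("missing")
            elif not valid_range[0] <= x <= valid_range[1]:
                flags.append("invalid")
            elif 0.6745 * abs(x - med) / mad > z_thresh:
                flags.append("outlier")
            else:
                flags.append("ok")
        return flags

    # Toy drilling-depth channel: a gap, a physically impossible value, a spike.
    depths = [100.0, 101.0, 99.0, None, -5.0, 400.0, 100.0]
    print(quality_flags(depths, valid_range=(0.0, 500.0)))
    # → ['ok', 'ok', 'ok', 'missing', 'invalid', 'outlier', 'ok']
    ```

    Such per-sample flags can then be aggregated per time window to drive the overview visualization, with the raw flags shown in the detail view.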