NO.145 The Moving Target of Visualization Software for an Ever More Complex World
February 11 - 15, 2019 (Check-in: February 10, 2019)
- Hank Childs
- University of Oregon, USA
- Takayuki Itoh
- Ochanomizu University, Japan
- Michael Krone
- University of Tübingen, Germany
- Guido Reina
- University of Stuttgart, Germany
Summary and Overview
Visualization has not only evolved into a mature field of science, but has also become widely accepted as a standard approach in diverse fields, ranging from physics to the life sciences to business intelligence. Despite this success, there are still many open research questions that require customized implementations, both for establishing concepts and for performing experiments and taking measurements. Over the years, a wealth of methods and tools has been developed and published; however, most of these are stand-alone prototypes that never reach a mature state in which a domain scientist can rely on them. Furthermore, these prototypes are neither sustainable nor extensible for subsequent research projects.
On the other hand, the few existing visualization software systems are fairly complex. Their development teams, sometimes consciously and sometimes unconsciously, make decisions that affect their overall performance, extensibility, interoperability, and so on. When considering multiple systems, the decisions made by one system almost always differ from those made by another. As an example, we include below summaries of two visualization systems we are very familiar with (VTK and its derivatives, and MegaMol). The net effect of these differences is that the barriers to sharing between systems become prohibitively high. New practitioners have to choose a system (or write their own) and are then isolated from the benefits of other systems. Additionally, current problems and data sets have grown so large and complex that any novel method requires an exceedingly large amount of engineering effort. This barrier is a major hindrance for our community, and especially for the newest generation of Ph.D. students. The research challenge here is very much a technical one, not a social one. Specifically, how can we design visualization software to maximize impact? One solution, although unlikely, would be the creation of a new system that meets all requirements. A more likely solution would allow modules to be shared effectively between systems. Another possible solution would be to identify the portions that can be shared (for example, rendering infrastructure or the user interface) and to focus on those. In this context, there is the additional issue of funding software engineering efforts that are fundamental to furthering research.
As a community, we need to find a way to enable the development of sophisticated and novel visualization solutions with minimal overhead. The need for software engineering and framework development as a necessary component of visualization research has started to receive attention at other venues. One prominent example is the “Visualization in Practice (VIP)” workshop at the premier visualization conference (IEEE VIS), which gives people a forum to speak about their visualization systems and engineering efforts.
Approach I: VTK/VisIt/ParaView
The VTK/VisIt/ParaView software was written with extensibility in mind, at the cost of performance. The basic idea is to encapsulate individual algorithms as their own modules, called filters. Filters can then be composed to form “pipelines.” This basic design, referred to as data flow networks, allows a series of algorithms to be applied to data and allows end users to direct which algorithms are applied and in what order. This approach has generally proven powerful for end users. It also enables extensibility, since new algorithms can be added as new filters. The downside of this approach is performance: each filter creates an intermediate output, and these intermediate outputs require additional memory (often in short supply). Worse, they create negative caching effects, as the output of each individual filter is often large enough that, by the time the next filter executes, caching efficiencies are lost.
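The filter/pipeline idea can be sketched as follows. This is a minimal toy model, not VTK's actual API; the class and method names are purely illustrative. Note how each stage materializes its full intermediate result, which is exactly where the memory and caching costs described above arise.

```python
class Filter:
    """A toy filter in a data-flow network: consumes input, produces output."""

    def __init__(self, fn):
        self.fn = fn          # the algorithm this filter encapsulates
        self.upstream = None  # the filter whose output feeds this one

    def set_input(self, upstream):
        self.upstream = upstream
        return self  # allow chaining when composing pipelines

    def execute(self, data=None):
        # Pull data through the pipeline. Each stage materializes a complete
        # intermediate result before the next stage runs -- the source of the
        # memory pressure and cache misses discussed above.
        if self.upstream is not None:
            data = self.upstream.execute(data)
        return self.fn(data)


# Compose a two-stage pipeline: threshold the values, then scale the survivors.
threshold = Filter(lambda d: [x for x in d if x > 0.5])
scale = Filter(lambda d: [10 * x for x in d]).set_input(threshold)

print(scale.execute([0.2, 0.6, 0.9]))  # -> [6.0, 9.0]
```

End users can rewire such a network freely (swap filters, change their order), which is the flexibility that makes the data-flow design popular despite its cost.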
As another example, the VTK/VisIt/ParaView model has put considerable effort into a general interface for data sets. This effort has led to a powerful data model that can represent many types of data. That said, the interface itself is realized with virtual functions and incurs additional overhead. One analysis of VTK showed that a VTK algorithm issues ten times more instructions than an equivalent algorithm that avoids VTK’s interface and virtual functions.
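The cost of such a generic interface can be illustrated with a toy example. This is not VTK's actual data model; all names are hypothetical. The point is structural: the generic algorithm pays one dynamically dispatched call per element, while an algorithm written against the concrete storage touches the underlying array directly.

```python
from abc import ABC, abstractmethod


class DataSet(ABC):
    """Generic data set interface: every point access is a dynamic dispatch."""

    @abstractmethod
    def num_points(self): ...

    @abstractmethod
    def get_point(self, i): ...


class PointCloud(DataSet):
    """One concrete representation backed by a plain list of (x, y) tuples."""

    def __init__(self, coords):
        self.coords = coords

    def num_points(self):
        return len(self.coords)

    def get_point(self, i):
        return self.coords[i]


def x_bounds_generic(ds: DataSet):
    # Works for any DataSet subclass, but pays one dispatched call per point.
    xs = [ds.get_point(i)[0] for i in range(ds.num_points())]
    return min(xs), max(xs)


def x_bounds_direct(coords):
    # Written against the concrete storage: no per-element indirection.
    xs = [p[0] for p in coords]
    return min(xs), max(xs)


pts = [(0.0, 1.0), (2.0, -1.0), (-3.0, 0.5)]
assert x_bounds_generic(PointCloud(pts)) == x_bounds_direct(pts) == (-3.0, 2.0)
```

Both functions compute the same answer; the generic one trades per-element call overhead for the ability to work with any representation behind the interface, which mirrors the trade-off measured in the VTK analysis above.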
Approach II: MegaMol
The MegaMol visualization framework was written with two aspects in mind. The first is minimizing overhead for maximum performance, thus prioritizing data structures that fit the GPU and a zero-copy paradigm. The second is providing a rapid prototyping environment for visualization research, with a stable core framework and a plugin mechanism for adding cutting-edge rendering and data processing modules. Modules communicate via strongly typed data channels. Similar to VTK, this ensures flexibility and extensibility; however, since MegaMol is essentially just a thin abstraction layer above OpenGL, its generality is lower. Furthermore, none of these paradigms is enforced; that is, developers can implement arbitrary data flows that are suboptimal and do not fit existing modules.
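The strongly typed channel idea can be sketched roughly as follows. This is a simplified toy, not MegaMol's actual API; all class names are illustrative. The essential property is that incompatible modules are rejected when the graph is wired up, rather than failing later during execution.

```python
class Call:
    """Base class for a typed data channel between two modules."""


class ParticleCall(Call):
    """Channel carrying particle data (e.g., positions and radii)."""


class MeshCall(Call):
    """Channel carrying triangle mesh data."""


class CallerSlot:
    """A module's outgoing connection, accepting only one call type."""

    def __init__(self, call_type):
        self.call_type = call_type
        self.call = None

    def connect(self, call):
        # The type check happens at connection time, so an incompatible
        # data flow is rejected before any data is ever transported.
        if not isinstance(call, self.call_type):
            raise TypeError(f"expected {self.call_type.__name__}, "
                            f"got {type(call).__name__}")
        self.call = call


# A hypothetical particle renderer only accepts particle channels:
renderer_slot = CallerSlot(ParticleCall)
renderer_slot.connect(ParticleCall())  # OK: types match
try:
    renderer_slot.connect(MeshCall())  # rejected: incompatible channel
except TypeError as err:
    print("connection refused:", err)
```

Because the channels are typed but the framework otherwise imposes little structure, developers retain the freedom mentioned above, including the freedom to build suboptimal graphs.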
Another issue is that MegaMol offers far fewer ready-to-use modules, owing to its smaller user base and a higher barrier of entry caused by the greater freedom it gives developers. In contrast to VTK, there are data structures for only a few specific use cases. That is, developers have to choose from the existing ones or have to make sure that their own data structures fit the GPU to reach maximum performance.
When we started the development of MegaMol ten years ago, we had an advantage over more general, CPU-centric approaches. This is still true for single-desktop usage with data sets of moderate size. Due to steadily increasing data set sizes and the widespread use of HPC systems, we have now added CPU-based ray tracing using Intel’s OSPRay engine.
Aims of the Meeting
In the proposed Shonan meeting, we want to bring together influential people with the ability to envision a change in mindset for the whole visualization community. Specifically, we also hope to further the exchange between the Asian visualization community and the U.S. and European communities, since the latter two usually have a much higher number of participants at the two largest international conferences (IEEE VIS and EG EuroVis). In particular, we want to inspire research on the software side: how can we design visualization software so that it is less cumbersome to implement and extend with novel algorithms? What are the building blocks, algorithms, and designs that we can reuse across many, diverse projects?
We need to develop a new way of tackling the problems related to software and engineering in our community, both to reduce the barrier of entry for new researchers and to reward those who invest time into frameworks with the goal of easing the work of others. Our second goal is to improve the awareness and interoperability of available frameworks, thereby boosting the reusability of existing software. A third aspect is the reproducibility of published research. Research prototypes are often not published together with the paper (mostly because they are hard to use, the code quality is low, or the authors do not want to disclose their source code). However, this effectively prevents other researchers from reproducing the results and comparing their own work with existing, published solutions. In other fields, such as bioinformatics, it is already common for publishers to require authors to publish their software together with the article detailing the scientific insight.
We believe a Shonan Meeting would be a success if it (1) helped advance the problem, (2) created a community interested in pursuing the problem after the Meeting, and/or (3) increased the recognition within our community that this is a substantial research problem.
- What do “usable” and “useful” software actually mean (for visualization researchers as well as for users)?
- Which challenges will drive the visualization research community in the next decade?
- Which novel techniques, methodological advances, or new visual representations will be needed?
- Which implications arise from these challenges for software systems?
- How to attain the reproducibility and comparability required by future research?
- How to obtain sustainable, future-proof software?
- What are common building blocks of every visualization software?
- Can we create a community-curated repository of such reusable building blocks?
- How to choose a system and how do existing systems relate to each other?
- Can we lower the barrier of entry for existing systems?
- Can we improve the interoperability between systems to foster re-use and obviate re-implementation?
- Which data formats lend themselves well to data exchange across systems?
- How to deal with software licensing and open-source questions?
- Compile and publish guidelines, for researchers at all levels, that define the role of software in our community and establish the relationship between research and software as a community service
- Develop guidelines for writing extensible software beneficial to the broadest possible audience
- Devise a way for interfacing existing frameworks or abstract processing steps in such a way that algorithms can be shared between different frameworks
- Ensure and maintain the relevance of published software
- Develop a strategy for perpetuating software (versions) as an integral component of reproducible research
- Compile and publish a roadmap to lower the implementation workload for future visualization research
- Lower the barrier of entry for future Ph.D. students concerning available software in all forms (e.g., frameworks and libraries)
In addition to inviting people with a background in engineering visualization systems for practical and sustained use, we will invite visible members of our community as well as the heads and directors of large visualization research groups. On the one hand, the goal is to gather the most knowledgeable people in order to obtain pathbreaking results. On the other hand, the most influential people are crucial for disseminating and implementing the results of this meeting. We have already contacted a number of influential people, who gave us very positive feedback and confirmed their strong interest in participating in such a meeting.