Digitalization requires us to formulate statements about the real world in a way that computers can store and process. Different areas of computer science have developed techniques for modeling and representing data, often in isolation and in parallel, so the same methods are known under different names in different areas. Modeling, however, also means understanding complex relationships. The best methods to date come from design, such as interactive workshops and graphical representations. On February 8, 2019 we hosted an evening of talks and discussions from different professional perspectives.
Dr. Alexandra Kirsch is an expert in artificial intelligence and machine learning at Intuity. In more than 15 years of research and teaching she has investigated different aspects of cognitive systems.
Artificial intelligence as a field tries to reproduce at least human-level thought. A minimum prerequisite is the ability to represent every fact and aspect of the world that humans think and talk about. Objects and the relations between them are well captured by logic and ontologies. If we lack exact information or a mechanism is nondeterministic, probability theory lets us represent and reason about such facts. Vague concepts such as “a relatively good book” can be expressed in fuzzy logic. However, probability theory and fuzzy logic have reduced expressive power compared to first-order logic. Each of these techniques captures a single aspect of human thinking and speaking, but no single representation covers all of them. In fact, all the representations of artificial intelligence have some equivalent in other areas of computer science and are subsumed by standard database technology. The advantage of being representable in a database is that efficient tools exist. And it could open new perspectives for AI to develop representations that move beyond standard computer science.
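To make the contrast between crisp and graded representations concrete, here is a toy fuzzy-logic sketch for the vague concept “a relatively good book”. The rating scale and the shape of the membership function are illustrative assumptions, not a standard definition:

```python
# Toy fuzzy set: to what degree (0..1) is a book "relatively good"?
# Thresholds and the linear ramp are illustrative assumptions.
def membership_good_book(rating: float) -> float:
    """Degree of membership for a rating on an assumed 0..5 scale."""
    if rating <= 2.5:
        return 0.0            # clearly not a good book
    if rating >= 4.5:
        return 1.0            # clearly a good book
    return (rating - 2.5) / 2.0   # partial membership in between

print(membership_good_book(3.5))   # 0.5: "somewhat good"
```

Unlike first-order logic, where “good(book)” is either true or false, the fuzzy set assigns intermediate degrees of truth — which is exactly what makes it less expressive but better suited to vague language.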
Prof. Dr. Torsten Grust is professor of database systems at the University of Tübingen. His group researches, among other things, how to efficiently process nonrelational data with database technology.
Databases represent the world in the form of tables, so basically in a rectangular form. This may look like a restriction, but a surprising variety of knowledge can be squeezed into such a form. In addition, one gains a clean, well-understood mathematical basis that allows for efficient processing of queries.
The following examples illustrate the variety of data that does not look rectangular at first glance, but can be processed well with databases:
Simple line diagrams can be represented as turtle graphics. Imagine a turtle walking over a canvas with a pen. The pen can touch the canvas and leave a line, or the turtle can walk without leaving a trace. In this way, a picture can be represented by a list of straight line segments. Such graphics could naively be stored as BLOBs, i.e. objects that have no further meaning for a database system. But this loses the ability to query the structure of the objects. A better solution is to represent the list of line segments in a table of its own, enabling queries such as “return all objects that can be drawn in one stroke without lifting the pen”.
Trees are a typical data structure for hierarchical data, for example XML documents. Typically, XML files are processed with specific parsers to find tags. This functionality, however, is provided for free by a database if the tree is encoded in an adequate form. The resulting queries are not only possible, they even belong to a class that can be executed particularly efficiently.
Graphs are another common data structure. They are, for example, used to represent towns and streets on a map. Typical queries on graphs ask whether all towns are reachable from any other town via the road system, or request all possible routes between two towns. By representing graph nodes and edges in separate tables and using recursive SQL queries, such questions can be answered without specialised software.
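A reachability query of this kind can be sketched with two tables and a recursive common table expression (the towns and roads here are invented example data):

```python
import sqlite3

# Sketch: towns and roads as two tables, reachability via recursive SQL.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE town (name TEXT PRIMARY KEY);
    CREATE TABLE road (src TEXT, dst TEXT);
    INSERT INTO town VALUES ('A'), ('B'), ('C'), ('D');
    INSERT INTO road VALUES ('A','B'), ('B','C'), ('C','A');  -- D is isolated
""")

# Which towns are reachable from 'A'?  The recursive part repeatedly
# follows outgoing roads until no new towns are found.
reachable = con.execute("""
    WITH RECURSIVE reach(name) AS (
        SELECT 'A'
        UNION
        SELECT r.dst FROM road r JOIN reach ON r.src = reach.name
    )
    SELECT name FROM reach ORDER BY name
""").fetchall()
print([r[0] for r in reachable])   # ['A', 'B', 'C'] -- 'D' is unreachable
```

Comparing the result against the `town` table would answer the question from the talk — whether every town is reachable — without any graph library.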
Automata, time series and physics simulations: the database systems group at the University of Tübingen has managed to transform them all into rectangular form, and up to now the researchers have encountered no task that could not be “rectangularized”.
Steffen Süpple is a founder and partner of Intuity. Before that he was a professor of interaction design at the HfG Schwäbisch Gmünd and a scientist at the Institute for Organization and Management of Information Systems at the University of Ulm.
We live in a programmable world. This results in complex systems that interact with other complex systems. We have to learn to model these new links; the better we model them, the better tools we create, the better we can shape the world!
In a design process, a model of a situation is built in close interaction with stakeholders. The model is represented in sketches and drawings; it is used to build prototypes and eventually products. These products change the world, which means that the models also have to be adapted continually.
Different examples from Intuity show how designers approach such a task and how the process of modeling can change the original task itself. For instance, Intuity had an assignment from a medium-sized manufacturing enterprise to design a user interface for calibrating welding tongs. Modeling the process of calibration revealed that the problem was not the user interface, but the process itself. Intuity then analyzed the whole process, restructured it and designed tools to support it.
Another example is the cooperation with Cuboid Parking, which develops the entire operation of small parking decks in city centres. The challenge is to coordinate and interconnect a complex system of subtasks.
Intuity has learned from these and other projects how important careful modeling is.
A model is an excerpt of the world that reflects some aspects very accurately, and others only roughly or not at all. That such a model is always a compromise is often neglected. The Mercator projection, for example, preserves angles on the world map, but distorts lengths and areas. This representation is adequate for nautical charts, but inadequate for comparing country sizes.
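The Mercator compromise can even be quantified: the projection stretches lengths by a factor of 1/cos(latitude), so apparent areas are inflated by the square of that factor. A small numeric illustration (not a cartography library):

```python
import math

# Mercator stretches lengths by 1/cos(latitude); areas are inflated
# by the square of that factor.
def area_inflation(lat_deg: float) -> float:
    return 1.0 / math.cos(math.radians(lat_deg)) ** 2

print(round(area_inflation(0), 2))    # 1.0 -- no distortion at the equator
print(round(area_inflation(60), 2))   # 4.0 -- areas at 60° appear 4x too large
```

This is why high-latitude regions look far larger on a Mercator map than they really are — the model is exact about angles and wrong about areas, by design.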
The myth of computer scientists sitting in dark chambers is still prevalent. Studies have shown, however, that the best software is written by well-functioning interdisciplinary teams. Many software projects today fail because of a lack of dialogue between developers and domain experts. The reason is not just that developing a culture of discussion is not part of computer science curricula, but also that it is more comfortable for both sides to exchange documents in a well-defined process instead of grappling with a joint task and different professional viewpoints.
Team-driven modeling is an approach to improving the interaction between domain experts and developers in a way that everyone enjoys. Domain-driven design serves as a language for the jointly developed domain model, event storming as a method for developing a joint understanding. Both merge into a workshop format where everyone can contribute in equal measure. Hypothetical workflows recorded with sticky notes can easily be created and just as easily be discarded. In this way the participants develop a joint mental model and also a sense of community, where everyone feels responsible for the project.
In the resulting process, more time is spent on communication and less on coding. The company the native web develops wolkenkit as a tool to translate a domain-driven design model into code.
We thank the presenters for their interesting contributions and the audience for their interest and the lively discussions.