Hannah Bast, University of Freiburg
Fabien Gandon, Université Côte d’Azur, Inria
Vanessa López, IBM Research Europe
The QLever SPARQL engine
A shift in our research focus: from knowledge acquisition to knowledge augmentation
Knowledge for the era of accelerated discovery
Abstract: QLever is a new SPARQL engine that can search very large knowledge graphs (100 billion triples and more) efficiently with very moderate resources (a standard PC is enough). QLever features live context-sensitive autocompletion, a text search component, support for difficult geographic queries, and interactive visualization of query results, even when they are large. Building such an engine from the ground up is a lot of work, but also very rewarding. I will give you a guided tour with lots of demos and various glimpses under the hood, with examples of clever algorithms, algorithm engineering, and modern C++.

Abstract: While EKAW started in 1987 as the European Knowledge Acquisition Workshop, in 2000 it transformed into a conference where we advance knowledge engineering and modelling in general. At the time, this transition also echoed shifts of focus, such as the move from the paradigm of expert systems to the more encompassing one of knowledge-based systems. Nowadays, with the current strong interest in knowledge graphs, it is important to reaffirm that our ultimate goal is not the acquisition of ever bigger siloed knowledge bases but to support knowledge requisition by and for all kinds of intelligence. Knowledge without intelligence is a highly perishable resource; intelligence without knowledge is doomed to stagnation. We will argue that intelligence and knowledge, and their evolutions, have to be considered jointly, and that the Web provides a social hypermedia to link them in all their forms. Using examples from several projects, we will suggest that, just as intelligence augmentation and amplification insist on putting humans at the centre of the design of artificial intelligence methods, we should think in terms of knowledge augmentation and amplification, and design a knowledge web to be an enabler of the futures we want.
Abstract: Computer-assisted scientific discovery promises to revolutionise how humans discover new materials, find novel drugs or identify new uses for existing ones, and improve clinical trial design and efficiency. When the space of possible candidate solutions is too large for human evaluation alone, the potential of technology to accelerate scientific discovery is unlike anything we have seen before.

Research into the tools and technologies required to enable accelerated discovery is an emerging and rapidly evolving field. In drug discovery, for example, learning good protein and molecule representations is a fundamental step towards applying predictive and generative models and proposing new candidate compounds. But the potential of these methods to accelerate discovery has not yet been fully uncovered. Combining existing background knowledge from sources such as the scientific literature with human expertise, in computable knowledge representations, may enhance predictive and generative models for generating candidate solutions. In this talk I will explore a basic question: can we revisit existing rich knowledge to uncover what hasn't yet been discovered?

I will share a perspective on research and core technologies that may help us accelerate scientific discovery by leveraging multisource and multimodal knowledge, from extraction to consumption. This perspective is grounded in practical experience with real-world knowledge-sharing challenges across diverse projects. I will draw on lessons learned supporting governments and healthcare agencies to safeguard the integrity of providers' claims by creating human-readable and machine-consumable knowledge from policy text, and supporting the integration of scientific literature with person-centred (social, health or behavioural) data to improve the identification of 'at-risk' cohorts.