Dr. Paul Groth
Title: Machines are People Too (slides)
The theory and practice of digital libraries provides a long history of thought on how to manage knowledge, ranging from collection development to cataloging and resource description. These tools were all designed to make knowledge findable and accessible to people. Even technical progress in information retrieval and question answering is targeted at answering a human’s information need.
However, demand is increasingly for data: data needed not for people’s consumption but to drive machines. As an example of this demand, there has been explosive growth in job openings for Data Engineers – professionals who prepare data for machine consumption. In this talk, I give an overview of the information needs of machine intelligence and ask: are our knowledge management techniques applicable to serving this new consumer?
Paul Groth is Disruptive Technology Director at Elsevier Labs. He holds a Ph.D. in Computer Science from the University of Southampton (2007) and has done research at the University of Southern California and the Vrije Universiteit Amsterdam. His research focuses on dealing with large amounts of diverse contextualized knowledge, with a particular focus on the web and science applications. This includes research in data provenance, data science, data integration and knowledge sharing. He led architecture development for the Open PHACTS drug discovery data integration platform. Paul was co-chair of the W3C Provenance Working Group that created a standard for provenance interchange. He is co-author of “Provenance: an Introduction to PROV” and “The Semantic Web Primer: 3rd Edition” as well as numerous academic articles. He blogs at http://thinklinks.wordpress.com. You can find him on Twitter: @pgroth.
Dr. Elton Barker
Title: Back to the future: annotating, collaborating and linking in a digital ecosystem (slides)
Classical philology has rarely been a self-enclosed discipline: in order to interpret Greek and Latin texts, it is necessary to place them in context, grounding them in the histories of the time and exploring them in and against those cultural horizons. Using the linking potential of the Web, Pelagios Commons (http://commons.pelagios.org/) has been pioneering a means of digital ‘mutual contextualization’, whereby any online document – be it a text, map, database or image – can be connected to another simply by virtue of having something in common with it, and can then draw on this external content to enrich its own, or in turn be drawn upon by and enrich another. In Pelagios this linking is achieved by annotating places. Originally seeded in collaboration with partners who already curated data and had the technical know-how to align datasets, Pelagios Commons now offers any researcher, librarian, museum curator, student or member of the public a simple, intuitive means to encode place information in a document of their choosing.
This presentation will set out and explain this annotation process in Recogito (http://recogito.pelagios.org/), the Web-based, open-source platform developed by the Pelagios team. It will go through the steps a researcher would take to geoannotate their material: first identifying the place entity in their document, then resolving that information against a central authority file, i.e. a gazetteer of placenames (e.g. http://pleiades.stoa.org/). It also considers the potential uses of this kind of semantic annotation, outlining the mapping of places in texts, the repurposing of the data in other systems (such as GIS), and the linking to other related resources. Throughout, however, it will identify challenges and persistent issues that go beyond technical development and use: annotating in Recogito puts a primary demand on defining and conceptualising place itself. Thus, contrary to much current thinking, this presentation hopes to show how digital tools can enhance the close reading of texts and facilitate a more nuanced understanding of the status and role of places in our historical sources.
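To make the two-step workflow concrete (identify a place entity, then resolve it to a gazetteer entry), the sketch below constructs a minimal geoannotation linking a text span to a Pleiades URI. The structure loosely follows the W3C Web Annotation model on which Pelagios builds; the helper function, document URI and character offsets are hypothetical illustrations, not Recogito’s actual export format.

```python
import json

# Hypothetical sketch: a text span ("Athenae") found in a document is
# resolved against a gazetteer entry. The shape loosely follows the
# W3C Web Annotation model; it is NOT Recogito's exact export format.
def make_geoannotation(doc_uri, quote, start, end, gazetteer_uri):
    return {
        "@context": "http://www.w3.org/ns/anno.jsonld",
        "type": "Annotation",
        "motivation": "identifying",
        "target": {                      # where in the source document
            "source": doc_uri,
            "selector": {
                "type": "TextQuoteSelector",
                "exact": quote,
            },
            "start": start,              # character offsets (illustrative)
            "end": end,
        },
        "body": {                        # what the span refers to
            "type": "SpecificResource",
            "source": gazetteer_uri,     # the gazetteer authority file entry
        },
    }

anno = make_geoannotation(
    "http://example.org/texts/histories-5",    # hypothetical document URI
    "Athenae", 120, 127,
    "http://pleiades.stoa.org/places/579885",  # Pleiades entry for Athens
)
print(json.dumps(anno, indent=2))
```

Because both document and place are identified by URIs, any two resources annotated with the same gazetteer entry become mutually discoverable, which is the linking mechanism the abstract describes.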
Elton Barker is Reader in Classical Studies, having joined The Open University as a Lecturer in July 2009. Before then, he had been a Tutor and Lecturer at Christ Church, Oxford (2004-09), and also lectured at Bristol, Nottingham and Reading. He held a Junior Research Fellowship at Wolfson College, Cambridge (2002-04) and was a Visiting Fellow at Venice International University (2003-04). From 2012 to 2013 he held a Research Fellowship for Experienced Researchers awarded by the Alexander von Humboldt Foundation for research at the Freie Universität Berlin and the University of Leipzig. He has been awarded a Graduate Teaching Award from Pembroke College (Cambridge) and twice won awards from the University of Oxford for an Outstanding Contribution to Teaching.
His research interests cross generic and disciplinary boundaries. Since 2008, he has been leading and co-running a series of collaborative projects, which are using digital resources to rethink spatial understanding of the ancient world. The Hestia project investigates the underlying ways in which Herodotus constructs space in book 5 of his Histories. Meanwhile, the Pelagios project has been establishing the Web infrastructure by which data produced and curated by different content providers – from academic projects like the Perseus Classical Library to cultural heritage institutions like the British Museum – can be linked through their common references to places.
Dr. Dimitrios Tzovaras
Title: Visualization in the Big Data Era: Data Mining from Networked Information
Network graphs have long formed a widely adopted and acknowledged practice for the representation of inter- and intra-dependent information streams. Nowadays, they are attracting considerable interest from the research community, mainly due to the vastly growing amount (size and complexity) of semantically dependent data produced worldwide as a result of the rapid expansion of data sources.
In this context, the efficient processing of these large amounts of information, also known as Big Data, forms a major challenge for both the research community and a wide variety of industrial sectors, including security, health and financial applications.
To address these needs, this presentation describes a proprietary platform built upon state-of-the-art algorithms, combined to implement a top-down approach that facilitates Data & Graph Mining processes such as behavioral clustering and interactive visualization.
The applicability of this platform has been validated on a series of distinct real-world use cases that involve large amounts of exchanged information and can thus serve as characteristic examples of modern Big Data problems. In particular, they concern (i) DoS attacks in real-world mobile networks, (ii) early event detection in social media communities, (iii) traffic management, and (iv) DNA sequence analysis.
In all these cases, the large volumes of data are addressed via a data-minimization approach that starts with an aggregated overview of the network as a whole and gradually narrows the focus to smaller data subsets (i.e. drilling down through successive levels of abstraction). In parallel, insights into the network’s operation are gained through the detection of behavioral patterns. Similarly, a dynamic hypothesis formulator and the corresponding backend solver can subsequently be exploited through graph traversal and pattern mining. In this way, an analyst is equipped to set and verify concrete hypotheses through simulation and to extract useful conclusions.
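The drill-down idea — start from an aggregated overview, then narrow to smaller subsets — can be sketched in a few lines. The toy network, the behavioral cluster labels and the aggregation rule below are all invented for illustration; the platform described in the talk is proprietary and its actual algorithms are not shown here.

```python
from collections import Counter

# Toy network: edges between hosts, plus a hypothetical behavioral
# cluster label per host (e.g. produced by a prior clustering step).
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "e"),
         ("e", "f"), ("a", "d"), ("b", "e")]
cluster = {"a": "C1", "b": "C1", "c": "C1",
           "d": "C2", "e": "C2", "f": "C2"}

def aggregate(edges, cluster):
    """Top level of abstraction: collapse each cluster to a single node
    and count how many raw edges run within / between clusters."""
    return Counter((cluster[u], cluster[v]) for u, v in edges)

def drill_down(edges, cluster, group):
    """Next level down: keep only the raw edges inside one cluster,
    so the analyst can inspect a smaller subset in full detail."""
    return [(u, v) for u, v in edges
            if cluster[u] == group and cluster[v] == group]

overview = aggregate(edges, cluster)     # aggregated overview first...
print(overview)
print(drill_down(edges, cluster, "C2"))  # ...then focus on one subset
```

An unusually heavy count between two clusters in the overview is the kind of behavioral pattern that would prompt the analyst to drill into that region and formulate a concrete hypothesis about it.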
Dr. Dimitrios Tzovaras is a Senior Researcher Grade A’ (Professor) and Director at CERTH/ITI (the Information Technologies Institute of the Centre for Research and Technology Hellas). He received the Diploma in Electrical Engineering and the Ph.D. in 2D and 3D Image Compression from the Aristotle University of Thessaloniki, Greece, in 1992 and 1997, respectively. Prior to his current position, he was a Senior Researcher in the Information Processing Laboratory of the Electrical and Computer Engineering Department of the Aristotle University of Thessaloniki. His main research interests include network and visual analytics for network security, computer security, data fusion, biometric security, virtual reality, machine learning and artificial intelligence. He is author or co-author of over 110 articles in refereed journals and over 300 papers in international conferences.
Since 2004, he has been Associate Editor of the following international journals: the Journal of Applied Signal Processing (JASP) and the EURASIP Journal on Advances in Multimedia. Additionally, he has been Associate Editor of IEEE Signal Processing Letters since 2009 and Senior Associate Editor of the same journal since 2012, while since mid-2012 he has also been Associate Editor of IEEE Transactions on Image Processing. Over the same period, Dr. Tzovaras has acted as an ad hoc reviewer for a large number of international journals and magazines (IEEE, ACM, Elsevier and EURASIP) as well as international scientific conferences (ICIP, EUSIPCO, CVPR, etc.).
Since 1992, Dr. Tzovaras has been involved in more than 100 European projects funded by the EC and the Greek Ministry of Research and Technology. Within these research projects he has acted as Scientific Responsible of the CERTH/ITI research group, and also as Coordinator and/or Technical/Scientific Manager of many of them (coordinator or technical manager in 21 projects – 10 H2020, 1 FP7 ICT IP, 7 FP7 ICT STREP, 3 FP6 IST STREP and 1 nationally funded project).