Faculty of Informatics and Statistics, Department of Information and Knowledge Engineering (DIKE)

Forthcoming and recent conferences

  • Discovery Challenge 2005: 7th edition held at ECML/PKDD 2005
  • EKAW 2006: 15th International Conference on Knowledge Engineering and Knowledge Management
  • KDO 2005: 2nd Workshop on Knowledge Discovery and Ontologies at ECML/PKDD 2005
  • RAWS 2005: First International Workshop on Representation and Analysis of Web Space
  • XML Prague 2012: Annual XML conference
  • Znalosti 2008: 7th annual conference on discovery, processing, retrieval, organization and presentation of knowledge

RDF Resource Description Framework Metadata

Funded and voluntary projects

  • K–Space: IST Network of Excellence, started on January 1st, 2006, coordinated by Queen Mary College, University of London
  • KP–Lab: Knowledge Practices Laboratory. KP–Lab focuses on creating a learning system that facilitates innovative practices of sharing, creating and working with knowledge in education and workplaces.
  • LinkedTV: Television Linked To The Web (LinkedTV) provides a novel practical approach to Future Networked Media. It is based on four phases: annotation, interlinking, search, and usage (including personalization, filtering, etc.). The result will make Networked Media more useful and valuable, and it will open completely new areas of application for multimedia information on the Web. See more information at the LinkedTV project website.
  • LOD2: LOD2 is a large-scale integrating project co-funded by the European Commission within the FP7 Information and Communication Technologies Work Programme. Its main objective is to develop an integrated set of software tools for working with linked open data. University of Economics, Prague, joined the project in September 2011, focusing mainly on the work package dedicated to applying linked open data to building a distributed marketplace for public contracts, and also contributing work on knowledge base refactoring, schema mapping and data mining. More information about the project can be found on its website.
  • M–CAST: Multi–lingual content aggregation in digital library environment
  • MedIEQ: Quality Labelling of Medical Web Content using Multilingual Information Extraction


Software and artifact development

  • Advanced FOAF Explorer: User–friendly browser for metadata stored in FOAF profiles.
  • Association Rule to Natural Language Conversion: LISp-Miner component for conversion of association rules into natural language based on linguistic knowledge.
  • Core Ontology for Multimedia: Ontology modelling the multimedia domain. Based on the MPEG–7 standard and the DOLCE foundational ontology. Joint research with CWI Amsterdam and Univ. Koblenz, originated in the K-Space project.
  • Expert System for Atherosclerosis Risk Assessment: Rule–based expert system for evaluation of atherosclerosis risk according to information about the patient's lifestyle and personal and family history. Joint research with the EuroMISE Centre, Prague. Based on the NEST framework.
  • Ferda DataMiner: Data mining toolbox with a visual programming interface. Contains an efficient implementation of multiple procedures of the GUHA approach to data mining, with extensions for dealing e.g. with prior domain knowledge (connection to the Sewebar project) and fuzzy information (see the FuzzyGUHA project).
  • FuzzyGUHA: Enriching the GUHA data mining method with concepts of fuzzy logic.
  • Google Analytics INterceptor: The goal of GAIN is to provide more granular data than Google Analytics, allowing for enhanced preference learning and data mining. In addition, GAIN puts an existing Google Analytics feature to a new use by allowing a website to transmit semantic data on–line. Google Analytics offers some extensibility as regards the kind of data it can send to the server; this can be used to send semantic information along with the clickstream, within the existing Google Analytics format, and to process it with GAIN. Although Google Analytics itself cannot read this data, it can be instructed to ignore it.
  • Intelligent Assistant for Buying Computers on the Internet: Intelligent support for selecting a computer product in on–line catalogue. Based on Bayesian Network technology.
  • JNVDL: Java–based implementation of the NVDL specification; NVDL is a simple “meta–schema” language for controlling the processing and validation of compound documents.
  • KEGweb: Website of the KEG research group, powered by RDF and complying with the Linked Data principles.
  • Knowledge Explorer: Machine learning tool that automatically builds expert system knowledge bases supporting compositional (Prospector–like) processing of uncertainty.
  • LISp–Miner: Data mining toolbox. It contains an efficient implementation of multiple procedures of the GUHA approach to data mining, plus additional tools related to the KDD process, developed in connection with projects such as AR2NL or Sewebar.
  • LISp–Miner Grid Infrastructure: Grid–based infrastructure for LISp-Miner.
  • New Expert System Tool: Environment for developing rule–based and CBR expert systems with multiple ways of uncertainty processing. Operates in stand–alone as well as web–service mode. Used in education of AI. Tested on multimedia semantic region merging within the EU K–Space project. Currently primarily used as basis for the atherosclerosis risk assessment system AtherEx.
  • Ontology Farm: Collection of ‘parallel’ ontologies describing the same domain: the organization of a conference. Each ontology is based on the conceptualization underlying a different available resource, most of which are conference organization support tools. The principal use of the collection is in benchmarking ontology matching tools, especially within the OAEI initiative.
  • OntologyDesignPatterns.org: International initiative led by STLab, ISTC–CNR, Rome. Development of a large collection of ontology design patterns (in multiple categories) and tools for their management.
  • Ontopolis.net: Social–semantic web application intended for supporting self–organization of politically engaged social groups. The system comprises intelligent suggestions of similar–minded people, support for argumentation, democratic polls, etc.
  • Relaxed: Validator of HTML pages and compound XML documents, based on Relax NG and Schematron patterns.
  • Semanti–CS Updates: An RDF–based reader to syndicate updates on academic sites of Semanti–CS initiative members.
  • Semantic Concept Mapping with Targeted Hypernym Discovery: Performs classification of entities appearing in text into a given set of classes using WordNet similarity and Wikipedia (for resolving entities not in WordNet). Joint research with Queen Mary, University of London, originated in the K–Space project, now supported by the PetaMedia project.
  • Semantic web technology for analytical reporting from data mining: Dissemination of analytical reports from data mining via the semantic web. Analytical reports are annotated by means of PMML (Predictive Model Mark-up Language) and formalized background knowledge descriptions (see the BKEF project) and stored in an enhanced CMS. Semantic querying of the analytical reports using SPARQL (for RDF) and tolog (a Topic Maps query language) is under investigation. Development of ontology patterns for mapping between data structures and domain ontologies is foreseen. Pilot applications in the medical domain (cardiology, tinnitus).
  • SEWEBAR-CMS: Content Management System for Association Rule Mining
  • Spidereq: Tool for ‘intelligent’ spidering of websites, originally developed for the MedIEQ project as a component of the AQUA tool.
  • Stepper: Tool for step–by–step interactive formalization of content of specialized expert–level documents, in particular medical guidelines, aiming at populating a knowledge base. Its development started in the MGT project and continued in the EuroMISE–Kardio project. Also used in the EU Protocure project. The executable is functional but no longer maintained.
  • Web Information Extraction using Extraction Ontologies: Tool and methodology for web information extraction primarily based on rich data models, possibly coupled with statistical learning models and local wrapper induction. Used in the MedIEQ and K-Space projects, to be also used in the Web Semantization project.
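For the NVDL language that JNVDL implements, a rule file dispatches each namespace of a compound document to its own treatment. A minimal example might look as follows; the schema file name is hypothetical:

```xml
<rules xmlns="http://purl.oclc.org/dsdl/nvdl/ns/structure/1.0">
  <namespace ns="http://www.w3.org/1999/xhtml">
    <validate schema="xhtml.rng"/>  <!-- hypothetical RELAX NG schema -->
  </namespace>
  <namespace ns="http://www.w3.org/2000/svg">
    <allow/>                        <!-- accept embedded SVG unvalidated -->
  </namespace>
  <anyNamespace>
    <reject/>                       <!-- reject any other namespace -->
  </anyNamespace>
</rules>
```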
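Semantic querying of the annotated analytical reports with SPARQL, as mentioned in the Sewebar-related entry above, could take a shape like the following; the vocabulary is entirely hypothetical and is not the project's actual schema:

```sparql
# Hypothetical query: find analytical reports containing mined rules
# whose antecedent mentions a given attribute.
PREFIX ar: <http://example.org/sewebar/>
SELECT ?report ?rule
WHERE {
  ?report a ar:AnalyticalReport ;
          ar:containsRule ?rule .
  ?rule ar:antecedentAttribute "smoking" .
}
```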
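The association rule to natural language conversion described for AR2NL can be illustrated with a minimal sketch; the template and attribute names below are invented for the example, whereas the real LISp–Miner component draws on linguistic knowledge about each attribute:

```python
def rule_to_sentence(antecedent, succedent, confidence, support):
    """Render an association rule as an English sentence.

    `antecedent` and `succedent` are lists of attribute=value strings.
    A real converter would use per-attribute linguistic knowledge
    instead of this generic template.
    """
    ante = " and ".join(antecedent)
    succ = " and ".join(succedent)
    return (f"If {ante}, then {succ} "
            f"(confidence {confidence:.0%}, support {support:.0%}).")

print(rule_to_sentence(["sex=female", "smoker=no"], ["risk=low"], 0.87, 0.12))
# If sex=female and smoker=no, then risk=low (confidence 87%, support 12%).
```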
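The piggybacking mechanism described for GAIN can be sketched abstractly; the record format and field names below are invented for illustration and do not reflect the actual Google Analytics wire format:

```python
# Illustrative sketch only: semantic annotations ride along on
# clickstream hits in whatever format the analytics script already
# sends; an unaware backend simply ignores the extra field.

def annotate_hit(hit, semantics):
    """Attach semantic data to a pageview hit as an extra field."""
    hit = dict(hit)                 # do not mutate the caller's record
    hit["sem"] = semantics          # e.g. topic identifiers for the page
    return hit

def gain_extract(hit):
    """GAIN-side consumer: read the semantic field if present."""
    return hit.get("sem")

hit = annotate_hit({"page": "/products/laptop-x", "t": "pageview"},
                   {"topic": "Laptop", "price_band": "mid"})
print(gain_extract(hit)["topic"])   # Laptop
```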
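The compositional (Prospector–like) uncertainty processing mentioned for Knowledge Explorer and NEST can be illustrated with one commonly used combining function for contribution weights from (-1, 1); this is a generic sketch, not the tools' actual implementation:

```python
from functools import reduce

def combine(x, y):
    """Compositionally combine two contribution weights from (-1, 1).

    The operation is commutative and associative, so evidence
    contributions can be accumulated in any order, and opposite
    weights cancel each other out.
    """
    return (x + y) / (1 + x * y)

weights = [0.6, 0.3, -0.2]          # evidence contributions to one hypothesis
total = reduce(combine, weights)
print(round(total, 3))              # ~0.664
```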
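The GUHA procedures implemented in Ferda DataMiner and LISp–Miner evaluate quantifiers over four-fold contingency tables; a minimal sketch of one classic quantifier, founded implication, follows (the thresholds are illustrative):

```python
def founded_implication(a, b, c, d, p=0.9, base=50):
    """GUHA 'founded implication' quantifier on a four-fold table.

    a = rows satisfying both antecedent and succedent,
    b = antecedent only, c = succedent only, d = neither
    (c and d complete the table but do not enter this quantifier).
    The rule holds if confidence a/(a+b) reaches threshold p
    and support a reaches the minimum Base.
    """
    return a >= base and a / (a + b) >= p

print(founded_implication(a=120, b=10, c=40, d=300))  # True
print(founded_implication(a=30, b=2, c=5, d=60))      # False (fails Base)
```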


Powered by Resource Description Framework (RDF)