I am Chief Scientist for Knowledge Representation at Elsevier Content & Innovation. You can have a look at, for example, my publications, presentations, videos and code. I even have a relatively up-to-date LinkedIn profile and a neglected Twitter account.
Since February 2015, I have been the senior technical lead for the “Structured Data Hub” (Datalegend), work package 4 within the larger CLARIAH project. The hub is meant to bring together a wide variety of heterogeneous datasets from the field of socio-economic history and will facilitate cross-dataset querying at a scale that was hitherto impossible. As in the Data2Semantics project, I strive to bring the benefits of semantic technologies to users through lightweight, user-friendly applications rather than expose users directly to the technology. The challenge remains to adapt to current practice and to develop sufficiently powerful incentives. See http://datalegend.net
QBer - a tool for uploading datasets (CSV files) to Datalegend. Researchers can inspect the data and annotate it against existing vocabularies (code books). Conversion to linked data takes place out of sight. A visual inspector provides direct feedback on new links between datasets and the variables they use. https://vimeo.com/158153564
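The conversion that QBer hides from the user is conceptually simple: cell values that the researcher has annotated against a code book become links into a shared vocabulary, and everything else becomes a plain literal. A minimal sketch of that idea (the code book entries, URIs and function names below are illustrative, not QBer's actual vocabulary or API):

```python
import csv
import io

# Illustrative code book: maps raw CSV values to vocabulary URIs.
# In QBer, such mappings come from user annotation against existing code books.
CODE_BOOK = {
    "occupation": {
        "farmer": "http://example.org/hisco/61110",
        "teacher": "http://example.org/hisco/13200",
    }
}

def csv_to_ntriples(csv_text, base="http://example.org/resource/"):
    """Convert annotated CSV rows to N-Triples, one triple per cell."""
    triples = []
    reader = csv.DictReader(io.StringIO(csv_text))
    for i, row in enumerate(reader):
        subject = f"<{base}row/{i}>"
        for column, value in row.items():
            mapping = CODE_BOOK.get(column, {})
            if value in mapping:
                # Annotated values become links into the shared vocabulary,
                # which is what makes datasets queryable together.
                obj = f"<{mapping[value]}>"
            else:
                # Unannotated values stay as plain literals.
                obj = f'"{value}"'
            triples.append(f"{subject} <{base}variable/{column}> {obj} .")
    return "\n".join(triples)

example = "occupation,age\nfarmer,34\nteacher,29\n"
print(csv_to_ntriples(example))
```

Two datasets that map their respective occupation columns onto the same code book URIs become linked automatically, which is what the visual inspector surfaces.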
Since September 2011, I have been leading the COMMIT Data2Semantics project. The goal of our project is to lower the threshold for potential adopters of semantic technology, in this case in science, medicine and the humanities. I am a strong advocate of the rapid development of lightweight, simple showcases of the potential of this complicated technology. Data2Semantics received the COMMIT/ 2013 Valorization Award for the valorization efforts undertaken within the project.
Linkitup - Link Discovery for Research Data - a tool for enriching the metadata present in existing data publication repositories with links to the Linked Data cloud, author identifiers, and other content services. http://linkitup.data2semantics.org
PROV-O-Matic - a provenance tracker integrated in the Jupyter Notebook environment. This brings provenance tracing to one of the preferred environments of data scientists, rather than forcing them to learn new tools. https://vimeo.com/109672900
Because of my knowledge representation work in the legal domain (see below), linked data expertise, and affinity with the government domain in general, I am frequently approached by government and semi-government parties to give talks or advice.
Examples: GeoNovum, CB-NL, Belastingdienst (Dutch Tax and Customs Administration), IND (Dutch Immigration and Naturalization Service), Veiligheidsregio Kennemerland, Bureau Forum Standaardisatie, ICTU, KPMG, Elsevier, Wolters Kluwer, TNO, Zenc, Logica/CGI, Schweizerische Bundeskanzlei, Figshare, ING, Reed Business, and others.
During the development of the LKIF Core Ontology (see below), and LRI Core, prior to that, I was an active member of the OWL community, and became the University of Amsterdam representative for the standardisation of OWL2.
The MetaLex document server (MDS) publishes all versions of all Dutch regulations as Linked Data and as CEN MetaLex XML. It is the largest Linked Data repository of legislation of its kind outside the UK. I have used the MDS for automatic annotation of legislation for the Dutch Tax and Customs Administration, and as a source for large-scale network analysis and comparison to the Web.
http://doc.metalex.eu and http://figshare.com/articles/A_Network_Analysis_of_Dutch_Regulations/689880
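Programmatic access follows standard Linked Data practice: dereference a document URI, or send a SPARQL query. A sketch of the latter, only to illustrate the query shape — the endpoint path, the metalex: property name, and the work URI below are all assumptions, not verified against the live server:

```python
import urllib.parse

# Hypothetical endpoint location; check doc.metalex.eu for the actual
# SPARQL endpoint before running a real query.
ENDPOINT = "http://doc.metalex.eu/sparql"

def versions_query(work_uri):
    """Build a SPARQL query listing all expressions (versions) of a work.
    The metalex:realizes property name is an assumption for illustration."""
    return f"""
PREFIX metalex: <http://www.metalex.eu/schema/1.0#>
SELECT ?version WHERE {{
  ?version metalex:realizes <{work_uri}> .
}}
"""

def query_url(work_uri):
    """URL for an HTTP GET against the endpoint (built, not executed here)."""
    params = {
        "query": versions_query(work_uri),
        "format": "application/sparql-results+json",
    }
    return ENDPOINT + "?" + urllib.parse.urlencode(params)

print(query_url("http://example.org/work/some-regulation"))
```

The versioned-URI design is what makes queries like this possible: every expression of a regulation is itself a dereferenceable resource, linked back to the abstract work it realizes.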
The LKIF Core ontology was developed as part of the EU ESTRELLA project as a core ontology for legal knowledge-based systems. During this period I joined the W3C OWL working group, which resulted in a number of publications in which I tried to push the limits of the OWL2 knowledge representation language. https://github.com/RinkeHoekstra/lkif-core