http://ijhe.sciedupress.com International Journal of Higher Education Vol. 8, No. 3; 2019
Published by Sciedu Press 104 ISSN 1927-6044 E-ISSN 1927-6052
Bao et al. (2014) proposed a translation-based KB-QA method that integrates semantic parsing and question answering in one unified framework and showed better results on a general-domain evaluation set. Zhang et al. (2016) adopted a heterogeneous network embedding method, TransR, to extract items' structural representations by considering the heterogeneity of both nodes and relationships. They proposed Collaborative Knowledge Base Embedding (CKE) to jointly learn the latent representations in collaborative filtering as well as items' semantic representations from the knowledge base. Park et al. (2016) proposed a method to automatically generate an object-name recognition corpus using a knowledge base. Two methods are applied according to the type of knowledge base. The first method creates a learning corpus by attaching object-name tags to sentences of Wikipedia text, based on Wikipedia itself. The second method generates a learning corpus by collecting various types of sentences from the Internet and attaching object-name tags using a pre-built knowledge base that holds the relations between the various objects in the database.
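The first corpus-generation strategy amounts to distant supervision: matching knowledge-base entity names against sentence text and emitting tags. A minimal sketch follows; the entity dictionary, types, and BIO tag scheme are illustrative assumptions, since the cited paper gives no implementation details.

```python
# Sketch: distant-supervision tagging of a sentence with object-name tags,
# assuming a small knowledge base mapping surface names to entity types.
# Names, types, and the BIO scheme here are illustrative, not from the paper.

KB = {
    "Seoul": "LOCATION",
    "Samsung Electronics": "ORGANIZATION",
}

def tag_sentence(tokens):
    """Return one BIO tag per token by greedy longest match against KB names."""
    tags = ["O"] * len(tokens)
    i = 0
    while i < len(tokens):
        matched = False
        # Try the longest span first so multi-word names win over prefixes.
        for j in range(len(tokens), i, -1):
            name = " ".join(tokens[i:j])
            if name in KB:
                tags[i] = "B-" + KB[name]
                for k in range(i + 1, j):
                    tags[k] = "I-" + KB[name]
                i = j
                matched = True
                break
        if not matched:
            i += 1
    return tags

tokens = "Samsung Electronics is headquartered near Seoul".split()
print(list(zip(tokens, tag_sentence(tokens))))
```

Greedy longest match keeps multi-word names such as "Samsung Electronics" from being split into partial matches.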
Wikipedia is a useful resource for building knowledge bases and is actively used in many areas (Zesch et al., 2007; Lehmann et al., 2015; Rebele et al., 2016; Ponzetto and Strube, 2013). Zesch et al. (2007) developed a general-purpose, high-performance Java-based Wikipedia API to use Wikipedia as a lexical semantic resource in large-scale NLP tasks. The DBpedia project (Lehmann et al., 2015) extracts knowledge from 111 different language editions of Wikipedia. The largest DBpedia knowledge base, extracted from the English edition of Wikipedia, consists of over 400 million facts that describe 3.7 million things. The DBpedia knowledge bases extracted from the other 110 Wikipedia editions together consist of 1.46 billion facts and describe 10 million additional things. Yago (Rebele et al., 2016) is a large knowledge base that is built automatically from Wikipedia, WordNet, and GeoNames. The project combines information from Wikipedias in 10 different languages, giving the knowledge a multilingual dimension. Wikitaxonomy (Ponzetto and Strube, 2013) is a taxonomy generated automatically from the system of categories in Wikipedia. Categories in the resource are identified as either classes or instances and placed in a large subsumption hierarchy. Knowledge bases are used as language resources in various research fields, including search and classification (Wang and Kim, 2017; Tezcan Kardas and Sadik, 2018).
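The class/instance subsumption structure described above can be sketched as a reachability check over child-to-parent category edges. The category names and the single-parent simplification below are illustrative assumptions, not taken from Wikitaxonomy itself.

```python
# Sketch: a tiny taxonomy as (child -> parent) subsumption edges, with a
# reachability test for "is A subsumed by B". Names are illustrative.

SUBSUMES = {
    "Seoul": "Capital city",    # instance -> class
    "Capital city": "City",     # class -> class
    "City": "Populated place",  # class -> class
}

def is_a(node, ancestor):
    """Follow subsumption edges upward; True if `ancestor` is reached."""
    while node in SUBSUMES:
        node = SUBSUMES[node]
        if node == ancestor:
            return True
    return False

print(is_a("Seoul", "Populated place"))  # holds transitively via two classes
```

A real category graph allows multiple parents, so a production check would walk the edges with a breadth-first search instead of a single upward chain.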
Workbenches are used in various studies to build knowledge bases. Rybina et al. (2017) suggested knowledge-acquisition processes that use the technological knowledge base of the intelligent planner of the AT-TECHNOLOGY workbench and special program tools. That work focuses on models and methods of distributed knowledge acquisition from databases as additional knowledge sources, and on automating the process via an intelligent program environment. Choi et al. (2012) suggested SINDI-WALKS, an integrated workbench that can extract and systematically manage the technical knowledge inherent in scientific and technical literature such as academic papers and patents. SINDI-WALKS includes a technology-knowledge extraction engine that identifies PLOT entities, i.e., person names, location names, institutions, and technical terms in text and extracts the semantic relationships between them, and a testbed function for monitoring and error analysis of these engines. It also supports building test collections, so that a learning set usable by the technology-knowledge extraction engine can be constructed efficiently. A workbench was also developed and used to support all the processes needed to build a terminology dictionary in the defence field (Choi et al., 2012). That workbench covers the terminology-dictionary construction process and its organizational structure: definition of headwords, selection of target documents for extracting terminology candidates, extraction of terminology candidates, creation of terminology candidate groups, dictionary construction, and verification of the dictionary.
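The candidate-extraction and grouping steps of such a terminology workbench can be sketched as a two-stage filter. The frequency threshold and the crude plural-stripping rule below are assumptions for illustration, not details from the cited work.

```python
# Sketch: extract terminology candidates from target documents by frequency,
# then group surface variants that share a normalized form.
from collections import Counter

def extract_candidates(documents, min_count=2):
    """Count word unigrams across documents; keep those above a threshold."""
    counts = Counter()
    for doc in documents:
        counts.update(doc.lower().split())
    return {w: c for w, c in counts.items() if c >= min_count}

def group_candidates(candidates):
    """Group candidates by a crude normalized form (strip a trailing 's')."""
    groups = {}
    for term in candidates:
        key = term[:-1] if term.endswith("s") else term
        groups.setdefault(key, []).append(term)
    return groups

docs = ["radar systems use radar antennas", "antennas and radar arrays"]
cands = extract_candidates(docs)
print(sorted(cands))          # frequent unigram candidates
print(group_candidates(cands))
```

A production workbench would use multi-word candidates, domain-specific filters, and human verification of the resulting dictionary, matching the process enumerated above.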