Data mapping among different data standards in health institutes is often a necessity when data exchanges occur among different institutes. However, neither rule-based approaches nor traditional machine learning methods have achieved satisfactory results yet. In this work, we propose a deep learning method, the mixture feature embedding convolutional neural network (MfeCNN), to...
We present a compilation tool, SBVR2Alloy, used to automatically generate and validate service choreographies specified in structured natural language. The proposed approach builds on a model transformation between the Semantics of Business Vocabulary and Rules (SBVR), an OMG standard for specifying business models in structured English, and the Alloy Analyzer, a SAT-based constraint...
The importance of functional status information (FSI) has become increasingly evident in recent years [1, 2]. However, implementation, application, and normalization of FSI in health care and Electronic Health Records (EHRs) have been largely underexplored. The World Health Organization's International Classification of Functioning, Disability and Health (ICF) [3] is considered to be the international...
Code bloat is a phenomenon in Genetic Programming (GP) that increases the size of individuals during the evolutionary process. Over the years, a large body of research has attempted to address this problem. In this paper, we propose a new method to control code bloat and reduce the complexity of the solutions in GP. The proposed method is called Substituting a subtree with an Approximate...
The construction of a knowledge graph of dangerous goods (KGDG) is of great significance for inferring information related to dangerous goods (DG), developing corresponding policies for their storage and transport, preventing disasters caused by DG, and providing emergency plans when a disaster happens. Since distributed representation of natural language is an effective method for knowledge...
Proof assistants such as Coq are used to construct and check formal proofs in many large-scale verification projects. As proofs grow in number and size, the need for tool support to quickly find failing proofs after revising a project increases. We present a technique for large-scale regression proof selection, suitable for use in continuous integration services, e.g., Travis CI. We instantiate the...
The recently developed variational autoencoders (VAEs) have proved to be an effective confluence of the rich representational power of neural networks with Bayesian methods. However, most work on VAEs uses a rather simple prior over the latent variables, such as a standard normal distribution, thereby restricting their application to relatively simple phenomena. In this work, we propose hierarchical non-parametric...
Many existing scene parsing methods adopt Convolutional Neural Networks with fixed-size receptive fields, which frequently result in inconsistent predictions of large objects and invisibility of small objects. To tackle this issue, we propose a scale-adaptive convolution to acquire flexible-size receptive fields during scene parsing. By adding a new scale regression layer, we can dynamically infer...
There is an overwhelming variety of multimedia ontologies used to narrow the semantic gap, many of which are overlapping, not richly axiomatized, do not provide a proper taxonomical structure, and do not define complex correlations between concepts and roles. Moreover, not all ontologies used for image annotation are suitable for video scene representation, due to the lack of rich high-level semantics...
A form ontology was built to complement the knowledge base of the XReformer system, a system that generates web form designs automatically with a case-based reasoning (CBR) approach. The case base is used to store cases of form design, while the ontology is used to define forms and their elements, the relationships between them, and the relationships among the elements themselves. The ontology acts as a small-scale knowledge base...
Recently, improvements to latent semantic analysis (LSA), which stems from singular value decomposition to derive latent semantic classes, have been proposed, notably the hk-LSA model. The hk-LSA model is based on reducing the dimension of the vector space and a probabilistic-like relationship between the document-term and latent-topic spaces. This improved model overcomes some shortcomings of standard LSA such as...
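The abstract above does not detail the hk-LSA model itself, but the standard LSA baseline it builds on is well known: project documents into a low-dimensional latent-topic space via truncated SVD of the term-document matrix. A minimal sketch (the toy matrix and the function name `lsa_embed` are illustrative, not from the paper):

```python
import numpy as np

def lsa_embed(term_doc: np.ndarray, k: int) -> np.ndarray:
    """Project documents into a k-dimensional latent semantic space
    via truncated SVD of the term-document matrix."""
    # Full (thin) SVD: term_doc = U @ diag(s) @ Vt
    U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)
    # Keep the top-k singular triplets; each column of the result is
    # one document's coordinates in the latent-topic space.
    return np.diag(s[:k]) @ Vt[:k, :]

# Toy term-document matrix: 4 terms x 3 documents.
X = np.array([[2., 0., 1.],
              [1., 1., 0.],
              [0., 3., 1.],
              [0., 1., 2.]])
docs_2d = lsa_embed(X, k=2)  # shape (2, 3): 3 documents in 2 latent topics
```

Dimensionality reduction of this kind is what the "reducing the dimension of the vector space" step refers to; the probabilistic-like refinement is specific to hk-LSA and is not reproduced here.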
In this paper, we present Mnogoznal, an unsupervised system for word sense disambiguation. Given a sentence, the system chooses the most relevant sense of each input word with respect to the semantic similarity between the given sentence and the synset constituting the sense of the target word. Mnogoznal has two modes of operation. The sparse mode uses the traditional vector space model to estimate the...
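The abstract is truncated before it describes the sparse mode in full, but the general recipe it names is standard: represent the sentence and each candidate synset as bags of words and pick the synset with the highest cosine similarity. A minimal sketch under that assumption (the toy data and the function `disambiguate` are illustrative, not Mnogoznal's actual implementation):

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def disambiguate(context, synsets):
    """Pick the synset whose word bag is most similar to the context."""
    ctx = Counter(context)
    return max(synsets, key=lambda words: cosine(ctx, Counter(words)))

context = ["the", "bank", "raised", "interest", "rates", "money"]
senses = [["financial", "institution", "interest", "money"],
          ["river", "bank", "shore", "water"]]
best = disambiguate(context, senses)  # the financial sense wins
```

The dense mode mentioned in the paper would replace the sparse counts with learned word embeddings, but the similarity-maximisation step stays the same.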
A promising way to improve the programming process is to increase the declarativeness of programming, approximating natural language on the basis of accumulating and actively using knowledge. The essence of the proposed approach is representing the semantics of a program as a set of concepts about actions, participants, resources, and the relations between them, along with the accumulation and classification...
Paraphrase detection is the task of examining whether two sentences convey the same meaning. In this paper, we use sentence embeddings built from unsupervised recursive autoencoder (RAE) vectors to capture syntactic as well as semantic information. The RAEs learn features from the nodes of the parse tree and chunk information along with unsupervised word embeddings. These learnt features are used for measuring...
The operation and maintenance of large distributed systems that are subject to high QoS conditions has led to the need of designing and developing advanced monitoring tools that facilitate the administration of the critical services required by the user communities. RASSMon is a portable, reliable, secure software platform able to collect monitoring data from multiple sources in heterogeneous environments...
This article presents an approach to examining sentence similarity. In our approach, an Euler algorithm was used to generate a series of words based on a tree, and the Sørensen–Dice coefficient was applied to determine the similarity between the compared trees. The emphasis is on defining the similarity between the correct and incorrect answers from the Yahoo Question and Answer Non-Factual Data...
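The abstract does not specify how the Sørensen–Dice coefficient is applied to the tree-derived word series, but the coefficient itself is standard: 2|A ∩ B| / (|A| + |B|) for two sets. A minimal set-based sketch (the example sentences are illustrative, not from the Yahoo dataset):

```python
def dice(a: set, b: set) -> float:
    """Sørensen–Dice coefficient: 2|A ∩ B| / (|A| + |B|)."""
    if not a and not b:
        return 1.0  # two empty sets are conventionally identical
    return 2 * len(a & b) / (len(a) + len(b))

t1 = {"what", "is", "the", "capital", "of", "france"}
t2 = {"the", "capital", "of", "france", "is", "paris"}
score = dice(t1, t2)  # 2*5 / (6+6) = 0.833...
```

A multiset variant (counting repeated words) is an equally plausible reading of the paper's method; the set version shown here is the simplest instance.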
In the academic literature, many uses of the Object Constraint Language (OCL) have been proposed. By contrast, the utilization of OCL in contemporary modelling tools lags behind, suggesting that leverage of OCL remains limited in practice. We consider this undeserved, and present a scheme for partially evaluating OCL expressions that allows one to capitalize on given OCL specifications for a wide...
Ontology matching is the process of finding correspondences between semantically related entities of two ontologies. Most matching systems are evaluated by comparing the correspondences with a reference alignment. Since 2010, another method, called incoherent mapping measurement, has been used to assess the logical coherence of a correspondence or mapping: the more incoherent the mapping, the lower the quality of the mapping...
The area of secure compilation aims to design compilers which produce hardened code that can withstand attacks from low-level co-linked components. So far, there is no formal correctness criterion for secure compilers that comes with a clear understanding of what security properties the criterion actually provides. Ideally, we would like a criterion that, if fulfilled by a compiler, guarantees that...
This paper designs and implements an automatic evaluation system for experimental reports in the field of university computer virtual experiments. Evaluation is divided into three types: the unique-answer type, the rule-related type, and the subjective short-answer type. For subjective short-answer problems, a simple and effective method based on word segmentation of the standard answer...