13 results for "N. H. N. D. De Silva"
Search Results
2. Classifying Sentences in Court Case Transcripts using Discourse and Argumentative Properties
- Author
-
Menuka Warushavithana, G. Rathnayake, Amal Shehan Perera, Thejan Rupasinghe, Viraj Gamage, N. H. N. D. de Silva, and M. Perera
- Subjects
Argumentative, Computer science, General Medicine, Task (project management), Support vector machine, Information extraction, Court case, Relationship type, Artificial intelligence, Value (semiotics), Sentence, Natural language processing
- Abstract
Information available in court case transcripts, which describe the proceedings of previous legal cases, is of significant importance to legal officials. Automatic information extraction from court case transcripts is therefore a task of considerable importance for facilitating processes in the legal domain. A sentence is a fundamental textual unit of any text document, so analyzing the properties of sentences can be of immense value for information extraction from machine-readable text. This paper demonstrates how the properties of sentences can be used to extract valuable information from court case transcripts. As the first task, sentence pairs were classified based on the relationship type observed between the two sentences; for this, we defined the relationship types that can be observed between sentences in court case transcripts. A system combining a machine learning model and a rule-based approach was used to classify pairs of sentences according to relationship type. The next classification task was based on whether a given sentence provides a legal argument or not. The results obtained through the proposed methodologies were evaluated by human judges. To the best of our knowledge, this is the first study in which discourse relationships have been used to determine relationships among sentences in legal court case transcripts. Similarly, this study provides novel and effective approaches to identifying argumentative sentences in court case transcripts.
- Published
- 2019
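The two-stage pipeline described in the abstract above (a rule-based layer combined with a machine-learning model) can be sketched roughly as follows. This is a minimal illustration only: the relation label, the cue-word list, the Jaccard-overlap feature, and the threshold are all invented for this sketch and are not the authors' actual rules, features, or relation types.

```python
import re

# Hypothetical cue words for an "elaboration"-style relation between a
# sentence pair; the paper's real rule set is not reproduced here.
ELABORATION_CUES = {"therefore", "thus", "hence", "furthermore", "moreover"}

def tokenize(sentence):
    """Lowercase word tokens."""
    return re.findall(r"[a-z']+", sentence.lower())

def word_overlap(s1, s2):
    """Jaccard overlap, a crude stand-in for an ML model's similarity feature."""
    a, b = set(tokenize(s1)), set(tokenize(s2))
    return len(a & b) / len(a | b) if a | b else 0.0

def classify_pair(s1, s2, threshold=0.2):
    """Rule-based layer first; similarity-based fallback second."""
    first_word = tokenize(s2)[:1]
    if first_word and first_word[0] in ELABORATION_CUES:
        return "elaboration"  # discourse-cue rule fired
    if word_overlap(s1, s2) >= threshold:
        return "elaboration"  # model-style fallback on lexical overlap
    return "no-relation"
```

In the paper, the rule-based layer and a trained model (e.g. an SVM, per the subject tags) are combined; here the "model" is reduced to a single overlap feature purely to show the control flow.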
3. Design and implementation of an optimized communication method for Remote Meter Reading using Zigbee
- Author
-
G.D. Porawagamage, N. H. N. D. de Silva, Ktmu Hemapala, and M.S. Dunuweera
- Subjects
Computer science, Real-time computing, Networking & telecommunications, Concentrator, Software, Resource (project management), Server, Electrical engineering, electronic engineering, information engineering, Global Positioning System, Metre, General Packet Radio Service, Computer hardware, Automatic meter reading
- Abstract
This paper presents research carried out to optimize a Zigbee-based remote meter reading network. Various technologies are available to automate meter reading. As far as utility providers are concerned, their focus is on a reliable Remote Meter Reading (RMR) system that reads meters at the minimum possible cost. The development of a reliable RMR system depends heavily on telecommunication infrastructure, which is costly if GPRS (General Packet Radio Service) is used as the means of communication. This research analyzes the cost and function of RMR systems in depth. Communication delay and resource optimization were analyzed for a data-concentrator-based RMR system, and several simulations were carried out covering communication speed, communication paths, and their optimization. Based on GPS locations and the generated algorithm, software was developed for selecting the correct Zigbee power ratings.
- Published
- 2017
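The final step in the abstract above (selecting Zigbee power ratings from GPS locations) can be sketched as follows. The haversine distance formula is standard; the power tiers and their distance boundaries are assumptions for illustration, not values from the paper.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def select_power_rating(distance_km):
    """Pick the lowest transmit-power tier that covers the hop between a
    meter and its concentrator. Tier boundaries are illustrative only."""
    if distance_km <= 0.1:
        return "low"
    if distance_km <= 0.5:
        return "medium"
    return "high"
```

A real deployment would derive the tiers from the radio's link budget and the measured packet-loss rates, which the paper's simulations address.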
4. Ontology Based Information Extraction
- Author
-
N. H. N. D. De Silva
- Published
- 2016
5. Ontology-Based Information Extraction on PubMed abstracts using the OMIT ontology to discover inconsistencies (Full Presentation)
- Author
-
N. H. N. D. De Silva
- Published
- 2016
6. SAFS3 algorithm: Frequency statistic and semantic similarity based semantic classification use case
- Author
-
N. H. N. D. de Silva
- Subjects
Computer science, Sentiment analysis, Context (language use), Machine learning, Set (abstract data type), Semantic similarity, Benchmark (computing), Artificial intelligence, tf–idf, Algorithm, Natural language processing, Statistic, Movie reviews
- Abstract
Sentiment analysis of movie reviews is a topic of interest to artists and businesses alike, whether for gauging the reception of an artwork or for understanding market trends for the benefit of future productions. In this study we introduce an algorithm (SAFS3) to classify documents into multiple classes. This paper then evaluates the SAFS3 algorithm through the use case of analysing a set of reviews from Rotten Tomatoes. The novel algorithm achieves an accuracy of 53.6%, outperforming the benchmark for this context as well as the set of generic machine learning algorithms commonly used for tasks of this nature.
- Published
- 2015
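The frequency-statistic side of the approach described above can be illustrated with a plain tf–idf nearest-document classifier. This is not the SAFS3 algorithm itself (which also incorporates semantic similarity); it is only a self-contained sketch of the tf–idf machinery named in the subject tags.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """tf-idf weight vector (as a dict) for each tokenized document."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))  # document frequency
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse vectors."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def classify(query_tokens, labeled_docs):
    """Assign the label of the most cosine-similar training document."""
    docs = [toks for toks, _ in labeled_docs] + [query_tokens]
    vecs = tfidf_vectors(docs)
    qvec = vecs[-1]
    scores = [(cosine(qvec, v), lbl)
              for v, (_, lbl) in zip(vecs, labeled_docs)]
    return max(scores)[1]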
7. Comparison Between Performance of Various Database Systems for Implementing a Language Corpus
- Author
-
N. H. N. D. de Silva, Chamila Wijayarathna, Maduranga Siriwardena, Chinthana Wimalasuriya, Lahiru Lasandun, Dimuthu Upeksha, and Gihan Dias
- Subjects
Indexed file, Graph database, Database, Computer science, Relational database, NoSQL, Database design, Column (database), Computer data storage, Architecture
- Abstract
Data storage and information retrieval are among the most important aspects of developing a language corpus. Currently, most corpora use either relational databases or indexed file systems. When selecting a data storage system, the most important factors to consider are the speeds of data insertion and information retrieval. Beyond the two aforementioned approaches, there are various database systems with different strengths that can be more useful. This paper compares the performance of data storage and retrieval mechanisms that use relational databases, graph databases, column-store databases, and indexed file systems for steps such as inserting data into a corpus and retrieving information from it, and suggests an optimal storage architecture for a language corpus.
- Published
- 2015
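The insertion-versus-retrieval timing comparison described above can be reproduced in miniature with the standard-library `sqlite3` module standing in for a relational store. The corpus data, schemas, and the other database systems benchmarked in the paper are not reproduced; this only shows the shape of such a benchmark.

```python
import sqlite3
import time

def bench_sqlite(words):
    """Time bulk insertion and one indexed lookup in an in-memory
    relational store; returns (insert_seconds, lookup_seconds, freq)."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE corpus (word TEXT PRIMARY KEY, freq INTEGER)")
    t0 = time.perf_counter()
    con.executemany("INSERT INTO corpus VALUES (?, ?)",
                    ((w, i) for i, w in enumerate(words)))
    insert_s = time.perf_counter() - t0
    t0 = time.perf_counter()
    row = con.execute("SELECT freq FROM corpus WHERE word = ?",
                      (words[-1],)).fetchone()
    lookup_s = time.perf_counter() - t0
    con.close()
    return insert_s, lookup_s, row[0]
```

Running the same driver against a graph store, a column store, and an indexed file would yield the comparable insert/lookup timings the paper reports.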
8. Sentence similarity measuring by vector space model
- Author
-
U. L. D. N. Gunasinghe, W. D. T. P. Premasiri, Amal Shehan Perera, N. H. N. D. de Silva, W. A. M. De Silva, and W. A. D. Sashika
- Subjects
Computer science, Similarity measure, Semantic role labeling, Semantic similarity, Similarity (network science), Noun, Normalized compression distance, Vector space model, Artificial intelligence, Natural language processing, Sentence
- Abstract
In Natural Language Processing and text mining, one important aspect is measuring sentence similarity. When measuring the similarity between sentences, there are three major branches that can be followed: measuring similarity based on the semantic structure of sentences, syntactic similarity measures, and hybrid measures. Syntactic similarity based methods take into account the co-occurring words in strings, while semantic similarity measures consider the semantic similarity between words based on a semantic net. Most of the time, the easiest way to calculate sentence similarity is with syntactic measures, which do not consider the grammatical structure of sentences; however, there are sentences that have the same meaning but use different words. By considering both semantic and syntactic similarity, we can improve the quality of the similarity measure rather than depending on semantic or syntactic similarity alone. This paper follows a sentence similarity algorithm developed on both syntactic and semantic similarity measures. The algorithm measures sentence similarity using a vector space model generated for the word nodes in the sentences. In this implementation we consider two types of relationships: the relationship between verbs in the sentence pairs, and the relationship between nouns in the sentence pairs. One of the major advantages of this method is that it can be used for variable-length sentences. In the experiments and results section we include our results with this algorithm for a selected set of sentence pairs and compare them with actual human ratings of the similarity of the sentence pairs.
- Published
- 2014
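The vector-space construction described in the abstract above (a vector per sentence over the joint word list, combining exact matches with semantic-net word similarity) can be sketched as follows. The tiny synonym table stands in for a real semantic net such as WordNet, the 0.8 synonym weight is an assumed value, and no verb/noun distinction is made here, unlike in the paper.

```python
# Toy synonym table standing in for a semantic net; a real system
# would query WordNet or similar.
SYNONYMS = {"car": {"automobile", "vehicle"}, "big": {"large", "huge"}}

def tokens(s):
    return s.lower().replace(".", "").split()

def word_sim(w1, w2):
    """1.0 for identical words, 0.8 for listed synonyms, else 0.0."""
    if w1 == w2:
        return 1.0
    if w2 in SYNONYMS.get(w1, ()) or w1 in SYNONYMS.get(w2, ()):
        return 0.8
    return 0.0

def sentence_similarity(s1, s2):
    """Vector per sentence over the joint vocabulary, then cosine.
    Each component is the best word-level similarity to that sentence."""
    t1, t2 = tokens(s1), tokens(s2)
    vocab = sorted(set(t1) | set(t2))
    v1 = [max(word_sim(w, t) for t in t1) for w in vocab]
    v2 = [max(word_sim(w, t) for t in t2) for w in vocab]
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = sum(a * a for a in v1) ** 0.5
    n2 = sum(b * b for b in v2) ** 0.5
    return dot / (n1 * n2) if n1 and n2 else 0.0
```

Because the vectors are built over the joint vocabulary rather than fixed positions, the measure handles variable-length sentences, which the abstract highlights as an advantage.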
9. Document analysis based automatic concept map generation for enterprises
- Author
-
E. L. Karannagoda, Kulakshi Fernando, N. H. N. D. de Silva, Amal Shehan Perera, H. M. T. C. Herath, and M. W. I. D. Karunarathne
- Subjects
Structure (mathematical logic), Information retrieval, Concept map, Computer science, Rule-based system, Document clustering, Automatic summarization, Ranking (information retrieval), Information extraction, Relevance (information retrieval), Artificial intelligence, Natural language processing
- Abstract
Ever-growing knowledge bases of enterprises present the demanding challenge of organizing information properly so that related and intended information can be retrieved quickly. Document repositories of enterprises consist of large collections of documents of varying size, format, and writing style. This diversified and unstructured nature of documents restricts the possibilities of developing uniform techniques for extracting important concepts and relationships for summarization, structured representation, and fast retrieval. The documented textual content is used as the input for the construction of a concept map. Here a rule-based approach is used to extract concepts and the relationships among them. Sentence-level breakdown enables these rules to identify those concepts and relationships; the rules are based on elements of the phrase structure tree of a sentence. To improve the accuracy and relevance of the extracted concepts and relationships, special features such as titles, bold text, and upper-case text are used. This paper discusses how to overcome the above-mentioned challenges by utilizing high-level natural language processing techniques and document pre-processing techniques, and by developing an easily understandable and extractable compact representation of concept maps. Each document in the repository is converted to a concept map representation to capture the concepts, and the relationships among them, described in that document; this organization represents a summary of the document. These individual concept maps are used to generate concept maps that represent sections of the repository or the entire document repository. The paper also discusses how statistical techniques are used to calculate metrics that facilitate certain requirements of the solution; principal component analysis is used to rank documents by importance. The concept map is visualized using force-directed graphs, which represent concepts by nodes and relationships by edges.
- Published
- 2013
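The rule-based extraction and graph construction described above can be sketched as follows. The paper's rules operate on phrase structure trees; the single regular-expression pattern and the verb list here are drastic simplifications invented for illustration.

```python
import re

def extract_triples(text):
    """Naive subject-verb-object rule over simple 'X <verb> Y' sentences.
    The verb list and pattern are assumptions, not the paper's rules."""
    triples = []
    for sentence in re.split(r"[.!?]", text):
        m = re.match(r"\s*(\w[\w ]*?)\s+(is|has|contains|uses)\s+(\w[\w ]*)",
                     sentence, re.I)
        if m:
            triples.append((m.group(1).strip(),
                            m.group(2).lower(),
                            m.group(3).strip()))
    return triples

def build_concept_map(triples):
    """Adjacency list: concept -> list of (relationship, concept) edges,
    matching the node/edge representation used for visualization."""
    graph = {}
    for subj, rel, obj in triples:
        graph.setdefault(subj, []).append((rel, obj))
    return graph
```

Merging the per-document graphs produced this way gives a repository-level concept map, mirroring the aggregation step the abstract describes.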
10. Semi-supervised algorithm for concept ontology based word set expansion
- Author
-
N. H. N. D. de Silva, M. K. D. T. Maldeniya, and Amal Shehan Perera
- Subjects
Computer science, Supervised learning, WordNet, Ontology (information science), Artificial general intelligence, Proof of concept, Artificial intelligence, Complement (linguistics), Natural language processing, Word (computer architecture), Natural language
- Abstract
Word lists that contain closely related sets of words are a critical requirement for machine understanding and processing of natural languages. Creating and maintaining such word lists is a critical and complex process that requires human input and is carried out manually in the absence of tools. We describe a supervised learning mechanism that employs a word ontology to expand word lists containing closely related sets of words. The approach described in this paper uses two novel supervised learning techniques that complement each other for the purpose of expanding existing lists of related words. Expanding the concept variable lists of the RelEx2Frame component of the OpenCog Artificial General Intelligence framework using WordNet serves as a proof of concept. This work would enable OpenCog applications to attempt to understand words that they could not understand before, due to the limited size of existing lists of related words.
- Published
- 2013
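One way to picture ontology-based word set expansion as described above: add candidate words whose ontology parent is shared by enough seed words. The hand-made hypernym map below stands in for WordNet, and the `min_support` threshold is an assumed parameter; the paper's two actual techniques are not reproduced.

```python
# Toy hypernym map standing in for WordNet's hypernym relation.
HYPERNYMS = {
    "dog": "animal", "cat": "animal", "horse": "animal",
    "oak": "tree", "pine": "tree",
}

def expand_word_list(seeds, vocabulary, min_support=2):
    """Add any vocabulary word whose hypernym is shared by at least
    `min_support` seed words (an assumed support threshold)."""
    seed_parents = [HYPERNYMS.get(w) for w in seeds]
    supported = {p for p in set(seed_parents) - {None}
                 if seed_parents.count(p) >= min_support}
    expanded = set(seeds)
    for word in vocabulary:
        if HYPERNYMS.get(word) in supported:
            expanded.add(word)
    return sorted(expanded)
```

Applied to a RelEx2Frame concept variable list, the seeds would be the existing list members and the vocabulary the candidate words drawn from WordNet.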
11. SELCHI : Travel Profiling
- Author
-
N. H. N. D. De Silva, Danaja Maldeniya, Chiran Chathuranga, Eranga Mapa, Lasitha Wattaladeniya, and Samith Dassanayake
- Published
- 2013
12. SigmaC : Document analysis based Automatic Concept Map Generation for Enterprises
- Author
-
E L Karannagoda, M W I D Karunarathne, N. H. N. D. De Silva, H. M. T. C. Herath, K. N. J. Fernando, and A. S. Perera
- Published
- 2011
13. Maintainability Risk Factors of Flat Roofs in Multi-Storey Buildings in Sri Lanka
- Author
-
Raufdeen Rameezdeen, N. H. N. D. de Silva, and Malik Ranasinghe
- Subjects
Risk analysis, Engineering, Scoring system, Flat roof, Maintenance actions, Maintainability, Forensic engineering, Sri Lanka
- Abstract
Research into the maintainability of multi-storey buildings in Sri Lanka is still at an early stage. One of the critical building elements requiring immediate attention for maintainability is the flat roof. Flat roofs are often subjected to alternating drying and wetting cycles under tropical conditions, causing many defects and subsequent deterioration when proper detailing related to design, construction, and maintenance is lacking. The inherent maintainability risks of buildings can be identified by analyzing their defect-causing factors. In this research, 12 such risk factors related to the maintainability of flat roofs were identified. Further, a scoring system using Artificial Neural Networks (ANN) is developed to forecast the level of maintainability, projected on the basis of risk analysis. The level of maintainability of a typical flat roof is shown to be 51%. The risk factors are also prioritized to give guidance.
- Published
- 2008
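The scoring idea above, mapping 12 risk-factor ratings to a maintainability percentage, can be pictured with a transparent linear score. The paper trains an ANN for this mapping; the equal weights and the linear form below are illustrative assumptions, not the trained model.

```python
# Illustrative equal weights for the 12 risk factors; the paper learns
# this mapping with an Artificial Neural Network instead.
WEIGHTS = [1.0 / 12] * 12

def maintainability_score(risk_ratings):
    """Map 12 risk-factor ratings in [0, 1] (higher = riskier) to a
    maintainability percentage (higher = more maintainable)."""
    if len(risk_ratings) != 12:
        raise ValueError("expected exactly 12 risk-factor ratings")
    risk = sum(w * r for w, r in zip(WEIGHTS, risk_ratings))
    return round((1.0 - risk) * 100, 1)
```

A trained ANN would replace the weighted sum with a learned non-linear function, but the input/output contract would be the same.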
Discovery Service for Jio Institute Digital Library