22 results for "Ralph Ewerth"
Search Results
2. B!SON: A Tool for Open Access Journal Recommendation
- Author
- Elias Entrup, Anita Eppelin, Ralph Ewerth, Josephine Hartwig, Marco Tullney, Michael Wohlgemuth, and Anett Hoppe
- Abstract
Finding a suitable open access journal to publish scientific work is a complex task: researchers have to navigate a constantly growing number of journals, institutional agreements with publishers, funders' conditions and the risk of predatory publishers. To help with these challenges, we introduce a web-based journal recommendation system called B!SON. It is developed based on a systematic requirements analysis, built on open data, gives publisher-independent recommendations and works across domains. It suggests open access journals based on title, abstract and references provided by the user. The recommendation quality has been evaluated using a large test set of 10,000 articles. Development by two German scientific libraries ensures the longevity of the project.
- Published
- 2022
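The abstract above does not detail B!SON's recommendation model, but the core idea of matching a manuscript's title and abstract against journal profiles can be sketched with a simple bag-of-words cosine similarity. The journal names and texts below are hypothetical, and B!SON's actual features (which also include references) and scoring are more sophisticated.

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(manuscript: str, journals: dict, k: int = 3) -> list:
    """Rank journals by textual similarity to the manuscript's title + abstract."""
    query = Counter(manuscript.lower().split())
    scored = [(cosine(query, Counter(text.lower().split())), name)
              for name, text in journals.items()]
    return [name for score, name in sorted(scored, reverse=True)[:k]]

# Hypothetical journal profiles built from their published abstracts.
journals = {
    "Journal of Machine Learning": "neural networks deep learning classification",
    "Marine Biology Letters": "ocean species coral reef ecosystems",
}
```

In a real system the journal profiles would be aggregated from open metadata (e.g. DOAJ records) rather than hand-written strings.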
3. Citation Recommendation for Research Papers via Knowledge Graphs
- Author
- Arthur Brack, Ralph Ewerth, and Anett Hoppe
- Subjects
Citation network, Research knowledge, Information retrieval, Knowledge graph, Computer science, Citation
- Abstract
Citation recommendation for research papers is a valuable task that can help researchers improve the quality of their work by suggesting relevant related work. Current approaches for this task rely primarily on the text of the papers and the citation network. In this paper, we propose to exploit an additional source of information, namely research knowledge graphs (KGs) that interlink research papers based on mentioned scientific concepts. Our experimental results demonstrate that the combination of information from research KGs with existing state-of-the-art approaches is beneficial. Experimental results are presented for the STM-KG (STM: Science, Technology, Medicine), which is an automatically populated knowledge graph based on the scientific concepts extracted from papers of ten domains. The proposed approach outperforms the state of the art with a mean average precision of 20.6% (+0.8) for the top-50 retrieved results.
- Published
- 2021
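The paper above combines a text-based ranker with information from a research knowledge graph; the exact fusion is not given in the abstract, but a common late-fusion scheme, a weighted sum of the text score and the overlap of linked scientific concepts, illustrates the idea. The weight `alpha` and the concept sets are hypothetical.

```python
def kg_overlap(paper_concepts: set, candidate_concepts: set) -> float:
    """Jaccard overlap of scientific concepts linked in a research KG."""
    if not paper_concepts or not candidate_concepts:
        return 0.0
    union = paper_concepts | candidate_concepts
    return len(paper_concepts & candidate_concepts) / len(union)

def combined_score(text_score: float, paper_concepts: set,
                   candidate_concepts: set, alpha: float = 0.8) -> float:
    """Late fusion: weighted sum of a text-based ranker score and KG overlap."""
    return alpha * text_score + (1 - alpha) * kg_overlap(paper_concepts, candidate_concepts)
```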
4. Visualizing Copyright-Protected Video Archive Content Through Similarity Search
- Author
- Kader Pustu-Iren, Sherzod Hakimov, Eric Müller-Budack, and Ralph Ewerth
- Subjects
Information retrieval, Computer science, Nearest neighbor search, Person recognition, Visualization
- Abstract
Providing access to protected media archives can be difficult due to licensing restrictions. In this paper, an alternative way to examine video content without violating terms of use is proposed. For this purpose, keyframes of the original, archived videos are replaced with images from publicly available sources using person recognition and visual similarity search for scenes and locations.
- Published
- 2021
5. Image Analytics in Web Archives
- Author
- Eric Müller-Budack, Ralph Ewerth, Sebastian Diering, Kader Pustu-Iren, and Matthias Springstein
- Subjects
Computer science, Semantic search, Metadata, World Wide Web, Named-entity recognition, Analytics, Web page, The Internet
- Abstract
The multimedia content published on the World Wide Web is constantly growing and contains valuable information in various domains. The Internet Archive initiative has gathered billions of time-versioned web pages since the mid-nineties, but unfortunately, they are rarely provided with appropriate metadata. This lack of structured data limits the exploration of the archives, and automated solutions are required to enable semantic search. While many approaches exploit the textual content of news in the Internet Archive to detect named entities and their relations, visual information is generally disregarded. In this chapter, we present an approach that leverages deep learning techniques for the identification of public personalities in the images of news articles stored in the Internet Archive. In addition, we elaborate on how this approach can be extended to enable detection of other entity types such as locations or events. The approach complements named entity recognition and linking tools for text and allows researchers and analysts to track the media coverage and relations of persons more precisely. We have analysed more than one million images from news articles in the Internet Archive and demonstrated the feasibility of the approach with two use cases in different domains: politics and entertainment.
- Published
- 2021
6. Predicting Knowledge Gain During Web Search Based on Multimedia Resource Consumption
- Author
- Ran Yu, Georg Pardi, Christian Otto, Ralph Ewerth, Yvonne Kammerer, Markus Rokicki, Peter Holtz, Anett Hoppe, Stefan Dietze, and Johannes von Hoyer
- Subjects
Multimedia, Computer science, Feature extraction, Informal learning, Document layout analysis
- Abstract
In informal learning scenarios the popularity of multimedia content, such as video tutorials or lectures, has significantly increased. Yet, the users’ interactions, navigation behavior, and consequently learning outcome, have not been researched extensively. Related work in this field, also called search as learning, has focused on behavioral or text resource features to predict learning outcome and knowledge gain. In this paper, we investigate whether we can exploit features representing multimedia resource consumption to predict the knowledge gain (KG) during Web search from in-session data, that is without prior knowledge about the learner. For this purpose, we suggest a set of multimedia features related to image and video consumption. Our feature extraction is evaluated in a lab study with 113 participants where we collected data for a given search as learning task on the formation of thunderstorms and lightning. We automatically analyze the monitored log data and utilize state-of-the-art computer vision methods to extract features about the seen multimedia resources. Experimental results demonstrate that multimedia features can improve KG prediction. Finally, we provide an analysis on feature importance (text and multimedia) for KG prediction.
- Published
- 2021
7. Coreference Resolution in Research Papers from Multiple Domains
- Author
- Ralph Ewerth, Daniel Uwe Müller, Anett Hoppe, and Arthur Brack
- Subjects
Coreference, Computer science, Information extraction, Question answering, Artificial intelligence, F1 score, Transfer learning, Natural language processing
- Abstract
Coreference resolution is essential for automatic text understanding to facilitate high-level information retrieval tasks such as text summarisation or question answering. Previous work indicates that the performance of state-of-the-art approaches (e.g. based on BERT) noticeably declines when applied to scientific papers. In this paper, we investigate the task of coreference resolution in research papers and subsequent knowledge graph population. We present the following contributions: (1) We annotate a corpus for coreference resolution that comprises 10 different scientific disciplines from Science, Technology, and Medicine (STM); (2) We propose transfer learning for automatic coreference resolution in research papers; (3) We analyse the impact of coreference resolution on knowledge graph (KG) population; (4) We release a research KG that is automatically populated from 55,485 papers in 10 STM domains. Comprehensive experiments show the usefulness of the proposed approach. Our transfer learning approach considerably outperforms state-of-the-art baselines on our corpus with an F1 score of 61.4 (+11.0), while the evaluation against a gold standard KG shows that coreference resolution improves the quality of the populated KG significantly with an F1 score of 63.5 (+21.8).
- Published
- 2021
8. Evaluation of Automated Image Descriptions for Visually Impaired Students
- Author
- Anett Hoppe, Ralph Ewerth, and David Morris
- Subjects
Computer science, Visually impaired, Artificial intelligence, Natural language processing
- Abstract
Illustrations are widely used in education, and sometimes, alternatives are not available for visually impaired students. Therefore, those students would benefit greatly from an automatic illustration description system, but only if those descriptions were complete, correct, and easily understandable using a screenreader. In this paper, we report on a study for the assessment of automated image descriptions. We interviewed experts to establish evaluation criteria, which we then used to create an evaluation questionnaire for sighted non-expert raters, and description templates. We used this questionnaire to evaluate the quality of descriptions which could be generated with a template-based automatic image describer. We present evidence that these templates have the potential to generate useful descriptions, and that the questionnaire identifies problems with description templates.
- Published
- 2021
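A template-based image describer of the kind evaluated above can be sketched as a function that fills a fixed sentence pattern with chart metadata. The template wording and fields below are illustrative, not the templates used in the study.

```python
def describe_bar_chart(title: str, x_label: str, y_label: str, bars: dict) -> str:
    """Fill a fixed description template for a bar chart.

    `bars` maps bar labels to their values.
    """
    tallest = max(bars, key=bars.get)
    parts = [
        f"Bar chart titled '{title}'.",
        f"The horizontal axis shows {x_label}; the vertical axis shows {y_label}.",
        f"It has {len(bars)} bars; the tallest is '{tallest}' at {bars[tallest]}.",
    ]
    return " ".join(parts)
```

A screenreader would read the returned string aloud; the study's evaluation criteria (completeness, correctness, understandability) would then be applied to such output.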
9. Requirements Analysis for an Open Research Knowledge Graph
- Author
- Markus Stocker, Arthur Brack, Ralph Ewerth, Anett Hoppe, and Sören Auer
- Subjects
Open research, Computer science, Science communication, Use case, Design science research, Data science, Scholarly communication, Requirements analysis
- Abstract
Current science communication has a number of drawbacks and bottlenecks which have lately been the subject of discussion: among others, the rising number of published articles makes it nearly impossible to get a full overview of the state of the art in a certain field, and reproducibility is hampered by fixed-length, document-based publications which normally cannot cover all details of a research work. Recently, several initiatives have proposed knowledge graphs (KGs) for organising scientific information as a solution to many of the current issues. The focus of these proposals is, however, usually restricted to very specific use cases. In this paper, we aim to transcend this limited perspective by presenting a comprehensive analysis of requirements for an Open Research Knowledge Graph (ORKG) by (a) collecting daily core tasks of a scientist, (b) establishing their consequential requirements for a KG-based system, and (c) identifying overlaps and specificities, and their coverage in current solutions. As a result, we map necessary and desirable requirements for successful KG-based science communication, derive implications and outline possible solutions.
- Published
- 2020
10. SlideImages: A Dataset for Educational Image Classification
- Author
- David Morris, Eric Müller-Budack, and Ralph Ewerth
- Subjects
Data visualization, Information retrieval, Contextual image classification, Computer science, Convolutional neural network, Test data
- Abstract
In the past few years, convolutional neural networks (CNNs) have achieved impressive results in computer vision tasks, which, however, mainly focus on photos with natural scene content. In contrast, non-sensor-derived images such as illustrations, data visualizations, and figures are typically used to convey complex information or to explore large datasets, yet this kind of image has received little attention in computer vision. CNNs and similar techniques require large volumes of training data, and many document analysis systems are currently trained in part on scene images due to the lack of large datasets of educational image data. In this paper, we address this issue and present SlideImages, a dataset for the task of classifying educational illustrations. SlideImages contains training data collected from various sources, e.g., Wikimedia Commons and the AI2D dataset, and test data collected from educational slides. We have reserved all the actual educational images as a test dataset in order to ensure that approaches using this dataset generalize well to new educational images, and potentially other domains. Furthermore, we present a baseline system using a standard deep neural architecture and discuss dealing with the challenge of limited training data.
- Published
- 2020
11. Visual Summarization of Scholarly Videos Using Word Embeddings and Keyphrase Extraction
- Author
- Christian Otto, Hang Zhou, and Ralph Ewerth
- Subjects
Computer science, Optical character recognition, Automatic summarization, Artificial intelligence, Natural language processing
- Abstract
Effective learning with audiovisual content depends on many factors. Besides the quality of the learning resource's content, it is essential to discover the most relevant and suitable video in order to support the learning process most effectively. Video summarization techniques facilitate this goal by providing a quick overview of the content; summaries are especially useful for longer recordings such as conference presentations or lectures. In this paper, we present a domain-specific approach that generates a visual summary of video content using solely textual information. For this purpose, we exploit video annotations that are automatically generated by speech recognition and video OCR (optical character recognition). Textual information is represented by semantic word embeddings and extracted keyphrases. We demonstrate the feasibility of the proposed approach through its incorporation into the TIB AV-Portal (http://av.tib.eu/), a platform for scientific videos. The accuracy and usefulness of the generated video content visualizations are evaluated in a user study.
- Published
- 2019
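The summarization pipeline above builds on keyphrases extracted from speech-recognition and OCR transcripts. As a rough illustration (not the paper's method, which uses semantic word embeddings), a frequency-based keyphrase extractor over a transcript might look like this; the stopword list is a placeholder.

```python
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "and", "in", "to", "is", "we", "on"}

def top_keyphrases(transcript: str, k: int = 3) -> list:
    """Naive keyphrase extraction: most frequent non-stopword terms."""
    terms = [t.strip(".,").lower() for t in transcript.split()]
    counts = Counter(t for t in terms if t and t not in STOPWORDS)
    return [term for term, _ in counts.most_common(k)]
```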
12. 'Is This an Example Image?' – Predicting the Relative Abstractness Level of Image and Text
- Author
- Christian Otto, Sebastian Holzki, and Ralph Ewerth
- Subjects
Multimodal search, Computer science, Deep learning, Mutual information, Autoencoder, Test set, Artificial intelligence, Natural language processing, Abstraction
- Abstract
Successful multimodal search and retrieval requires the automatic understanding of semantic cross-modal relations, which, however, is still an open research problem. Previous work has suggested the metrics cross-modal mutual information and semantic correlation to model and predict cross-modal semantic relations of image and text. In this paper, we present an approach to predict the (cross-modal) relative abstractness level of a given image-text pair, that is, whether the image is an abstraction of the text or vice versa. For this purpose, we introduce a new metric, the Abstractness Level (ABS), that captures this specific relationship between image and text. We present a deep learning approach to predict this metric, which relies on an autoencoder architecture that allows us to significantly reduce the required amount of labeled training data. A comprehensive set of publicly available scientific documents has been gathered. Experimental results on a challenging test set demonstrate the feasibility of the approach.
- Published
- 2019
13. An Analytics Tool for Exploring Scientific Software and Related Publications
- Author
- Helge Holzmann, Günter Kniesel, Ralph Ewerth, Anett Hoppe, and Jascha Hagen
- Subjects
Computer science, Scientific software, Software, Analytics, Use case, Software engineering
- Abstract
Scientific software is one of the key elements for reproducible research. However, classic publications and related scientific software are typically not (sufficiently) linked, and tools are missing to jointly explore these artefacts. In this paper, we report on our work on developing the analytics tool SciSoftX (https://labs.tib.eu/info/projekt/scisoftx/) for jointly exploring software and publications. The presented prototype, a concept for automatic code discovery, and two use cases demonstrate the feasibility and usefulness of the proposal.
- Published
- 2018
14. Geolocation Estimation of Photos Using a Hierarchical Model and Scene Classification
- Author
- Ralph Ewerth, Kader Pustu-Iren, and Eric Müller-Budack
- Subjects
Computer science, Machine learning, Geolocation, Artificial intelligence, Geographic coordinate system
- Abstract
While the successful estimation of a photo's geolocation enables a number of interesting applications, it is also a very challenging task. Due to the complexity of the problem, most existing approaches are restricted to specific areas, imagery, or worldwide landmarks. Only a few proposals predict GPS coordinates without any limitations. In this paper, we introduce several deep learning methods, which pursue the latter approach and treat geolocalization as a classification problem where the earth is subdivided into geographical cells. We propose to exploit hierarchical knowledge of multiple partitionings and additionally extract and take the photo's scene content into account, i.e., indoor, natural, or urban setting, etc. As a result, contextual information at different spatial resolutions as well as more specific features for various environmental settings are incorporated in the learning process of the convolutional neural network. Experimental results on two benchmarks demonstrate the effectiveness of our approach, outperforming the state of the art while using a significantly lower number of training images and without relying on retrieval methods that require an appropriate reference dataset.
- Published
- 2018
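The geolocation approach above casts the problem as classification over geographical cells at several partitioning granularities. A toy version of such cell labelling, using a plain equirectangular grid rather than the paper's actual partitioning scheme, could look like this; the grid resolutions are made up.

```python
def cell_id(lat: float, lon: float, cells_per_axis: int) -> tuple:
    """Map GPS coordinates to a (row, col) grid cell; finer grids give finer partitionings."""
    row = min(int((lat + 90.0) / 180.0 * cells_per_axis), cells_per_axis - 1)
    col = min(int((lon + 180.0) / 360.0 * cells_per_axis), cells_per_axis - 1)
    return row, col

def hierarchical_cells(lat: float, lon: float, resolutions=(4, 16, 64)) -> list:
    """One class label per partitioning, coarse to fine, as multi-task targets."""
    return [cell_id(lat, lon, r) for r in resolutions]
```

A classifier predicting all three labels jointly would see both coarse contextual signal and fine-grained location detail, which is the intuition behind the multi-partitioning setup.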
15. TIB-arXiv: An Alternative Search Portal for the arXiv Pre-print Server
- Author
- Huu Hung Nguyen, Matthias Springstein, Anett Hoppe, and Ralph Ewerth
- Subjects
Computer science, Quantitative biology, Ranking (information retrieval), Visualization, World Wide Web, Open source, Pre-print server, User interface
- Abstract
arXiv is a popular pre-print server focusing on natural science disciplines (e.g., physics, computer science, quantitative biology). As a platform with an emphasis on easy publishing services, it does not provide enhanced search functionality, but it offers programming interfaces which allow external parties to add such services. This paper presents extensions of the open source framework arXiv Sanity Preserver (SP). Compared to the original framework, our extension lifts SP's restriction to a single topical focus and allows for text-based search and visualisation of all papers in arXiv. To this end, all papers are stored in a unified back-end; the extension provides enhanced search and ranking facilities and allows the exploration of arXiv papers through a novel user interface.
- Published
- 2018
16. Finding Person Relations in Image Data of News Collections in the Internet Archive
- Author
- Ralph Ewerth, Kader Pustu-Iren, Eric Müller-Budack, and Sebastian Diering
- Subjects
Computer science, Deep learning, Semantic search, Facial recognition system, Metadata, World Wide Web, Web page, The Internet, Use case, Artificial intelligence
- Abstract
The amount of multimedia content in the World Wide Web is rapidly growing and contains valuable information for many applications in different domains. The Internet Archive initiative has gathered billions of time-versioned web pages since the mid-nineties. However, the huge amount of data is rarely labeled with appropriate metadata and automatic approaches are required to enable semantic search. Normally, the textual content of the Internet Archive is used to extract entities and their possible relations across domains such as politics and entertainment, whereas image and video content is usually disregarded. In this paper, we introduce a system for person recognition in image content of web news stored in the Internet Archive. Thus, the system complements entity recognition in text and allows researchers and analysts to track media coverage and relations of persons more precisely. Based on a deep learning face recognition approach, we suggest a system that detects persons of interest and gathers sample material, which is subsequently used to identify them in the image data of the Internet Archive. We evaluate the performance of the face recognition system on an appropriate standard benchmark dataset and demonstrate the feasibility of the approach with two use cases.
- Published
- 2018
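At its core, the person identification described above matches face embeddings of persons of interest against faces detected in archive images. A minimal nearest-neighbour sketch, with hypothetical 2-D embeddings and a made-up distance threshold (real face embeddings are high-dimensional and the threshold is tuned on validation data):

```python
import math

def euclidean(a, b) -> float:
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(face_embedding, gallery: dict, threshold: float = 0.6):
    """Nearest neighbour over embeddings of persons of interest.

    Returns the best-matching name, or None if no gallery face is close enough.
    """
    best = min(gallery, key=lambda name: euclidean(face_embedding, gallery[name]))
    return best if euclidean(face_embedding, gallery[best]) <= threshold else None
```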
17. Recommending Scientific Videos Based on Metadata Enrichment Using Linked Open Data
- Author
- Ralph Ewerth, Christian Otto, and Justyna Medrek
- Subjects
Integrated Authority File, Information retrieval, Computer science, Semantic search, Linked data, Optical character recognition, Metadata, Similarity
- Abstract
The amount of available videos on the Web has significantly increased, not only for entertainment but also to convey educational or scientific information in an effective way. There are several web portals that offer access to the latter kind of video material. One of them is the TIB AV-Portal of the Leibniz Information Centre for Science and Technology (TIB), which hosts scientific and educational video content. In contrast to other video portals, automatic audiovisual analysis (visual concept classification, optical character recognition, speech recognition) is utilized to enhance metadata information and semantic search. In this paper, we propose to further exploit and enrich this automatically generated information by linking it to the Integrated Authority File (GND) of the German National Library. This information is used to derive a measure of the similarity of two videos, which serves as a basis for recommending semantically similar videos. A user study demonstrates the feasibility of the proposed approach.
- Published
- 2018
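The similarity measure above compares GND entities linked to two videos; the abstract does not specify the formula, so the sketch below uses the Dice coefficient over entity sets as one plausible instantiation. The video identifiers and entity labels are hypothetical.

```python
def video_similarity(entities_a: set, entities_b: set) -> float:
    """Dice coefficient over the sets of GND entities linked to two videos."""
    if not entities_a or not entities_b:
        return 0.0
    return 2 * len(entities_a & entities_b) / (len(entities_a) + len(entities_b))

def recommend_similar(video_entities: set, catalogue: dict, k: int = 2) -> list:
    """Rank catalogue videos by entity-set similarity to the query video."""
    ranked = sorted(catalogue,
                    key=lambda v: video_similarity(video_entities, catalogue[v]),
                    reverse=True)
    return ranked[:k]
```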
18. 'When Was This Picture Taken?' – Image Date Estimation in the Wild
- Author
- Ralph Ewerth, Eric Müller, and Matthias Springstein
- Subjects
Estimation, Computer science, Pattern recognition, Color photography, Convolutional neural network, Regression, Artificial intelligence
- Abstract
The problem of automatically estimating the creation date of photos has been addressed rarely in the past. In this paper, we introduce a novel dataset Date Estimation in the Wild for the task of predicting the acquisition year of images captured in the period from 1930 to 1999. In contrast to previous work, the dataset is neither restricted to color photography nor to specific visual concepts. The dataset consists of more than one million images crawled from Flickr and contains a large number of different motives. In addition, we propose two baseline approaches for regression and classification, respectively, relying on state-of-the-art deep convolutional neural networks. Experimental results demonstrate that these baselines are already superior to annotations of untrained humans.
- Published
- 2017
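The classification baseline above requires discretising acquisition years (1930 to 1999) into classes. A sketch of such a binning, with a hypothetical 5-year bin size (the paper's actual class granularity is not stated in the abstract):

```python
def year_to_bin(year: int, start: int = 1930, end: int = 1999, bin_size: int = 5) -> int:
    """Discretise the acquisition year into fixed-width classes for classification."""
    year = max(start, min(year, end))  # clamp to the dataset's period
    return (year - start) // bin_size

def bin_to_year(bin_idx: int, start: int = 1930, bin_size: int = 5) -> float:
    """Map a predicted class back to the centre of its year interval."""
    return start + bin_idx * bin_size + bin_size / 2
```

The regression baseline would instead predict the year directly and be scored by absolute error.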
19. Content-Based Video Retrieval in Historical Collections of the German Broadcasting Archive
- Author
- Ralph Ewerth, Markus Mühling, Manja Meister, Jörg Wehling, Angelika Hörth, Bernd Freisleben, and Nikolaus Korfhage
- Subjects
Multimedia, Computer science, Nearest neighbor search, Broadcasting, German, Cultural heritage, Video retrieval
- Abstract
The German Broadcasting Archive (DRA) maintains the cultural heritage of radio and television broadcasts of the former German Democratic Republic (GDR). The uniqueness and importance of the video material stimulates a large scientific interest in the video content. In this paper, we present an automatic video analysis and retrieval system for searching in historical collections of GDR television recordings. It consists of video analysis algorithms for shot boundary detection, concept classification, person recognition, text recognition and similarity search. The performance of the system is evaluated from a technical and an archival perspective on 2,500 hours of GDR television recordings.
- Published
- 2016
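One component listed above, shot boundary detection, is classically done by thresholding the difference between colour or intensity histograms of consecutive frames. A toy version follows; the 8-bin histogram and threshold are illustrative, not the system's actual detector.

```python
def grey_histogram(frame, bins: int = 8) -> list:
    """Normalised intensity histogram of a frame given as a flat list of 0-255 values."""
    hist = [0] * bins
    for px in frame:
        hist[min(px * bins // 256, bins - 1)] += 1
    total = len(frame)
    return [h / total for h in hist]

def shot_boundaries(frames, threshold: float = 0.5) -> list:
    """Flag a cut wherever consecutive histograms differ by more than `threshold`."""
    hists = [grey_histogram(f) for f in frames]
    cuts = []
    for i in range(1, len(hists)):
        diff = sum(abs(a - b) for a, b in zip(hists[i - 1], hists[i]))
        if diff > threshold:
            cuts.append(i)
    return cuts
```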
20. Improving Cross-Domain Concept Detection via Object-Based Features
- Author
- Ralph Ewerth, Bernd Freisleben, and Markus Mühling
- Subjects
Multiple kernel learning, Generalization, Computer science, Computer vision, Artificial intelligence, Convolutional neural network, TRECVID, Object detection
- Abstract
Learned visual concept models often do not work well for other domains not considered during training, because a concept's visual appearance strongly depends on the domain of the corresponding image or video source. In this paper, a novel approach to improve cross-domain concept detection is presented. The proposed approach uses features based on object detection results in addition to Bag-of-Visual-Words features as inputs to concept classifiers. Experiments conducted on TRECVid videos using a high-performance computing cluster show that the additional use of object-based features significantly improves the generalization properties of the learned concept models in cross-domain settings, for example, from broadcast news videos to documentary films and vice versa.
- Published
- 2015
21. MM-Locate-News: Multimodal Focus Location Estimation in News
- Author
- Golsa Tahmasebzadeh, Eric Müller-Budack, Sherzod Hakimov, and Ralph Ewerth
22. Domain-Independent Extraction of Scientific Concepts from Research Articles
- Author
- Arthur Brack, Jennifer D'Souza, Ralph Ewerth, Sören Auer, and Anett Hoppe
- Subjects
Training set, Computer science, Deep learning, Active learning, Benchmark, Artificial intelligence, F1 score, Natural language processing
- Abstract
We examine the novel task of domain-independent scientific concept extraction from abstracts of scholarly articles and present two contributions. First, we suggest a set of generic scientific concepts that have been identified in a systematic annotation process. This set of concepts is utilised to annotate a corpus of scientific abstracts from 10 domains of Science, Technology and Medicine at the phrasal level in a joint effort with domain experts. The resulting dataset is used in a set of benchmark experiments to (a) provide baseline performance for this task, (b) examine the transferability of concepts between domains. Second, we present two deep learning systems as baselines. In particular, we propose active learning to deal with different domains in our task. The experimental results show that (1) a substantial agreement is achievable by non-experts after consultation with domain experts, (2) the baseline system achieves a fairly high F1 score, (3) active learning enables us to nearly halve the amount of required training data.
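The active learning mentioned in the last abstract typically selects the unlabelled examples the model is least confident about for annotation, which is how the required training data can be nearly halved. A minimal uncertainty-sampling sketch using the least-confidence criterion; the example pool and probabilities are hypothetical.

```python
def least_confidence(probs) -> float:
    """Uncertainty of one prediction: 1 minus the top class probability."""
    return 1.0 - max(probs)

def select_for_annotation(pool: dict, budget: int = 2) -> list:
    """Pick the `budget` most uncertain unlabelled examples for the annotator.

    `pool` maps example ids to their predicted class-probability vectors.
    """
    ranked = sorted(pool, key=lambda x: least_confidence(pool[x]), reverse=True)
    return ranked[:budget]
```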
Discovery Service for Jio Institute Digital Library