16 results for "Keyword"
Search Results
2. Data or mathematics? Solutions to semantic problems in artificial intelligence.
- Author
-
Bu, Weijun
- Subjects
ARTIFICIAL intelligence, GRAPH algorithms, VALUES (Ethics), PROBLEM solving, LOGIC, QUESTION answering systems
- Abstract
Data support is already driving the development of artificial intelligence, but it cannot by itself solve the semantic problem of artificial intelligence; the semantic understanding ability of AI must be improved. Therefore, a question answering system based on semantic problem processing is proposed in this study. The system uses an improved unsupervised method to extract keywords, integrating the semantic feature information of the text into a traditional word-graph algorithm. On this basis, semantic similarity information is used to calculate and assign the initial value and edge weights of each node in the PageRank model, and corresponding restart probability matrices and transition probability matrices are constructed for iterative calculation and keyword extraction. An improved semantic dependency tree is used for answer extraction. The improved keyword extraction method shows a decreasing trend in P and R values, while the improved answer extraction method reaches a maximum P-value of 0.876 on the training set and 0.852 on the test set. In a question answering system based on keyword and answer extraction, the improved method has lower loss function values and running time, and a larger area under the ROC curve. The validation analysis confirms that the improved method has high accuracy and robustness when dealing with semantic problems. [ABSTRACT FROM AUTHOR]
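To make the method concrete, the following is a minimal sketch of semantically weighted PageRank keyword extraction in Python. The semantic_sim placeholder, window size, and damping factor are illustrative assumptions; the paper's actual similarity measure and restart-matrix construction are not reproduced here.

```python
import numpy as np

def semantic_sim(w1, w2):
    # Placeholder: a real system would use embedding cosine similarity;
    # character-set overlap stands in for it here.
    a, b = set(w1), set(w2)
    return len(a & b) / len(a | b)

def keyword_pagerank(words, window=2, d=0.85, iters=50):
    idx = {w: i for i, w in enumerate(dict.fromkeys(words))}
    n = len(idx)
    W = np.zeros((n, n))
    # Edge weights: co-occurrence within a window, scaled by semantic similarity.
    for i, w in enumerate(words):
        for j in range(i + 1, min(i + window + 1, len(words))):
            u, v = idx[w], idx[words[j]]
            if u != v:
                s = 1.0 + semantic_sim(w, words[j])
                W[u, v] += s
                W[v, u] += s
    # Column-normalized transition matrix; empty columns become uniform.
    col = W.sum(axis=0)
    P = np.divide(W, col, out=np.full_like(W, 1.0 / n), where=col > 0)
    restart = np.full(n, 1.0 / n)      # uniform restart vector
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        rank = (1 - d) * restart + d * P @ rank
    return sorted(idx, key=lambda w: -rank[idx[w]])

print(keyword_pagerank("semantic question answering needs semantic graphs".split()))
```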
- Published
- 2024
- Full Text
- View/download PDF
3. EUS-Guided Diagnosis of Gastric Subepithelial Lesions, What Is New?
- Author
-
Vasilakis, Thomas, Ziogas, Dimitrios, Tziatzios, Georgios, Gkolfakis, Paraskevas, Koukoulioti, Eleni, Kapizioni, Christina, Triantafyllou, Konstantinos, Facciorusso, Antonio, and Papanikolaou, Ioannis S.
- Subjects
GASTRIC mucosa, ENDOSCOPIC ultrasonography, ARTIFICIAL intelligence, DIAGNOSIS, ELASTOGRAPHY
- Abstract
Gastric subepithelial lesions (SELs) are intramural lesions that arise underneath the gastric mucosa. SELs can be benign but can also be malignant or have malignant potential, so correct diagnosis is crucial. Endosonography has been established as the diagnostic gold standard. Although some of these lesions can be identified immediately, solely on the basis of their echo characteristics, certain lesions require histological examination, and histology can sometimes be inconclusive, especially for smaller lesions. Therefore, new methods have been developed in recent years to assist decision making, such as contrast-enhanced endosonography, EUS elastography, and artificial intelligence systems. In this narrative review we provide a complete overview of gastric SELs and summarize the data of the last ten years concerning the diagnostic advances of endosonography on this topic. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
4. WORK-LIFE BALANCE AND CAREER SUCCESS OF ACADEMIC STAFF IN NIGERIAN PUBLIC UNIVERSITIES: THE MODERATING ROLE OF DIGITAL TECHNOLOGY.
- Author
-
Don-Solomon, Amakiri
- Subjects
WORK-life balance, PUBLIC universities & colleges, DIGITAL technology, TECHNOLOGICAL innovations, ARTIFICIAL intelligence
- Abstract
Achieving career success is a major accomplishment the average employee looks forward to, and the academics of Nigerian public universities are no exception. The purpose of this research was to examine the empirical relationship between work-life balance and career success of academic staff of Nigerian public universities, with digital technology as the moderating variable. The work-life balance dimensions used are flexi-time and job-sharing, while job security and promotion were used to measure career success. 6,836 academic staff formed the population of the study, and Taro Yamane's formula was used to obtain a sample size of 378. A descriptive survey design was used, with Spearman's rank-order correlation coefficient and the partial correlation coefficient as the tools of analysis. Findings showed that flexi-time and job-sharing have a low but positive, significant relationship with job security and promotion. Digital technology positively and significantly influences the relationship between work-life balance and career success. Based on the findings, the study recommends that policymakers at every level of university management advocate flexi-time arrangements for academics, which will increase job satisfaction and curb the intention to quit, and that public universities employ more academic staff to make job-sharing practice effective. [ABSTRACT FROM AUTHOR]
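As a quick check on the reported figures, Taro Yamane's sample-size formula n = N / (1 + N * e^2) reproduces the study's sample of 378 from its population of 6,836, assuming the conventional 5% margin of error (the abstract does not state e):

```python
# Taro Yamane's formula; e = 0.05 is an assumed, conventional choice.
N = 6836               # population of academic staff
e = 0.05               # margin of error (assumption)
n = N / (1 + N * e ** 2)
print(round(n))        # -> 378, matching the study's sample size
```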
- Published
- 2023
5. A STUDY OF COGNITIVE SERVICES FOR SEARCH ENGINE OPTIMIZATION OF WEBSITES
- Subjects
cognitive service, machine learning, search engine optimization, keyword, artificial intelligence
- Abstract
The subject of research in this article is machine learning models for classifying web pages by quality and compliance with SEO rules. The goal is to improve the efficiency of search engines by identifying and using the factors that have the greatest impact on the degree of SEO optimization of web pages. The article addresses the following tasks: studying the effectiveness of machine learning methods for building a classification model that automatically classifies web pages by their degree of adaptation to SEO recommendations, and assessing the influence of relevant page factors (text on the web page, text in meta tags, links, images, HTML code) on the degree of SEO optimization using the developed classification models. Machine learning, classification, and statistical methods are used. The following results were obtained: the effectiveness of machine learning methods for determining the degree of adaptation of a web page to SEO recommendations was analyzed; classifiers were trained on a data set of web pages randomly selected from the DMOZ catalog and rated by three independent SEO experts in the categories "low SEO", "medium SEO", and "high SEO"; five main classifiers were tested (decision trees, naive Bayes, logistic regression, KNN, and SVM), all of which achieved higher accuracy (from 54.69% to 69.67%) than the baseline (48.83%); the experiments confirm the hypothesis that web pages can be effectively adapted to SEO recommendations using classification algorithms based on machine learning. Conclusions: with the help of classification algorithms built on machine learning and the knowledge of experts, web pages can be effectively adjusted to SEO recommendations. The considered methods can be adapted to various search engines and applied to different languages, provided that a stemming or lemmatization algorithm exists for them. The results of the study can be used in developing automated software to support SEO work, in audit technologies for identifying web pages in need of optimization, and in spam detection processes.
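As a hedged sketch of the experiment's shape, the snippet below compares the five named classifiers against a majority-class baseline with scikit-learn; the DMOZ-derived page features and expert labels are not available here, so synthetic three-class data stands in for them.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.dummy import DummyClassifier

# Synthetic stand-in for "low/medium/high SEO" page feature vectors.
X, y = make_classification(n_samples=600, n_features=20, n_informative=8,
                           n_classes=3, random_state=0)
models = {
    "baseline (majority)": DummyClassifier(strategy="most_frequent"),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "naive Bayes": GaussianNB(),
    "logistic regression": LogisticRegression(max_iter=1000),
    "KNN": KNeighborsClassifier(),
    "SVM": SVC(),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name:20s} mean accuracy = {acc:.4f}")
```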
- Published
- 2022
- Full Text
- View/download PDF
6. User-Oriented Paraphrase Generation With Keywords Controlled Network
- Author
-
Lingyun Xiang, Daojian Zeng, Jin Wang, Guoliang Ji, and Haoran Zhang
- Subjects
Seq2Seq, paraphrase generation, keyword, Paraphrase, User-oriented, BLEU, Copy mechanism, Artificial intelligence, Natural language processing, Sentence
- Abstract
Paraphrase generation can help with both downstream tasks in natural language processing (NLP) and human writing in daily life. Most of the prevalent neural models focus on the former usage and generate uncontrolled paraphrases, ignoring the subtleties of users' requirements. In addition, the existing tools for users are usually rule-based, which is unnatural given the complexity of paraphrase. To this end, we propose a keyword controlled network (KCN) which can be used as an assistant paraphrase generation tool. The KCN works in an interactive manner and generates different paraphrases given different keywords. The model is based on a Sequence-to-Sequence (Seq2Seq) framework integrated with a copy mechanism. Given the source sentence and the keywords, two encoders transform them into vector representations; the representations are then fused together and used by the decoder. The decoder with attention mechanism either copies words from the keywords or generates words from the whole dictionary. In the training stage, as the source sentence and the target sentence are both valid paraphrases, the model is trained to generate each of them given different keywords, which simulates the behavior of users. Extensive experiments on three datasets show that our method outperforms baselines in automatic evaluation (a 0.06 absolute improvement in BLEU) and that the generated paraphrases meet user expectations in human evaluation.
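The copy mechanism at the core of KCN can be shown in one decoding step: the output distribution mixes generating from the vocabulary with copying from the keyword tokens. All values below are toy stand-ins, not the trained model's parameters.

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

vocab = ["the", "cat", "sat", "feline", "rested"]
keywords = ["feline", "rested"]          # user-supplied control keywords

vocab_logits = np.array([0.2, 1.0, 0.5, 0.1, 0.3])  # decoder output (toy)
copy_attn = softmax(np.array([2.0, 1.0]))           # attention over keywords
p_gen = 0.4                                         # learned generate/copy gate

# Final distribution: generate from the vocabulary or copy a keyword.
p_final = p_gen * softmax(vocab_logits)
for word, a in zip(keywords, copy_attn):
    p_final[vocab.index(word)] += (1 - p_gen) * a

print(dict(zip(vocab, p_final.round(3))))  # keyword tokens get boosted mass
```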
- Published
- 2019
- Full Text
- View/download PDF
7. A review of keyword and key phrase extraction with deep learning
- Author
-
Ozlem Unlu Kilic and Aydin Cetin
- Subjects
Machine translation, Artificial neural network, Key phrase, Deep learning, Keyword extraction, Review, RNN, Text mining, Artificial intelligence, Natural language processing, Keyword
- Abstract
With technological developments, a large amount of data has been produced. Terabytes of data previously recorded manually were digitized with the use of personal computers. As a result, rapidly growing data stacks were formed, making it difficult to find information among them. The need to make sense of this data has made predefined statistical methods more important. It is possible to access the required information from a single document or from document stacks by means of text mining methods. This problem, which was previously solved mostly by statistical methods or Natural Language Processing (NLP) techniques, has begun to be solved by machine learning algorithms and artificial neural networks. In recent years, deep learning, a specialized area of artificial neural networks, has given better results than current statistical and NLP methods on many problems and has enabled the application of these methods to problems such as machine translation, keyword extraction, and summarization. In this study, deep learning methods used in the extraction of keywords and key phrases are examined.
- Published
- 2019
8. Chinese Traditional Musical Instrument Evaluation Based on a Smart Microphone Array Sensor
- Author
-
Kun Li, Yan Han, Hao Jiang, and Yanwen Chen
- Subjects
Mel-frequency cepstral coefficients, Chinese traditional instrument, Microphone array, Artificial neural network, Microphone, Sound field correlation coefficient, Speech recognition, Deep learning, keyword, Musical instrument, BP neural network, Instrument evaluation, Microphone array sensor, Mel-frequency cepstrum, Constant-Q transform
- Abstract
For Chinese traditional musical instruments, the usual subjective evaluation by experts is not cost-effective and is limited by a shrinking pool of experts, while a clear physical law is very hard for physicists to establish. Considering the effectiveness of artificial neural networks (ANNs) for complex systems, in this paper a neural-network-based 8-microphone array is applied, for the case of a Chinese lute, to correlate objective acoustic features of the instrument with experts' subjective evaluations. The acoustic features were recorded by a microphone array sensor and extracted as constant-Q transform coefficients, Mel-frequency cepstral coefficients, and correlation coefficients between each pair of microphones for ANN input. The establishment of the acoustic library, the extraction of acoustic features, and the deep learning model for evaluating Chinese lutes are reported in this paper.
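The feature pipeline can be sketched for a single channel with librosa: constant-Q transform coefficients and MFCCs as network inputs. A synthetic tone stands in for a lute recording, and the 8-microphone correlation step is reduced to one pair.

```python
import numpy as np
import librosa

sr = 22050
t = np.linspace(0, 1.0, sr, endpoint=False)
y = 0.5 * np.sin(2 * np.pi * 440 * t)   # stand-in for one microphone channel

cqt = np.abs(librosa.cqt(y, sr=sr))                 # constant-Q transform
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # MFCC features

# Inter-microphone correlation, here a channel against a delayed copy.
y_delayed = np.roll(y, 16)
corr = np.corrcoef(y, y_delayed)[0, 1]

print(cqt.shape, mfcc.shape, round(corr, 3))  # feature blocks for ANN input
```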
- Published
- 2019
- Full Text
- View/download PDF
9. Effective Training Data Extraction Method to Improve Influenza Outbreak Prediction from Online News Articles: Deep Learning Model Study
- Author
-
Beakcheol Jang, Inhwan Kim, and Jong Wook Kim
- Subjects
Word embedding, Computer applications to medicine, Feature extraction, keyword, Keyword extraction, Health Informatics, Machine learning, infodemiology, infoveillance, Health Information Management, training data extraction, Pearson correlation coefficient, Deep learning, Sorting, surveillance, Artificial intelligence, influenza, long short-term memory
- Abstract
Background: Each year, influenza affects 3 to 5 million people and causes 290,000 to 650,000 fatalities worldwide. To reduce these fatalities, several countries have established influenza surveillance systems to collect early-warning data. However, proper and timely warnings are hindered by a 1- to 2-week delay between the actual disease outbreaks and the publication of surveillance data. To address the issue, novel methods for influenza surveillance and prediction using real-time internet data (such as search queries, microblogging, and news) have been proposed. Some of the currently popular approaches extract online data and use machine learning to predict influenza occurrences in a classification mode. However, many of these methods extract training data subjectively, and it is difficult to capture the latent characteristics of the data correctly. There is a critical need for new approaches that focus on extracting training data in a way that reflects the latent characteristics of the data. Objective: In this paper, we propose an effective method to extract training data in a manner that reflects the hidden features and improves performance by filtering and selecting only the keywords related to influenza before prediction. Methods: Although word embedding provides a distributed representation of words by encoding the hidden relationships between tokens, we enhanced the word embeddings by selecting keywords related to the influenza outbreak and sorting the extracted keywords by Pearson correlation coefficient, so as to keep only the tokens highly correlated with the actual influenza outbreak. The keyword extraction process was followed by a predictive model based on long short-term memory (LSTM) that predicts the influenza outbreak. To assess the performance of the proposed predictive model, we used and compared a variety of word embedding techniques. Results: Word embedding without the proposed sorting process showed a prediction accuracy of 0.8705 when 50.2 keywords were selected on average. Conversely, word embedding with the proposed sorting process showed a prediction accuracy of 0.8868, an improvement of 12.6% in terms of error reduction, even though less training data was selected, with only 20.6 keywords on average. Conclusions: The sorting stage empowers the embedding process and improves feature extraction because it acts as a knowledge base for the prediction component. The model outperformed other current approaches that use flat extraction before prediction.
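The sorting stage can be shown in miniature: keep only the keyword time series whose Pearson correlation with the influenza counts clears a threshold, sorted so the most correlated keywords come first. The counts and the threshold are illustrative assumptions.

```python
from scipy.stats import pearsonr

influenza = [12, 18, 25, 40, 33, 20]           # weekly case counts (toy)
keyword_counts = {                              # weekly mention counts (toy)
    "fever":    [10, 15, 22, 38, 30, 18],
    "vaccine":  [ 5,  5,  6,  5,  6,  5],
    "headache": [ 8, 12, 20, 35, 28, 16],
}

threshold = 0.8                                 # assumed cut-off
selected = []
for kw, series in keyword_counts.items():
    r, _ = pearsonr(series, influenza)
    if r >= threshold:
        selected.append((kw, round(r, 3)))

# Descending order: the LSTM is fed only the highly correlated keywords.
print(sorted(selected, key=lambda x: -x[1]))
```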
- Published
- 2021
- Full Text
- View/download PDF
10. Main keyword comparison based on document analysis system
- Author
-
Hoe-Kyung Jung, Jae Seung Lee, and Jongwon Lee
- Subjects
Control and Optimization, Computer Networks and Communications, Morpheme, Document analysis, Data deduplication, Keyword, Sequence maintenance, Paragraph extraction, Deduplication, Signal Processing, Artificial intelligence, Paragraph, Natural language processing, XML, Information Systems
- Abstract
Existing document analysis systems list the words in a document using a morpheme analyzer. Such a structure does little to help users understand the document. To understand a document, the keywords in it must be analyzed and the paragraphs containing them extracted. The proposed system retrieves keywords from documents written in XML format, extracts them, and displays them to the user. In addition, it extracts the paragraphs containing a user-supplied keyword, maintains paragraph order, and deletes duplicate paragraphs. The frequency and weight of each keyword are then calculated, and the number of paragraphs is reduced by removing paragraphs whose keywords carry less weight than the others. This method can reduce the time and effort required to understand a document compared with existing document analysis systems.
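A simplified reconstruction of that pipeline is below: extract the paragraphs containing the user's keyword, drop duplicates while preserving order, and rank paragraphs by keyword frequency. XML parsing and the paper's exact weighting scheme are omitted.

```python
def extract_paragraphs(paragraphs, keyword):
    seen, hits = set(), []
    for p in paragraphs:
        if keyword in p and p not in seen:   # dedupe, preserve sequence
            seen.add(p)
            hits.append(p)
    # Weight a paragraph by keyword frequency; highest-weighted first.
    return sorted(hits, key=lambda p: -p.count(keyword))

doc = [
    "XML documents store keyword metadata.",
    "Morpheme analyzers only list words.",
    "A keyword index and keyword weights guide the reader.",
    "XML documents store keyword metadata.",   # duplicate paragraph
]
print(extract_paragraphs(doc, "keyword"))
```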
- Published
- 2020
- Full Text
- View/download PDF
11. A supervised keyphrase extraction system
- Author
-
Adebayo Kolawole John, Luigi Di Caro, and Guido Boella
- Subjects
Keyphrase extraction, Decision trees, Keyword extraction, F-measure scores, Random forest classifier, Keyword, Keyphrase, Semantic features, Semantics, Pattern recognition, SemEval, Random forest, Human-Computer Interaction, Artificial intelligence
- Abstract
In this paper, we present a multi-featured supervised automatic keyword extraction system. We extract salient semantic features that are descriptive of candidate keyphrases, and a Random Forest classifier is used for training. The system achieved a precision of 58.3% and has been shown to outperform two top-performing systems when benchmarked on a crowdsourced dataset. Furthermore, our approach achieved a personal-best precision and F-measure of 32.7 and 25.5, respectively, on the SemEval keyphrase extraction challenge dataset. The paper describes the approaches used as well as the results obtained.
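A sketch of that supervised setup, under assumed features: candidate phrases become feature vectors (here just term frequency, relative first position, and phrase length) with binary keyphrase labels, and a Random Forest learns the decision. The features and data are illustrative, not the paper's feature set.

```python
from sklearn.ensemble import RandomForestClassifier

# (term frequency, relative first-occurrence position, length in words)
X = [[9, 0.02, 2], [7, 0.10, 1], [1, 0.85, 3],
     [6, 0.05, 2], [1, 0.90, 1], [2, 0.70, 2]]
y = [1, 1, 0, 1, 0, 0]   # 1 = human-assigned keyphrase

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([[8, 0.03, 2], [1, 0.95, 1]]))  # likely [1, 0]
```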
- Published
- 2016
12. Performance evaluation of deterministic wormhole routing in k-ary n-cubes
- Author
-
Bruno Ciciani, C. Paolucci, and Michele Colajanni
- Subjects
Static routing, Dynamic Source Routing, Computer Networks and Communications, Computer science, Message passing, keyword, Torus, Parallel computing, Flow network, Topology, Computer Graphics and Computer-Aided Design, Theoretical Computer Science, Artificial Intelligence, Hardware and Architecture, Hypercube, Wormhole, Routing (electronic design automation), Software
- Abstract
We present a new analytical approach for the performance evaluation of deterministic wormhole routing in k-ary n-cubes. Our methodology yields closed formulas for average time values through the analysis of network flows. Comparison with simulation models demonstrates that our methodology gives accurate results under both low and high traffic conditions. Another important quality is the flexibility of our approach: we demonstrate that it can be used to model dimension-ordered routing in several k-ary n-cubes, such as hypercubes and 3D symmetric and asymmetric tori, with uni- and bidirectional channels.
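Not the paper's analytical model, but a sketch of the routing discipline it analyzes: dimension-ordered routing in a k-ary n-cube (torus), correcting one dimension at a time and taking the shorter way around each ring.

```python
def dimension_ordered_route(src, dst, k):
    """Return the node sequence from src to dst, one dimension at a time."""
    node, path = list(src), [tuple(src)]
    for dim in range(len(node)):
        while node[dim] != dst[dim]:
            diff = (dst[dim] - node[dim]) % k
            # Bidirectional torus links: go the shorter way around the ring.
            node[dim] = (node[dim] + (1 if diff <= k // 2 else -1)) % k
            path.append(tuple(node))
    return path

# 4-ary 2-cube (a 4x4 torus): route from (0, 0) to (3, 2).
print(dimension_ordered_route((0, 0), (3, 2), k=4))
# -> [(0, 0), (3, 0), (3, 1), (3, 2)]
```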
- Published
- 1998
- Full Text
- View/download PDF
13. Threshold-Based Reconfiguration Strategies for Gracefully Degradable Parallel Computations
- Author
-
Michele Colajanni, V. Grassi, and M. Angelaccio
- Subjects
Computer Networks and Communications, Computer science, Event (computing), Computation, Distributed computing, keyword, Control reconfiguration, Fault tolerance, Parallel computing, Fault (power engineering), Theoretical Computer Science, Software, Artificial Intelligence, Hardware and Architecture, Routing (electronic design automation)
- Abstract
The occurrence of faults in multicomputers with hundreds or thousands of nodes is a likely event that can be dealt with by hardware or software fault-tolerance approaches. This paper presents a unifying model that describes software reconfiguration strategies for parallel applications with regular computational patterns. We show that most existing strategies can be obtained as instances of the proposed threshold-based reconfiguration meta-algorithm. Moreover, this approach is useful for discovering several as yet unexplored strategies, among which we consider the class of adaptive threshold-based strategies. The performance optimization analysis demonstrates that these strategies, applied to data-parallel regular computations, give optimal results for worst-case fault patterns. A wide spectrum of simulations, with system parameters set to those of actual multicomputers, confirms that adaptive threshold-based strategies yield the most stable performance for a variety of workloads, independently of the number and pattern of faults.
- Published
- 1998
- Full Text
- View/download PDF
14. Non-uniform and dynamic domain decompositions for hypercomputing
- Author
-
Michele Colajanni and M. Cermele
- Subjects
Computer Networks and Communications, Computer science, Computation, Distributed computing, keyword, Parallel computing, Load balancing (computing), Computer Graphics and Computer-Aided Design, Partition (database), Theoretical Computer Science, Hypercomputation, Artificial Intelligence, Hardware and Architecture, SPMD, Software
- Abstract
Implementing efficient parallel programs on a network-based computing platform is still a challenge. This paper proposes a new adaptive data distribution (ADD) support that avoids the complex task of managing irregular data distributions and adapting them to the variable conditions of a multi-user system. In particular, ADD provides the programmer with a data partition algorithm that fits the non-uniformity of the platform nodes at the beginning of program execution, a set of data inquiry primitives that allow the programmer to deal with a logically regular partition, and a runtime support that autonomously adapts the data distribution to the node power variations occurring during computation. Several experimental results demonstrate that ADD is a very useful tool for maintaining the efficiency of SPMD computations, especially when the platform is highly non-uniform and variable.
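The partitioning idea behind ADD, in miniature: split a regular domain across nodes in proportion to their measured power. The function name and the values are illustrative, not the paper's actual interface.

```python
def proportional_partition(n_rows, powers):
    """Assign contiguous row ranges proportional to each node's power."""
    total = sum(powers)
    bounds, start = [], 0
    for i, p in enumerate(powers):
        if i == len(powers) - 1:
            size = n_rows - start    # last node absorbs rounding leftovers
        else:
            size = round(n_rows * p / total)
        bounds.append((start, start + size))
        start += size
    return bounds

# 1000 rows over four nodes of unequal speed.
print(proportional_partition(1000, [1.0, 2.0, 2.0, 0.5]))
# -> [(0, 182), (182, 546), (546, 910), (910, 1000)]
```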
- Published
- 1997
- Full Text
- View/download PDF
15. Ensemble Learning for Keyword Extraction
- Author
-
Geadas, Pedro and Ribeiro, Bernardete Martins
- Subjects
Machine Learning, Artificial Intelligence, Ensemble learning, Information Retrieval, Keyphrase, Supervised Machine Learning, Information Extraction, Keyword, Automatic Keyword Extraction
- Abstract
Master's dissertation in Informatics Engineering presented to the Faculty of Sciences and Technology of the University of Coimbra. Nowadays, the most relevant events occurring in a city are advertised online, generally through small textual descriptions. The exponential growth of the Web often hampers the task of finding relevant information, making good information extraction and summarization methods a necessity. As such, the main goal of this dissertation is to develop an ensemble learning application for automatically extracting keywords from those event textual descriptions, since using human indexers is slow and expensive. Through rich information on events, one should be able to understand their mobility implications and possibly correlate the two, allowing eventual repercussions that a specific event may cause on the city's normal behavior to be foreseen. The proposed application applies supervised machine learning approaches, namely from known automatic keyword extraction systems, retrieving a set of keywords as output from the event descriptions usually found on the Web.
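The dissertation's core idea in miniature: combine the outputs of several keyword extractors by voting. The three extractors below are trivial stand-ins for the real supervised systems it builds on.

```python
from collections import Counter

def by_frequency(text, n=3):
    words = [w.strip(".,:;").lower() for w in text.split()]
    return {w for w, _ in Counter(words).most_common(n)}

def by_length(text, n=3):
    words = list(dict.fromkeys(w.strip(".,:;").lower() for w in text.split()))
    return set(sorted(words, key=len, reverse=True)[:n])

def by_position(text, n=3):
    words = [w.strip(".,:;").lower() for w in text.split()]
    return set(words[:n])   # leads of event descriptions often carry keywords

def ensemble_keywords(text, extractors, min_votes=2):
    votes = Counter()
    for extract in extractors:
        votes.update(extract(text))
    return [w for w, v in votes.most_common() if v >= min_votes]

text = "Jazz festival this weekend: free jazz concerts downtown."
print(ensemble_keywords(text, [by_frequency, by_length, by_position]))
```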
- Published
- 2013
16. Easy programming in BASIC for language teachers
- Author
-
Michael Arthur Riccioli
- Subjects
Linguistics and Language, Computer science, BASIC, Language teachers, Program listing, BASIC compiler, Machine code, Debugging, Disc, Cassette, Upper/lower case, Keyword, Variable, Operator, Constant, Location, Print-out, Dot-matrix printer, Monitor, Screen display, Sound, Hard copy, Crossword, CALT, CALL, RETURN, ENTER, RUN, NEW, SAVE, LOAD, CHAIN, REM, PRINT, CLS, INPUT, LET, DATA, CLOAD, CSAVE, TRACE ON/OFF, TRON/TROFF, HOME, BBC Micro, APPLE, CANON X-07, Computer programming, Education
- Abstract
Computer programming in BASIC is a first step for language teachers who wish to write software for their students. The reader will find numerous examples of easy programmes, how the computer reads the programme lines and displays messages on the monitor, and what a print-out of a computer test looks like. Learning BASIC can also help teachers who wish to modify commercial software. The example given concerns a crossword programme (Chelsea College) and how it became possible to go into the programme (written in BASIC) in order to add a printer statement and obtain a hard copy of the crossword grids. The purpose of this article is to show language teachers that it is relatively simple to learn programming in BASIC in order to write software for classroom use. Clearly, as soon as a project's programming turns out to be complicated, the teacher should work with a professional programmer who can help with design, programming, and correction. Another benefit of knowing BASIC is the ability to go into commercially sold software and change how a program runs. The example given concerns the screen-to-paper copy from a crossword package: as purchased, the software made no provision for printing a paper copy of the constructed crossword grids. Riccioli Michael Arthur. Easy programming in BASIC for language teachers. In: Cahiers de l'APLIUT, volume 6, numéro 4, 1987. pp. 17-23.
- Published
- 1987