Search Results
1,136 results
2. Life beyond the Paper Graphic Index: Evaluating New Geographic Retrieval Technologies for the Future Map Library.
- Author
- Fleet, Chris
- Subjects
- *MAPS, *WORLD Wide Web, *WEBSITES, *LIBRARIANS, *GEOGRAPHIC information systems, *INFORMATION retrieval
- Abstract
In recent years technology has dramatically expanded the possibilities for making historical maps available on the web. These various new techniques are examined and evaluated. It is argued that map curators are well placed to exploit these new techniques, and that doing so has a range of advantages for the future survival of our institutions. The priorities and principles that should guide this work are suggested, along with the main practical conclusions for website presentation. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
3. Discovery and classification of user interests on social media
- Author
- Shahzad, Basit, Lali, Ikramullah, Nawaz, M. Saqib, Aslam, Waqar, Mustafa, Raza, and Mashkoor, Atif
- Published
- 2017
- Full Text
- View/download PDF
4. Samuel F. B. Morse Papers at the Library of Congress, 1793-1919.
- Author
- Goldwhite, H.
- Subjects
- WEBSITES, TELEGRAPH & telegraphy, INTERNET, WORLD Wide Web
- Abstract
Reviews the Web site Samuel F. B. Morse Papers at the Library of Congress, 1793-1919.
- Published
- 2005
5. The Churchill papers: a catalogue.
- Author
- Tolppanen, B. P.
- Subjects
- WEBSITES, BIOGRAPHIES, COMPUTER network resources, WORLD Wide Web
- Abstract
Reviews the Web site The Churchill Papers.
- Published
- 2004
- Full Text
- View/download PDF
6. The Wilbur and Orville Wright Papers.
- Author
- McIntyre, W.A.
- Subjects
- WEBSITES, WORLD Wide Web, INVENTORS
- Abstract
Reviews the Web site, the Wilbur and Orville Wright Papers, a part of the Library of Congress American Memory project.
- Published
- 2004
- Full Text
- View/download PDF
7. The usage and acceptance of domestic preprint servers in China.
- Author
- Zhang Yaokun and Xia Nanqiang
- Subjects
- CLIENT/SERVER computing, WEBSITES, COMPUTER network resources, WORLD Wide Web, WEB archives, PHYSICS, MATHEMATICS, COMPUTER science
- Abstract
Purpose — The aims of this article are to describe the current status, usage, and acceptance of domestic preprint servers in mainland China by investigating three integrated preprint servers: the Qiji e-print archive (Qiji), the Chinese Preprint Server (CPS), and Chinese Science Papers Online (CSPO). Design/methodology/approach — This research gives a quantitative analysis of the submission numbers to the three preprint servers, their subject distribution, and citations. An investigation of university websites and their policies was also carried out. Findings — Preprint submissions correlate strongly with official promotion and policies. The preprint communication pattern in mainland China concentrates on the disciplines of physics, mathematics and computer science, which mirrors international preprint communication behaviour. As yet, the domestic preprint servers in mainland China attract little attention from leading scholars and have a low impact on scientific communication. Research limitations/implications — No author survey was conducted. Originality/value — This is the first paper to characterize the usage and acceptance of domestic preprint servers in mainland China using a quantitative method. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
8. THE DOWNSIDE OF CYBERSPACE: CHEATING MADE EASY
- Author
- Gibelman, Margaret, Gelman, Sheldon R., and Fast, Jonathan
- Published
- 1999
9. Accessibility of university websites worldwide: a systematic literature review
- Author
- Llorenç Valverde, Milton Campoverde-Molina, Sergio Luján-Mora, Universidad de Alicante. Departamento de Lenguajes y Sistemas Informáticos, and Advanced deveLopment and empIrical research on Software (ALISoft)
- Subjects
- Web accessibility, Review Paper, University, Computer Networks and Communications, Computer science, Section (typography), Systematic literature review, Websites, Human-Computer Interaction, World Wide Web, Systematic review, Consolidation (business), Work (electrical), Lenguajes y Sistemas Informáticos, Web page, Evaluation, Publication, Computer communication networks, Software, Information Systems
- Abstract
The identity and institutional image of universities are presented to the world through their websites. On their websites, universities publish their academic offerings, their mission, their vision, their academic objectives, their achievements, their regulations, their news and all their university work. Hence the importance of university websites being accessible. The accessibility of university websites has been evaluated several times in the past, but no work has summarized all the evaluations performed to provide a general overview of the situation. Therefore, in this research we have performed a systematic literature review (SLR) to consolidate, analyze, synthesize and interpret the accessibility results of university websites published in the 42 papers selected for this study. The methodology used in this SLR was that proposed in Kitchenham's guidelines, which includes three stages: planning the review, conducting the review and reporting the review. The results present the analysis and synthesis of the evaluations of 9,140 universities in 67 countries. Of these, 38,416 web pages, 91,421 YouTube videos and 28,395 PDF documents were evaluated. Manual methods, methods with automatic tools and the combination of both were used for the evaluation. Most websites were evaluated using the ISO/IEC 40500:2012 and Section 508 standards. The accessibility guidelines most commonly violated in the evaluations were: adaptable, compatible, distinguishable, input assistance, keyboard accessible, navigable, predictable, readable and text alternatives. In conclusion, the university websites, YouTube videos and PDF documents analyzed in the 42 papers present important accessibility problems. The main contribution of this SLR is the consolidation of the results of the 42 selected studies to determine the findings and trends in the accessibility of university websites around the world.
This work was supported by the Catholic University of Cuenca; the EduTech project (609785-EPP-1-2019-1-ES-EPPKA2-CBHE-JP) co-funded by the Erasmus+ Programme of the European Union; and the project “Development of IoT systems for people with disabilities” (PID2019-111196RB-I00) of the Spanish Ministry of Science and Innovation.
- Published
- 2021
10. Evaluating web accessibility of educational institutions websites using a variable magnitude approach.
- Author
- Kuppusamy, K. S. and Balaji, V.
- Subjects
- WEB accessibility, EDUCATIONAL websites, WORLD Wide Web, WEBSITES, INFORMATION dissemination
- Abstract
The World Wide Web serves as an excellent platform for information dissemination. Educational institutions such as universities are utilizing the web medium to reach their target audience. In the post-Covid-19 scenario, the web medium has gained increased significance as it has become the primary access channel to these institutions. Against this backdrop, it becomes essential to analyze the accessibility of these sites for students with special needs. This paper presents an approach to compute the accessibility of web pages for persons with disabilities. A variable magnitude approach is proposed for the computation of the accessibility barrier count as a combination of two different components. The proposed approach is evaluated on top-ranked higher educational institution websites in India. Based on inferences from the results and inputs received from students with disabilities, the paper compiles a set of suggestions to minimize the barriers faced by persons with disabilities in consuming these web resources. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
11. On requesting re-prints electronically.
- Author
- Hartley, James
- Subjects
- PUBLISHED reprints, ELECTRONIC systems, WORLD Wide Web, WEBSITES, COMMUNICATION, RESPONSE rates
- Abstract
Background. Many researchers contact colleagues in order to obtain re-prints of their papers. Most people, these days, use electronic aids to do this. Aim. The aim of this study was to compare the success rates for obtaining re-prints using such electronic communication with those obtained previously by post. Method. A hundred requests for e-prints or re-prints were sent electronically to colleagues, and the response rates recorded. Results. Of those approached, 79% responded to the requests, although 12% of these needed a reminder. Those replying electronically without a reminder (41%) took an average of one day to respond (range 0-13 days), whereas those replying by mail without a reminder (26%) took on average nine days (range 2-34 days). Responders needing a reminder (12%) took on average 41 days to respond (range 32-71). There were no significant differences between the response rates of American and British researchers, nor between those of men and women, but there was a significantly higher response rate from colleagues who were sent accompanying materials with the request. Conclusions. These results indicate that rapid replies were received from people responding electronically, and that the overall response rate was somewhat higher, and responses somewhat faster, than in most previous postal studies. However, the problem of non- and late-responders still exists. One further advantage of the electronic system is that, unlike the postal method, it facilitates rapid additional communication between the sender and the recipient. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
12. The BMJ's Website Scales up: Now It Provides Free Access to Full Text
- Author
- Delamothe, Tony and Smith, Richard
- Published
- 1998
13. Object-Oriented Design Structures in Web Application Models.
- Author
- Rossi, Gustavo and Schwabe, Daniel
- Subjects
- OBJECT-oriented methods (Computer science), SYSTEMS design, WEB development, COMPUTER software development, WEBSITES, SOFTWARE engineering, COMPUTER software industry, WORLD Wide Web, OBJECT-oriented programming
- Abstract
In this paper, we discuss different object-oriented design structures that should be used in the process of building Web applications. We base our discussion on the OOHDM approach for defining a Web application model, in particular the separation of the navigational model from the conceptual model. We focus on the systematic application of different design patterns (such as Observer and Decorator) for decoupling different aspects of a Web model. We briefly discuss some specific patterns that may appear in this kind of application, and we introduce additional concepts such as Web frameworks as a conceptual approach to maximizing design reuse in Web applications. [ABSTRACT FROM AUTHOR]
- Published
- 2002
- Full Text
- View/download PDF
14. Web alert: CRISPR in microbial biotechnology: An annotated selection of World Wide Web sites relevant to the topics in environmental microbiology.
- Subjects
- WEBSITES, CRISPRS, MICROBIAL ecology, WORLD Wide Web, MICROBIAL biotechnology
- Abstract
CRISPR papers https://www.science.org/topic/crispr This page from Science magazine provides links to a list of reviews and original papers on CRISPR. Introduction to CRISPR https://innovativegenomics.org/education/digital-resources/what-is-crispr/ This site gives a brief overview of CRISPR technology and applications. https://microbiologysociety.org/blog/crispr-and-microbiology.html This site provides an overview of the impact of CRISPR-Cas on microbial sciences. [Extracted from the article]
- Published
- 2022
- Full Text
- View/download PDF
15. Tool for Parsing Important Data from Web Pages.
- Author
- Radilova, Martina, Kamencay, Patrik, Hudec, Robert, Benco, Miroslav, and Radil, Roman
- Subjects
- NAIVE Bayes classification, WEBSITES, COMPUTATIONAL linguistics, DOCUMENT imaging systems, NATURAL language processing, INTERNET content, SUPPORT vector machines
- Abstract
This paper discusses a tool for extracting and parsing the main text and images (the important data) from a web document. It describes our proposed algorithm, based on the Document Object Model (DOM) and natural language processing (NLP) techniques, together with other approaches for extracting information from web pages using classification techniques such as support vector machines, decision trees, naive Bayes, and K-nearest neighbor. The main aim of the developed algorithm was to identify and extract the main block of a web document, which contains the text of the article and the relevant images. The algorithm was applied to a sample of 45 web documents of different types. In addition, the structure of web pages and the use of the DOM for processing them were analyzed. The DOM was used to load and navigate the document; it also plays an important role in the correct identification of the main block of a web document. The paper also discusses the levels of natural language; these methods of automatic natural language processing help to identify the main block of the web document. In this way, all textual parts and images from the main content of the web document were extracted. The experimental results show that our method achieved a final classification accuracy of 88.18%. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
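The DOM-based main-block idea described in this abstract can be sketched in a few lines. This is not the authors' tool: the block/skip tag sets, the text-length scoring, and the sample page below are assumptions chosen purely for illustration, using only Python's standard library.

```python
# Toy sketch: walk a page's DOM and pick the element holding the most text,
# treating that as the "main content" block.
from html.parser import HTMLParser

class MainBlockFinder(HTMLParser):
    BLOCK_TAGS = {"div", "article", "section", "main"}
    SKIP_TAGS = {"script", "style", "nav", "header", "footer"}

    def __init__(self):
        super().__init__()
        self.stack = []      # currently open block elements, each collecting text
        self.blocks = []     # (text_length, text) for every closed block
        self.skip_depth = 0  # >0 while inside a tag we ignore

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP_TAGS:
            self.skip_depth += 1
        elif tag in self.BLOCK_TAGS:
            self.stack.append([])

    def handle_endtag(self, tag):
        if tag in self.SKIP_TAGS and self.skip_depth:
            self.skip_depth -= 1
        elif tag in self.BLOCK_TAGS and self.stack:
            text = " ".join(self.stack.pop()).strip()
            self.blocks.append((len(text), text))

    def handle_data(self, data):
        # credit text to the innermost open block only (a simplification)
        if self.skip_depth == 0 and self.stack and data.strip():
            self.stack[-1].append(data.strip())

def main_text(html):
    finder = MainBlockFinder()
    finder.feed(html)
    return max(finder.blocks, default=(0, ""))[1]

page = """<html><body>
<nav>Home | About</nav>
<div>Short sidebar note.</div>
<article>This long paragraph is the actual article body of the page,
the block a content extractor should return.</article>
</body></html>"""
print(main_text(page))  # → the <article> paragraph
```

A real extractor (as the abstract describes) would refine this length heuristic with NLP features and a trained classifier rather than raw text length alone.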
16. Discussion of An Experimental Examination of Alternative Forms of Web Assurance for Business-to-Consumer e-Commerce.
- Author
- Kovar, Stacy E.
- Subjects
- ELECTRONIC commerce, INSURANCE, CONSUMER behavior, BUSINESS to consumer transactions, WORLD Wide Web, INFORMATION resources management, INTERNET industry, INFORMATION superhighway, BUSINESS planning, WEBSITES
- Abstract
The article presents comments on E. Mauldin and V. Arunachalam's paper titled "An Experimental Examination of Alternative Forms of Web Assurance for Business-to-Consumer e-Commerce," which appeared in volume 16 of the "Journal of Information Systems." The author observes that the paper suggests we need to develop better models, clearly define constructs, and examine the topics, targets, and forms of assurance that we provide. The results of the paper support the findings of other research that Web assurance seems to make little difference to individual consumers.
- Published
- 2002
- Full Text
- View/download PDF
17. Call For Papers.
- Subjects
- TECHNOLOGICAL innovations, CONFERENCES & conventions, WEBSITES, WORLD Wide Web, AUTHORS, COMPUTER network resources
- Abstract
Authors addressing advancement and innovation in topics of interest to AMTA are invited to submit a 200-word abstract for review and possible presentation at the Symposium. Visit the AMTA 2012 website at www.AMTA.org for a detailed list of paper topics. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
18. A novel user trend‐based priority assigner and URL scheduler for dynamic incremental crawling.
- Author
- Gupta, Ashlesha and Dixit, Ashutosh
- Subjects
- UNIFORM Resource Locators, INFORMATION needs, WEBSITES, WORLD Wide Web, INTERNET traffic, SEARCH engines
- Abstract
Summary: An efficient search engine needs to be designed in such a way that it is able to provide relevant and accurate information in accordance with user needs and interests. The quality of downloaded records can be guaranteed only when web pages of high pertinence are downloaded by crawlers in accordance with current topics or user trends. Earlier, focused crawlers were used to download topic-specific pages, but these crawlers were not able to adapt to the changing interests of users. Therefore, there is a need to design crawlers that can track current trending topics and download site pages that meet the client's present need. In this paper, a priority assigner and scheduler method for organizing Uniform Resource Locators (URLs) is proposed that helps the crawler track users' interests and prioritize downloading documents relevant to the user's choice as well as to current trends. The experimental results confirm that the proposed priority assigner and URL scheduler-based crawling outperforms conventional crawling strategies based on change-history or site-map-based methods in terms of the quality of downloaded web pages and reduced network traffic over the Internet. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
19. Thoracic Surgery Information on the Internet: A Multilingual Quality Assessment
- Author
- Nathan Lawrentschuk, Myles T. Davaris, Robert Abouassaly, and Stephen Barnett
- Subjects
- thoracic, multilingualism, Website quality, World Wide Web, German, medicine, Quality (business), Multilingualism, Original Paper, Internet, Quality assessment, Readability, websites, Cardiothoracic surgery, Family medicine, The Internet
- Abstract
Background: Previous data suggest that quality of Internet information regarding surgical conditions and their treatments is variable. However, no comprehensive analysis of website quality exists for thoracic surgery. Objective: The aim of this study was to quantify website quality in a multilingual setting using an international standard for assessment. Methods: Health On the Net (HON) principles may be applied to websites using an automated toolbar function. We used the English, French, Spanish, and German Google search engines to identify 12,000 websites using keywords related to thoracic conditions and procedures. The first 150 websites returned by each keyword in each language were examined. We compared website quality to assess for tertile (is the quality better in first, second, or third 50 websites returned) and language differences. A further analysis of the English site types was undertaken performing a comparative analysis of website provider types. Results: Overall, there are a considerable number of websites devoted to thoracic surgery: “lung cancer” returned over 150 million websites. About 7.85% (940/11,967) of websites are HON-accredited with differences by search term (P
- Published
- 2017
20. Frequencies of Private Mentions and Sharing of Mammography and Breast Cancer Terms on Facebook: A Pilot Study
- Author
- Marco D. Huesch, Joel E. Segel, Susann Schetter, and Alison L. Chetlen
- Subjects
- links, Facebook, user comments, Emoji, social media, mammography, Internet privacy, education, Health Informatics, Breast Neoplasms, Pilot Projects, Social Networking, World Wide Web, Breast cancer screening, Breast cancer, medicine, Mammography, Humans, online social network, Middle Aged, Outreach, websites, The Internet, Female, Web content, Psychology
- Abstract
Background: The most popular social networking site in the United States is Facebook, an online forum where circles of friends create, share, and interact with each other’s content in a nonpublic way. Objective: Our objectives were to understand (1) the most commonly used terms and phrases relating to breast cancer screening, (2) the most commonly shared website links that other women interacted with, and (3) the most commonly shared website links, by age groups. Methods: We used a novel proprietary tool from Facebook to analyze all of the more than 1.7 million unique interactions (comments on stories, reshares, and emoji reactions) and stories associated with breast cancer screening keywords that were generated by more than 1.1 million unique female Facebook users over the 1 month between November 15 and December 15, 2016. We report frequency distributions of the most popular shared Web content by age group and keywords. Results: On average, each of 59,000 unique stories during the month was reshared 1.5 times, commented on nearly 8 times, and reacted to more than 20 times by other users. Posted stories were most often authored by women aged 45-54 years. Users shared, reshared, commented on, and reacted to website links predominantly to e-commerce sites (12,200/1.7 million, 36% of all the most popular links), celebrity news (n=8800, 26%), and major advocacy organizations (n=4900, 15%; almost all accounted for by the American Cancer Society breast cancer site). Conclusions: On Facebook, women shared and reacted to links to commercial and informative websites regarding breast cancer and screening. This information could inform patient outreach regarding breast cancer screening, indirectly through better understanding of key issues, and directly through understanding avenues for paid messaging to women authoring and reacting to content in this space. [J Med Internet Res 2017;19(6):e201]
- Published
- 2017
21. Special Issue on Web Site Evolution (WSE 2006).
- Author
- Bolchini, Davide, Dean, Thomas, Distante, Damiano, and Tilley, Scott
- Subjects
- WEBSITES, ALGORITHMS, WORLD Wide Web
- Abstract
The article discusses the papers published within the issue, including "Identifying Similar Pages of Web Applications Using a Competitive Clustering Algorithm," by De Lucia, Scanniello, and Tortora, and "The Design and Use of WSDL-Test: A Tool for Testing Web Services," by Sneed and Huang.
- Published
- 2007
- Full Text
- View/download PDF
22. Automatically classifying familiar web users from eye-tracking data: a machine learning approach.
- Author
- Öder, Melih, Eraslan, Şükrü, and Yeşilada, Yeliz
- Subjects
- EYE tracking, MACHINE learning, INTERNET users, WEBSITES, WORLD Wide Web, DATA mining
- Abstract
Eye-tracking studies typically collect enormous amounts of data encoding rich information about user behaviours and characteristics on the web. Eye-tracking data has proved useful for usability and accessibility testing and for developing adaptive systems. The main objective of our work is to mine eye-tracking data with machine learning algorithms to automatically detect users' characteristics. In this paper, we focus on exploring different machine learning algorithms to automatically classify whether users are familiar with a web page. We present our work with eye-tracking data from 81 participants on six web pages. Our results show that by using eye-tracking features, we are able to classify whether users are familiar with a web page with a best accuracy of approximately 72% on raw data. We also show that with a resampling technique this accuracy can be improved by more than 10%. This work paves the way for using eye-tracking data to identify familiar users, which can serve different purposes: for example, it can be used to better locate certain elements on pages, such as adverts, to meet users' needs, or to better profile users for usability and accessibility assessment of pages. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
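As a toy illustration of the classification idea in this abstract (not the study's actual features, data, or algorithms), a nearest-neighbour rule over two invented gaze features can separate "familiar" from "unfamiliar" users:

```python
# Illustrative sketch only: labels users from gaze features with a
# 1-nearest-neighbour rule. The (fixation_count, mean_fixation_ms)
# training pairs are invented for demonstration.
import math

train = [
    ((12, 180), "familiar"),    # few, short fixations
    ((15, 200), "familiar"),
    ((34, 320), "unfamiliar"),  # many, long fixations
    ((30, 350), "unfamiliar"),
]

def classify(sample, data=train):
    # label of the training point closest in Euclidean distance
    return min(data, key=lambda item: math.dist(item[0], sample))[1]

print(classify((14, 190)))  # → familiar
print(classify((33, 330)))  # → unfamiliar
```

The paper itself compares several learners and uses resampling; this sketch only conveys the feature-vector-to-label framing.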
23. Weighted PageRank Algorithm Search Engine Ranking Model for Web Pages.
- Author
- Shaffi, S. Samsudeen and Muthulakshmi, I.
- Subjects
- SEARCH engines, WEBSITES, SEARCH algorithms, WORLD Wide Web
- Abstract
As data grows in size, search engines face new challenges in extracting more relevant content for users' searches. Consequently, a number of retrieval and ranking algorithms have been employed to ensure that results are relevant to the user's requirements. Unfortunately, most existing indexes and ranking algorithms crawl documents and web pages based on a limited set of criteria designed to meet user expectations, making it impossible to deliver exceptionally accurate results. This study therefore investigates and analyses how search engines work, as well as the elements that contribute to higher rankings. The paper addresses the issue of bias by proposing a new ranking algorithm based on PageRank (PR), one of the most widely used page ranking algorithms. We propose weighted PageRank (WPR) algorithms to test the relationship between these various measures. The Weighted PageRank (WPR) model was used in three distinct trials to compare the rankings of documents and pages based on one or more user-preference criteria. The findings showed that using multiple criteria to rank final pages is better than using only one, and that some criteria had a greater impact on ranking results than others. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
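The weighted-PageRank idea this abstract builds on can be sketched directly. The sketch follows the commonly cited WPR formulation in which a link's contribution is scaled by in-link and out-link weights rather than split evenly among out-links; the graph, damping factor, and iteration count are illustrative assumptions, not the paper's experimental setup.

```python
# Sketch of Weighted PageRank: PR(u) = (1-d) + d * sum over backlinks v of
# PR(v) * W_in(v,u) * W_out(v,u), where the weights reflect how popular u is
# among v's out-neighbours.

def weighted_pagerank(graph, damping=0.85, iterations=50):
    """graph: dict mapping node -> list of out-neighbours."""
    nodes = list(graph)
    in_links = {n: 0 for n in nodes}
    for v, outs in graph.items():
        for u in outs:
            in_links[u] += 1
    out_links = {n: len(graph[n]) for n in nodes}

    def w_in(v, u):   # share of in-links u holds among v's out-neighbours
        total = sum(in_links[o] for o in graph[v]) or 1
        return in_links[u] / total

    def w_out(v, u):  # share of out-links u holds among v's out-neighbours
        total = sum(out_links[o] for o in graph[v]) or 1
        return out_links[u] / total

    pr = {n: 1.0 for n in nodes}
    for _ in range(iterations):
        pr = {u: (1 - damping) + damping * sum(
                  pr[v] * w_in(v, u) * w_out(v, u)
                  for v in nodes if u in graph[v])
              for u in nodes}
    return pr

graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
ranks = weighted_pagerank(graph)
print(sorted(ranks, key=ranks.get, reverse=True))  # → ['A', 'C', 'B', 'D']
```

Note how the weighting changes the outcome: C collects the most in-links, yet A ends up ranked first because C funnels its entire score to A, while C's own inflow is discounted by the link weights.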
24. Evaluation of Open and Distance Education Websites: A Hybrid Multi-Criteria Decision-Making Approach.
- Author
- Şahin, Yıldız and Kulakli, Atik
- Subjects
- ANALYTIC hierarchy process, DIGITAL technology, INTERNET in education, DISTANCE education, WORLD Wide Web, INFORMATION technology, WEBSITES
- Abstract
Higher education institutions and organizations have new opportunities thanks to digital technologies. Universities worldwide seek to provide the best available student services, particularly those that promote student achievement of program objectives. The World Wide Web, in particular, has advanced Internet-based information technology, dramatically impacting all types of education delivery, and has therefore rapidly expanded the Open and Distance Education (ODE) system. This study aims to evaluate the performance of higher institutions' ODE websites within 5 main criteria (Navigation, Accessibility, Design, Content Readability, and Announcements) and 20 sub-criteria. The case study took place in Türkiye, and the institutions available for the study were the ODE websites of Anadolu University, Ankara University, Ataturk University, and Istanbul University. This paper utilized two Multi-criteria Decision Making (MCDM) techniques: the Fuzzy Analytical Hierarchy Process (Fuzzy AHP) and Fuzzy Weighted Aggregated Sum Product Assessment (Fuzzy WASPAS). The criteria were drawn from a literature search, then categorized and weighted with Fuzzy AHP. The evaluation step was conducted with Fuzzy WASPAS to select the best-performing alternative ODE websites. According to the research findings, Design is the most important criterion, followed by Accessibility, Content Readability, Announcements, and Navigation. Our research identified development areas for further research and proposed theoretical and practical implications, as well as managerial decisions to be considered for ODE website improvements. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
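The WASPAS aggregation underlying the paper's fuzzy variant blends a weighted sum (WSM) and a weighted product (WPM) of normalized criterion scores, Q = λ·WSM + (1−λ)·WPM. The sketch below is the crisp (non-fuzzy) form; the criteria weights and site scores are invented for illustration, not the paper's data.

```python
# Crisp WASPAS sketch: score each alternative as a blend of a weighted sum
# and a weighted product over benefit criteria normalized by the column max.

def waspas(scores, weights, lam=0.5):
    """scores: alternative -> list of benefit-criterion values."""
    cols = list(zip(*scores.values()))
    best = [max(c) for c in cols]          # best value per criterion
    result = {}
    for alt, vals in scores.items():
        norm = [v / b for v, b in zip(vals, best)]
        wsm = sum(w * x for w, x in zip(weights, norm))
        wpm = 1.0
        for w, x in zip(weights, norm):
            wpm *= x ** w
        result[alt] = lam * wsm + (1 - lam) * wpm
    return result

# criteria: Navigation, Accessibility, Design, Readability, Announcements
weights = [0.15, 0.25, 0.30, 0.20, 0.10]   # illustrative, sum to 1
scores = {
    "site_A": [7, 8, 9, 6, 7],
    "site_B": [8, 6, 7, 8, 6],
    "site_C": [6, 7, 8, 7, 8],
}
ranking = waspas(scores, weights)
print(max(ranking, key=ranking.get))  # → site_A
```

The fuzzy version in the paper replaces the crisp scores and weights with fuzzy numbers (from Fuzzy AHP) before this same WSM/WPM blend is applied.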
25. Contextual Embeddings-Based Web Page Categorization Using the Fine-Tune BERT Model.
- Author
- Nandanwar, Amit Kumar and Choudhary, Jaytrilok
- Subjects
- DEEP learning, WEBSITES, LANGUAGE models, WORLD Wide Web, INTERNET content
- Abstract
The World Wide Web has revolutionized the way we live, causing the number of web pages to increase exponentially. The web provides access to a tremendous amount of information, so it is difficult for internet users to locate accurate and useful information on the web. In order to categorize pages accurately based on users' queries, methods of categorizing web pages need to be developed. The text content of web pages plays a significant role in their categorization. When a word's interpretation changes with the context in which it appears in a sentence, this phenomenon is called polysemy. In web page categorization, the polysemy property causes ambiguity and is referred to as the polysemy problem. This paper proposes a fine-tuned model to solve the polysemy problem, using contextual embeddings created by the symmetry multi-head encoder layer of Bidirectional Encoder Representations from Transformers (BERT). The effectiveness of the proposed model was evaluated using the benchmark datasets for web page categorization, i.e., WebKB and DMOZ. Furthermore, the experiment series also fine-tuned the proposed model's hyperparameters to achieve F1-scores of 96.00% and 84.00%, respectively, demonstrating the proposed model's advantage over baseline approaches based on machine learning and deep learning. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
26. Evaluation of web-based consumer medication information: Content and usability of 4 Australian websites
- Author
- Melissa T. Baysari, Maureen Robinson, Amina Tariq, Magdalena Z. Raban, Lauren Richardson, Johanna I. Westbrook, Mary Byrne, and Ling Li
- Subjects
- Web development, prescription drugs, Internet privacy, Usability, usability testing, World Wide Web, drug information service, Health care, Medicine, health communication, Web navigation, Medical prescription, Web usability, Original Paper, Internet, nonprescription drugs, Consumer, Websites, Content, consumer health information, The Internet
- Abstract
Background - Medication is the most common intervention in health care, and written medication information can affect consumers’ medication-related behavior. Research has shown that a large proportion of Australians search for medication information on the Internet. Objective - To evaluate the medication information content, based on consumer medication information needs, and usability of 4 Australian health websites: Better Health Channel, myDr, healthdirect, and NPS MedicineWise . Methods - To assess website content, the most common consumer medication information needs were identified using (1) medication queries to the healthdirect helpline (a telephone helpline available across most of Australia) and(2) the most frequently used medications in Australia. The most frequently used medications were extracted from Australian government statistics on use of subsidized medicines in the community and the National Census of Medicines Use. Each website was assessed to determine whether it covered or partially covered information and advice about these medications. To assess website usability, 16 consumers participated in user testing wherein they were required to locate 2 pieces of medication information on each website. Brief semistructured interviews were also conducted with participants to gauge their opinions of the websites. Results - Information on prescription medication was more comprehensively covered on all websites (3 of 4 websites covered 100% of information) than nonprescription medication (websites covered 0%-67% of information). Most websites relied on consumer medicines information leaflets to convey prescription medication information to consumers. Information about prescription medication classes was less comprehensive, with no website providing all information examined about antibiotics and antidepressants. Participants (n=16) were able to locate medication information on websites in most cases (accuracy ranged from 84% to 91%). 
However, a number of usability issues relating to website navigation and information display were identified: for example, websites did not allow combinations of search terms to be entered in search boxes, and some pages presented continuous blocks of text without subheadings. Conclusions - Of the 4 Australian health information websites tested, none provided consumers with comprehensive medication information on both prescription and nonprescription medications in a user-friendly way. Using data on consumer information needs and user testing to guide medication information content and website design is a useful approach to informing consumer website development.
- Published
- 2016
27. The past, present, and future of network monitoring: A panel discussion.
- Author
-
Stevens, Nathaniel T. and Wilson, James D.
- Subjects
WORLD Wide Web ,WEBSITES ,SOCIAL science research ,ARTIFICIAL intelligence ,OPERATIONS research - Abstract
He holds a B.S. in engineering, an M.S. in applied physics and computer science, and an M.S. and Ph.D. in statistics from Virginia Tech. George Michailidis received his B.S. degree in economics from the University of Athens, Athens, Greece, in 1987, his M.A. degrees in both economics and mathematics from the University of California, Los Angeles (UCLA), Los Angeles, CA, USA, and his Ph.D. degree in mathematics from UCLA. His research interests and work generally fall into one of four primary themes: network modeling and analysis of structural and functional connectivity data; modeling and monitoring dynamic networks and their role in social dynamics and social media; unsupervised machine learning; and statistical machine learning for health, sustainability and the environment. He received his B. Stat and M. Stat degrees from the Indian Statistical Institute, and his PhD in Statistics from the University of Illinois at Urbana-Champaign. [Extracted from the article]
- Published
- 2021
- Full Text
- View/download PDF
28. An Integrated Variable-Magnitude Approach for Accessibility Evaluation of Healthcare Institute Web Pages.
- Author
-
Ara, Jinat, Sik-Lanyi, Cecilia, and Kelemen, Arpad
- Subjects
WEBSITES ,WORLD Wide Web ,ACCESSIBLE design ,WEB accessibility ,INFORMATION dissemination ,CHILDREN with disabilities - Abstract
The World Wide Web has become an important platform for sharing a wide array of information within the world community. In the post-COVID-19 scenario, the web became a primary source of information in the context of healthcare information dissemination. Healthcare institutions, such as hospitals and clinics, utilize this platform to provide services and reach their target users. It is essential to evaluate the web pages of healthcare institutions and compute their accessibility scores for people with disabilities or special needs. This paper presents a variable-magnitude approach to compute the accessibility score of healthcare web pages, considering several requirements of people with disabilities. To compute the accessibility score, we considered two different components and integrated them through the proposed algorithm. The proposed approach was experimentally applied to the web pages of sixteen healthcare institutes in Hungary. Based on the experiment's results and feedback received from an accessibility specialist, a set of suggestions is provided to minimize accessibility barriers and improve the accessibility score, so that people with disabilities can access web resources without difficulty. The main contribution of this work is in enhancing web practitioners' awareness of web platform accessibility, so that people with disabilities can effectively access web resources. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
29. EQ-5D: a plea for accurate nomenclature.
- Author
-
Brooks, Richard, Boye, Kristina S., and Slaap, Bernhard
- Subjects
QUALITY of life ,NAMES ,WEBSITES ,ABBREVIATIONS ,SERIAL publications ,TERMS & phrases ,INFORMATION resources ,WORLD Wide Web - Abstract
Between 1987 and 1990, the EuroQol Group developed a 5-dimension health-related quality of life instrument, originally known as 'the EuroQol instrument', which since 1995 has been called the 'EQ-5D'. For several years, 'the EuroQol instrument' and 'EQ-5D' were both deployed in published materials. In order to standardise nomenclature, the EuroQol Group agreed in 2001 on a terminology glossary containing 12 items; this was recently revised and augmented to include 22 items and can be found on the Group's website (www.euroqol.org). Since 2009, EQ-5D has been available in three versions: EQ-5D-3L, EQ-5D-5L, and EQ-5D-Y, where 3L stands for three levels, 5L for five levels, and Y for youth. Yet, almost 20 years after the original glossary was published, the instrument and its components continue to be inaccurately named in published materials. Two surveys – of arthritis applications, and 82 recent publications – found a variety of terms used to describe the instrument. Despite the instrument being named 'EQ-5D' for 25 years, and the terms 'EQ-5D-3L' and 'EQ-5D-5L' being established for a decade, variations of 'the EuroQol instrument' continue to be used as descriptors. The EuroQol Group's website contains advice on how to use EQ-5D, including nomenclature, and potential users are urged to consult the site. Since standardising nomenclature is crucial in the compilation of systematic reviews, the EuroQol Group would like to emphasise that 'EQ-5D' is not an abbreviation and is the correct term to use when referring to the instrument in general. In the interests of accuracy and good practice, users of the EuroQol family of instruments should employ the standard EQ-5D nomenclature when reporting and discussing their findings. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
30. Google Scholar revisited.
- Subjects
WEBSITES ,PERIODICALS ,SERIAL publications ,WORLD Wide Web ,INTERNET - Abstract
Purpose - The purpose of this paper is to revisit Google Scholar. Design/methodology/approach - This paper discusses the strengths and weaknesses of Google Scholar. Findings - The Google Books project has given a massive and valuable boost to the already rich and diverse content of Google Scholar. The dark side of the growth is that significant gaps remain for top ranking journals and serials, and the number of duplicate, triplicate and quadruplicate records for the same source documents (which Google Scholar cannot detect reliably) has increased. Originality/value - This paper discusses the strengths and weaknesses of Google Scholar. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
31. Report on the Sixth International Workshop on Location and the Web (LocWeb 2016).
- Author
-
Ahlers, Dirk and Wilde, Erik
- Subjects
COMPUTER architecture ,WORLD Wide Web ,WEBSITES ,ACCESS to information ,GEOSPATIAL data - Abstract
For describing and understanding the real world, location is an important factor. Consequently, it also appears in many Web applications and mining approaches as a crosscutting issue. LocWeb 2016 continues a workshop series addressing issues at the intersection of location-based services and Web architecture and was held at WWW 2016. It combines geospatial search, information management, and Web architecture, with a main focus on location-aware information access. The workshop drew contributions from various fields, ranging from mobility analytics over new ways to understand cities to Web standards. LocWeb 2016 had an interdisciplinary combination of contributions, with two keynotes and three long papers. We will briefly discuss the workshop theme and the contributions. [ABSTRACT FROM AUTHOR]
- Published
- 2016
32. Using Web Traffic Analysis for Customer Acquisition and Retention Programs in Marketing.
- Author
-
Wilson, R. Dale
- Subjects
INTERNET marketing ,INTERNET in education ,WORLD Wide Web ,RECORDS management ,WEBSITES ,CUSTOMER satisfaction ,CUSTOMER relations - Abstract
This paper draws heavily on the author's recent sojourn into the field of web traffic analysis. Based on this experience, the paper argues that web traffic analysis provides an interesting and useful set of procedures for learning about consumers' navigational paths through companies' web sites. Once these navigational paths have been logged and analyzed, it is then possible to determine how web site visitors (both new prospects and returning customers) react to marketing offers online. Web traffic analysis offers a variety of metrics that can be used to evaluate web sites, and intelligent executives can then modify these web sites in ways that improve visitor response. The paper demonstrates how web traffic analysis fits into the academic literature in marketing, and a number of software and web metrics issues are presented. An illustration is used to demonstrate some of the ways in which web traffic analysis can be used to drive customer acquisition and customer retention programs in marketing. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
33. What Do Electronic Health Record Vendors Reveal About Their Products: An Analysis of Vendor Websites
- Author
-
Aviv Shachak, Alejandro R. Jadad, and Natalie K Yeung
- Subjects
Telemedicine ,Vendor ,Computer science ,Electronic health record ,MEDLINE ,Health Informatics ,lcsh:Computer applications to medicine. Medical informatics ,Diffusion of innovations ,World Wide Web ,Electronic Health Records ,Humans ,Diffusion of Innovations ,Marketing of Health Services ,Ontario ,Original Paper ,Internet ,Data collection ,Vendors ,business.industry ,lcsh:Public aspects of medicine ,Commerce ,lcsh:RA1-1270 ,Websites ,Purchasing ,lcsh:R858-859.7 ,The Internet ,Consumer confidence index ,business - Abstract
Background - Purchasing electronic health records (EHRs) typically follows a process in which potential adopters actively seek information, compare alternatives, and form attitudes towards the product. A potential source of information on EHRs that can be used in the process is vendor websites. It is unclear how much product information is presented on EHR vendor websites or the extent of its value during EHR purchasing decisions. Objective - To explore what features of EHR systems are presented by vendors in Ontario, Canada, on their websites, and the persuasive means they use to market such systems; to compare the online information available about primary care EHR systems with that about hospital EHR systems, and with data compiled by OntarioMD, a regional certifying agency. Methods - A list of EHR systems available in Ontario was created. The contents of vendor websites were analyzed. A template for data collection and organization was developed and used to collect and organize information on the vendor, website content, and EHR features. First, we mapped information on system features to categories based on a framework from the Institute of Medicine (IOM). Second, we used a grounded theory-like approach to explore information for building consumer confidence in the vendor and product, and the various persuasive strategies employed on vendor websites. All data were first coded by one researcher. A peer reviewer independently analyzed a randomly chosen subset of the websites (10 of 21; 48%) and provided feedback towards a unified coding scheme. All data were then re-coded and categorized into themes. Finally, we compared information from vendor websites and data gathered by OntarioMD. Results - Vendors provided little specific product information on their websites. Only two of five acute care EHR websites (40%) and nine of 16 websites for primary care systems (56%) featured seven or all eight of the IOM components. 
Several vendor websites included system interface demonstrations: screenshots (six websites), public videos or slideshows (four websites), or demonstrations restricted to registered viewers (three websites). Persuasive means used by vendors included testimonials, on 14 of 21 (67%) websites, and directional language. Except for one free system, trial EHR versions were not available. OntarioMD provided more comprehensive information about primary care systems than the vendors’ websites. Of 14 points of comparison, only the inclusion of templates and bilingual interfaces were fully represented in both data sources. For all other categories, the vendor websites were less complete than the OntarioMD site. Conclusions - EHR vendor websites employ various persuasive means, but lack product-specific information and do not provide options for trying systems on a limited basis. This may impede the ability of potential adopters to form perceptions and compare various offerings. Both vendors and clients could benefit from greater transparency and more specific product information on the Web. Trial Registration - N/A
- Published
- 2013
34. Is Your Library Website Missing Essential Information?: A Comparison and Evaluation of Public Library Websites in Australia, Canada, and United States.
- Author
-
Velasquez, Diane L. and Campbell-Meier, Jennifer
- Subjects
COLLEGE students ,INFORMATION resources management ,ACADEMIC libraries ,QUANTITATIVE research ,CURRICULUM ,INFORMATION resources ,DESCRIPTIVE statistics ,LIBRARIANS ,WORLD Wide Web ,PUBLIC libraries - Abstract
This paper describes the findings of a quantitative study of 1,698 public library websites in Australia, Canada, and the United States over a period of three years using a spreadsheet protocol. The purpose of the research was to evaluate public library websites, available online sources, and whether library staff were available to respond to users' questions and concerns regarding the website. Descriptive statistics are used to report the results. The study provides public library website information regarding which protocol criteria each country's libraries attained. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
35. Building Self-Healing Feature Based on Faster R-CNN Deep Learning Technique in Web Data Extraction Systems.
- Author
-
Patnaik, Sudhir Kumar and Narendra Babu, C.
- Subjects
DEEP learning ,DATA extraction ,WORLD Wide Web ,SERVER farms (Computer network management) ,CONVOLUTIONAL neural networks ,WEBSITES - Abstract
Web data extraction has evolved over the years, from extracting data from documents to extracting it from today's World Wide Web (WWW). The growth of the WWW has placed data at the centre of this ecosystem and benefited society at large, businesses, and consumers. The proposed system uses a deep learning technique, the Faster Region-based Convolutional Neural Network (Faster R-CNN), for automated navigation, extraction of data, and self-healing of the data extraction engine to adapt to dynamic changes in website layout. The proposed system trains the Faster R-CNN model to detect products in a web page using a bounding-box image detection technique and extracts product details with high extraction accuracy. Deep learning techniques have advanced rapidly in different fields for image detection, but their application to data extraction makes this paper unique. An ecommerce retail website is used as a real-world example to prove the self-healing capability of the proposed automated web data extraction system. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
36. An Automated Word Embedding with Parameter Tuned Model for Web Crawling.
- Author
-
Neelakandan, S., Arun, A., Bhukya, Raghu Ram, Hardas, Bhalchandra M., Kumar, T. Ch. Anil, and Ashok, M.
- Subjects
WORLD Wide Web ,WEB search engines ,BIRD classification ,WEBSITES ,DEEP learning - Abstract
In recent years, web crawling has gained significant attention due to the drastic advancements in the World Wide Web. Web search engines face the issue of retrieving massive quantities of web documents. One such crawler is the focused crawler, which selectively gathers web pages from the Internet. But the efficiency of focused crawling can easily be affected by the environment of web pages. In this view, this paper presents an Automated Word Embedding with Parameter Tuned Deep Learning (AWE-PTDL) model for focused web crawling. The proposed model involves different processes, namely pre-processing, Incremental Skip-gram Model with Negative Sampling (ISGNS) based word embedding, bidirectional long short-term memory-based classification, and bird swarm optimization based hyperparameter tuning. SGNS training requires a pass over the complete training data to pre-compute the noise distribution before performing Stochastic Gradient Descent (SGD), and the ISGNS technique is derived for the word embedding process. Besides, cosine similarity is computed from the word embedding matrix to generate a feature vector which is fed as input into the Bidirectional Long Short-Term Memory (BiLSTM) network for the prediction of website relevance. Finally, the Bird Swarm Optimization-Bidirectional Long Short-Term Memory (BSO-BiLSTM) based classification model is used to classify the web pages, and the BSO algorithm is employed to determine the hyperparameters of the BiLSTM model so that the overall crawling performance can be considerably enhanced. For validating the enhanced outcome of the presented model, a comprehensive set of simulations was carried out and the results were examined in terms of different measures. The AWE-PTDL technique attained a higher harvest rate of 85% when compared with the other techniques. 
The experimental results highlight the enhanced web crawling performance of the proposed model over the recent state of art web crawlers. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
37. WHAT AFFECTS WEB CREDIBILITY PERCEPTION? AN ANALYSIS OF TEXTUAL JUSTIFICATIONS.
- Author
-
KĄKOL, MICHAŁ and NIELEK, RADOS ŁAW
- Subjects
WEBSITES ,TRUTHFULNESS & falsehood - Abstract
In this paper, we present the findings of a qualitative analysis of 15,750 comments left by 2,041 participants in a Reconcile web credibility evaluation study. While assessing the credibility of the presented pages, respondents of the Reconcile studies were also asked to justify their ratings in writing. This work attempts to give an insight into the factors that affected the credibility assessment. To the best of our knowledge, the presented study is the most recent large-scale study of its kind carried out since 2003, when the Fogg et al. paper 'How do users evaluate the credibility of Web sites? A study with over 2,500 participants' was published. The performed analysis shows that the findings made a decade ago are still mostly valid today despite the passage of time and the advancement of Internet technologies. However, we report a weaker impact of webpage appearance. A much bigger dataset (as compared to Fogg's studies) allowed respondents to reveal additional features which influenced the credibility evaluations. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
38. VFX: A VISION-BASED APPROACH TO FORUM DATA EXTRACTION.
- Author
-
Chen Hui Ng, Choon Jin Ng, and Tong Ming Lim
- Subjects
DATA extraction ,INTERNET forums ,WORLD Wide Web ,INTERNET ,WEBSITES - Abstract
Rapid development of the Internet has dramatically increased the information available on the World Wide Web. Amongst these vast sources of information, discussion forums may be useful for businesses and organizations to get a glimpse of customer opinions or to extract product information. Little existing work reported in the literature has systematically investigated the problem of extracting user posts from forum sites. Extracting forum posts accurately raises a few challenges. First, forums come in a variety of templates, which makes it hard to formalize general rules to extract forum posts. Second, post records might appear relatively different from one another, which introduces inconsistency in the Document Object Model (DOM) for comparisons. Third, each post in the forum can consist of complicated subtrees rather than a single node in the DOM tree. To tackle these challenges, a vision-based approach was introduced to automatically extract posts from a web forum page based on its visual cues. In this paper, we propose a visual-based forum extraction (VFX) algorithm that can extract user posts from any type of forum without the need to inspect its template structure in advance. [ABSTRACT FROM AUTHOR]
- Published
- 2019
39. Web accessibility investigation and identification of major issues of higher education websites with statistical measures: A case study of college websites.
- Author
-
Ismail, Abid and Kuppusamy, K.S.
- Subjects
WEB accessibility ,WORLD Wide Web ,WEBSITES ,INTERNET content ,HIGHER education ,POWER tools ,WEB browsers - Abstract
The World Wide Web Consortium (W3C) has provided the most important set of guidelines for web accessibility, popularly known as the Web Content Accessibility Guidelines (WCAG). The accessibility analysis of higher education websites is paramount for making them inclusive, considering the growing number of enrollments of persons with disabilities (PwDs) in higher education in countries such as India. This paper presents an accessibility analysis of higher education websites through a case study of college websites (N = 44) affiliated with the University of Kashmir and Cluster University Srinagar. The study was carried out with two major accessibility evaluation tools: the web accessibility test tool, denoted TAW, and the accessibility engine powering browser extensions, denoted aXe. This paper lists the major accessibility barriers exposed by these sites in terms of metrics such as the number of problems, warnings, and the status of success criteria violations. With the TAW tool, 2,646 problems were observed, along with a large number of warnings (15,995) and 1,356 not-reviewed items. With the aXe tool, 1,951 violations were observed and 1,733 items needed review. Findings of the statistical analysis are also presented in this paper. This paper presents a roadmap of steps for making these websites inclusive and barrier-free for PwDs. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
40. Accessing Papers on IEEE Xplore.
- Subjects
OCEANOGRAPHY ,WEBSITES ,MARINE sciences ,WEB-based user interfaces ,WORLD Wide Web ,PERIODICALS - Published
- 2011
- Full Text
- View/download PDF
41. FROM THE EDITORIAL OFFICE.
- Author
-
Stingl, Michael
- Subjects
PUBLISHING ,EMAIL ,WORLD Wide Web ,WEBSITES - Abstract
The author describes the journal's process for accepting submissions for publication. He states that, given the increasing number of articles being submitted, an article should contain no more than 10,000 words. He adds that articles can also be submitted electronically; through the electronic process, via cjp@uleth.ca, papers can be processed more quickly.
- Published
- 2008
- Full Text
- View/download PDF
42. How assessment websites of academic libraries convey information and show value.
- Author
-
Clunie, Simone and Parrish, Darlene Ann
- Subjects
ACADEMIC libraries ,CONSUMER attitudes ,CREATIVE ability ,NEEDS assessment ,QUALITY assurance ,SECURITY systems ,WORLD Wide Web ,INTRANETS (Computer networks) ,ACCESSIBLE design of public spaces - Abstract
Purpose - As libraries are required to become more accountable and demonstrate that they are meeting performance metrics, an assessment website can be a means of providing data for evidence-based decision making and an important indicator of how a library interacts with its constituents. The purpose of this paper is to share the results of a review of the websites of academic libraries from four countries: the UK, Canada, Australia, and the USA. Design/methodology/approach - The academic library websites included in the sample were selected from the Canadian Association of Research Libraries, Research Libraries of the United Kingdom, Council of Australian University Libraries, Historically Black College & Universities Library Alliance, Association of Research Libraries, and American Indian Higher Education Consortium. The websites were evaluated according to the absence or presence of nine predetermined characteristics related to assessment. Findings - It was discovered that "one size does not fit all", and several innovative ways were found in which institutions are listening to their constituents and making improvements to help users succeed in their academic studies, research and creative endeavors. Research limitations/implications - Only a sample of academic libraries from each of the four countries was analyzed. Additionally, some of the academic libraries were using password-protected intranets unavailable for public access. The influences of institutional history and country-specific practices also became compelling factors during the analysis. Originality/value - This paper seeks to broaden what is thought of as academic library assessment with the addition of qualitative and contextual considerations. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
43. Wikipedia and academic peer review: Wikipedia as a recognised medium for scholarly publication?
- Subjects
WEBSITES ,PROFESSIONAL peer review ,WORLD Wide Web ,EDUCATION ,COMPUTER network resources ,MANAGEMENT - Abstract
Purpose - The purpose of this paper is to engage in a thought experiment, exploring the use of Wikipedia or similar content-malleable systems for the review and dissemination of academic knowledge. Design/methodology/approach - By looking at other sources, the paper considers the current state of the academic peer-review process, discusses Wikipedia and reflects on dynamic content creation and management applications currently in use in academia. Findings - The traditional peer review process must be updated to match the rapid creation and diffusion of knowledge that characterises the 21st century. The Wikipedia concept is a potential model for more rapid and reliable dissemination of scholarly knowledge. The implications of such a concept would have a dramatic effect on the academic community. Originality/value - This paper promotes a radical idea for changing the methods by which academic knowledge is both constructed and disseminated. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
44. Corporate Social Responsibility as Argument on the Web.
- Author
-
Coupland, C.
- Subjects
SOCIAL responsibility of business ,WEBSITES ,BUSINESS communication ,SOCIOLOGY of corporations ,ORGANIZATIONAL behavior ,BUSINESS ethics ,WORLD Wide Web ,ORGANIZATIONAL sociology ,RHETORIC ,ORGANIZATIONAL research - Abstract
This paper critically examines the language drawn on to describe socially responsible activities (CSR) in the context of the corporate web page. I argue that constructions of CSR are made plausible and legitimised according to the context of the expression. The web site is a genre of communication which addresses a broad and discerning audience; hence fractures in the institutionalised nature of argument may be apparent. The focus of this paper is to examine how the rhetoric of CSR is legitimised and to develop a framework of argumentation repertoires that operate in this context. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
45. Transport records deposited in 2002.
- Subjects
TRANSPORTATION -- Records & correspondence ,MANUSCRIPT collections ,WEBSITES ,MANUSCRIPTS ,WORLD Wide Web - Abstract
The article presents information on transport records deposited in 2002. The following classified list of major archive collections acquired by British repositories during 2002 has been compiled by the Historical Manuscripts Commission. Some collections may not yet be available for research, and enquiries should be directed to the relevant repositories. The commission seeks each year to collect information relating to manuscript accessions from over 200 repositories and record offices throughout the British Isles. The information is published on the commission's Web site (www.nationalarchives.gov.uk) and in a series of thematic digests which appear in learned journals and news-sheets.
- Published
- 2004
- Full Text
- View/download PDF
46. Metadata? Thesauri? Taxonomies? Topic maps! Making sense of it all.
- Author
-
Garshol, Lars Marius
- Subjects
METADATABASES ,DUBLIN Core ,INFORMATION resources management ,INFORMATION services ,INFORMATION storage & retrieval systems ,WEBSITES - Abstract
To be faced with a document collection and not to be able to find the information you know exists somewhere within it is a problem as old as the existence of document collections. Information architecture is the discipline dealing with the modern version of this problem: how to organize web sites so that users can actually find what they are looking for. Information architects have so far applied known and well-tried tools from library science to solve this problem, and now topic maps are sailing up as another potential tool for information architects. This raises the question of how topic maps compare with the traditional solutions, and that is the question this paper attempts to address. The paper argues that topic maps go beyond the traditional solutions in the sense that they provide a framework within which they can be represented as they are, but also extended in ways which significantly improve information retrieval. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
47. Ensemble approach for web page classification.
- Author
-
Gupta, Amit and Bhatia, Rajesh
- Subjects
WEBSITES ,WORLD Wide Web ,WEB search engines ,INTERNET content ,INFORMATION retrieval ,NATURAL language processing ,SEARCH engines - Abstract
Over the decades, the World Wide Web has become an abundant source of distributed web content repositories hyperlinked across diverse information domains. The performance of search engines in locating information is exemplary, but search engines remain inadequate for focused crawling of web content. Web page classification, being pivotal for information retrieval and management tasks, plays an imperative role in natural language processing for creating classified web document repositories and building indexed web directories. Conventional machine learning approaches extract the desired features from web pages in order to classify them, whereas deep learning algorithms learn the covert features as the network goes deeper and deeper. Transfer learning based pre-trained models such as BERT attain impressive performance for text classification. In this study, we evaluate the effectiveness of adopting the pre-trained model BERT for the task of classifying web pages into different categories. In this paper, we propose an ensemble approach for web page classification by learning contextual representations using pre-trained bidirectional BERT and then applying deep Inception modelling with residual connections to fine-tune the target task by utilizing parallel multi-scale semantics. Experimental evaluation exhibits that the proposed ensemble model outperforms benchmark baselines and achieves better performance in contrast to other transfer learning approaches evaluated on the web page classification task for different classification datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
48. 15 Years of Stanca Act: Are Italian Public universities websites accessible?
- Author
-
Barricelli, Barbara Rita, Casiraghi, Elena, Dattolo, Antonina, and Rizzi, Alessandro
- Subjects
WORLD Wide Web ,PUBLIC universities & colleges ,WEBSITES - Abstract
With the increasing spread and usage of Internet technologies, the challenge of ensuring Web accessibility for all, including anyone with a form of disability, has become a hot issue, pursued both by the World Wide Web Consortium (W3C) and by governments in different countries. In particular, the W3C has developed technical Web accessibility guidelines (WCAG), while governments have legally addressed the problem by enacting specific laws and policies. In Italy, the Stanca Act, a law regulating the design and creation of governmental and public-sector websites, recently updated according to the latest WCAG 2.1, was enacted in 2004. To analyze its impact, this paper presents a study of the accessibility of Italian public universities' websites, particularly analyzing their conformance to the Stanca Act. The reported analysis shows that, although the Stanca Act dates back 15 years, Italian public universities are still struggling to satisfy all of its requirements. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
49. Internet Campaigning and Participation.
- Author
-
Klotz, Robert
- Subjects
- *
INTERNET in political campaigns , *PARTICIPATION , *ELECTIONS , *WEBSITES , *POLITICAL candidates - Abstract
This paper explores the use of the Internet for promoting participation in the 2004 election. The online mobilization efforts of the websites for all 68 major-party candidates for the U.S. Senate were systematically analyzed. The evidence suggests that candidates are using websites to promote credit card participation, voting, and volunteering. The research formalizes the concept of plagiarized participation, whereby would-be participators are encouraged to present the words of others as their own in support of a cause. Providing frequency counts and text examples, this paper gives a systematic view of Internet campaigning for grassroots and Astroturf support. [ABSTRACT FROM AUTHOR]
- Published
- 2005
50. Seeking Salvation for Accountability.
- Author
-
Dubnick, Melvin J.
- Subjects
- *
LEGAL liability , *RESPONSIBILITY , *WEBSITES , *WORLD Wide Web , *AUTHORS - Abstract
A critique of the uses and abuses of the concept of accountability. Check author’s web site for an updated version of the paper. [ABSTRACT FROM AUTHOR]
- Published
- 2002