26 results for "Raudonis V"
Search Results
2. Evaluation of Touch Control Interface for the Smart Mobile Furniture
- Author
-
Maskeliunas, R., Raudonis, V., Lengvenis, P., and Kauno technologijos universitetas
- Subjects
Engineering, Multimodal interface, Control algorithm, Human computer interface, Interface (computing), Control (management), Image processing and computer vision, Smart furniture, Human–computer interaction, Electrical and Electronic Engineering, Simulation - Abstract
The paper describes our implementation and experimental evaluation of touch surface control algorithms developed for smart mobile furniture. We begin the article by presenting the working principle of the touch surface, followed by descriptions of the proposed eyes-free control algorithms designed to reject accidental touches. The experimental evaluation and analysis showed the problems associated with operating standard hardware with the nose and a toe (compared with a hand finger), and allowed an initial determination of recognition accuracy, performance (in time) and overall user rating. DOI: http://dx.doi.org/10.5755/j01.eee.19.1.3258
- Published
- 2013
3. Arduino based controller for the smart assistive mobility hardware
- Author
-
Lengvenis, P., Maskeliunas, R., Raudonis, V., and Kauno technologijos universitetas
- Subjects
Engineering, Mobile robot sensing systems, Controller (computing), Schematic, Voltage regulator, Modelling, Microcontroller, Smart sensors, Embedded system, Arduino, Trajectory, Integral solution, Electrical and Electronic Engineering, Microcontrollers, Control logic, Computer hardware - Abstract
The main goal of this work is the development of an Arduino microcontroller based integral solution for smart, assistive mobility hardware. The paper presents the schematics of the device control and electrics, the control logic, the power regulation chain model and the associated calculated efficiency parameters. The overall experimental investigation of the real-life performance of our Arduino model proved quite successful. The added power regulator achieved outputs stable to within ±0.001 V, mimicking the original controller. The deviation in trajectory compared to the ideal model was small (< 10 %) and can be further reduced by applying micro-corrections. DOI: http://dx.doi.org/10.5755/j01.eee.18.9.2812
- Published
- 2012
4. Intelligent Environmental Recognition Features of 'Robosofa'
- Author
-
Maskeliunas, R., Raudonis, V., Bukis, A., and Kauno technologijos universitetas
- Subjects
Focus (computing), Engineering, Speech recognition, Image processing and computer vision, Process (computing), Video camera, Frame rate, Task (computing), Range (mathematics), Colored, Computer vision, Artificial intelligence, State (computer science), Electrical and Electronic Engineering - Abstract
A detailed review of state-of-the-art techniques and algorithms is presented in this work, aiming to create an autonomous mobile platform for improving the mobility of disabled persons. A video camera, six infrared range sensors and motorized furniture were used in our mobile system. The presented system automatically localizes the position of the "smart" furniture based on certain color markers. The mobile platform is part of a bigger associative system being developed during the project ROBOSOFA. Therefore, the main focus of this paper is on the self-localization problem from visual information, i.e., the images of colored markers. The recognition task for a certain room is constrained by the signals from the range sensors and the recognition of certain markers, because each room is coded with different color markers. Intermediate experimental results show that the proposed algorithm can be applied in real-time applications and is able to process up to 20 frames per second.
- Published
- 2012
5. Goal directed, state and behavior based navigation algorithm for smart 'Robosofa' furniture
- Author
-
Narvydas, G., Maskeliunas, R., Raudonis, V., and Kauno technologijos universitetas
- Subjects
Engineering, Electrical and Electronic Engineering, Humanities, Simulation - Abstract
This article presents a navigation system for the intelligent furniture of the "RoboSofa" project, capable of navigating a known environment and reaching the target in the shortest possible time. The target can be set by a human using various input modalities or remote control devices. Additional environmental scanning and image recognition (location marker) features were implemented in this solution. Our solution works by fusing the data gathered from the sensors. This information is combined with a-priori knowledge of the environment to estimate the "Robosofa" position and send movement commands to the actuators. The main goal was to solve one of the most important problems of mobile robotics: successful locomotion with obstacle avoidance. The Khepera2-based mobile test platform, driven by the proposed control system, effectively avoids obstacles and reliably reaches the target point.
- Published
- 2011
6. Monitoring of Humans Traffic using Hierarchical Temporal Memory Algorithms
- Author
-
Sinkevicius, S., Simutis, R., Raudonis, V., and Kauno technologijos universitetas
- Subjects
Hierarchical temporal memory, Computer science, Frame (networking), Artificial intelligence, Data mining, Electrical and Electronic Engineering, Machine learning - Abstract
The main purpose of this paper is to investigate the application of the Hierarchical Temporal Memory (HTM) mechanism to counting human traffic in public places using web cameras. The proposed approach distinguishes humans from other objects in the current video frame, identifies the direction of movement and calculates the balance of the human traffic. As a result, all people entering and leaving a room can be counted and information about people traffic can be acquired. This information is very useful for further traffic monitoring and can be used for various traffic organization tasks.
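The counting logic this abstract describes (direction of movement plus an entry/exit balance) can be sketched, independently of the HTM detector, as a virtual-line crossing counter. This is an illustrative sketch, not the authors' code; the function names and the crossing rule are assumptions.

```python
def crossing_direction(track, line_y):
    """Return +1 if a centroid track crosses line_y downward (entering),
    -1 if it crosses upward (leaving), 0 if it never crosses.
    track is a sequence of per-frame y-coordinates of one tracked person."""
    for prev, curr in zip(track, track[1:]):
        if prev < line_y <= curr:   # moved from above the line to below it
            return 1
        if prev > line_y >= curr:   # moved from below the line to above it
            return -1
    return 0

def traffic_balance(tracks, line_y):
    """Net balance (entries minus exits) over all tracked people."""
    return sum(crossing_direction(t, line_y) for t in tracks)
```

Summing the per-track directions gives the room's occupancy change over the observation window, which is the "balance of the human traffic" the abstract refers to.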
- Published
- 2011
7. Hierarchical control approach for autonomous mobile robots
- Author
-
Proscevicius, T., Bukis, A., Raudonis, V., Eidukeviciute, M., and Kauno technologijos universitetas
- Subjects
Hierarchy, Engineering, Artificial neural network, Control (management), Robot, Hierarchical control system, Mobile robot, Robotics, Artificial intelligence, Electrical and Electronic Engineering, Fuzzy logic - Abstract
Methods for intelligent mobile robot control based on the principles of hierarchical control systems are reviewed in this article. Hierarchical intelligent mobile robots are a new direction in the development of robotics with wide application perspectives. Despite increasing technological progress, the main problem in the development of autonomous mobile robots remains their ineffective control. Each hierarchical control level (movement in space, problem solving and signal processing) is defined by specific control objectives, goals and rules. Communication and management between hierarchy levels are implemented by the higher level of the hierarchy, using information obtained about the environment and from the lower level. Studies have shown that artificial neural networks and fuzzy logic are widely used for the development of hierarchical systems. The main focus of the work is on communication between hierarchy levels, since the robot must be controlled in real time.
- Published
- 2011
8. Autoassociative gaze tracking system based on artificial intelligence
- Author
-
Proscevičius, T., Raudonis, V., Kairys, A., Lipnickas, A., Simutis, R., and Kauno technologijos universitetas
- Abstract
A real-time auto-associative interface between the user's gaze and a computer is presented and analyzed in this paper. Such an interface can give a disabled person more self-sufficiency and may help with the problem of public integration. This motivated the creation of a real-time, free-positioning gaze tracking system applicable to controlling computer applications. The gaze tracking precision, computer processing rate and robustness of the system were explored experimentally. An artificial neural network and principal component analysis are used in the presented system for auto-association between the user's gaze and the computer screen. The applied methods reduce the amount of received video data by filtering out unimportant information and thereby reduce the total computational burden of the system. A proper neural network structure and the number of principal components were estimated through a heuristic approach. The presented gaze tracking system was tested with computer applications in real time.
- Published
- 2010
9. 'iHouse' for advanced environment control
- Author
-
Kairys, A., Raudonis, V., Simutis, R., and Kauno technologijos universitetas
- Subjects
Image processing and computer vision - Abstract
In this paper an intuitive environment control system is proposed: image processing software is connected with an eye tracking system in order to wirelessly control electric devices. The system uses a web camera mounted on a wheelchair. Accordingly, a set of special markers was designed, each marker denoting a different device such as a door, window, light, etc. Markers are recognized in video frames captured from the surroundings. The marker position in a frame is compared with the mouse cursor position for intuitive control. Accuracy, speed and reliability experiments were carried out. The system is able to process a video frame in less than 0.1 s, and 84% recognition accuracy is achieved. The experiments show that the intuitive smart home system is suitable for real-time applications.
- Published
- 2010
10. An efficient object tracking algorithm based on dynamic particle filter
- Author
-
Raudonis, V., Simutis, R., Paulauskaitė-Tarasevičienė, A., and Kauno technologijos universitetas
- Abstract
Tracking a non-rigid object poses two difficulties. First, the object can change its scale or form, or be partially occluded by other objects; second, the location of the object can vary in a non-linear manner over time. This paper considers a method that solves this object tracking problem against a complex background. The problem is solved using a dynamic particle filter (DPF), in which the number of particles depends on the rate of change of the object's location. The dynamic particle filter algorithm was formalized using the formal DPLA (dynamic PLA) notation. The object is tracked according to its color features (an HSV histogram), which are independent of the object's scale and/or form. The proposed method is demonstrated by tracking real objects in video data. The results show that the object tracking rate is directly proportional to the number of generated particles and the tracking accuracy.
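A minimal one-dimensional sketch of the dynamic-particle-count idea follows. This is not the authors' implementation: the Gaussian likelihood stands in for the paper's HSV-histogram similarity, and all constants (particle bounds, noise scale) are assumptions for illustration.

```python
import math
import random

def dpf_step(particles, observation, speed, n_min=20, n_max=200, sigma=5.0):
    """One filter update: weight particles by similarity to the observation,
    adapt the particle count to the motion speed, then resample and diffuse."""
    # More particles when the target moves fast, fewer when it is slow.
    n = max(n_min, min(n_max, int(n_min + 10 * abs(speed))))
    weights = [math.exp(-((p - observation) ** 2) / (2 * sigma ** 2))
               for p in particles]
    total = sum(weights)
    if total == 0.0:  # all particles far away: fall back to uniform weights
        weights = [1.0] * len(particles)
        total = float(len(particles))
    weights = [w / total for w in weights]
    resampled = random.choices(particles, weights=weights, k=n)
    # Gaussian diffusion keeps the particle set diverse between frames.
    return [p + random.gauss(0.0, 1.0) for p in resampled]
```

In a 2-D tracker each particle would be a candidate bounding-box position and the weight would come from comparing its local HSV histogram with the target model, but the speed-dependent resizing of the particle set is the same.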
- Published
- 2009
11. A classification of flash evoked potentials based on artificial neural network
- Author
-
Raudonis, V., Narvydas, G., Simutis, R., and Kauno technologijos universitetas
- Abstract
This paper presents how certain sequences of light impulses (or narrow squares) can be used to increase the number of commands for a brain-computer interface based on visually evoked potentials. Observing certain components in the power spectrum of the measured EEG will not give the needed results, because the blinking of the stimulators is not periodic. Therefore, different methods must be used to recognize user commands. In this paper, an artificial neural network based classification of flash visual evoked potentials (FVEP) of the visual cortex, induced by an OFF-to-ON flash of a light source, is presented. The results presented below show that it is possible to classify EEG signals recorded while a person was visually stimulated with two types of stimuli.
- Published
- 2008
12. Arduino based Controller for the Smart Assistive Mobility Hardware
- Author
-
Lengvenis, P., primary, Maskeliunas, R., additional, and Raudonis, V., additional
- Published
- 2012
- Full Text
- View/download PDF
13. Goal Directed, State and Behavior based Navigation Algorithm for Smart “Robosofa” Furniture
- Author
-
Narvydas, G., primary, Maskeliunas, R., additional, and Raudonis, V., additional
- Published
- 2011
- Full Text
- View/download PDF
14. Hierarchical Control Approach for Autonomous Mobile Robots
- Author
-
Proscevicius, T., primary, Bukis, A., additional, Raudonis, V., additional, and Eidukeviciute, M., additional
- Published
- 2011
- Full Text
- View/download PDF
15. Non-Contact Vision-Based Techniques of Vital Sign Monitoring: Systematic Review.
- Author
-
Saikevičius L, Raudonis V, Dervinis G, and Baranauskas V
- Subjects
- Humans, Monitoring, Physiologic methods, Signal Processing, Computer-Assisted, Vital Signs physiology
- Abstract
The development of non-contact techniques for monitoring human vital signs has significant potential to improve patient care in diverse settings. By facilitating easier and more convenient monitoring, these techniques can prevent serious health issues and improve patient outcomes, especially for those unable or unwilling to travel to traditional healthcare environments. This systematic review examines recent advancements in non-contact vital sign monitoring techniques, evaluating publicly available datasets and signal preprocessing methods. Additionally, we identified potential future research directions in this rapidly evolving field.
- Published
- 2024
- Full Text
- View/download PDF
16. A pilot cost-analysis study comparing AI-based EyeArt® and ophthalmologist assessment of diabetic retinopathy in minority women in Oslo, Norway.
- Author
-
Karabeg M, Petrovski G, Hertzberg SN, Erke MG, Fosmark DS, Russell G, Moe MC, Volke V, Raudonis V, Verkauskiene R, Sokolovska J, Haugen IK, and Petrovski BE
- Abstract
Background: Diabetic retinopathy (DR) is the leading cause of adult blindness in the working-age population worldwide and can be prevented by early detection. Regular eye examinations are recommended and crucial for detecting sight-threatening DR. The use of artificial intelligence (AI) to lessen the burden on the healthcare system is needed., Purpose: To perform a pilot cost-analysis study for detecting DR in a cohort of minority women with DM in Oslo, Norway, who have the highest prevalence of diabetes mellitus (DM) in the country, using both manual (ophthalmologist) and autonomous (AI) grading. This is, as far as we know, the first study in Norway that uses AI in DR grading of retinal images., Methods: On Minority Women's Day, November 1, 2017, in Oslo, Norway, 33 patients (66 eyes) over 18 years of age diagnosed with DM (T1D and T2D) were screened. The Eidon True Color Confocal Scanner (CenterVue, United States) was used for retinal imaging, and images were graded for DR after screening had been completed, both by an ophthalmologist and automatically, using the EyeArt Automated DR Detection System, version 2.1.0 (EyeArt, EyeNuk, CA, USA). The gradings were based on the International Clinical Diabetic Retinopathy (ICDR) severity scale [1], detecting the presence or absence of referable DR. Cost-minimization analyses were performed for both grading methods., Results: 33 women (64 eyes) were eligible for the analysis. A very good inter-rater agreement was found between the human and AI-based EyeArt grading systems for detecting DR: 0.98 (P < 0.01). The prevalence of DR was 18.6% (95% CI: 11.4-25.8%), and the sensitivity and specificity were both 100% (95% CI: 100-100%). The cost of AI screening was $143 lower per patient than human screening (cost-saving, in favour of AI)., Conclusion: Our results indicate that the EyeArt AI system is a reliable, cost-saving, and useful tool for DR grading in clinical practice.
- Published
- 2024
- Full Text
- View/download PDF
17. Physical and Chemical Characteristics of Droppings as Sensitive Markers of Chicken Health Status.
- Author
-
Mozuriene E, Mockus E, Klupsaite D, Starkute V, Tolpeznikaite E, Gruzauskas V, Gruzauskas R, Paulauskaite-Taraseviciene A, Raudonis V, and Bartkiene E
- Abstract
The aim of this study was to analyze the physical and chemical characteristics of chicken droppings (n = 73), which were collected during different age periods and classified by visual inspection into normal (N) and abnormal (A). Significant differences were found in the texture, pH, dry matter (DM), fatty acids (FAs), short-chain fatty acids (SCFAs), and volatile compounds (VCs) between the tested dropping groups (p ≤ 0.05). The age period of the chickens had a significant influence on the color coordinates, texture, pH, DM, and SCFA contents in N and A, as well as on all FA contents in N (p ≤ 0.05). Droppings from the N group had a harder texture, lower values of the a* and b* color coordinates, higher DM content, a higher level of linoleic FA, and a lower level of α-linolenic FA than droppings from the A group in each age period (p ≤ 0.05). The predominant SCFA was acetic acid, the content of which was significantly lower in the N group than in the A group. The alcohol and organic acid contents were highest in most of the A group at different age periods, while ketones dominated in both the N and A groups. In conclusion, most of the tested dropping characteristics were influenced by the age period. While certain characteristics differed between N and A, a broader range of droppings is likely required to reveal more distinct trends in the distribution of characteristics across dropping types.
- Published
- 2024
- Full Text
- View/download PDF
18. Towards Early Poultry Health Prediction through Non-Invasive and Computer Vision-Based Dropping Classification.
- Author
-
Nakrosis A, Paulauskaite-Taraseviciene A, Raudonis V, Narusis I, Gruzauskas V, Gruzauskas R, and Lagzdinyte-Budnike I
- Abstract
The use of artificial intelligence techniques with advanced computer vision techniques offers great potential for non-invasive health assessments in the poultry industry. Evaluating the condition of poultry by monitoring their droppings can be highly valuable as significant changes in consistency and color can be indicators of serious and infectious diseases. While most studies have prioritized the classification of droppings into two categories (normal and abnormal), with some relevant studies dealing with up to five categories, this investigation goes a step further by employing image processing algorithms to categorize droppings into six classes, based on visual information indicating some level of abnormality. To ensure a diverse dataset, data were collected in three different poultry farms in Lithuania by capturing droppings on different types of litter. With the implementation of deep learning, the object detection rate reached 92.41% accuracy. A range of machine learning algorithms, including different deep learning architectures, has been explored and, based on the obtained results, we have proposed a comprehensive solution by combining different models for segmentation and classification purposes. The results revealed that the segmentation task achieved the highest accuracy of 0.88 in terms of the Dice coefficient employing the K-means algorithm. Meanwhile, YOLOv5 demonstrated the highest classification accuracy, achieving an ACC of 91.78%.
- Published
- 2023
- Full Text
- View/download PDF
19. Accuracy of a Smartphone-Based Artificial Intelligence Application for Classification of Melanomas, Melanocytic Nevi, and Seborrheic Keratoses.
- Author
-
Liutkus J, Kriukas A, Stragyte D, Mazeika E, Raudonis V, Galetzka W, Stang A, and Valiukeviciene S
- Abstract
Current artificial intelligence algorithms can classify melanomas at a level equivalent to that of experienced dermatologists. The objective of this study was to assess the accuracy of a smartphone-based "You Only Look Once" neural network model for the classification of melanomas, melanocytic nevi, and seborrheic keratoses. The algorithm was trained using 59,090 dermatoscopic images. Testing was performed on histologically confirmed lesions: 32 melanomas, 35 melanocytic nevi, and 33 seborrheic keratoses. The results of the algorithm's decisions were compared with those of two skilled dermatologists and five beginners in dermatoscopy. The algorithm's sensitivity and specificity for melanomas were 0.88 (0.71-0.96) and 0.87 (0.76-0.94), respectively. The algorithm surpassed the beginner dermatologists, who achieved a sensitivity of 0.83 (0.77-0.87). For melanocytic nevi, the algorithm outclassed each group of dermatologists, attaining a sensitivity of 0.77 (0.60-0.90). The algorithm's sensitivity for seborrheic keratoses was 0.52 (0.34-0.69). The smartphone-based "You Only Look Once" neural network model achieved a high sensitivity and specificity in the classification of melanomas and melanocytic nevi with an accuracy similar to that of skilled dermatologists. However, a bigger dataset is required in order to increase the algorithm's sensitivity for seborrheic keratoses.
- Published
- 2023
- Full Text
- View/download PDF
20. Towards Home-Based Diabetic Foot Ulcer Monitoring: A Systematic Review.
- Author
-
Kairys A, Pauliukiene R, Raudonis V, and Ceponis J
- Subjects
- Humans, Artificial Intelligence, Wound Healing, Ulcer, Diabetic Foot diagnosis, Diabetes Mellitus
- Abstract
It is estimated that 1 in 10 adults worldwide has diabetes. Diabetic foot ulcers are some of the most common complications of diabetes, and they are associated with a high risk of lower-limb amputation and, as a result, reduced life expectancy. Timely detection and periodic ulcer monitoring can considerably decrease amputation rates. Recent research has demonstrated that computer vision can be used to identify foot ulcers and perform non-contact telemetry by using ulcer and tissue area segmentation. However, the applications are limited to controlled lighting conditions, and expert knowledge is required for dataset annotation. This paper reviews the latest publications on the use of artificial intelligence for ulcer area detection and segmentation. The PRISMA methodology was used to search for and select articles, and the selected articles were reviewed to collect quantitative and qualitative data. Qualitative data were used to describe the methodologies used in individual studies, while quantitative data were used for generalization in terms of dataset preparation and feature extraction. Publicly available datasets were accounted for, and methods for preprocessing, augmentation, and feature extraction were evaluated. It was concluded that public datasets can be used to form a bigger, more diverse dataset, and that the prospects of wider image preprocessing and the adoption of augmentation require further research.
- Published
- 2023
- Full Text
- View/download PDF
21. Automatic Detection of Microaneurysms in Fundus Images Using an Ensemble-Based Segmentation Method.
- Author
-
Raudonis V, Kairys A, Verkauskiene R, Sokolovska J, Petrovski G, Balciuniene VJ, and Volke V
- Subjects
- Humans, Fundus Oculi, Image Processing, Computer-Assisted methods, Microaneurysm diagnostic imaging, Diabetic Retinopathy diagnostic imaging
- Abstract
In this study, a novel method for automatic microaneurysm detection in color fundus images is presented. The proposed method is based on three main steps: (1) image breakdown to smaller image patches, (2) inference to segmentation models, and (3) reconstruction of the predicted segmentation map from output patches. The proposed segmentation method is based on an ensemble of three individual deep networks, such as U-Net, ResNet34-UNet and UNet++. The performance evaluation is based on the calculation of the Dice score and IoU values. The ensemble-based model achieved higher Dice score (0.95) and IoU (0.91) values compared to other network architectures. The proposed ensemble-based model demonstrates the high practical application potential for detection of early-stage diabetic retinopathy in color fundus images.
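The Dice score and IoU reported in this abstract can be computed from binary segmentation masks as below; this is a stand-alone illustration of the two metrics, not the authors' evaluation code.

```python
def dice_iou(pred, truth):
    """Dice coefficient and IoU (Jaccard index) for binary masks.
    pred and truth are equal-length sequences of 0/1 pixel labels."""
    inter = sum(p & t for p, t in zip(pred, truth))
    p_sum, t_sum = sum(pred), sum(truth)
    union = p_sum + t_sum - inter
    # Empty-mask convention: a perfect match of two empty masks scores 1.
    dice = 2 * inter / (p_sum + t_sum) if p_sum + t_sum else 1.0
    iou = inter / union if union else 1.0
    return dice, iou
```

The two metrics are monotonically related (Dice = 2·IoU / (1 + IoU)), which is why an ensemble that tops one typically tops the other, as in the scores quoted above.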
- Published
- 2023
- Full Text
- View/download PDF
22. Efficient Violence Detection in Surveillance.
- Author
-
Vijeikis R, Raudonis V, and Dervinis G
- Subjects
- Violence, Machine Learning, Neural Networks, Computer
- Abstract
Intelligent video surveillance systems are rapidly being introduced in public places. The adoption of computer vision and machine learning techniques enables various applications for the collected video features; one of the major applications is safety monitoring. The efficacy of such a system is measured by the efficiency and accuracy of violent event detection. In this paper, we present a novel architecture for violence detection from video surveillance cameras. Our proposed model is a spatial-feature-extracting U-Net-like network that uses MobileNet V2 as an encoder, followed by an LSTM for temporal feature extraction and classification. The proposed model is computationally light and still achieves good results: experiments showed an average accuracy of 0.82 ± 2% and an average precision of 0.81 ± 3% on a complex real-world security camera footage dataset based on RWF-2000.
- Published
- 2022
- Full Text
- View/download PDF
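The paper aggregates per-frame spatial features over time with an LSTM. As a rough stand-in for that temporal step (a simple moving average over per-frame violence scores, not the paper's LSTM; the scores, window size, and threshold below are invented for illustration):

```python
def smooth_scores(frame_scores, window=5):
    """Trailing moving average over per-frame violence scores."""
    out = []
    for i in range(len(frame_scores)):
        lo = max(0, i - window + 1)
        seg = frame_scores[lo:i + 1]
        out.append(sum(seg) / len(seg))
    return out


def clip_label(frame_scores, threshold=0.5, window=5):
    """Flag a clip as violent if any smoothed score crosses the threshold."""
    return any(s >= threshold for s in smooth_scores(frame_scores, window))
```

Smoothing suppresses single-frame false alarms: one noisy spike in an otherwise calm clip is averaged away, while a sustained run of high scores still triggers the clip-level label.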
23. Fast Multi-Focus Fusion Based on Deep Learning for Early-Stage Embryo Image Enhancement.
- Author
-
Raudonis V, Paulauskaite-Taraseviciene A, and Sutiene K
- Subjects
- Automation, Image Enhancement, Deep Learning, Image Processing, Computer-Assisted
- Abstract
Background: Cell detection and counting are of essential importance in evaluating the quality of early-stage embryos. Full automation of this process remains challenging due to variation in cell size and shape and the presence of incomplete cell boundaries and partially or fully overlapping cells. Moreover, the algorithm should process a large number of images of varying quality in a reasonable amount of time., Methods: A multi-focus image fusion approach based on the deep learning U-Net architecture is proposed, which reduces the amount of data up to 7 times without losing the spectral information required for embryo enhancement in the microscopic image., Results: The experiment includes visual and quantitative analysis of image similarity metrics and processing times, compared against two well-known techniques: the Inverse Laplacian Pyramid Transform and Enhanced Correlation Coefficient Maximization., Conclusion: The image fusion time is substantially improved across image resolutions, whilst ensuring the high quality of the fused image.
- Published
- 2021
- Full Text
- View/download PDF
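For context on what multi-focus fusion does, a classical pixel-selection baseline can be sketched: for each pixel, keep the value from the focus-stack image with the strongest local Laplacian response. This is not the paper's U-Net approach, only a conventional reference technique on small grayscale arrays.

```python
def laplacian_abs(img, y, x):
    """|4-neighbour Laplacian| as a local sharpness cue (zero padding at borders)."""
    h, w = len(img), len(img[0])
    c = img[y][x]
    s = 0.0
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        ny, nx = y + dy, x + dx
        n = img[ny][nx] if 0 <= ny < h and 0 <= nx < w else 0
        s += n - c
    return abs(s)


def fuse_focus_stack(stack):
    """Per pixel, copy the value from the image with the sharpest local response."""
    h, w = len(stack[0]), len(stack[0][0])
    fused = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            best = max(stack, key=lambda img: laplacian_abs(img, y, x))
            fused[y][x] = best[y][x]
    return fused
```

Learned fusion such as the paper's U-Net replaces this hand-crafted sharpness cue with features trained end-to-end, which is what enables the reported speed and quality gains.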
24. Deep Learning Based Evaluation of Spermatozoid Motility for Artificial Insemination.
- Author
-
Valiuškaitė V, Raudonis V, Maskeliūnas R, Damaševičius R, and Krilavičius T
- Subjects
- Humans, Male, Neural Networks, Computer, Spermatozoa, Deep Learning, Insemination, Artificial, Semen Analysis
- Abstract
We propose a deep learning method based on the Region-Based Convolutional Neural Network (R-CNN) architecture for the evaluation of sperm head motility in human semen videos. The neural network performs the segmentation of sperm heads, while the proposed central coordinate tracking algorithm allows us to calculate the movement speed of sperm heads. We achieved 91.77% (95% CI, 91.11-92.43%) accuracy of sperm head detection on the VISEM (A Multimodal Video Dataset of Human Spermatozoa) sperm sample video dataset. The mean absolute error (MAE) of sperm head vitality prediction was 2.92 (95% CI, 2.46-3.37), while the Pearson correlation between actual and predicted sperm head vitality was 0.969. The experimental results demonstrate the applicability of the proposed method in an automated artificial insemination workflow.
- Published
- 2020
- Full Text
- View/download PDF
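The central coordinate tracking step reduces to computing a speed from per-frame centroids. A minimal sketch of that calculation (pixel units and a constant frame rate are assumed; the paper's exact formulation is not given in the abstract):

```python
import math

def track_speed(centroids, fps):
    """Average speed (pixels/second) of one tracked sperm head,
    given its (x, y) centroid in each consecutive frame."""
    if len(centroids) < 2:
        return 0.0
    dist = 0.0
    for (x0, y0), (x1, y1) in zip(centroids, centroids[1:]):
        dist += math.hypot(x1 - x0, y1 - y0)  # Euclidean step length
    elapsed = (len(centroids) - 1) / fps      # seconds spanned by the track
    return dist / elapsed
```

Converting pixels per second to micrometres per second would additionally require the microscope's spatial calibration.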
25. Towards the automation of early-stage human embryo development detection.
- Author
-
Raudonis V, Paulauskaite-Taraseviciene A, Sutiene K, and Jonaitis D
- Subjects
- Automation, Humans, Molecular Imaging, Time-Lapse Imaging, Embryonic Development, Image Processing, Computer-Assisted methods, Machine Learning
- Abstract
Background: Infertility and subfertility affect a significant proportion of humanity. Assisted reproductive technology has been proven capable of alleviating infertility issues. In vitro fertilisation is one such option whose success is highly dependent on the selection of a high-quality embryo for transfer. This is typically done manually by analysing embryos under a microscope. However, evidence has shown that the success rate of manual selection remains low. The use of new incubators with an integrated time-lapse imaging system is providing new possibilities for embryo assessment. As such, we address this problem by proposing an approach based on deep learning for automated embryo quality evaluation through the analysis of time-lapse images. Automatic embryo detection is complicated by the topological changes of a tracked object. Moreover, the algorithm should process a large number of image files of different qualities in a reasonable amount of time., Methods: We propose an automated approach to detect human embryo development stages during incubation and to highlight embryos with abnormal behaviour by focusing on five different stages. This method encompasses two major steps. First, the location of an embryo in the image is detected by employing a Haar feature-based cascade classifier and leveraging the radiating lines. Then, a multi-class prediction model is developed to identify the total cell number in the embryo using the technique of deep learning., Results: The experimental results demonstrate that the proposed method achieves an accuracy of at least 90% in the detection of embryo location. The implemented deep learning approach to identify the early stages of embryo development resulted in an overall accuracy of over 92% using the selected architectures of convolutional neural networks. The most problematic stage was the 3-cell stage, presumably due to its short duration during development., Conclusion: This research contributes to the field by proposing a model to automate the monitoring of early-stage human embryo development. Unlike other imaging fields, this one has seen only a few published attempts at leveraging deep learning. Therefore, the approach presented in this study could be used in the creation of novel algorithms integrated into the assisted reproductive technology used by embryologists.
- Published
- 2019
- Full Text
- View/download PDF
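The finding that the 3-cell stage is the weakest class is the kind of result a per-class breakdown of a confusion matrix exposes. A minimal sketch (the matrix values and stage labels below are invented, not the paper's results):

```python
def per_class_recall(confusion):
    """Recall per class from a confusion matrix given as a dict of dicts,
    where confusion[true_label][predicted_label] is a count."""
    recalls = {}
    for label, row in confusion.items():
        total = sum(row.values())
        recalls[label] = row.get(label, 0) / total if total else 0.0
    return recalls
```

A class like the 3-cell stage that is frequently confused with its neighbours (2-cell, 4-cell) shows up here as a low recall even when overall accuracy is high.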
26. HEMIGEN: Human Embryo Image Generator Based on Generative Adversarial Networks.
- Author
-
Dirvanauskas D, Maskeliūnas R, Raudonis V, Damaševičius R, and Scherer R
- Abstract
We propose a method for generating synthetic images of human embryo cells that could later be used for classification, analysis, and training, resulting in new synthetic image datasets for research areas lacking real-world data. Our focus was not only to generate a generic cell image, but to ensure that it has all the necessary attributes of a real cell image, providing a fully realistic synthetic version. We use human embryo images obtained during cell development processes for training a deep neural network (DNN). The proposed algorithm uses a generative adversarial network (GAN) to generate one-, two-, and four-cell stage images. We achieved a misclassification rate of 12.3% for the generated images, while expert evaluation showed a true recognition rate (TRR) of 80.00% (four-cell images), 86.8% (two-cell images), and 96.2% (one-cell images). Texture-based comparison using the Haralick features showed no statistically significant (Student's t-test, p < 0.01) differences between the real and synthetic embryo images, except for the sum-of-variance feature (one-cell and four-cell images) and the variance and sum-of-average features (two-cell images). The obtained synthetic images can later be adapted to facilitate the development, training, and evaluation of new algorithms for embryo image processing tasks., Competing Interests: The authors declare no conflict of interest.
- Published
- 2019
- Full Text
- View/download PDF
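The real-versus-synthetic texture comparison above hinges on a two-sample t-test over Haralick feature values. A sketch of the Welch variant of the t-statistic (statistic only; the abstract names Student's t-test, and obtaining a p-value would additionally require the t-distribution CDF, e.g. from scipy.stats):

```python
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t-statistic for two independent samples with
    possibly unequal variances (e.g. a Haralick feature measured
    on real vs. synthetic images)."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances
    se = (va / na + vb / nb) ** 0.5                  # standard error of the difference
    return (mean(sample_a) - mean(sample_b)) / se
```

A statistic near zero, as between identical samples, is consistent with the paper's finding that most Haralick features do not separate real from synthetic images.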