20 results for "Smeaton, Alan"
Search Results
2. K-Space at TRECVid 2006
- Author
-
Wilkins, Peter, Adamek, Tomasz, Ferguson, Paul, Hughes, Mark, Jones, Gareth J.F., Keenan, Gordon, McGuinness, Kevin, Malobabic, Jovanka, O'Connor, Noel E., Sadlier, David, Smeaton, Alan F., Benmokhtar, Rachid, Dumont, Emilie, Huet, Benoit, Mérialdo, Bernard, Spyrou, Evaggelos, Koumoulos, George, Avrithis, Yannis, Moerzinger, R., Schallauer, P., Bailer, W., Zhang, Qianni, Piatrik, Tomas, Chandramouli, Krishna, Izquierdo, Ebroul, Goldmann, Lutz, Haller, Martin, Sikora, Thomas, Praks, Pavel, Urban, Jana, Hilaire, Xavier, and Jose, Joemon M.
- Published
- 2006
3. K-Space at TRECVid 2008
- Author
-
Wilkins, Peter, Byrne, Daragh, Jones, Gareth J.F., Lee, Hyowon, Keenan, Gordon, McGuinness, Kevin, O'Connor, Noel E., O'Hare, Neil, Smeaton, Alan F., and Adamek, Tomasz
- Subjects
Information storage and retrieval systems, Image processing, Digital video, Information retrieval, Multimedia systems
- Abstract
In this paper we describe K-Space's participation in the TRECVid 2008 interactive search task. For 2008 the K-Space group performed one of the largest interactive video information retrieval experiments conducted in a laboratory setting: a multi-site, multi-system experiment across three institutions. In total 36 users participated, 12 each from Dublin City University (DCU, Ireland), the University of Glasgow (GU, Scotland) and Centrum Wiskunde & Informatica (CWI, the Netherlands). Three user interfaces were developed: two from DCU, which were also used in 2007, and one from GU. All interfaces leveraged the same search service. Using a Latin squares arrangement, each user conducted 12 topics, yielding 6 runs per site, 18 in total. We officially submitted 3 of these runs to NIST for evaluation, with an additional expert run using a 4th system. Our submitted runs performed around the median. In this paper we present an overview of the search system used, the experimental setup and a preliminary analysis of our results.
- Published
- 2008
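(The Latin squares arrangement mentioned in the abstract above is a standard device for balancing topic order across users. A minimal illustrative sketch, not the authors' code — the 12-user, 12-topic sizes follow the K-Space setup, the cyclic construction is an assumption:)

```python
# Illustrative sketch: a cyclic Latin square assigns every topic to every
# user exactly once, and balances the position at which each topic is run,
# so ordering and fatigue effects average out across the experiment.

def latin_square(n):
    """Row i is the topic sequence for user i: a cyclic shift of 0..n-1."""
    return [[(i + j) % n for j in range(n)] for i in range(n)]

square = latin_square(12)  # 12 users x 12 topics, as in the K-Space setup

# Every user sees every topic exactly once...
assert all(sorted(row) == list(range(12)) for row in square)
# ...and at each session position, every topic is run by exactly one user.
assert all(sorted(col) == list(range(12)) for col in zip(*square))
```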
4. K-Space at TRECVid 2007
- Author
-
Wilkins, Peter, Adamek, Tomasz, Byrne, Daragh, Jones, Gareth J.F., Lee, Hyowon, Keenan, Gordon, McGuinness, Kevin, O'Connor, Noel E., and Smeaton, Alan F.
- Subjects
Information storage and retrieval systems, Digital video
- Abstract
In this paper we describe K-Space participation in TRECVid 2007. K-Space participated in two tasks, high-level feature extraction and interactive search. We present our approaches for each of these activities and provide a brief analysis of our results. Our high-level feature submission utilized multi-modal low-level features which included visual, audio and temporal elements. Specific concept detectors (such as Face detectors) developed by K-Space partners were also used. We experimented with different machine learning approaches including logistic regression and support vector machines (SVM). Finally we also experimented with both early and late fusion for feature combination. This year we also participated in interactive search, submitting 6 runs. We developed two interfaces which both utilized the same retrieval functionality. Our objective was to measure the effect of context, which was supported to different degrees in each interface, on user performance. The first of the two systems was a ‘shot’ based interface, where the results from a query were presented as a ranked list of shots. The second interface was ‘broadcast’ based, where results were presented as a ranked list of broadcasts. Both systems made use of the outputs of our high-level feature submission as well as low-level visual features.
- Published
- 2007
5. TRECVID 2004 experiments in Dublin City University
- Author
-
Cooke, Eddie, Ferguson, Paul, Gaughan, Georgina, Gurrin, Cathal, Jones, Gareth J.F., Le Borgne, Hervé, Lee, Hyowon, Marlow, Seán, McDonald, Kieran, McHugh, Mike, Murphy, Noel, O'Connor, Noel E., O'Hare, Neil, Rothwell, Sandra, Smeaton, Alan F., and Wilkins, Peter
- Subjects
Digital video, Information retrieval
- Abstract
In this paper, we describe our experiments for the TRECVID 2004 Search task. In the interactive search task, we developed two versions of a video search/browse system based on the Físchlár Digital Video System: one with text- and image-based searching (System A); the other with image-based searching only (System B). These two systems produced eight interactive runs. In addition we submitted ten fully automatic supplemental runs and two manual runs.
A.1, Submitted runs:
• DCUTREC13a_{1,3,5,7} for System A: four interactive runs based on text and image evidence.
• DCUTREC13b_{2,4,6,8} for System B: four interactive runs based on image evidence alone.
• DCUTV2004_9: a manual run based on filtering faces from an underlying text search engine for certain queries.
• DCUTV2004_10: a manual run based on manually generated queries processed automatically.
• DCU_AUTOLM{1,2,3,4,5,6,7}: seven fully automatic runs based on language models operating over ASR text transcripts and visual features.
• DCUauto_{01,02,03}: three fully automatic runs exploring the benefits of multiple sources of text evidence and automatic query expansion.
A.2, The interactive experiment confirmed that text- and image-based retrieval outperforms an image-only system. In the fully automatic runs DCUauto_{01,02,03}, we found that integrating ASR, CC and OCR text into the text ranking outperforms using ASR text alone. Furthermore, applying automatic query expansion to the initial results of ASR, CC and OCR text further increases performance (MAP), though not at high rank positions. For the language-model-based fully automatic runs, DCU_AUTOLM{1,2,3,4,5,6,7}, we found that interpolated language models perform marginally better than the other language models tested, and that combining image and textual (ASR) evidence marginally increased performance (MAP) over textual models alone.
For our two manual runs we found that employing a face filter reduced MAP compared to employing textual evidence alone, and that manually generated textual queries improved MAP over fully automatic runs, though the improvement was marginal. A.3, Our conclusions from the fully automatic text-based runs are that integrating ASR, CC and OCR text into the retrieval mechanism boosts retrieval performance over ASR alone, and that a text-only language-modelling approach such as DCU_AUTOLM1 outperforms our best conventional text search system. From our interactive runs we conclude that textual evidence is an important lever for locating relevant content quickly, but that image evidence, if used by experienced users, can aid retrieval performance. A.4, We learned that incorporating multiple text sources improves over ASR alone, and that an LM approach which integrates shot text, neighbouring shots and entire video contents provides even better retrieval performance. These findings will influence how we integrate textual evidence into future video IR systems. We also found that a system based on image evidence alone can perform reasonably and, given good query images, can aid retrieval performance.
- Published
- 2004
6. TRECVID 2004 - an overview
- Author
-
Kraaij, Wessel, Smeaton, Alan F., and Over, Paul
- Subjects
Digital video, Information retrieval
- Published
- 2004
7. Dublin City University video track experiments for TREC 2003
- Author
-
Browne, Paul, Czirjék, Csaba, Gaughan, Georgina, Gurrin, Cathal, Jones, Gareth J.F., Lee, Hyowon, Marlow, Seán, McDonald, Kieran, Murphy, Noel, O'Connor, Noel E., O'Hare, Neil, Smeaton, Alan F., and Ye, Jiamin
- Subjects
Digital video, Information retrieval
- Abstract
In this paper, we describe our experiments for both the News Story Segmentation task and the Interactive Search task of TRECVID 2003. Our News Story Segmentation work used a Support Vector Machine (SVM) to combine evidence from audio-visual analysis tools in order to generate a listing of news stories from a given news programme. Our Search task experiment compared a video retrieval system based on text, image and relevance feedback with a text-only video retrieval system in order to identify which was more effective. To do so we developed two variations of our Físchlár video retrieval system and conducted user testing in a controlled lab environment. In this paper we outline our work on both tasks.
- Published
- 2003
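(The abstract above describes combining evidence from audio-visual analysis tools with an SVM to classify story boundaries. A hypothetical sketch of that fusion idea, not the authors' code — the feature meanings and data are invented, and a simple hinge-loss subgradient trainer stands in for a full SVM package:)

```python
# Illustrative sketch of SVM-based fusion: each candidate story boundary gets
# a feature vector of evidence scores from audio-visual analysis tools, and a
# linear SVM decides "story boundary" vs "not a boundary".
import random

def train_linear_svm(X, y, epochs=200, lr=0.05, lam=0.01):
    """Hinge-loss subgradient training of w.x + b; labels y in {-1, +1}."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:  # point violates the margin: step towards it
                w = [wj - lr * (lam * wj - yi * xj) for wj, xj in zip(w, xi)]
                b += lr * yi
            else:           # otherwise only apply regularisation
                w = [wj - lr * lam * wj for wj in w]
    return w, b

def predict(w, b, xi):
    return 1 if sum(wj * xj for wj, xj in zip(w, xi)) + b > 0 else -1

random.seed(0)
# invented feature columns: [silence score, anchor-person score, cut strength]
X = [[random.random() for _ in range(3)] for _ in range(200)]
# toy ground truth: boundaries combine silence with anchor reappearance
y = [1 if xi[0] + xi[1] > 1.0 else -1 for xi in X]

w, b = train_linear_svm(X, y)
acc = sum(predict(w, b, xi) == yi for xi, yi in zip(X, y)) / len(X)
print(f"training accuracy: {acc:.2f}")
```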
8. TRECVID 2003 - an overview
- Author
-
Smeaton, Alan F., Kraaij, Wessel, and Over, Paul
- Subjects
Digital video, Information retrieval
- Published
- 2003
9. Dublin City University video track experiments for TREC 2002
- Author
-
Browne, Paul, Czirjék, Csaba, Gurrin, Cathal, Jarina, Roman, Lee, Hyowon, Marlow, Seán, McDonald, Kieran, Murphy, Noel, O'Connor, Noel E., Smeaton, Alan F., and Ye, Jiamin
- Subjects
Information systems (information storage and retrieval), Digital video, Information retrieval
- Abstract
Dublin City University participated in the Feature Extraction task and the Search task of the TREC-2002 Video Track. In the Feature Extraction task, we submitted 3 features: Face, Speech, and Music. In the Search task, we developed an interactive video retrieval system, which incorporated the 40 hours of the video search test collection and supported user searching using our own feature extraction data along with the donated feature data and ASR transcript from other Video Track groups. This video retrieval system allows a user to specify a query based on the 10 features and ASR transcript, and the query result is a ranked list of videos that can be further browsed at the shot level. To evaluate the usefulness of the feature-based query, we have developed a second system interface that provides only ASR transcript-based querying, and we conducted an experiment with 12 test users to compare these 2 systems. Results were submitted to NIST and we are currently conducting further analysis of user performance with these 2 systems.
- Published
- 2002
10. Dublin City University at TRECVID 2008
- Author
-
Wilkins, Peter, Kelly, Philip, Ó Conaire, Ciarán, Foures, Thomas, Smeaton, Alan F., and O'Connor, Noel E.
- Abstract
In this paper we describe our system and experiments performed for both the automatic search task and the event detection task in TRECVid 2008. For the 2008 automatic search task we submitted 3 runs utilizing only visual retrieval experts, continuing our previous work in examining techniques for query-time weight generation for data fusion and in determining what we can get from global visual-only experts. For the event detection task we submitted results for 5 required events (ElevatorNoEntry, OpposingFlow, PeopleMeet, Embrace and PersonRuns) and 1 optional event (DoorOpenClose).
- Published
- 2008
11. TRECVid 2007 experiments at Dublin City University
- Author
-
Wilkins, Peter, Adamek, Tomasz, Jones, Gareth J.F., O'Connor, Noel E., and Smeaton, Alan F.
- Abstract
In this paper we describe our retrieval system and experiments performed for the automatic search task in TRECVid 2007. We submitted the following six automatic runs:
• F A 1 DCU-TextOnly6: Baseline run using only ASR/MT text features.
• F A 1 DCU-ImgBaseline4: Baseline visual expert only run, no ASR/MT used. Made use of query-time generation of retrieval expert coefficients for fusion.
• F A 2 DCU-ImgOnlyEnt5: Automatic generation of retrieval expert coefficients for fusion at index time.
• F A 2 DCU-imgOnlyEntHigh3: Combination of coefficient generation which combined the coefficients generated by the query-time approach and the index-time approach, with greater weight given to the index-time coefficient.
• F A 2 DCU-imgOnlyEntAuto2: As above, except that greater weight is given to the query-time coefficient that was generated.
• F A 2 DCU-autoMixed1: Query-time expert coefficient generation that used both visual and text experts.
- Published
- 2007
12. TRECVID 2007 - Overview
- Author
-
Over, Paul, Awad, George M., Kraaij, Wessel, and Smeaton, Alan F.
- Published
- 2007
13. Dublin City University at the TREC 2006 terabyte track
- Author
-
Ferguson, Paul, Smeaton, Alan F., and Wilkins, Peter
- Abstract
For the 2006 Terabyte track in TREC, Dublin City University's participation was focussed on the ad hoc search task. As in the previous two years [7, 4], our experiments on the Terabyte track have concentrated on the evaluation of a sorted inverted index, the aim of which is to sort the postings within each posting list in such a way that only a limited number of postings need to be processed from each list, while at the same time minimising the loss of effectiveness in terms of query precision. This is done using the Físréal search system, developed at Dublin City University [4, 8].
- Published
- 2006
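(The sorted-inverted-index idea in the abstract above can be sketched in a few lines. A minimal illustration under assumed details, not the paper's implementation — the toy index, scores and the name `top_subset_retrieval` are invented here:)

```python
# Sketch: postings in each list are pre-sorted by their score contribution,
# so query processing can stop after the top-k postings of each list while
# losing little effectiveness at early ranks.
from collections import defaultdict

# toy index: term -> postings (doc_id, contribution), pre-sorted descending
index = {
    "video": sorted([(1, 0.9), (2, 0.1), (3, 0.5)], key=lambda p: -p[1]),
    "search": sorted([(2, 0.8), (3, 0.7), (4, 0.2)], key=lambda p: -p[1]),
}

def top_subset_retrieval(query_terms, k):
    """Score documents using only the first k (highest-impact) postings per list."""
    scores = defaultdict(float)
    for term in query_terms:
        for doc_id, contrib in index.get(term, [])[:k]:
            scores[doc_id] += contrib
    return sorted(scores.items(), key=lambda s: -s[1])

print(top_subset_retrieval(["video", "search"], k=2))
```

With k=2 the low-impact postings (doc 2 for "video", doc 4 for "search") are never touched, yet the top-ranked documents are the same as a full evaluation would give on this toy data.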
14. TRECVID 2006 - an overview
- Author
-
Over, Paul, Ianeva, Tzveta, Kraaij, Wessel, and Smeaton, Alan F.
- Published
- 2006
15. TRECVid 2006 experiments at Dublin City University
- Author
-
Koskela, Markus, Wilkins, Peter, Adamek, Tomasz, Smeaton, Alan F., and O'Connor, Noel E.
- Abstract
In this paper we describe our retrieval system and experiments performed for the automatic search task in TRECVid 2006. We submitted the following six automatic runs:
• F A 1 DCU-Base 6: Baseline run using only ASR/MT text features.
• F A 2 DCU-TextVisual 2: Run using text and visual features.
• F A 2 DCU-TextVisMotion 5: Run using text, visual, and motion features.
• F B 2 DCU-Visual-LSCOM 3: Text and visual features combined with concept detectors.
• F B 2 DCU-LSCOM-Filters 4: Text, visual, and motion features with concept detectors.
• F B 2 DCU-LSCOM-2 1: Text, visual, motion, and concept detectors with negative concepts.
The experiments were designed to study the addition of motion features and of separately constructed models for semantic concepts to runs using only textual and visual features, as well as to establish a baseline for the manually-assisted search runs performed within the collaborative K-Space project and described in the corresponding TRECVid 2006 notebook paper. The results indicate that the performance of automatic search can be improved with suitable concept models. This, however, is very topic-dependent, and the questions of when to include such models and which concept models to include remain unanswered. Secondly, using motion features did not lead to performance improvement in our experiments. Finally, we observed that our text features, despite rather poor performance overall, may still be useful even for generic search topics.
- Published
- 2006
16. Dublin City University at the TREC 2005 terabyte track
- Author
-
Ferguson, Paul, Gurrin, Cathal, Smeaton, Alan F., and Wilkins, Peter
- Abstract
For the 2005 Terabyte track in TREC, Dublin City University participated in all three tasks: Ad hoc, Efficiency and Named Page Finding. Our runs for all tasks were primarily focussed on the application of "Top Subset Retrieval" to the Terabyte track. This retrieval utilises different types of sorted inverted indices so that fewer documents are processed, reducing query times in a way that minimises the loss of effectiveness in terms of query precision. We also compare a distributed version of our Físréal search system [1][2] against the same system deployed on a single machine.
- Published
- 2005
17. TRECVID 2005 - an overview
- Author
-
Over, Paul, Ianeva, Tzveta, Kraaij, Wessel, and Smeaton, Alan F.
- Published
- 2005
18. TRECVid 2005 experiments at Dublin City University
- Author
-
Foley, Colum, Gurrin, Cathal, Jones, Gareth J.F., Lee, Hyowon, McGivney, Sinéad, O'Connor, Noel E., Sav, Sorin Vasile, Smeaton, Alan F., and Wilkins, Peter
- Abstract
In this paper we describe our experiments in the automatic and interactive search tasks and the BBC rushes pilot task of TRECVid 2005. Our approach this year differs somewhat from previous submissions in that we implemented a multi-user search system using a DiamondTouch tabletop device from Mitsubishi Electric Research Labs (MERL). We developed two versions of our system: one with emphasis on efficient completion of the search task (Físchlár-DT Efficiency) and the other with more emphasis on increasing awareness among searchers (Físchlár-DT Awareness). We supplemented these with a further two runs, one for each system, in which we augmented the initial results with results from an automatic run. In addition to these interactive submissions we also submitted three fully automatic runs. We also took part in the BBC rushes pilot task, where we indexed the video by semi-automatic segmentation of objects appearing in the video; our search/browsing system allows full keyframe and/or object-based searching. In the interactive search experiments we found that the awareness system outperformed the efficiency system. We also found that supplementing the interactive results with the results of an automatic run improves both Mean Average Precision and Recall for both system variants. Our results suggest that providing awareness cues in a collaborative search setting improves retrieval performance. We also learned that multi-user searching is a viable alternative to the traditional single-searcher paradigm, provided the system is designed to effectively support collaboration.
- Published
- 2005
19. Experiments in terabyte searching, genomic retrieval and novelty detection for TREC 2004
- Author
-
Blott, Stephen, Boydell, Oisín, Camous, Fabrice, Ferguson, Paul, Gaughan, Georgina, Gurrin, Cathal, Jones, Gareth J.F., Murphy, Noel, O'Connor, Noel E., Smeaton, Alan F., Smyth, Barry, and Wilkins, Peter
- Abstract
In TREC 2004, Dublin City University took part in three tracks: Terabyte (in collaboration with University College Dublin), Genomics and Novelty. In this paper we discuss each track separately and present separate conclusions from each. In addition, we present a general description of a text retrieval engine that we have developed in the last year to support our experiments in large-scale, distributed information retrieval, and which underlies all of the track experiments described in this document.
- Published
- 2004
20. The TREC-2002 video track report
- Author
-
Smeaton, Alan F., and Over, Paul
- Abstract
TREC-2002 saw the second running of the Video Track, the goal of which was to promote progress in content-based retrieval from digital video via open, metrics-based evaluation. The track used 73.3 hours of publicly available digital video (in MPEG-1/VCD format) downloaded by the participants directly from the Internet Archive (Prelinger Archives) (internetarchive, 2002) and some from the Open Video Project (Marchionini, 2001). The material comprised advertising, educational, industrial, and amateur films produced between the 1930s and the 1970s by corporations, nonprofit organizations, trade associations, community and interest groups, educational institutions, and individuals. 17 teams representing 5 companies and 12 universities (4 from Asia, 9 from Europe, and 4 from the US) participated in one or more of the three tasks in the 2002 video track: shot boundary determination, feature extraction, and search (manual or interactive). Results were scored by NIST using manually created truth data for shot boundary determination and manual assessment of feature extraction and search results. This paper is an introduction to, and an overview of, the track framework (the tasks, data, and measures), the approaches taken by the participating groups, the results, and issues regarding the evaluation. For detailed information about the approaches and results, the reader should see the various site reports in the final workshop proceedings.
- Published
- 2002