Memory is expressed in multiple functionally and anatomically distinct systems that interact over the course of learning and during the retrieval of information about the world. One method to determine the kind of memory representation (episodic and/or procedural) guiding behavior is to examine the expression of expectations (i.e., predictions) in overt eye-movements, which reflect the allocation of attention based on long-term memory. Memory-guided attention provides a means to assay how a previously learned association (e.g., the location of a key in a scene) facilitates the allocation of attention to an expected location in the world. Over the past fifteen years, a long-term memory variant of the contextual cueing paradigm has been used to investigate the neural mechanisms that support the allocation of attention based on repeatedly learned expectations about a target object's location in a scene. Results from these studies have led to a consensus view that memory-guided attention is a hippocampus-dependent behavior. However, it is possible that other forms of memory, including hippocampus-independent stimulus-response learning, also contribute to memory-guided attentional allocation. In Chapter 1, we provide a broad review of memory-guided attention. Specifically, we survey hippocampal prediction, multiple forms of memory representation, previous studies of memory-guided attention, and how eye-movements can be leveraged to discover which kind or kinds of memory guide attention. In Chapter 2, we first leverage the similarity of eye-movement trajectories (scanpaths) to quantify how expectations are strengthened across learning. Additionally, we determine whether the magnitude of scanpath encoding-retrieval similarity (SERS) predicts the precision with which the location of a target object in a scene is remembered.
Results reveal that overt expressions of attention, indexed by the sequential replay of eye-movements across a scene during repeated viewing, become increasingly similar across learning. Moreover, stronger expectations during retrieval lead to more sequential replays of eye-movements toward the recalled target location in a scene. Finally, we report that the degree of SERS predicts the precision of memory for a target's location. Collectively, these results reveal that expectations are strengthened during learning, reinstated at retrieval, and related to memory precision. In Chapter 3, we specify the kind of memory representation(s) guiding attention. To do so, we change the viewpoint through mirror-reversal of a scene to induce competition between overtly expressed expectations driven by stimulus-response associations and expectations based on episodic (i.e., relational) memory, which captures the relationship between the location of the target object and other elements of the scene. We find evidence that both kinds of memory representation -- stimulus-response and relational -- can facilitate the allocation of attention, as indexed by fixation distance to the target and by SERS. Collectively, these results reveal that multiple forms of memory representation can guide the allocation of attention to facilitate finding a target within a previously encountered scene. In Chapter 4, we conclude with a discussion of the broader implications of this work along with open questions for future investigation. [The dissertation citations contained here are published with the permission of ProQuest LLC. Further reproduction is prohibited without permission. Copies of dissertations may be obtained by Telephone (800) 521-0600. Web page: http://www.proquest.com/en-US/products/dissertations/individuals.shtml.]