There are two main theories to explain the mechanisms through which statistical learning occurs: bracketing and chunking. The bracketing approach (e.g., Cairns, Shillcock, Chater, & Levy, 1997; Christiansen, Allen, & Seidenberg, 1998; Saffran, Aslin, & Newport, 1996) suggests that statistical learning occurs through the computation of transitional probabilities between successive items in a sequence, including both forward and backward transitional probabilities as well as long-distance dependencies. There are several different models of the chunking approach, but the most empirically validated is PARSER (Perruchet, 2019; Perruchet & Vinter, 1998). PARSER suggests that statistical learning occurs through the extraction of units from the sequence, initially at random. If a unit is encountered again, its representation is strengthened in memory; as time passes between activations of that representation, it weakens through decay. Furthermore, encountering other units that share constituent features produces interference, further weakening the representation of that unit. As the strength of each representation is continually adjusted, stable representations form for true units (i.e., ones that actually exist within the sequence), while representations of false units (i.e., mis-parsings of the sequence) become weaker.

These two approaches to statistical learning differ in important ways. In the bracketing approach, transitional probabilities are computed between all co-occurring units. These calculations are thought to be continuously updated with each additional exposure, and the mental representations of both the full unit (e.g., abc) and the sub-units that compose it (e.g., ab and bc) are reinforced as training continues. In contrast, the PARSER approach to chunking holds that the growing strength of a unit interferes with the representations of its sub-units, causing those sub-unit representations to decay.
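The contrast between the two mechanisms can be illustrated with a toy simulation. The following sketch is purely illustrative and not either model's actual implementation: the function names, the parameters (`gain`, `decay`, `interference`), and the random percept-sampling scheme are all simplifying assumptions.

```python
import random
from collections import defaultdict

def transitional_probabilities(seq):
    """Bracketing-style statistic: forward P(next | current) over adjacent items."""
    pair_counts, item_counts = defaultdict(int), defaultdict(int)
    for a, b in zip(seq, seq[1:]):
        pair_counts[(a, b)] += 1
        item_counts[a] += 1
    return {(a, b): n / item_counts[a] for (a, b), n in pair_counts.items()}

def parser_sketch(stream, steps=4000, gain=1.0, decay=0.01,
                  interference=0.005, rng=None):
    """Toy PARSER-style chunker: a randomly sized percept (1-3 items) is
    strengthened; every other candidate unit decays, and candidates that
    share items with the percept suffer additional interference."""
    rng = rng or random.Random(0)
    weights = defaultdict(float)
    pos = 0
    for _ in range(steps):
        size = rng.randint(1, 3)
        if pos + size > len(stream):  # wrap around at the end of the stream
            pos = 0
        percept = tuple(stream[pos:pos + size])
        pos += size
        weights[percept] += gain
        for unit in list(weights):
            if unit != percept:
                weights[unit] -= decay
                if set(unit) & set(percept):   # shared constituent features
                    weights[unit] -= interference
                if weights[unit] <= 0:         # forgotten candidates drop out
                    del weights[unit]
    return dict(weights)
```

On a stream built from repeating triplets, `transitional_probabilities` preserves statistics for both units and their sub-units, whereas the chunking sketch tends to accumulate weight on whole recurring chunks while overlapping sub-unit candidates are eroded by decay and interference, mirroring the theoretical contrast described above.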
Giroux & Rey (2009) directly compared the representation of units and sub-units using both a computational model and human participants in an auditory task with nonsense syllables. During the test phase, both simulated and human participants were asked to choose between a partial unit (i.e., a unit that spanned a boundary between units; e.g., CD from abC and Def) and either a true unit or a sub-unit. After 10 minutes of exposure, both simulated and human participants were more accurate at choosing units than at choosing sub-units. This suggests that, in line with the chunking approach to statistical learning, participants had learned the full units better than their constituent components. Although the Giroux & Rey (2009) study provided strong support for the chunking approach, it, like most statistical learning studies to date, tested learning with explicit methods in which participants must judge how likely it is that a unit came from the training sequence. This stands in stark contrast to the nature of the learning itself, which is often described as an implicit rather than explicit process (e.g., Conway, 2020). Recently, however, tasks based on reaction times have been devised to examine more implicit learning of statistical structure (e.g., Batterink, Reber, Neville, & Paller, 2015; Siegelman, Bogaerts, Kronenfeld, & Frost, 2018). In one such task (Siegelman et al., 2018), participants view a series of sequentially presented stimuli that are covertly arranged into triplets. Unlike traditional statistical learning paradigms in which the stimuli are presented at a fixed rate, participants choose their own rate of progression by pressing a button following the presentation of each stimulus.
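The covert triplet structure of such a stream can be sketched as follows. This is an illustrative assumption about the design, not Siegelman et al.'s exact procedure; the function name and the no-immediate-repeat constraint are ours.

```python
import random

def make_covert_triplet_stream(triplets, n_triplet_tokens, seed=0):
    """Concatenate triplets in pseudorandom order (no triplet repeated
    back-to-back), so the only cue to structure is statistical:
    within-triplet transitions are certain, while transitions across
    triplet boundaries are not."""
    rng = random.Random(seed)
    stream, prev = [], None
    for _ in range(n_triplet_tokens):
        triplet = rng.choice([t for t in triplets if t != prev])
        stream.extend(triplet)
        prev = triplet
    return stream
```

In a self-paced version of the task, the items of such a stream would be shown one at a time, with each button press advancing to the next item and the interval between presses recorded as the reaction time.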
Siegelman and colleagues found that, after controlling for between-subject variability, participants typically took longer to press the button after a stimulus in the first position of a triplet than after stimuli in the second or third position, with no reaction-time difference between the latter two. Notably, the second and third stimuli of a triplet are always predictable from the preceding stimulus, whereas stimuli in the first position are less predictable because they can follow any of the other triplets in the stimulus set. As such, quicker reaction times for stimuli in the second and third positions suggest that participants extracted the statistical structure of the sequence and anticipated the latter two stimuli in each triplet. The present experiment proposes to use this self-paced statistical learning paradigm (Siegelman et al., 2018) to examine the implicit learning of units compared to sub-units in a visual statistical learning task. In particular, it examines how reaction times to stimuli embedded within learned triplets change when a triplet is rearranged so that its first stimulus is moved to the end (e.g., Abc becomes bcA).
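The rearrangement manipulation can be sketched as a rotation of each learned triplet. These are hypothetical helper functions for illustration only; the experiment's actual stimulus-handling code is not specified here.

```python
def rotate_triplet(triplet):
    """Move the first stimulus to the end, e.g. ('A', 'b', 'c') -> ('b', 'c', 'A').
    The learned within-triplet transition b -> c is preserved, while A, formerly
    the unpredictable triplet-initial stimulus, now follows a predictable pair."""
    return tuple(triplet[1:]) + (triplet[0],)

def within_triplet_position(stream_index):
    """1-based position (1-3) of a stimulus within its triplet, assuming the
    stream is a pure concatenation of triplets with no fillers."""
    return stream_index % 3 + 1
```

Coding each stream index by its within-triplet position in this way allows reaction times to be compared across the first (less predictable) and the second and third (predictable) positions, both before and after the rotation.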