Generating predictions is critical to successful speech and music perception. While the literature on auditory prediction is vast, it is usually limited in two ways: 1) it conflates predictability with repetition, and 2) it studies a single prediction in isolation. The predictions typically studied are therefore quite distinct from those humans actually deploy, which are highly complex and continuous. Our proposed projects take advantage of advances in computational modeling to study more realistic forms of prediction. The first project tests computational models of neural synchrony to examine how current models of prediction engage with higher-order temporal patterns. The second studies continuous prediction in music, using models of musical expectation in conjunction with magnetoencephalography (MEG). We relate the models' predictions to neural recordings to investigate how predictions of various features interact online. For both projects, we are seeking a master's student to help collect behavioral and MEG data as participants make continuous online predictions of novel sequences and/or natural music, either by tapping in synchrony or by making perceptual and aesthetic judgements. We expect our results to generate key insights into the mechanisms underlying predictive behavior and how they improve our perceptual systems.
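
Models of musical expectation typically quantify how surprising each event is given the preceding context. As a minimal illustration of that idea (not the specific models used in these projects; the corpus and pitch values below are hypothetical), a first-order Markov model over pitches can assign each note a surprisal, the negative log probability of the note given the previous one:

```python
import math
from collections import defaultdict

def train_bigram(melodies):
    """Count pitch-to-pitch transitions across a corpus of melodies."""
    counts = defaultdict(lambda: defaultdict(int))
    for melody in melodies:
        for prev, nxt in zip(melody, melody[1:]):
            counts[prev][nxt] += 1
    return counts

def surprisal(counts, prev, nxt, alphabet_size, alpha=1.0):
    """Surprisal -log2 p(nxt | prev), with add-alpha smoothing
    so unseen transitions get nonzero probability."""
    row = counts[prev]
    total = sum(row.values()) + alpha * alphabet_size
    p = (row[nxt] + alpha) / total
    return -math.log2(p)

# Toy corpus of MIDI pitch sequences (hypothetical data).
corpus = [[60, 62, 64, 62, 60], [60, 62, 64, 65, 64]]
counts = train_bigram(corpus)
n_pitches = len({p for m in corpus for p in m})

# A frequent transition (62 -> 64) carries less surprisal
# than a rare one (62 -> 60).
print(surprisal(counts, 62, 64, n_pitches))
print(surprisal(counts, 62, 60, n_pitches))
```

Richer expectation models extend the same surprisal principle to longer contexts and multiple musical features, which is what allows note-by-note model output to be aligned with continuous neural recordings.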