Computation is foundational to every step of the modern data analysis pipeline: importing, tidying, transforming, visualizing, modeling, and communicating data. However, this fact is generally not reflected in traditional statistics curricula, which too often assume that computing is something students can pick up on their own and that courses should focus primarily on theory and applications. Data science curricula, on the other hand, put computation front and center. The rise of data science has given us an opportunity to re-examine traditional curricula through a new, computational lens. In this talk, I will highlight how computation can support and enhance the teaching and learning of fundamental statistical concepts such as uncertainty quantification and prediction. I will place these ideas within the context of a curriculum for an introductory data science and statistical thinking course that emphasizes explicit instruction in computing. Finally, I will situate the course and its learning objectives within a broader undergraduate statistics program.