In 1987, C. Squier wrote "Word problems and a homological finiteness condition for monoids," which proved a fascinating result that spawned an entire field, but which is little known outside of it. The great mathematical popularizer and category theorist John Baez sketched the ideas in 1995. We consider "word problems," which ask whether two terms are equal modulo a set of equivalences, restrict ourselves to simple objects called "monoids" that many functional programmers are fond of, and ask about the decidability of equality over them. This amounts to looking at strings and asking when two of them are equal, given rules that let us replace certain contiguous subsequences with others. (Such questions arise ubiquitously in interesting computational settings -- consider for example the equivalence of sequences of patches, or of edit actions across a distributed system.) The way computer scientists would think to answer this is to see if you can rewrite both sides of the equation into a single canonical form that you can compare for equality. Indeed, that's what Don Knuth and Peter Bendix did, and the result is the Knuth-Bendix completion algorithm, used in theorem provers and many other applications.
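To make the "canonical form" idea concrete, here is a minimal sketch of my own (not taken from Squier's paper or the Knuth-Bendix literature): assuming we already have a rewriting system that is terminating and confluent, we decide equality of two words by rewriting each to its unique normal form and comparing. The rules below, for the toy presentation of the free commutative monoid on two generators (the single rule "ba" -> "ab"), are purely illustrative.

```haskell
import Data.List (isPrefixOf)

-- A rewrite rule: replace the left-hand side by the right-hand side.
type Rule = (String, String)

-- Toy example: the free commutative monoid on {a, b}, presented by ba = ab,
-- oriented as the rule "ba" -> "ab" (terminating and confluent).
rules :: [Rule]
rules = [("ba", "ab")]

-- Apply the first rule that matches at the leftmost possible position.
step :: [Rule] -> String -> Maybe String
step _  []         = Nothing
step rs str@(c:cs) =
  case [rhs ++ drop (length lhs) str | (lhs, rhs) <- rs, lhs `isPrefixOf` str] of
    (s:_) -> Just s
    []    -> (c :) <$> step rs cs

-- Rewrite until no rule applies; for a convergent system this is the
-- unique normal form.
normalize :: [Rule] -> String -> String
normalize rs s = maybe s (normalize rs) (step rs s)

-- Two words denote the same monoid element iff their normal forms coincide.
equalWords :: String -> String -> Bool
equalWords u v = normalize rules u == normalize rules v

main :: IO ()
main = print (equalWords "abba" "baab")  -- True: both normalize to "aabb"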
But just how universal is the Knuth-Bendix approach? Well, Squier showed that there are finitely presented monoids with decidable word problems that nonetheless admit no finite canonical rewriting system of the sort Knuth-Bendix completion aims to produce. And furthermore, he showed that this result derives from considering our systems with the tools of modern algebraic topology! In particular, he showed how to compute the homology of a monoid from a presentation of it.
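Roughly (and this is my summary, not Squier's exact phrasing): to a monoid M one can attach abelian groups H_n(M), its homology groups, and the theorem runs

    M admits a finite convergent rewriting system  =>  H_3(M) is finitely generated.

Squier then exhibits finitely presented monoids with decidable word problem whose H_3 is not finitely generated, so for them no such rewriting system can exist.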
Ever since then, people have been seeking to generalize Squier's result in new and exciting ways. One of the niftiest and newest was presented last year at FSCD: this talk's paper, which I love, but do not claim to fully understand. Instead of a monoid, we consider an arbitrary "algebraic theory" (say, the syntax trees of a programming language together with some equations between certain forms of trees). And we now ask not about the word problem, but about the minimum number of equations needed to present such a theory. The answer, which can be computed with an algorithm, comes from even more, and more generalized, homology. The purpose of this talk is to make the above understandable to a lay audience, to sketch some idea of how to think topologically about things that arise in computer science, and to provide an invitation to the basic notions of homology.