Researchers have argued for decades that functional programming can greatly simplify writing parallel programs by controlling side effects and avoiding race conditions. However, parallel functional programs have historically performed poorly in comparison to their imperative counterparts. The primary reason is that functional programs allocate memory at a high rate, and that rate only grows with parallelism, causing traditional memory management techniques to buckle under the increased demand. In this talk, I identify a memory property called *disentanglement*, which emerges naturally in functional programs (and, more generally, in race-free programs), and describe how to exploit it for improved efficiency and scalability, resulting in provably efficient parallel GC. We implemented these techniques in the MPL compiler (https://github.com/MPLLang/mpl), which extends the Standard ML (SML) functional programming language with multicore parallelism. Experimental results show excellent performance and scalability: on 72 processors, MPL achieves up to 63x speedup while often using less space than the sequential baseline. Initial cross-language comparisons suggest that MPL can outperform Java, Go, Multicore OCaml, and GHC Haskell, and even compete with highly optimized C/C++.
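
To make the setting concrete, below is a minimal sketch (not taken from the talk) of the kind of fork-join program MPL targets, assuming MPL's `ForkJoin.par` primitive, which runs two thunks in parallel and returns both results. Because neither branch reads or writes data allocated concurrently by the other, the program is race-free and therefore disentangled, which is the property the parallel GC exploits.

```sml
(* Sketch of a race-free fork-join computation in MPL.
 * ForkJoin.par : (unit -> 'a) * (unit -> 'b) -> 'a * 'b
 * Each recursive call allocates only fresh data and never touches
 * allocations made concurrently by its sibling branch, so the
 * computation is disentangled. *)
fun fib n =
  if n < 2 then n
  else
    let
      val (a, b) =
        ForkJoin.par (fn () => fib (n - 1), fn () => fib (n - 2))
    in
      a + b
    end
```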