This is very impressive, and could be a step on the way to sentience, or at least to human-level general AI, but you don't have to know the technical details to realize LaMDA can have no understanding of what it's saying.
It is excellent at mashing up human-written text from the Internet to look like the kind of thing humans would write (which is what it's doing), but LaMDA has no concepts at all. To understand any noun, adjective or verb, you need lots of concepts acquired from the real world via the senses. But LaMDA has no senses. It does not know what 'red' looks like. It does not know that cats have four legs, even though it would say 'four' if you asked it how many legs a cat has, because it doesn't know what a cat or a leg is. It has never experienced either.
It simply knows that the letters 'f o u r' are the kind of thing humans type when they see the letters 'h o w m a n y l e g s d o e s a c a t h a v e' (or other similar combinations of letters). If your memory were good enough, you could learn to type such things in response to equivalent questions in Swahili or Chinese without any knowledge of those languages either. Learning to do this would give you no information at all about what your responses mean.
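To make the point concrete, here is a deliberately crude sketch in Python. It is not how LaMDA actually works internally (a real model predicts tokens statistically with a large neural network rather than by literal lookup, and the question/answer strings below are invented for illustration), but it shows how a program can produce the 'right' answer while containing no concept of a cat, a leg, or a number anywhere in it:

```python
# Hypothetical "training data": question/answer strings as they might appear in text.
observed_pairs = [
    ("how many legs does a cat have", "four"),
    ("what colour is a ripe tomato", "red"),
]

# "Learning" here is just remembering which string of characters co-occurred with which.
lookup = {question: answer for question, answer in observed_pairs}

def respond(prompt: str) -> str:
    # Normalise the characters and return whatever string was associated with them;
    # nothing in this function knows what any of the words refer to.
    key = " ".join(prompt.lower().split()).rstrip("?")
    return lookup.get(key, "")

print(respond("How many legs does a cat have?"))  # prints "four"
```

Scaling this kind of association up to billions of statistical patterns makes the answers far more fluent and flexible, but it does not by itself put a cat, a leg, or the colour red anywhere inside the program.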