> No problem with floats or strings as far as specification goes. The particular verification tools you choose to run on your TLA+ spec may or may not have limitations in these areas, though.
I think it's disingenuous to say that TLA+ verifiers "may or may not have limitations" wrt floats when none of the available tools support floats. People should know going in that they won't be able to verify specs with floats!
I'm not sure how a "spec with floats" differs from a spec with networks, RAM, 64-bit integers, multi-level cache, or any computing concept, none of which exists as a primitive in mathematics. A floating point number is a pair of integers, or sometimes we think about it as a real number plus some error, and TLAPS can check theorems about specifications that describe floating-point operations.
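For instance, here's a minimal sketch of the mantissa-exponent view (the module name, constants, and field names are mine, not anything built into a tool):

    ---- MODULE FloatSketch ----
    EXTENDS Reals

    \* A float as a pair of integers: a mantissa and an exponent.
    \* Prec, EMin, EMax are illustrative; e.g. 53, -1074, 971 for binary64.
    CONSTANTS Prec, EMin, EMax

    Float == [m : -(2^Prec - 1)..(2^Prec - 1), e : EMin..EMax]

    \* The real number a float denotes: m * 2^e.
    Val(f) == IF f.e >= 0 THEN f.m * 2^f.e ELSE f.m / 2^(-f.e)
    ============================

TLC can't evaluate definitions over the Reals module, but TLAPS can prove theorems about them.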
Of course, things can become more involved if you want to account for overflow, but overflow can get complicated even with integers.
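For example, here's what wrap-around 32-bit addition might look like next to mathematical addition (a sketch; the width and two's-complement encoding are my choices):

    ---- MODULE IntOverflow ----
    EXTENDS Integers

    \* Mathematical addition is just +; two's-complement 32-bit addition
    \* wraps. Which one you want depends on the property being checked.
    Int32 == -(2^31)..(2^31 - 1)

    WrapAdd(a, b) == ((a + b + 2^31) % 2^32) - 2^31
    ============================

For instance, WrapAdd(2^31 - 1, 1) = -(2^31), whereas the mathematical sum is 2^31.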
You say no tools, but you can "verify floats" with TLAPS. I don't think that RAM or 64-bit integers have facsimiles in TLA+ either. They can be described mathematically in TLA+ to whatever level of detail you're interested in (e.g. you have to be pretty detailed when describing RAM when specifying a GC, and even more so when specifying a CPU's memory-access subsystem), but so can floating point numbers. The least detailed description - say, RAM is just data - is not all that different from representing floats as reals (though that also requires TLAPS for verification).
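To make the comparison concrete, the "RAM is just data" level of detail is something like (names are mine):

    ---- MODULE RamSketch ----
    \* At the least detailed level, RAM is a function from addresses to
    \* values; reads are function application, writes are function update.
    CONSTANTS Addr, Val

    Ram == [Addr -> Val]
    Read(ram, a) == ram[a]
    Write(ram, a, v) == [ram EXCEPT ![a] = v]
    ============================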
The complications in describing machine-representable numbers also apply to integers, but these complications can be important, and the level of detail matters just as it does when representing RAM or any other computing concept. Unlike, say, strings, there is no single "natural" mathematical representation of floating point numbers, just as there isn't one for software integers (integers work differently in C, Java, JS, and Zig; in some situations you may wish to ignore these differences, in others not). You may want to think about floating point numbers as a real plus an error, or as a mantissa-exponent pair, perhaps with overflow and perhaps without. The "right" representation of a floating point number depends heavily on the properties you wish to examine, just like any other computing construct. These complications are essential, and they exist in much the same form in other languages for formal mathematics.
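As a sketch of the "real + error" view (Eps is my stand-in for the unit roundoff, e.g. 1/2^53 for binary64 round-to-nearest):

    ---- MODULE RealPlusError ----
    EXTENDS Reals

    \* An FP addition yields *some* real within relative error Eps of
    \* the true sum; the nondeterminism abstracts over rounding.
    CONSTANT Eps

    FpSum(x, y) == {r \in Real : \E d \in Real :
                       /\ d >= -Eps
                       /\ d <= Eps
                       /\ r = (x + y) * (1 + d)}
    ============================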
P.S. for the case of computing a mean, I would use Real rather than try to model floating point unless the idiosyncrasies of a particular FP implementation were important. That means you can't use TLC. In some situations it could suffice to represent the mean as any number (even an integer) that is ≥ min and ≤ max, but TLC isn't very effective even for algorithms involving non-tiny sets of integers when there's "interesting" arithmetic involved.
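That "any number between min and max" abstraction is easy to write down (a sketch over integers so that TLC could in principle enumerate it; Min/Max are my helpers):

    ---- MODULE MeanSketch ----
    EXTENDS Integers

    \* The mean of a finite nonempty set S of integers, abstracted as
    \* *any* value from Min(S) to Max(S); a next-state relation can then
    \* pick one nondeterministically, e.g. mean' \in PossibleMeans(S).
    Min(S) == CHOOSE x \in S : \A y \in S : x <= y
    Max(S) == CHOOSE x \in S : \A y \in S : x >= y

    PossibleMeans(S) == Min(S)..Max(S)
    ============================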
I don't know the state of contemporary model checkers that work with theories of reals and/or FP, and I'm sure you're much more familiar with that than me, but I believe that when it comes to numeric computation, deductive proofs or "sampling tests" (such as property-based testing) are still more common than model-checking. It could be interesting to add a random sampling mode to TLC that could simulate many operations on reals using BigDecimal internally.