Multiplicative structure often shows up in number theory when one considers common arithmetic functions (functions whose domain is the integers). An arithmetic function $f$ is called multiplicative if it satisfies
$$f(mn) = f(m)\, f(n) \tag{1}$$
for all relatively prime integers $m, n$. In addition, we say that $f$ is completely multiplicative if it satisfies (1) for all integers $m, n$, not just those which are relatively prime. From now on we will use the term "multiplicative" to refer to completely multiplicative functions and take our domain to be the reals.
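To make the distinction concrete, here is a small sketch (my own illustration, not from the text) using the divisor-counting function $d(n)$, which is multiplicative but not completely multiplicative.

```python
def num_divisors(n: int) -> int:
    # d(n), the number of positive divisors of n: multiplicative,
    # but not completely multiplicative.
    return sum(1 for k in range(1, n + 1) if n % k == 0)

# (1) holds when the arguments are relatively prime ...
m, n = 4, 9
print(num_divisors(m * n) == num_divisors(m) * num_divisors(n))  # True: 9 == 3 * 3

# ... but it can fail otherwise, so d is not completely multiplicative.
m, n = 2, 4
print(num_divisors(m * n) == num_divisors(m) * num_divisors(n))  # False: 4 != 2 * 3
```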
In contrast, some functions have additive structure. For example, for any real $x, y$, the line $f(x) = cx$ for some fixed constant $c$ satisfies
$$f(x + y) = f(x) + f(y) \tag{2}$$
since $c(x + y) = cx + cy$.
Some functions turn multiplicative structure into additive structure; for example, for any real $x, y > 0$, the logarithm function (base irrelevant) satisfies
$$\log(xy) = \log(x) + \log(y), \tag{3}$$
which is an interesting and often quite useful property turning multiplications into additions. However, there is no easy formula (that I know of) for $\log(x + y)$, which is quite unfortunate.
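As a quick numerical sanity check (my own, using NumPy), both properties are easy to observe:

```python
import numpy as np

c, x, y = 2.5, 3.0, 7.0

# (2): a line through the origin is additive.
print(np.isclose(c * (x + y), c * x + c * y))            # True

# (3): the logarithm turns multiplication into addition ...
print(np.isclose(np.log(x * y), np.log(x) + np.log(y)))  # True
# ... but there is no comparably simple rule relating log(x + y)
# to log(x) and log(y).
print(np.isclose(np.log(x + y), np.log(x) + np.log(y)))  # False
```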
Every symmetric positive-definite (s.p.d.) matrix $A$ has a lower triangular Cholesky factor $L$ satisfying $A = LL^\top$. In some sense all matrix factorizations are product structures, since they write a complicated matrix as the product of matrices that are hopefully simpler or easier to work with. Expanding $LL^\top$ reveals the hidden additive structure of the Cholesky factorization, owing to the fact that the lower triangularity of $L$ implies a sort of decreasing or nested structure. Indeed, we can write $A$ as the contributions of rank-one matrices given by the outer products of the columns $\ell_j$ of $L$,
$$A = LL^\top = \sum_{j=1}^{n} \ell_j \ell_j^\top \tag{4}$$
(which is generally true for any matrix product), where each outer product affects smaller and smaller submatrices of $A$ (from the triangularity of $L$).
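Here is a minimal NumPy sketch of (4), my own illustration: build an s.p.d. matrix, factor it, and reassemble it from the rank-one contributions of the columns of $L$.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = B @ B.T + 5 * np.eye(5)          # a generic s.p.d. matrix
L = np.linalg.cholesky(A)            # lower triangular, A = L @ L.T

# (4): A is the sum of rank-one outer products of the columns of L.
A_rebuilt = sum(np.outer(L[:, j], L[:, j]) for j in range(5))
print(np.allclose(A, A_rebuilt))     # True

# Triangularity: column j of L vanishes above row j, so its outer product
# only touches the trailing (n - j) x (n - j) block of A.
j = 2
print(np.count_nonzero(np.outer(L[:, j], L[:, j])[:j, :]))   # 0
```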
Although the Cholesky factor is both an additive and a multiplicative factorization, the operator itself does not play well with addition or multiplication. For multiplication, the product of two s.p.d. matrices is not even necessarily s.p.d., and for addition, even adding a multiple of the identity (that is, trying to factor $A + \sigma I$) requires somewhat sophisticated tricks [3], even if one is able to factor $A$ efficiently.
Finally, the $\operatorname{tr}$ and $\det$ operators have additive and multiplicative structure, respectively. That is, for all square matrices $A, B$ of the same size,
$$\operatorname{tr}(A + B) = \operatorname{tr}(A) + \operatorname{tr}(B) \quad \text{and} \quad \det(AB) = \det(A)\det(B). \tag{5}$$
However, there are not as nice decompositions for $\operatorname{tr}(AB)$ or $\det(A + B)$. Although it is possible to turn $\operatorname{tr}(AB)$ into additive structure by expanding the matrix-matrix product and to turn $\det(A + B)$ into multiplicative structure by the matrix determinant lemma (the determinant analogue of the Sherman–Morrison–Woodbury identity), this is more of a conversion and a bit less natural than applying (5) directly.
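Again, a quick NumPy check of (5) and of the "crossed" combinations (my own illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# (5): the trace is additive and the determinant is multiplicative.
print(np.isclose(np.trace(A + B), np.trace(A) + np.trace(B)))                 # True
print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))  # True

# The mixed combinations have no such simple rule in general.
print(np.isclose(np.trace(A @ B), np.trace(A) * np.trace(B)))                 # False (generically)
print(np.isclose(np.linalg.det(A + B), np.linalg.det(A) + np.linalg.det(B)))  # False (generically)
```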
Throughout these examples it seems that, although multiplicative and additive structure can be converted between each other, they are hard to mix in some sense. In order to make this notion precise, consider a function $f : \mathbb{R} \to \mathbb{R}$ satisfying both the multiplicative condition
$$f(xy) = f(x)\, f(y) \tag{6}$$
and the additive condition
$$f(x + y) = f(x) + f(y) \tag{7}$$
for all real $x, y$, and ask what such a function can look like.
Our strategy will be to identify a few specific observations and then work our way up to the naturals $\mathbb{N}$, the integers $\mathbb{Z}$, the rationals $\mathbb{Q}$, and then finally, the reals $\mathbb{R}$. Extension to the complex numbers is left as an exercise for the reader.
First, we observe that if we fix $y = 1$, then the multiplicative condition (6) implies
$$f(x) = f(x \cdot 1) = f(x)\, f(1).$$
Moving to the left side and factoring, we have
$$f(x)\left(1 - f(1)\right) = 0,$$
implying either $f(1) = 1$ or $f(x) = 0$ for all $x$. So immediately the constant function $f \equiv 0$ is a candidate solution and indeed, it also satisfies the additive condition (7) trivially. For the more interesting situation where $f(1) = 1$, now using the additive condition (7), we have
$$f(x + 1) = f(x) + f(1) = f(x) + 1.$$
Therefore $f(2) = f(1) + 1 = 2$, $f(3) = f(2) + 1 = 3$, and so on. By induction, $f(n) = n$ for all natural numbers $n$ (where we exclude 0 as a natural number for now).
To extend this result to the integers, first we fill in the hole at 0 by making use of (7),
$$f(0) = f(0 + 0) = f(0) + f(0) \implies f(0) = 0.$$
Then we observe for a natural number $n$ and the negative number $-n$, using (7),
$$0 = f(0) = f(n + (-n)) = f(n) + f(-n) \implies f(-n) = -f(n) = -n,$$
where we use that $f(n) = n$ for natural $n$, concluding that $f(m) = m$ for integer $m$.
To extend this result to the rationals: let $r = p/q$ for integers $p, q$ and let $q \neq 0$. From (6),
$$f(p) = f\!\left(q \cdot \tfrac{p}{q}\right) = f(q)\, f\!\left(\tfrac{p}{q}\right) \implies f\!\left(\tfrac{p}{q}\right) = \frac{f(p)}{f(q)} = \frac{p}{q},$$
where we use $f(p) = p$ and $f(q) = q \neq 0$ for integers $p, q$, concluding that $f(r) = r$ for rational $r$.
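Here is a small computational sketch of the chain of extensions so far (my own illustration): starting only from $f(1) = 1$ and the two functional equations, the values of $f$ on a handful of naturals, integers, and rationals are forced.

```python
from fractions import Fraction

f = {Fraction(1): Fraction(1)}                    # the interesting case f(1) = 1

# Naturals: f(n) = f((n - 1) + 1) = f(n - 1) + f(1), by additivity (7).
for n in range(2, 6):
    f[Fraction(n)] = f[Fraction(n - 1)] + f[Fraction(1)]

# Zero and negatives: f(0) = f(0) + f(0) and f(n) + f(-n) = f(0), by (7).
f[Fraction(0)] = Fraction(0)
for n in range(1, 6):
    f[Fraction(-n)] = f[Fraction(0)] - f[Fraction(n)]

# Rationals: f(p) = f(q * p/q) = f(q) f(p/q), by multiplicativity (6).
p, q = 3, 5
f[Fraction(p, q)] = f[Fraction(p)] / f[Fraction(q)]

print(all(value == key for key, value in f.items()))  # True: f is the identity so far
```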
Throughout all these examples it seems that $f(x) = x$, but it is hard to extend to the reals without additional structure; we've sort of hit the limit on what our assumptions tell us. We would like our function to commute with limits, that is, we would like for any sequence $(x_n) \to x$,
$$\lim_{n \to \infty} f(x_n) = f\!\left(\lim_{n \to \infty} x_n\right) = f(x).$$
This is precisely condition (iii) of Theorem 4.3.2 of [1] defining the continuity of a function:

Theorem 4.3.2 (Characterizations of Continuity). Let $f : A \to \mathbb{R}$, and let $c \in A$. The function $f$ is continuous at $c$ if and only if any one of the following three conditions is met:
...
(iii) For all $(x_n) \to c$ (with $x_n \in A$), it follows that $f(x_n) \to f(c)$.
If we enforce that $f$ must be continuous, then since every real number $x$ is the limit of a (Cauchy) sequence of rational numbers, we can assume $(q_n) \to x$ for rational $q_n$ and observe that
$$f(x) = f\!\left(\lim_{n \to \infty} q_n\right) = \lim_{n \to \infty} f(q_n) = \lim_{n \to \infty} q_n = x,$$
where we use that $f(q_n) = q_n$ from the fact that $f$ is the identity on the rationals, so we can conclude $f(x) = x$ for all real $x$. In summary, we have the following theorem:
Theorem. Suppose some continuous function $f : \mathbb{R} \to \mathbb{R}$ satisfies both
$$f(xy) = f(x)\, f(y) \quad \text{and} \quad f(x + y) = f(x) + f(y)$$
for all $x, y \in \mathbb{R}$. Then either
$$f(x) = 0$$
for all $x \in \mathbb{R}$, or
$$f(x) = x$$
for all $x \in \mathbb{R}$.
This is just one of many reasons why the identity operator is perhaps the easiest possible operator to work with and analyze; for more details, see our soon-to-be released GitHub repository brownie-in-motion/identity which is joint work with Daniel Lu (brownie-in-motion) and Eshan Ramesh (eshrh).
It's well known [2] that Gilbert Strang's favorite matrix is the following $-1, 2, -1$ tridiagonal matrix,
$$K = \begin{bmatrix} 2 & -1 & & \\ -1 & 2 & -1 & \\ & \ddots & \ddots & \ddots \\ & & -1 & 2 \end{bmatrix}, \tag{8}$$
which forms a second-order finite difference approximation of the (negative) second derivative.
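A small NumPy sketch of (8) (my own construction, not from [2]): the interior rows of $K/h^2$ applied to samples of $u(x) = x^2$ recover $-u'' = -2$ exactly, since the centered second difference is exact for quadratics.

```python
import numpy as np

n = 8
# Strang's -1, 2, -1 second-difference matrix (8).
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

h = 0.1
x = h * np.arange(1, n + 1)
u = x**2

# Interior rows: (2*u[i] - u[i-1] - u[i+1]) / h^2 == -u''(x[i]) == -2.
print((K @ u)[1:-1] / h**2)   # -2 (up to rounding) in every interior row
```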
If I were asked up front, I'm not sure what my favorite linear operator would be, but the identity function/matrix/operator is definitely pretty high on the list after this discussion.
Lastly, this article is similar to Exercise 4.3.13 of [1] but I only discovered the connection after I already had the idea. A similar line of reasoning holds for the textbook exercise.
[1] S. Abbott, Understanding Analysis. New York, NY: Springer New York, 2015. doi: 10.1007/978-1-4939-2712-8.
[2] J. Bezanson, A. Edelman, S. Karpinski, and V. B. Shah, "Julia: A Fresh Approach to Numerical Computing," SIAM Review, vol. 59, no. 1, pp. 65–98, Jan. 2017, doi: 10.1137/141000671.
[3] F. Schäfer, M. Katzfuss, and H. Owhadi, "Sparse Cholesky factorization by Kullback-Leibler minimization," arXiv:2004.14455 [cs, math, stat], Oct. 2021, Available: https://arxiv.org/abs/2004.14455