from nonparametric to parametric problems

It often happens in mathematics, especially in analysis, that a problem is initially phrased in a rather general setting, i.e., in a nonparametric form, but turns out to admit a strikingly restrictive parametric solution. Some famous examples include the martingale representation theorem, Liouville's theorem on bounded holomorphic functions on the complex plane, and even the Poincaré conjecture. I would say this is one of the most important contributions of mathematics to science in general. Today I found another interesting problem in the Putnam directory that exemplifies this principle.
Suppose one has a real-valued continuous function g on R with the property that g(g(x)) = a g(x) + b x, where a and b are strictly between 0 and 1/2. The claim is that g must be linear.
My biggest issue with such problems is how to narrow down from a continuum of candidates to the ones that admit such an explicit representation. It turns out one has to pass through an intermediate phase, a slightly more complicated parametric form, in order to deduce linearity in the end.
The strategy roughly goes as follows: if g(x) = cx, then one can easily compute what c has to be: the functional equation forces c^2 = ac + b, so c is one of the two roots r1, r2 of this quadratic. Now, given any input x, one can solve for p and q in the following system:
x = p + q, g(x) = r1 p + r2 q.
Then the special property of r1 and r2 as roots of the quadratic implies that the n-fold iterate satisfies g^{(n)}(x) = r1^n p + r2^n q: the functional equation turns the orbit x, g(x), g(g(x)), ... into the linear recurrence x_{n+2} = a x_{n+1} + b x_n, whose characteristic roots are exactly r1 and r2. From here it is not difficult to show linearity.
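The decomposition step above can be checked numerically. In the sketch below the parameters a, b and the starting pair (x0, y0) are hypothetical values chosen for illustration; the point is that the functional equation alone forces the orbit of x under g to follow a two-term linear recurrence, whose closed form involves only r1 and r2.

```python
import math

# Hypothetical parameters with 0 < a, b < 1/2 (chosen for illustration).
a, b = 0.3, 0.2

# If g(x) = c*x, the equation g(g(x)) = a*g(x) + b*x forces c^2 = a*c + b.
disc = math.sqrt(a * a + 4 * b)
r1, r2 = (a + disc) / 2, (a - disc) / 2

# The functional equation turns the orbit x, g(x), g(g(x)), ... into the
# linear recurrence x_{n+2} = a*x_{n+1} + b*x_n, whatever g is.
# Pick an arbitrary starting pair to see the closed form in action.
x0, y0 = 1.7, 0.4  # y0 plays the role of g(x0)

# Solve the system x0 = p + q, y0 = r1*p + r2*q.
p = (y0 - r2 * x0) / (r1 - r2)
q = (r1 * x0 - y0) / (r1 - r2)

# Check that the n-th orbit point equals r1^n * p + r2^n * q.
prev, cur = x0, y0
for n in range(2, 10):
    prev, cur = cur, a * cur + b * prev
    assert abs(cur - (r1 ** n * p + r2 ** n * q)) < 1e-9
```

Note that since 0 < a, b < 1/2, both roots have absolute value less than 1, which is what ultimately pins g down in the full proof.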
Another interesting lesson illustrated by this problem is the value of audacity in hypothesis testing. Typically, backtracking from the conjectured answer yields little useful information, but here it reduces the uncertainty dramatically.
Another interesting problem concerns the calculus of variations:
given the set of differentiable functions f on [0,1] with f(0) = 0 and f(1) = 1, minimize the integral \int_0^1 |f(x) - f'(x)| dx.
This is a typical infinite-dimensional optimization problem, of the kind whose utility lies mainly in obtaining closed-form formulas for efficient implementation in engineering. Here the insight lies in taking full advantage of the boundary conditions by transforming the integrand into an exact form (i.e., one whose antiderivative can be read off easily). A common ODE trick when confronted with expressions like f - f' is to multiply by e^{-x}, since (f(x) e^{-x})' = (f'(x) - f(x)) e^{-x}. Using this ansatz one can cook up arbitrarily near-optimal solutions to the problem above and deduce the required infimum.
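To make the e^{-x} trick concrete: since e^x >= 1 on [0,1], we get \int_0^1 |f'(x) - f(x)| dx = \int_0^1 e^x |(f(x)e^{-x})'| dx >= |f(1)e^{-1} - f(0)| = 1/e, so the infimum is at least 1/e. The sketch below uses a hypothetical near-optimal family f_eps(x) = e^{x-1} min(x/eps, 1) (my choice, not from the source): it satisfies f' = f for x > eps, so the entire cost concentrates on [0, eps], where f' - f = e^{x-1}/eps, and the cost tends to 1/e as eps shrinks.

```python
import math

def cost(eps, n=100000):
    """Midpoint Riemann sum of |f_eps'(x) - f_eps(x)| over [0, eps].

    For x > eps the integrand vanishes (f' = f there), so only [0, eps]
    contributes, where f' - f = e^{x-1} / eps.
    """
    h = eps / n
    return sum(math.exp((i + 0.5) * h - 1) / eps * h for i in range(n))

inf_val = 1 / math.e  # the conjectured infimum, 1/e ~ 0.3679
vals = [cost(eps) for eps in (0.5, 0.1, 0.01)]

# The costs decrease toward 1/e but never reach it: the infimum is
# approached, not attained, within the differentiable class.
assert vals[0] > vals[1] > vals[2] > inf_val
```

(The kink of min(x/eps, 1) at x = eps can be smoothed without changing the limit, so restricting to genuinely differentiable f is harmless.)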