It often happens in mathematics, especially in analysis, that a problem is initially phrased in a rather general setting, i.e., in a nonparametric form, but turns out to admit a strikingly restrictive parametric solution. Some famous examples include the martingale representation theorem, Liouville's theorem on bounded holomorphic functions on the complex plane, and even the Poincaré conjecture. I would say this is one of the most important contributions of mathematics to science in general. Today I found another interesting problem in the Putnam directory that exemplifies this principle.
Suppose one has a real-valued continuous function g on R with the property that g(g(x)) = a g(x) + b x for all x, where a and b are strictly between 0 and 1/2. The claim is that g must be linear.
My biggest difficulty with such problems is how to narrow a continuum of candidates down to those admitting such an explicit representation. It turns out one has to pass through an intermediate stage involving a slightly more complicated parametric form in order to deduce linearity in the end.
The strategy roughly goes as follows: if one supposes g(x) = cx, then substituting into the functional equation gives c^2 x = acx + bx, so c must be a root of the quadratic c^2 = ac + b; call its two roots r1 and r2. Now given any input x, one can solve for p and q in the following system:
x = p + q, g(x) = r1 p + r2 q.
Applying the functional equation to y = g^{(n)}(x) shows that the iterates x_n = g^{(n)}(x) satisfy the linear recurrence x_{n+2} = a x_{n+1} + b x_n, whose characteristic roots are precisely r1 and r2; together with the initial conditions x_0 = p + q and x_1 = r1 p + r2 q, this gives g^{(n)}(x) = r1^n p + r2^n q. From here it's not difficult to show linearity.
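The closed form for the iterates is just the standard solution of a two-term linear recurrence via its characteristic roots, and that part can be sanity-checked numerically. A minimal sketch, where the values of a, b and the stand-ins for x and g(x) are arbitrary placeholders of mine:

```python
import numpy as np

a, b = 0.3, 0.4                      # hypothetical choices in (0, 1/2)
disc = (a**2 + 4 * b) ** 0.5         # discriminant of c^2 - a*c - b = 0
r1, r2 = (a + disc) / 2, (a - disc) / 2

x0, x1 = 5.0, 2.0                    # stand-ins for x and g(x)
# Solve x0 = p + q, x1 = r1*p + r2*q for the decomposition (p, q).
p, q = np.linalg.solve([[1.0, 1.0], [r1, r2]], [x0, x1])

# Iterate x_{n+2} = a*x_{n+1} + b*x_n and compare with the closed form.
xs = [x0, x1]
for _ in range(8):
    xs.append(a * xs[-1] + b * xs[-2])

for n, xn in enumerate(xs):
    assert abs(xn - (r1**n * p + r2**n * q)) < 1e-9
```

Note that with a, b in (0, 1/2) both roots satisfy |r_i| < 1, which is what makes the iterates well behaved in the actual proof.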
Another lesson illustrated by this problem is the value of audacity in hypothesis testing. Typically, working backward from the conjectured answer yields little useful information, but here it reduces the uncertainty dramatically.
Another interesting problem concerns the calculus of variations:
given the set of differentiable functions f on [0,1] with f(0) = 0 and f(1) = 1, minimize the integral \int_0^1 |f(x) - f'(x)| dx. (The absolute value is essential: without it the integral reduces to \int_0^1 f(x) dx - 1, which is unbounded below under these constraints.)
This is a typical infinite-dimensional optimization problem, of the kind whose utility lies mainly in producing closed-form formulas for efficient implementation in engineering. Here the insight lies in taking full advantage of the boundary conditions by transforming the integrand into an exact derivative, i.e., one whose antiderivative can be read off directly. A common ODE trick when confronted with expressions like f - f' is to multiply by the integrating factor e^{-x}: since e^{-x}(f(x) - f'(x)) = -(e^{-x} f(x))' and e^{-x} \le 1 on [0,1], the boundary conditions turn the integral into a telescoping quantity and yield a lower bound. One can then cook up functions coming arbitrarily close to this bound and so deduce the required infimum.
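Reading the objective with absolute values, i.e. \int_0^1 |f(x) - f'(x)| dx, the integrating-factor bound gives an infimum of 1/e, approached but not attained. A numerical sketch using a near-optimal family of my own construction (not from the post): take f_eps(x) = e^x h_eps(x), where h_eps ramps linearly from 0 to 1/e on [0, eps] and is constant afterwards, so that f_eps(0) = 0 and f_eps(1) = 1.

```python
import numpy as np

# Near-optimal family (hypothetical construction): f_eps(x) = e^x * h_eps(x),
# where h_eps rises linearly from 0 to 1/e on [0, eps], then stays constant.
# Then |f - f'| = e^x * |h'| is supported only on [0, eps], where e^x is
# close to 1, which is exactly where the integrating-factor bound is tight.
def objective(eps, n=200_001):
    x = np.linspace(0.0, 1.0, n)
    h = np.minimum(x / eps, 1.0) / np.e
    f = np.exp(x) * h
    fp = np.gradient(f, x)                  # numerical derivative of f
    y = np.abs(f - fp)
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))  # trapezoid rule

vals = [objective(eps) for eps in (0.5, 0.1, 0.01)]
print(vals)  # decreasing toward 1/e ~ 0.3679 as eps shrinks
```

The corner of h_eps at x = eps means f_eps is not literally differentiable there, but it can be smoothed without changing the integral appreciably, which is why the infimum is approached rather than attained.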
