Creating a Java ME Math.pow() Method


    When developing applications for mobile devices using Java, you may require mathematical methods not available on your particular Java VM. This article will specifically address the absence of the "power" method, Math.pow(), in Java ME. We will illustrate three different algorithms for developing an ME equivalent and proffer the best of the three as a possible programming solution to our problem.

    To discuss this problem, we look at both integer and fractional power parameters, restricting our analysis to the positive real numbers. We show the relative ease of finding a solution set for integer problems and fractional problems, regardless of the sign of the exponent. In most cases, we will use the example problem n = 8^(2/3), where we seek either a good estimate or the actual solution for n. Without the initial exponent available up front, other solutions to this problem (including Newton's method and the secant method) are not readily programmed. Although there are some workarounds for the bisection method, we (instead) focus on three methods not traditionally investigated. The first is a simple (yet sometimes inefficient) geometric decay algorithm, while the second method leverages the Math.sqrt() method and guarantees convergence to an approximate solution in no more than 11 iterations. The third and final method uses the Taylor series approximation for the natural logarithm, followed by a Taylor series expansion of the exponential function (Euler's e).

    An ME Math.pow() Method for Integer Solutions

    Traditionally, the Java Math.pow() method takes two parameters: the base and the exponent. Let's assume (initially) that both of these parameters are integers, and we seek a programming solution for the Math.pow() method in ME using the same parameters as the Java method. Here the programming solution is fairly simple, as shown in Example 1. In this example, we simply run a multiplicative loop indexed by the value of the exponent.

    Example 1

    int pow( int x, int y )  /* we define the power method with base x and power y (i.e., x^y) */
    {
        if( y == 0 ) return 1;  // x^0 = 1 (the loop below assumes y >= 1)
        int z = x;
        for( int i = 1; i < y; i++ ) z *= x;
        return z;
    }
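    As a quick sanity check, the integer method can be exercised with a few known values; the calls below are illustrative only and assume the method from Example 1 is in scope.

    int a = pow( 8, 2 );    // 64
    int b = pow( 2, 10 );   // 1024
    int c = pow( 5, 0 );    // 1, handled by the y == 0 guard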

    Of course, one may find the need to evaluate non-integer powers. A simple solution for the positive reals (without having access to the Math.pow() method) could involve the use of the Math.log() method. For example, consider 8^(2/3). Taking the natural logarithm results in 2/3*ln(8) = 1.386294361. To obtain the final solution, we exponentiate, specifically e^1.386294361 = 4. In this case, one would not need the power function at all. Sadly, Java ME doesn't support the Math.log() method either. Without either the Math.pow() or the Math.log() methods, we look at a naive "brute force" heuristic, an application of the Math.sqrt() method, and a Taylor series approximation of the natural logarithm (as well as Euler's e) for solutions to our Java ME problem.
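    To make that identity concrete, the sketch below shows how x^y could be computed if logarithm and exponential methods were available, as they are in Java SE. It is shown only to motivate the Taylor series approach later in this article; it will not compile on a Java ME configuration that lacks Math.log() and Math.exp().

    // Illustration only: relies on Math.log() and Math.exp(), which Java SE
    // provides but the Java ME configurations discussed here do not.
    double pow( double x, double y )
    {
        // x^y = e^(y * ln(x)), valid for x > 0
        return Math.exp( y * Math.log( x ) );
    }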

    An ME Math.pow() Using a Geometric Decay Algorithm as a Brute Force Solution

    Earlier implementations of Java ME excluded the floating point primitive data types, float and double. More recently, these data types have been added. We will now substitute the integer arguments in our Math.pow() declaration with the double data type.

    You may require fractional exponents for use in your Java ME Math.pow() power method. The first approach to building Math.pow() that we provide is a naive "brute force" heuristic using a geometric decay algorithm. In simple terms, a decay algorithm starts with a value greater than the known solution and applies a method to decay that value until it reaches a close approximation to the solution (see Example 2 for an illustration of a simple linear decay algorithm). In our case, we will further illustrate a geometric form that converges towards the solution from above.

    Example 2

    /* This example illustrates a simplistic decay algorithm that we will
       assume converges to our desired solution (a positive integer) */
    int n = 4;         // assume that n is the solution we are searching for (4 is used here only for illustration)
    int varX = 1000;   // assume that we know the solution is less than or equal to 1000
    while( varX > 0 )
    {
        if( varX == n ) return varX;  // test before decrementing so 1000 itself is checked
        varX -= 1;                    // decrement by 1
    }

    In Example 2, we decrement from 1000 until we find the desired number, assuming that the number we desire is a positive integer. This type of algorithm forms the basis for our brute force heuristic.
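    The geometric variant used later in this article shrinks the estimate by a fixed percentage each pass rather than subtracting a fixed amount. The fragment below is illustrative only (the 0.1 percent decay rate, tolerance, and starting values are arbitrary) and simply shows the general shape of such a loop.

    /* A geometric decay sketch (illustrative only): instead of subtracting 1
       each pass, the estimate is shrunk by an arbitrary 0.1 percent until it
       falls to within a small tolerance of the (assumed known) solution. */
    double solution = 64.0;     // assume, for illustration, that this is the value we seek
    double estimate = 1000.0;   // a starting value known to be at or above the solution
    while( estimate > 0 )
    {
        if( estimate - solution < .00001 ) break;  // close enough, approaching from above
        estimate *= .999;                          // geometric decay of 0.1% per iteration
    }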

    Using a similar approach, we apply this algorithm when encountering fractions. Let's assume that we seek a value for n where n = 8^(2/3). To use a decay algorithm, we must first find an appropriate starting point that is equal to or larger than the solution itself. Doing so for the positive real numbers with positive exponents is fairly simple. To program this solution for our example, we cube both sides, resulting in n^3 = 8^2. Of course, this equation is equivalent to n^3 = 64. Our starting value, then, becomes 64, as we know that n must be smaller than 64 (since n^3 = 64). Note that this same reasoning would apply for any positive value of the exponent, given our restriction to the positive reals. Now, we might design a loop to generate a solution for n that is "close enough" to our desired number. We turn to Example 3, which is appropriate for all positive base numbers and positive exponents.

    Example 3

    double pow( double x, double y )  // we define our new power method for fractions
    {
        int den = 1000;          // specify arbitrary denominator
        int num = (int)(y*den);  // find numerator
        int s = (num/den)+1;

        /***********************************************************************
        ** Variable 's' provides the power for which we multiply the base to find
        ** our starting search value. For example, if we seek a solution for
        ** n = 8^(2/3), then we will use 8^2 or 64 as our starting value (which is
        ** generated in our next section of code.) Why? The solution for our
        ** problem (given that the base is positive) will always be less than or
        ** equal to the base raised to the power of the numerator.
        ************************************************************************/

        /***********************************************************************
        ** Because we set the denominator to an arbitrarily high value,
        ** we must attempt to reduce the fraction. In the example below,
        ** we find the highest allowable fraction that we can use without
        ** exceeding the limitation of our primitive data types.
        ************************************************************************/
        double z = Double.MAX_VALUE;
        while( z >= Double.MAX_VALUE )
        {
            den -= 1;             // decrement denominator
            num = (int)(y*den);   // find numerator
            s = (num/den)+1;      // adjust starting value

            // find value of our base number to the power of the numerator
            z = x;
            for( int i = 1; i < num; i++ ) z *= x;
        }

        /***********************************************************************
        ** Now we are going to implement the decay algorithm to find
        ** the value of 'n'.
        ************************************************************************/
        /***********************************************************************
        ** We now find 'n' to the power of 's'. We will then decrement 'n',
        ** finding the value of 'n' to the power of the denominator. This
        ** value, variable 'a', will be compared to 'z'. If 'a' is nearly
        ** equal to 'z', then we will return 'n', our desired result.
        ************************************************************************/
        double n = x;  // We define 'n' as our return value (estimate) for 'x'.

        // find 'n' to the power of 's'.
        for( int i = 1; i < s; i++ ) n *= x;

        // Begin decay loop
        while( n > 0 )
        {
            double a = n;  // proxy for n

            // find 'a', the value of 'n' to the power of the denominator
            for( int i = 1; i < den; i++ ) a *= n;

            // compare 'a' to 'z'. Has 'a' decayed to within (or just below)
            // a hundred-thousandth of 'z'? If so, return 'n'.
            double check1 = a-z;
            if( check1 < .00001 ) return n;

            n *= .999;  // We arbitrarily use a decay of .1% per iteration
        }

        // value could not be found, return -1.
        return -1.0;
    }

    This example demonstrates the use of the decay algorithm. Notice that the value of n (the estimate for our solution) is decayed by an arbitrary 0.1 percent per iteration. You may want to alter this value depending on your precision requirements. You might also consider adding logic that compares the previous iteration's solution with the current one, continuing to iterate while the error shrinks and returning the previous value once the solution regresses (a sketch of this refinement follows).
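    A minimal sketch of that refinement, assuming the same variables as Example 3 (n is the decaying estimate, den the reduced denominator, and z the target value x^num), might replace the decay loop as follows.

    double prevN = n;                      // best estimate so far
    double prevError = Double.MAX_VALUE;   // its error against the target z
    while( n > 0 )
    {
        double a = n;
        for( int i = 1; i < den; i++ ) a *= n;   // a = n^den, compared against z = x^num

        double error = (a > z)? (a-z) : (z-a);   // absolute error
        if( error > prevError ) return prevN;    // solution regressed; return the previous estimate
        prevN = n;
        prevError = error;

        n *= .999;                               // decay by 0.1% and try again
    }
    return prevN;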

    The solution described only handles positive exponents. What happens if you have negative values? We address this contingency next.

    Handling Negative Exponents

    To add another layer of complexity, assume that we are passing negative exponents to our Math.pow() method. If the exponent is negative, a simple solution is to convert the base number to a fraction, making the exponent positive. For example, 8^(-2) may be converted to (1/8)^2. Programmatically, we divide 1 by our base number, x, and multiply our exponent, y, by -1 (see Example 6).

    Example 6

    if( y < 0 )
    {
        x = (1/x);  // convert base number to fraction
        y *= -1;    // make exponent positive
    }

    We have now finished discussing the "brute force" geometric decay algorithm for estimating power functions in Java ME. The reader should note that this naive heuristic has performance issues for large base and exponent-numerator combinations. Consider the example of 8^(5/2). The starting value using this algorithm would be 8^5 = 32,768. Using a decay of .1 percent, a total of 5196 iterations would be required to find the solution--hardly optimal. With this fact in mind, and without proffering improved heuristic search algorithms, we turn to a second approximation, which has more reasonable iterative requirements.
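    The iteration count can be estimated directly: with a multiplicative decay factor of 0.999, the number of iterations needed to shrink a starting value down to a target is roughly ln(start/target)/ln(1/0.999). The throwaway calculation below (which uses standard Java SE Math.log() purely for illustration, since Java ME lacks it) reproduces the figure quoted above.

    // Rough iteration-count estimate for a 0.1% geometric decay (illustration only).
    double start = 32768.0;   // 8^5, the starting value cited above
    double target = 181.02;   // 8^(5/2), the value we decay toward
    double iterations = Math.log( start / target ) / Math.log( 1 / .999 );
    System.out.println( iterations );   // prints roughly 5196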

    An ME Math.pow() Using the Math.sqrt() Method

    The second approach to developing our Math.pow() method is to leverage the Math.sqrt() method (see Example 7). Using the Math.sqrt() method requires that the exponent be expressed as a fraction whose denominator is a power of two. For example, n = 8^(2/3) => n^3 = 8^2 is a tricky problem, as we need the cube root, not the square root. Had the exponent been 3/4 instead, we would have n^4 = 8^3, and exactly two square-root iterations applied to 8^3 would produce n. For a denominator such as 3, we instead approximate the exponent with a fraction whose denominator is a power of two, as shown next.

    Of course, our ME Math.pow() method does not start with a numerator and denominator for the exponent. Instead, we are passed a real number. We convert this real number to a fraction with a power-of-two denominator and then take the appropriate number of square roots to solve for n. In our n = 8^(2/3) example, we convert as follows:

    n = 8^(2/3) => n = 8^(683/1024) => n^1024 = 8^683

    We selected 1024 as our denominator because 10 iterations of the square root function produce the value of n. Specifically, (n^1024)^(1/2^10) = n. Of course, we may have to scale down the number of iterations depending on the size of the right-hand side of the equation. We illustrate this approach in Example 7.

    Example 7

    double pow( double x, double y )
    {
        // Convert the real power to a fractional form
        int den = 1024;          // declare the denominator to be 1024
        /* Conveniently 2^10 = 1024, so taking the square root 10 times will
           yield our estimate for n. In our example n^3 = 8^2 becomes
           n^1024 = 8^683. */
        int num = (int)(y*den);  // declare numerator
        int iterations = 10;     /* declare the number of square root iterations
                                    associated with our denominator, 1024. */
        double n = Double.MAX_VALUE;  // we initialize our estimate, setting it to max

        while( n >= Double.MAX_VALUE && iterations > 1 )
        {
            /* We try to set our estimate equal to the right-hand side of the
               equation (e.g., 8^2048). If this number is too large, we will
               have to rescale. */
            n = x;
            for( int i = 1; i < num; i++ ) n *= x;

            // here, we handle the condition where our starting point is too large
            if( n >= Double.MAX_VALUE )
            {
                iterations--;           // reduce the iterations by one
                den = (int)(den / 2);   // redefine the denominator
                num = (int)(y*den);     // redefine the numerator
            }
        }

        /*************************************************
        ** We now have an appropriately sized right-hand side.
        ** Starting with this estimate for n, we proceed.
        **************************************************/
        for( int i = 0; i < iterations; i++ )
        {
            n = Math.sqrt(n);
        }

        // Return our estimate
        return n;
    }
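    As a rough check, the square-root version can be compared against the running example. The snippet below is illustrative only and assumes the method defined in Example 7 is in scope; it prints the estimate for 8^(2/3), which should land near the true value of 4.0.

    // Illustrative test of the Math.sqrt()-based method from Example 7.
    double estimate = pow( 8.0, 2.0/3.0 );
    System.out.println( estimate );   // prints an estimate close to 4.0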

    The Taylor Series Approximation of the Natural Logarithm and Euler's e

    One of the easiest ways to produce a Math.pow() method for the positive reals is to link a few methods involving the use of the Taylor series. Assume we want the power y = x^b. That is equivalent to ln(y) = b*ln(x). Further, we may use the Taylor series expansion to estimate the natural logarithm of x as follows.

    ln(x) = (x-1) - (x-1)^2/2 + (x-1)^3/3 - (x-1)^4/4 + ...   if |x-1| <= 1, OR

    ln(x) = 1/(x/(x-1)) + 1/(2*(x/(x-1))^2) + 1/(3*(x/(x-1))^3) + ...   if |x| > 1

    Since x is positive, the entire domain of x is covered by these equations. These series provide a very close approximation to the logarithm of the base. Multiplying this logarithm by the exponent yields ln(y), the right-hand side of our equation, ln(y) = b*ln(x). Now, we simply need to find e^ln(y), and we are done. We use another Taylor series expansion to finish this process: e^x = 1 + x + x^2/2! + x^3/3! + ... Using these two formulae, we have a solution for our problem, as shown in Example 8.

    Example 8

    double pow( double a, double b )
    {
        // true if |a-1| > 1 (for positive bases, a > 2), selecting the second series above
        boolean gt1 = (Math.sqrt((a-1)*(a-1)) <= 1)? false:true;

        int oc = -1;    // used to alternate math symbol (+,-)
        int iter = 20;  // number of iterations
        double p, x, x2, sumX, sumY;

        // is the exponent a whole number?
        if( (b-Math.floor(b)) == 0 )
        {
            if( b == 0 ) return 1;  // a^0 = 1
            // return base^exponent
            p = a;
            for( int i = 1; i < b; i++ ) p *= a;
            return p;
        }

        x = (gt1)? (a/(a-1)) :  // base is greater than 1
                   (a-1);       // base is 1 or less

        sumX = (gt1)? (1/x) :   // base is greater than 1
                      x;        // base is 1 or less

        // Taylor series estimate of ln(a)
        for( int i = 2; i < iter; i++ )
        {
            // find x^iteration
            p = x;
            for( int j = 1; j < i; j++ ) p *= x;

            double xTemp = (gt1)? (1/(i*p)) :  // base is greater than 1
                                  (p/i);       // base is 1 or less

            sumX = (gt1)? (sumX+xTemp) :       // base is greater than 1
                          (sumX+(xTemp*oc));   // base is 1 or less

            oc *= -1;  // change math symbol (+,-)
        }

        // Taylor series estimate of e^(b*ln(a))
        x2 = b * sumX;
        sumY = 1+x2;  // our estimate

        for( int i = 2; i <= iter; i++ )
        {
            // find x2^iteration
            p = x2;
            for( int j = 1; j < i; j++ ) p *= x2;

            // multiply iterations (ex: 3 iterations = 3*2*1);
            // a double is used so the factorial does not overflow an int
            double yTemp = 2;
            for( int j = i; j > 2; j-- ) yTemp *= j;

            // add to estimate (ex: 3rd iteration => (x2^3)/(3*2*1) )
            sumY += p/yTemp;
        }

        return sumY;  // return our estimate
    }
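    A short, purely illustrative harness for the Taylor series method might exercise a couple of the Table 1 inputs. The values in the comments are approximate; the exact digits printed may differ slightly depending on the VM's floating-point behavior.

    // Illustrative check of the Taylor series method from Example 8.
    System.out.println( pow( 8.0, 0.75 ) );   // roughly 4.742 (true value 4.7568)
    System.out.println( pow( 8.0, 0.667 ) );  // roughly 3.992 (true value 4.0028)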

    In nearly all cases, the estimate returned by the Taylor series approximation is more precise than that of the decay algorithm, and the Math.sqrt() method also outperforms the decay approach. The Taylor series approach uses fewer computing cycles but becomes unstable when values approach zero. The Math.sqrt() results provide excellent approximations, normally accurate to the third digit. See Table 1 for a comparison of the methods using several arbitrarily chosen positive real inputs. For practical applications, the Math.sqrt() or Taylor series methods appear to be superior for most values.

    Table 1: Comparison of the Taylor Series, Math.sqrt(), and Decay Algorithm Approaches

    Base, Exponent      | True Result  | Taylor Series Approx. | Math.sqrt() Estimate | Decay Algorithm Estimate
    8.0, 0.75           | 4.75682846   | 4.7423353221144557    | 4.75682846           | 4.751286816
    8.0, 0.667          | 4.002774     | 3.9919355054959973    | 3.994588452          | 3.994453662
    16.0, 8.0           | 4294967296   | 4294967296            | 4294967296           | 4294752931
    32.0, 5.0           | 33554432     | 33554432              | 33554432             | 33553177.47
    11.0, 3.0           | 1331         | 1331                  | 1331                 | 1330.967224
    10.0, 10.0          | 10000000000  | 10000000000           | 10000000000          | 9999699608
    77.0, 3.0           | 456533       | 456533                | 456533               | 456527.6254
    5.0, 15.0           | 30517578125  | 30517578125           | 30517578125          | 30516279235
    15.0, 9.0           | 38443359375  | 38443359375           | 38443359375          | 38440083836
    3.0, 21.0           | 10460353203  | 10460353203           | 10460353203          | 10459907131
    5.0, 0.05           | 1.083798387  | 1.0837883791740017    | 1.083457755          | 1.08205432
    7.0, 0.37           | 2.054406     | 2.0529191207908064    | 2.050973357          | 2.051043668
    1.5, 0.789          | 1.377006542  | 1.377006541546755     | 1.376496289          | 1.376798426
    1.5, 3.789          | 4.647397078  | 4.647381683179335     | 4.64015972           | 4.644836289
    0.06282, 0.325784   | 0.405919146  | 0.41327102396968585   | 0                    | 0.06282
    0.7261, 0.20574     | 0.936270645  | 0.9362706445348806    | 0.93646901           | 0.7261
    0.903272, 0.48593   | 0.951767579  | 0.951767579257642     | 0.951823588          | 0.903272
    0.821111, 0.767392  | 0.85963221   | 0.8596322100794738    | 0.859766145          | 0.821111
    0.24352, 0.004322   | 0.99391353   | 0.9939136545397182    | 0.994497397          | 0.24352
    0.000125, 0.99556   | 0.000130089  | 627097113.1963351     | 0                    | 0.000125

    Programming Considerations and Conclusions

    This article has addressed three approaches to developing a Math.pow() method in Java ME. While the naive "brute force" geometric decay heuristic is interesting and potentially useful for small problems, the Math.sqrt() adaptation is probably better for most ranges of applications. The best method is probably the Taylor series approximation. Clearly, these three examples do not encompass the only methods for accomplishing the task (e.g., bisection and others located in the references), and we hope that others will provide more efficient techniques that consume fewer resources. One final note: if you are required to develop such a method, consider its application, required parameters, and desired results and precision.