
1.3 Efficient Polynomial Evaluation

We have seen that polynomials may provide ways of approximating numbers and functions we are interested in. For this reason it is useful to think about how to evaluate a polynomial efficiently. The standard form of a polynomial is

P(x) = a_0 + a_1 x + a_2 x^2 + ... + a_N x^N.

If each power x^k is computed from scratch, one needs k-1 multiplications to compute its value, plus one more to multiply by the coefficient a_k. Then one would need

1 + 2 + ... + N

multiplications. This number equals N(N+1)/2 and therefore grows quickly with N. Of course, anybody who would have to compute this by hand would immediately come up with the idea of computing the powers successively and using the lower powers in the polynomial instead of discarding them. Then you need

(N - 1) + N = 2N - 1

multiplications, a significant improvement. There is, however, an improvement on this that does not immediately seem simpler but has a very simple structure as a program. If we consider, e.g., the polynomial

a_0 + a_1 x + a_2 x^2 + a_3 x^3,

we can rewrite it as

a_0 + x(a_1 + x(a_2 + x a_3)).

The presence of so many parentheses makes it look complicated, but it only requires 3 multiplications. In general,

P(x) = a_0 + x(a_1 + x(a_2 + ... + x(a_{N-1} + x a_N)...)),

which needs only N multiplications.

The following program evaluates a polynomial with given coefficients using what is

known as the Horner scheme.

def P(x, a, N):
    # Horner scheme: evaluate a[0] + a[1]*x + ... + a[N]*x**N
    p = a[N]
    for n in range(N-1, -1, -1):
        p = a[n] + x*p   # one multiplication and one addition per coefficient
    return p

The actual computation takes two lines; the order in which the operations are carried out makes the parentheses redundant.
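As a quick check of this function, we can evaluate 1 + 2x + 3x^2 at x = 2, where the value is 1 + 4 + 12 = 17:

a = [1, 2, 3]        # coefficients a[0], a[1], a[2] of 1 + 2x + 3x^2
print(P(2, a, 2))    # prints 17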

Homework Problem Set 4

1. How can you evaluate a polynomial more efficiently if it is odd or even? How

does this apply to?

2. Show how to evaluate the function efficiently.

Homework Problem Set 5

1. Write a Python function that evaluates for 1 using the Taylor polynomial

for this interval with an error of at most 10 and evaluating it by the Horner scheme.

2. Write a Python function that evaluates √1 for 3 using the Taylor

polynomial for the midpoint of this interval with an error of at most 10 and evaluating

it by the Horner scheme.


2.2+2.3 Errors: Definitions, Propagation, Sources and Examples I

We are rearranging parts of Chapter 2 and begin by defining errors. Talking about a quantity x, we denote its true value by x_T and its approximate value by x_A. We have already defined the absolute error,

Δx = x_T - x_A.

In addition, we now introduce the relative error,

δx = (x_T - x_A) / x_T.

This definition, and any statement about relative errors, of course only makes sense if the true value is different from zero.

As an example, we can use the approximation 1.414 for √2. Now √2 is an irrational number having an infinite decimal expansion. For that reason, we compare the present approximation with a better one; then we get an approximate value for the error. We have

√2 - 1.414 ≈ 1.414213562373095 - 1.414 = 0.000213562373095

and

0.000213562373095 / 1.414213562373095 ≈ 0.000151011402222.

Relative errors are often expressed in percentages, in this case 0.0151011402222%.
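These numbers are easy to reproduce in Python; a small sketch, using math.sqrt(2) as the "better" approximation mentioned above:

import math

x_T = math.sqrt(2)            # better approximation of the true value
x_A = 1.414                   # our approximation
abs_err = x_T - x_A           # absolute error
rel_err = abs_err / x_T       # relative error
print(abs_err, rel_err, 100*rel_err)   # the last number is in percent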

Errors caused by the finite accuracy of our calculations will only be discussed in the next section. In Chapter 1 we saw that one source of error is mathematical approximation error, in that case caused by replacing an infinite sum by a finite one, and we also saw that errors come from carrying only a finite number of digits.

In many cases calculations are based on measurements, and then measurement errors play a role. Mistakes in procedures, and in the way they are carried out, also produce errors, but we will not discuss those much. Another source of errors is incomplete modeling. As an example of this, the US population from 1790 to 1860 is well modeled by the function

N(t) = 3929000 exp(0.02975 (t - 1790))

The comparison with actual data, from Wikipedia, is as follows (populations in thousands).

Year    1790    1810    1830     1860     1870     1900      2000
Model   3,929   7,123   12,915   31,528   42,452   103,636   2,030,191
Actual  3,929   7,240   12,866   31,443   38,558   76,212    281,422

You are invited to speculate about the reasons for the first big difference in 1870.
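The model row of the table can be reproduced with a few lines of Python (values divided by 1000 to give thousands):

from math import exp

def N(t):
    # population model fitted to the 1790-1860 data
    return 3929000*exp(0.02975*(t - 1790))

for year in [1790, 1810, 1830, 1860, 1870, 1900, 2000]:
    print(year, round(N(year)/1000))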

In Chapter 1 we estimated the absolute error coming from the mathematical approximation. There, as in many other cases, we just made sure that the error is smaller than a given quantity. In other cases we want very good approximations of the errors, which we can then use to improve the approximation itself.

Absolute errors are often very useful, but they change if we introduce different units for the quantities: for a length, the error in meters will be different from the error in kilometers. Relative errors, in contrast, are independent of the units; they are non-dimensional, as one says in science and engineering.

Both types of errors interact differently with the arithmetic operations. If we have two quantities x and y, then for their sum x + y,

Δ(x + y) = Δx + Δy,

and for the difference,

Δ(x - y) = Δx - Δy.

Note that we are assuming that these arithmetic operations are carried out exactly; errors owing to such inaccuracies are going to be addressed later.

Now, if we multiply and divide, this is clearly not going to be true. If we multiply a quantity by 10, its error will be ten times as large. In this case, at least for small errors, the relative error is much better behaved. Writing x_A = x_T (1 - δx) and y_A = y_T (1 - δy), we have, separating the error stemming from x and the error originating from y,

x_A y_A = x_T y_T (1 - δx)(1 - δy), so δ(xy) = δx + δy - δx δy.

If we assume that both relative errors are small, then their product is even smaller, and δ(xy) ≈ δx + δy. As we can obtain division by multiplying by the reciprocal of the divisor, we consider 1/y. We obtain

δ(x/y) = (δx - δy) / (1 - δy) ≈ δx - δy.

This requires that δy ≠ 1, i.e. that y_A ≠ 0. This gives us some confidence in our results unless we mix + and - with * and /. Usually we will not run into too much trouble, but as we have seen we sometimes do.
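The danger of mixing subtraction with multiplication and division can be made concrete by a small numerical experiment; the values below are hypothetical, chosen only so that the inputs have relative errors of about 10^-6:

x_T, y_T = 1.2345678, 1.2345589    # "true" values
x_A, y_A = 1.2345666, 1.2345601    # approximations, relative errors about 1e-6
d_T = x_T - y_T                    # true difference
d_A = x_A - y_A                    # approximate difference
print((d_T - d_A)/d_T)             # relative error of the difference, about 0.27

The relative error of the difference is about 27%, although each input was accurate to six digits; this effect is known as cancellation.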

In many cases we will want to get upper bounds for the absolute and relative errors, knowing only such bounds for the input quantities. In such cases we use the triangle inequality,

|Δ(x ± y)| ≤ |Δx| + |Δy|.

From this inequality we then obtain

|δ(xy)| ≤ |δx| + |δy| + |δx| |δy| and |δ(x/y)| ≤ (|δx| + |δy|) / (1 - |δy|).

In the latter case we need |δy| < 1. As I am not aware of a sign for "smaller than or almost equal", we do not write that version of these results down. We can use the results we just obtained to derive some reasonable estimates for the error of


an expression, if we make some assumptions about the errors of the input data. Assuming all input numbers have an absolute error with an absolute value of at most 10^-5, we can give an approximate upper bound for the error of the expression

(1.45781 - 2.78349) / (7.25 + 10.567).

The absolute error of the numerator and of the denominator is at most 2 × 10^-5. The numerator is approximately -1.32568, therefore the absolute value of its relative error is bounded by 1.51 × 10^-5; the denominator is approximately 17.817, so its relative error is smaller than or equal to 1.13 × 10^-6. Note that we are fairly generous in our estimates; as the errors are fairly small, we do not need very high accuracy. The absolute value of the relative error of the quotient is then bounded by 1.51 × 10^-5 + 1.13 × 10^-6 = 16.23 × 10^-6. The quotient is approximately -0.0744053, so a reasonable bound for the absolute error is 0.0000012. Note that we have not taken the errors of the division itself into account; they will be much smaller.
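The steps of this estimate are easy to check mechanically; a small sketch following the computation above:

eps = 1e-5                      # assumed bound on each input's absolute error
num = 1.45781 - 2.78349         # numerator, about -1.32568
den = 7.25 + 10.567             # denominator, about 17.817
rel_num = 2*eps/abs(num)        # relative error bound of the numerator
rel_den = 2*eps/abs(den)        # relative error bound of the denominator
rel_q = rel_num + rel_den       # bound for the relative error of the quotient
print(abs(num/den)*rel_q)       # absolute error bound, about 1.2e-6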

Homework Problem Set 6

1. Show that the relative error of the square of an expression is approximately

bounded by double the relative error of the expression.

2. Give an estimate for the absolute errors in the evaluated expressions below, following the example just above, assuming all input numbers have an absolute error with an absolute value of at most 10^-5.

2.1 Computer Representation of Numbers

Now is the time to think a bit more about the way numbers are represented and processed in computers. The numbers are binary, using a position-based system with 0 and 1 as the only digits. As the processors in use now are typically 64-bit processors, we will only discuss numbers based on 64-bit "words". Before we do this, let us remember what we do when calculating in the decimal system. This is based on the fact that every real number can be written in the form

x = ± Σ_{k ≤ N} d_k 10^k

with d_k in {0, 1, ..., 9}. Unless x = 0, we can, and do, choose N so that d_N ≠ 0. The problem with this representation is that it still requires an infinite amount of information, so we need to give up the absolute accuracy of describing a number and just use a finite number of digits, e.g. n digits, obtaining an approximation of the number, which we denote by x̃.

There are at least two processes for obtaining x̃. If we keep the leading n digits d_N, ..., d_{N-n+1} and replace all later digits by 0, a process called chopping, then |x - x̃| ≤ 10^(N-n+1); since |x| ≥ 10^N, the absolute value of the relative error is bounded by 10^(1-n).

In the more familiar process of rounding we choose the same values if the first discarded digit d_{N-n} is at most 4; if d_{N-n} ≥ 5, we need to carry forward and then have the digit d_{N-n+1} increased by 1, with the usual changes in the other digits. Then we have |x - x̃| ≤ (1/2) 10^(N-n+1), and the absolute value of the relative error is bounded by (1/2) 10^(1-n).

Now n is the number of digits given, and this gives us an alternative way of talking about relative error. We say a quantity x is given with an accuracy of n significant digits if the relative error satisfies

|δx| ≤ (1/2) 10^(1-n).

This means that the error is not bigger than what we can achieve by using n digits for writing the number down. We can of course extend this to non-integer n by using the same bound with real n.
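As an illustration, rounding to n significant decimal digits can be written as a small Python helper (round_sig is not a library function, just a sketch):

import math

def round_sig(x, n):
    # round x to n significant decimal digits
    if x == 0.0:
        return 0.0
    N = math.floor(math.log10(abs(x)))   # position of the leading digit
    return round(x, n - 1 - N)

print(round_sig(1.414213562373095, 4))   # 1.414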

Another common way of writing numbers is scientific notation,

x = ± (d_N . d_{N-1} d_{N-2} ...) × 10^N.

The parenthesis then contains a number that is at least 1 and smaller than 10, as part of the usual way of writing scientific notation.

These two forms of real numbers carry over to binary numbers as well. As you probably know, one can replace the 10 by any integer bigger than 1, and in the internal calculations done by computers 10 is replaced by 2, although input and output of computers are typically in decimal form. Again, any real number can be written as

x = ± Σ_{k ≤ N} b_k 2^k with b_k in {0, 1}.

Now each of the b_k takes up exactly one bit. We again have to confine ourselves to a finite number of digits, so we obtain approximations in the same way as before.

We can write every integer in this form with b_k = 0 for k < 0. A 64-bit integer uses 1 bit for the sign and then 63 bits for the absolute value; thus all such integers are of the form

m = ± Σ_{k=0}^{62} b_k 2^k.

They are therefore at most 2^63 - 1 in absolute value, as the biggest absolute value is

Σ_{k=0}^{62} 2^k = 2^63 - 1.
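Python's own integers have no fixed width, but the 63-bit limit can be observed in fixed-width integer types, e.g. NumPy's (assuming NumPy is installed):

import numpy as np

print(np.iinfo(np.int64).max)               # 9223372036854775807
print(np.iinfo(np.int64).max == 2**63 - 1)  # True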

For integers below this size limit there is no inaccuracy in the representation, while bigger numbers cannot be represented at all. Let us convert a few integers from binary to decimal and back. The notation (...)_2 refers to binary numbers, while decimal numbers are written as usual. Then we have, for example,

(101101)_2 = 32 + 8 + 4 + 1 = 45.

To reverse the conversion we can repeatedly divide by 2 with remainder: we write the result as a = 2q + r with r in {0, 1}, which says that r is the last binary digit of a and q carries the remaining digits. Let us consider the number 107. We can divide it by 2 with remainder and obtain

107 = 2·53 + 1.

Dividing 53 by 2 we get 53 = 2·26 + 1. Continuing, 26 = 2·13 + 0, 13 = 2·6 + 1, 6 = 2·3 + 0, 3 = 2·1 + 1, 1 = 2·0 + 1. Reading the remainders from last to first, therefore,

107 = (1101011)_2.
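The repeated division is easy to turn into a program; a small sketch (Python's built-in bin does the same):

def to_binary(m):
    # convert a positive integer to its binary digit string
    # by repeated division by 2 with remainder
    digits = ""
    while m > 0:
        m, r = divmod(m, 2)       # m = 2*q + r with r in {0, 1}
        digits = str(r) + digits  # remainders are read from last to first
    return digits

print(to_binary(107))   # 1101011
print(to_binary(45))    # 101101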

We remember that calculations with decimal numbers are based on the knowledge of

addition and multiplication tables that allow us to calculate sums and products of any

numbers using the associativity, commutativity and distributivity of the operations for

numbers. The big advantage of binary numbers is the simplicity of the multiplication

table

*   0   1
0   0   0
1   0   1

Now a computer can carry out all arithmetic operations with integers with absolute accuracy, unless the input or the result produces an overflow. In the case of division we can use division with remainder and still have complete accuracy. If we want an output in the form of a single number, for division this will often have to be in the form of a fraction, thus not producing an integer.

We have discussed the representation of integers. Now let us consider a subclass of the rational numbers. A format analogous to scientific notation is the floating-point number. In the 64-bit version the bits

b_63 b_62 ... b_1 b_0

are used as follows. The first bit determines the sign s: we have s = -1 if b_63 = 1 and s = +1 if b_63 = 0. Let us denote by e = (b_62 b_61 ... b_52)_2 the integer given by the next 11 bits; then 0 ≤ e ≤ 2047. Now let E = e - 1023, so that -1023 ≤ E ≤ 1024. Also let the remaining 52 bits correspond to the fraction f = (0.b_51 b_50 ... b_0)_2. Then, if -1023 < E < 1024, the word b represents the number

s (1 + f) 2^E.

This only allows us to represent numbers that are at least 2^(-1022) in absolute value. For smaller numbers, with E = -1023, the numbers represented are

s f 2^(-1022).

This leaves no unusual gaps between the numbers, and we can represent zero as well. The smallest non-zero absolute value we can have is 2^(-1074). Also, if E = 1024, then f = 0 represents +∞ or -∞ depending on s; with any other f it represents NaN, "not a number". The largest regular number is close to 2^1024.
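The three fields of a 64-bit word can be inspected directly; a sketch using the standard struct module:

import struct

def float_bits(x):
    # pack x as a big-endian 64-bit double and split the 64 bits into
    # sign (1 bit), exponent e (11 bits), and fraction f (52 bits)
    bits = ''.join(f'{byte:08b}' for byte in struct.pack('>d', x))
    return bits[0], bits[1:12], bits[12:]

print(float_bits(1.0))   # ('0', '01111111111', '00...0'): e = 1023, E = 0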

With these numbers, known as floating-point numbers, we are trying to cope with the everywhere dense set of real numbers, so not every number can be represented precisely. As the range of numbers is so large, it does not make much sense to talk about an absolute error, but relative errors do make sense. If

2^E ≤ |x| < 2^(E+1) with -1022 ≤ E ≤ 1023,

then x = s (1 + f) 2^E with a suitable, possibly infinite, binary sequence f = (0.b_1 b_2 b_3 ...)_2. If, with the same s and E, we keep only the first 52 bits of f, we obtain a representable number x̃. This process of finding an approximation for x is called chopping; its relative error is bounded by 2^(-52). Instead we could be rounding, which is equal to chopping when b_53 = 0 and adds 1 to the last binary digit kept when b_53 = 1. Except when E = 1023 and all digits were already one, so that rounding up would require the exponent 1024, this is possible and cuts the error bound in half, to 2^(-53).
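The bound 2^(-52) is exactly the "machine epsilon" that Python reports for its floats:

import sys

print(sys.float_info.epsilon)             # 2.220446049250313e-16
print(sys.float_info.epsilon == 2**-52)   # True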

Now we need to return to the arithmetic operations. These are calculated to a somewhat higher accuracy, and then usually rounded off to the nearest representable number, at least if we specify that this is what we want; it is also the default setting. Therefore, the accuracy of floating-point operations in terms of relative error is bounded more by the accuracy of the representation than by the accuracy of the calculation. This has a long tradition: electro-mechanical calculators often had 10-digit inputs and 20-digit outputs and left the rounding to the operator.

One aspect of the binary representation of numbers that can cause some problems in programming is that some numbers having a finite decimal representation do not have a finite binary one. For example,

0.2 = (0.0011 0011 0011 ...)_2,

with the block 0011 repeating forever. Therefore, the computer never represents the number 0.2 exactly. This fact can lead to unexpected problems with programs that are meant to stop when a variable reaches a certain value.

p = 1.0
b = 1.0
while b > 0.0:
    p = p/2          # keep halving p
    a = 1.0 + p
    b = a - 1.0      # in exact arithmetic b would never reach 0
    print(a, b, p)
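One can see the stored value of 0.2 by printing more digits than the default display shows:

print(f"{0.2:.20f}")            # 0.20000000000000001110
print(0.2 + 0.2 + 0.2 == 0.6)   # False, because neither side is exact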

Homework Problem Set 7

1. Run the program

a = 0.0
d = 0.2
for k in range(0, 101):
    print(a - k*d)   # difference between repeated addition and multiplication
    a = a + d

for three different values of d, two that do not have a finite binary representation and one that does, and show the results.

2. Consider the program

p = 1.0
b = 1.0
while b > 0.0:
    p = p/2
    a = 1.0 + p
    b = a - 1.0
    print(a, b, p)

Run it and comment on the result.

We finish with a theorem about which numbers can be written in terms of a finite binary

expansion.

Theorem: If the non-zero real number x can be written in terms of a finite binary expansion, then it is rational and can be written in the form

x = p / 2^m,

where p is an integer and m ≥ 0. If x can be written in the form x = p/q with two integers p, q that have no common factor bigger than 1, with q > 0, and q is not a non-negative power of 2, then x does not have a finite binary expansion.

Proof: If

x = ± Σ_{k=-m}^{N} b_k 2^k

with m ≥ 0 and b_k in {0, 1}, then

x = ± ( Σ_{k=-m}^{N} b_k 2^(k+m) ) / 2^m.

The numerator is an integer, thus x has the form p / 2^m. This proves the first part of our claim. For the remainder we use what is known as the fundamental theorem of arithmetic. It is not hard to prove, but we will not do so, as it would lead us too far afield. It states the following.

Any positive integer can be written in exactly one way as a product of prime numbers. This means that

n = p_1^(e_1) p_2^(e_2) ··· p_r^(e_r)

with prime numbers p_1 < p_2 < ··· < p_r and positive integers e_1, ..., e_r. If also

n = q_1^(f_1) q_2^(f_2) ··· q_s^(f_s)

with primes q_1 < ··· < q_s and positive integers f_1, ..., f_s, then r = s, p_i = q_i and e_i = f_i for all i.

To prove the second part, we need to show that x = p/q cannot have a finite binary expansion. If it had, then by the first part we would have

p / q = r / 2^m

with an integer r and m ≥ 0. In addition, by canceling common factors we can make sure that m = 0 or that r is odd. As r is odd it does not contain a factor 2, so r and 2^m do not have any prime factors in common, and p and q did not either. Thus both sides are fractions in lowest terms, and by the uniqueness of the prime factorization we get q = 2^m and p = r. Therefore q is a non-negative power of 2. This is contrary to our assumption.
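The theorem can be checked with Python's fractions module: converting a float to a Fraction always yields a denominator that is a power of 2, since the float itself has a finite binary expansion:

from fractions import Fraction

print(Fraction(0.25))                       # 1/4, denominator 2**2
print(Fraction(0.2))                        # 3602879701896397/18014398509481984
print(Fraction(0.2).denominator == 2**54)   # True: 0.2 is stored as p/2**54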

