CHAPTER 01.03: SOURCES OF ERROR: Round-off Error


In this segment we're going to talk about round-off errors. There are several possible sources of error whenever you use numerical methods, but we want to concentrate here on just two: one is round-off error, and the other is truncation error. These are the sources of error we are going to talk about, because they come from something over which you may not have as much control as over other errors. If you have made a mistake in programming, for example, or if your logic is wrong, those are not the kinds of errors we are talking about when we talk about numerical methods. So you're going to have two sources of error: one is round-off error, and the other is called truncation error. Let's go ahead and concentrate on what round-off error is.


Now, round-off error is defined as follows: it is the error created due to the approximate representation of numbers, because in a computer you can represent a number only so accurately. For example, take the number 1 divided by 3, and suppose you had a six-significant-digit computer working in decimal notation. Then 1 divided by 3 can only be approximated as 0.333333; a simple rational number like 1 divided by 3 cannot be written exactly in decimal format. So the round-off error you are getting here is the difference between the value of 1 divided by 3 and the value of 0.333333, which in this case is 0.000000333333, and so on and so forth.

You are going to get similar round-off errors from other numbers as well: pi cannot be represented exactly in decimal format, and neither can the square root of 2. So you are finding that there are many, many individual numbers, like 1 divided by 3, or pi, or the square root of 2, which cannot be represented exactly in a computer. That is what creates the round-off error: it is the difference between the number you want to denote and the approximation of it you are actually able to store. So that's the end of this particular segment.
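To make the arithmetic above concrete, here is a minimal sketch in Python that mimics the hypothetical six-significant-digit decimal machine from the lecture by chopping the decimal expansion of a number and then measuring the round-off error. The helper name chop_to_sig_digits is ours, introduced only for illustration; it is not part of the lecture.

import math

def chop_to_sig_digits(x, n):
    """Chop (truncate, not round) x to n significant decimal digits."""
    if x == 0.0:
        return 0.0
    exponent = math.floor(math.log10(abs(x)))   # position of the leading digit
    factor = 10.0 ** (n - 1 - exponent)
    return math.trunc(x * factor) / factor      # drop the remaining digits

true_value = 1.0 / 3.0
stored_value = chop_to_sig_digits(true_value, 6)    # 0.333333
round_off_error = true_value - stored_value

print(f"stored value   : {stored_value}")           # 0.333333
print(f"round-off error: {round_off_error:.10f}")   # ~0.0000003333

# pi and sqrt(2) suffer the same fate on a six-significant-digit machine:
for name, value in [("pi", math.pi), ("sqrt(2)", math.sqrt(2))]:
    approx = chop_to_sig_digits(value, 6)
    print(f"{name}: stored {approx}, error {value - approx:.10f}")

Note that this sketch only imitates a decimal machine: a real computer stores numbers in base 2 rather than base 10, so Python's own floats carry their own (much smaller) round-off error, but the idea is exactly the same.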