The story is set in the mid-21st century. Global warming has led to ecological disasters all over the world and a drastic reduction in the human population. Mankind's efforts to maintain civilization lead to the creation of "mechas," advanced humanoid robots capable of emulating human thoughts and emotions. Among them is David, an advanced prototype mecha created by the Cybertronics company to resemble a human child and to "feel" love for its human owners. Cybertronics tests its creation on one of its employees, Henry Swinton, and his wife, Monica. The Swintons have a son, Martin, who has been placed in suspended animation until a cure can be found for his rare disease. Although Monica is initially frightened of David, she eventually warms to him and activates his imprinting protocol, which irreversibly causes David to feel love for her as a child loves a parent. As he continues to live with the Swintons, David is befriended by Teddy, a mecha toy who takes responsibility for David's well-being.
Martin is suddenly cured and brought home, and a sibling rivalry ensues between him and David. Martin's scheming backfires when he and his friends activate David's self-protection programming at a pool party. Martin is saved from drowning, but David's actions prove too much for Henry. The Swintons decide to have David destroyed at the factory where he was built, but Monica instead leaves him, along with Teddy, in a forest to live as unregistered mechas. David is captured for an anti-mecha Flesh Fair, an event where useless mechas are destroyed before cheering crowds. David is nearly killed, but the crowd is swayed by his realistic nature and he escapes along with Gigolo Joe (Jude Law), a male prostitute mecha on the run after being framed for murder.
The two set out to find the Blue Fairy, whom David remembers from the story The Adventures of Pinocchio. As in the story, he believes she will transform him into a real boy, so that Monica will love him and take him back. Joe and David make their way to the decadent metropolis of Rouge City, where information from a holographic answer engine called "Dr. Know" eventually leads them to the top of Rockefeller Center in the flooded ruins of Manhattan. There, David's human creator, Professor Hobby, excitedly tells him that finding him was a test, one that has demonstrated the reality of his love and desire. A disheartened David lets himself fall from a ledge into the ocean, but Joe rescues him just before Joe himself is captured by the authorities.
David and Teddy take a submersible to the fairy, which turns out to be a statue from a submerged attraction at Coney Island. They become trapped when the park's Ferris wheel falls onto their vehicle. Believing the Blue Fairy to be real, David asks her to turn him into a real boy, repeating his wish without end until the ocean freezes. Two thousand years later, Manhattan is buried under several hundred feet of glacial ice and humans are extinct. Mechas have evolved into an alien-looking humanoid form, and they find David and Teddy, still-functional mechas who knew living humans. David wakes and realizes the fairy was only a statue. Using David's memories, the mechas reconstruct the Swinton home and explain to him, through a recreation of the Blue Fairy, that he cannot become human. However, they recreate Monica from a lock of her hair that Teddy had faithfully saved, though she will live for only a single day and the process cannot be repeated. David spends the happiest day of his life playing with Monica and Teddy. As she drifts off to sleep for the last time, Monica tells David that she loves him and has always loved him. Having found the "everlasting moment" he had been looking for, David closes his eyes, falls asleep for the first time, and goes "to that place where dreams are born."
Builders of computer systems often need information about floating-point arithmetic. There are, however, remarkably few sources of detailed information about it. One of the few books on the subject, Floating-Point Computation by Pat Sterbenz, is long out of print. This paper is a tutorial on those aspects of floating-point arithmetic (floating-point hereafter) that have a direct connection to systems building. It consists of three loosely connected parts. The first part, "Rounding Error" on page 173, discusses the implications of using different rounding strategies for the basic operations of addition, subtraction, multiplication and division. It also contains background information on the two methods of measuring rounding error, ulps and relative error. The second part discusses the IEEE floating-point standard, which is rapidly becoming accepted by commercial hardware manufacturers. Included in the IEEE standard is the rounding method for basic operations. The discussion of the standard draws on the material in the section "Rounding Error" on page 173. The third part discusses the connections between floating-point and the design of various aspects of computer systems. Topics include instruction set design, optimizing compilers and exception handling.
I have tried to avoid making statements about floating-point without also giving reasons why the statements are true, especially since the justifications involve nothing more complicated than elementary calculus. Those explanations that are not central to the main argument have been grouped into a section called "The Details," so that they can be skipped if desired. In particular, the proofs of many of the theorems appear in this section. The end of each proof is marked with the * symbol; when a proof is not included, the * appears immediately following the statement of the theorem.
Squeezing infinitely many real numbers into a finite number of bits requires an approximate representation. Although there are infinitely many integers, in most programs the result of integer computations can be stored in 32 bits. In contrast, given any fixed number of bits, most calculations with real numbers will produce quantities that cannot be exactly represented using that many bits. Therefore the result of a floating-point calculation must often be rounded in order to fit back into its finite representation. This rounding error is the characteristic feature of floating-point computation. "Relative Error and Ulps" on page 176 describes how it is measured.
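As a concrete illustration (a minimal Python sketch; the particular values are ours, not the paper's), the decimal literal 0.1 has no exact binary representation, so the stored double is a nearby correctly rounded value, off by less than half an ulp:

```python
from decimal import Decimal
import math

x = 0.1
# Decimal(float) reveals the exact value the hardware actually stores:
# 0.1000000000000000055511151231257827021181583404541015625
print(Decimal(x))

# math.ulp(x) is the gap between x and the next representable double
print(math.ulp(x))  # about 1.39e-17

# the representation error is below half an ulp, as correct rounding requires
print(abs(Decimal(x) - Decimal("0.1")) < Decimal(math.ulp(x)) / 2)  # True
```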
Since most floating-point calculations have rounding error anyway, does it matter if the basic arithmetic operations introduce a little bit more rounding error than necessary? That question is a main theme throughout this section. "Guard Digits" on page 178 discusses guard digits, a means of reducing the error when subtracting two nearby numbers. Guard digits were considered sufficiently important by IBM that in 1968 it added a guard digit to the double precision format in the System/360 architecture (single precision already had a guard digit), and retrofitted all existing machines in the field. Two examples are given to illustrate the utility of guard digits.
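To make the effect concrete, here is a small Python simulation (our own sketch, with values in the style of the paper's examples) of subtraction in beta = 10, p = 3 arithmetic. Without a guard digit, the smaller operand loses its low-order digit during alignment, so 10.1 - 9.93 comes out as 0.2 instead of 0.17; a single guard digit recovers the exact answer:

```python
from decimal import Decimal, ROUND_DOWN

def subtract_p3(x: str, y: str, guard_digits: int = 0) -> Decimal:
    """Simulate x - y in beta=10, p=3 arithmetic (x >= y >= 0).

    The smaller operand is aligned to the larger one's exponent; only
    p + guard_digits significand digits survive the alignment shift."""
    p = 3
    xd, yd = Decimal(x), Decimal(y)
    e = max(xd.adjusted(), yd.adjusted())                 # exponent of larger operand
    step = Decimal(1).scaleb(e - (p + guard_digits) + 1)  # smallest digit retained
    xt = xd.quantize(step, rounding=ROUND_DOWN)           # digits shifted past the
    yt = yd.quantize(step, rounding=ROUND_DOWN)           # kept positions are lost
    return xt - yt  # rounding back to p digits omitted; both results
                    # below already fit in 3 significand digits

print(subtract_p3("10.1", "9.93", guard_digits=0))  # 0.2   (off by 0.03)
print(subtract_p3("10.1", "9.93", guard_digits=1))  # 0.17  (exact)
```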
The IEEE standard goes further than just requiring the use of a guard digit. It gives an algorithm for addition, subtraction, multiplication, division and square root, and requires that implementations produce the same result as that algorithm. Thus, when a program is moved from one machine to another, the results of the basic operations will be the same in every bit if both machines support the IEEE standard. This greatly simplifies the porting of programs. Other uses of this precise specification are given in "Exactly Rounded Operations" on page 185.
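One way to observe this determinism (an illustrative Python sketch, not part of the paper) is to inspect the raw bit patterns of a few exactly rounded results; any machine with correctly rounded IEEE 754 operations must produce these exact bytes:

```python
import math
import struct

# IEEE 754 requires +, -, *, / and sqrt to behave as if computed exactly
# and then rounded, so the operands alone determine every bit of the result.
for value in (0.1 + 0.2, 1.0 / 3.0, math.sqrt(2.0)):
    bits, = struct.unpack("<Q", struct.pack("<d", value))
    print(f"{value!r:22} -> 0x{bits:016x}")
```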