Ralph Haulk

 

 


Tower of Babel and the “Gray Goo”

In the study of nanotechnology, there is an old idea called “gray goo”: swarms of self-replicating nanobots that build so many copies of themselves that they become uncontrollable.
It was a great dream in nanotechnology to create tiny, molecular-scale robots that could literally assemble anything by combining atoms. Imagine being able to build anything simply by rearranging atoms into that object!

Trouble, really big trouble, because they could turn everything on this planet, including you and me, into little nanobots!

The problem was how to create an intelligent “off switch” that would react to every necessary situation and simply stop replication once the optimum number was reached. But what is an optimum number? There would surely be a number that applied to certain objects, but not to all objects, so much would be left to experimentation. And what if the “off switch” broke down?
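To see why everything hinges on that switch, here is a tiny sketch in Python (purely illustrative; the OPTIMUM constant and the switch_works flag are made-up names, not anyone’s real design):

    # A toy model of self-replication with an "off switch".
    OPTIMUM = 1_000_000  # the assumed "optimum number" -- but optimum for what?

    def replicate(population, switch_works=True):
        """Each cycle, every bot builds a copy of itself, doubling the count.
        Replication stops at OPTIMUM only if the off switch still functions."""
        while population < OPTIMUM or not switch_works:
            population *= 2
        return population

    # replicate(1) stops at 1,048,576; replicate(1, switch_works=False) never returns.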

This is an interesting parallel to the Tower of Babel. Imagine you have these little animals called humanoids that can build virtually anything they imagine, and all they have to do is form a central decision-making process, organize, and then focus on building that one thing. You have a problem similar to the gray goo, don’t you? Once they collectively focus on this process, what will act as an “off switch”? What will keep them from continually working on this one project until they have totally destroyed their environment and cannot even grow food to sustain themselves? Is there some kind of “foresight” switch that allows them to produce, realize the goal is unattainable, and then simply stop?
That brings up a problem known as Turing’s halting problem. Alan Turing conceived of a “universal Turing machine,” which existed only in his imagination: it was based on the idea of a computer, but no such computers existed at the time. Left to run on its own time, could such a machine predict whether an arbitrary program fed into it would ever reach an answer, or whether it would run forever? Turing proved it could not be done. Watching the program run does not help: if it is still running, we cannot tell whether it is about to stop or will never stop, and there is no general, predictive way to know in advance. Human programmers can certainly build in particular “off switches,” some quite complex, but even those safeguards do not tell us whether every question actually has an answer, or whether we must simply let the program run. In general, it cannot be known.
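The heart of Turing’s argument can be sketched in a few lines of Python (a sketch only; the halts() function below is hypothetical, and the whole point is that no such function can actually be written):

    def halts(program, data):
        """Hypothetical oracle: True if program(data) eventually stops."""
        raise NotImplementedError("Turing proved no general test exists")

    def paradox(program):
        # Do the opposite of whatever the oracle predicts about a program
        # that is handed its own source.
        if halts(program, program):
            while True:      # loop forever if the oracle says "it halts"
                pass
        else:
            return           # halt at once if the oracle says "it loops"

    # Feeding paradox to itself creates the contradiction: paradox(paradox)
    # halts exactly when halts() says it does not, so halts() cannot exist.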

Before Turing, there was Kurt Gödel, who developed his now famous incompleteness theorem, which tells us that in any consistent axiomatic formalization suitable for number theory, there exist undecidable propositions.

Not only that, there may be an infinity of such undecidables.

Now we come back to the Tower of Babel. Here were these suddenly self-aware humanoids, able to build almost anything they imagined; once organized, each member would simply do his or her part. If the goal was unattainable, or tended toward infinity, they would simply keep doing their part, like cogs in a machine, until they had destroyed their entire environment.

So we have Genesis 11:6: “…Behold, the people is one, and they have all one language, and this they begin to do: and now nothing will be restrained from them, which they have imagined to do”.

The solution was to diversify their language. Simple enough: if each of them had to think more as an individual, then each would have to consider the consequences of his or her acts as an individual. As a side note, this is also an interesting aspect of Gödel’s theorem, because in order to develop it, he had to create a “language”, a “Gödel numbering” system that exactly mirrored the axioms of the mathematics, so that statements about numbers became numbers themselves, creating a self-referencing system! These “languages” consisted of three levels, and Gödel found that in any such system, no matter how formal, there would exist mathematical statements that in effect declared, “I exist, but I cannot be proven in this system”!

In effect, there existed no language that could ever reflect an “off switch” for any formal mathematical system. It had to be added by “outside” programmers!
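The “Gödel numbering” trick can be made concrete with a toy example in Python (an illustration only, far simpler than Gödel’s actual construction; the symbol codes here are arbitrary choices):

    # Give every symbol a code, then encode a formula as a product of prime
    # powers, so that each statement becomes one unique integer.
    SYMBOL_CODES = {"0": 1, "S": 2, "=": 3, "+": 4, "(": 5, ")": 6}
    PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]

    def godel_number(formula):
        """Encode a short formula, e.g. '0=0', as a single integer."""
        n = 1
        for position, symbol in enumerate(formula):
            n *= PRIMES[position] ** SYMBOL_CODES[symbol]
        return n

    # '0=0' becomes 2**1 * 3**3 * 5**1 = 270.  Arithmetic done on 270 is now,
    # indirectly, arithmetic about the statement '0=0' -- the self-reference
    # that lets a system talk about its own statements.
    print(godel_number("0=0"))   # 270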

Back to the Tower of Babel and God’s solution:

But this was not a permanent solution, because down through history, other humanoids would learn how to organize, conquer, and build larger and larger systems of thought, with no restraints. How to stop them? If their one ruler claimed to be God, who would question his/her commands? Where would be the “off switch” that would say, “here and no further”? Anyone who did would be placing his/her own life in jeopardy. Evolution would not tend to favor such individuals. If you are a being with enough power to create humans, your problem would be to keep them from destroying their environment before they reached their full potential. So you would have to create a system that, by its very nature, led to continual splintering and speciation, with no “halting” in sight. You would have to “infect” the humanoids with a system that made them become ever more individualized and less collective, as collectivism would lead to death. That would require a system of rules or laws that, by its very nature, forced the people to question the consequences of their own acts. That would include the promise to Abraham, described in my first essay.


 
