Science
As a consequence of Europe’s religious wars, Western nations turned to natural philosophy (the precursor to modern science and its scientific method) as the public source of authority, rather than to religious doctrine and philosophical tradition. The West’s turn to science has had unparalleled success. At the same time, treating science as the sole source of authority carries dire consequences that threaten human liberty itself, in terms of an individual’s being and worth.
The scientific method is the testing of hypotheses through critical observation and the making of predictions that further establish or refine each hypothesis. The method came into being as industry shifted from agrarian economies to market economies, which required precision, record-keeping, and realized goals. The Protestant Reformation’s weakening of the Catholic Church fostered both the market economy and the natural sciences: the common people readily took advantage of newly produced almanacs, compasses, encyclopedias, farming methods, and new modes of transportation such as the steam engine.
Sir Isaac Newton (1642-1727), himself a Christian, introduced the most wide-ranging and practical application of science for personal as well as public life. In his greatest work, Mathematical Principles of Natural Philosophy, Newton introduced laws of mechanics, motion, and gravity. He not only predicted the motion of celestial bodies; he also described the mechanical behavior of objects on earth, an understanding that scientists and engineers still employ today. Newton’s work predicted natural phenomena so well that its inability to account for the microscopic behavior of light and other subatomic phenomena brought about the need for contemporary physics, featuring quantum mechanics.
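To make that predictive reach concrete, here is a minimal sketch, for illustration only and using standard reference constants, showing how Newton’s law of gravitation, via the form of Kepler’s third law that follows from it, predicts Earth’s orbital period:

```python
# Minimal illustration: Newton's F = G*M*m/r^2 implies T = 2*pi*sqrt(a^3 / (G*M))
# for a near-circular orbit. Standard reference constants are used below.
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # mass of the Sun, kg
A_EARTH = 1.496e11   # mean Earth-Sun distance, m

period_s = 2 * math.pi * math.sqrt(A_EARTH**3 / (G * M_SUN))
print(f"Predicted orbital period: {period_s / 86400:.1f} days")  # about 365 days
```

The same law that recovers the roughly 365-day year also describes a falling object on earth, which is the unification of celestial and terrestrial mechanics described above.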
Though we enjoy the profound benefits of science in every aspect of our lives, we also see the negative side of science’s standing as the only source of public authority: we observe how science disregards individual perspectives and sensibilities as we experience the ill effects of such things as government bureaucracies. We see the personal harm that occurs when policy makers, health professionals, employers, and economists undermine one’s immediate well-being on the basis of rational calculation rather than heartfelt decisions. We personally experience the ill effects of science’s indifference to our subjective sensibilities as we witness such things as the withholding of medical services from the elderly for the sake of lowering healthcare costs, or the abortion of unwanted pregnancies because we fail to grasp when the marvel of human sentience begins.
It is widely known that scientists question the significance of the human being itself, as they see our coming into being as the outcome of a chance event followed by a long process of adaptation through natural selection, which favors an organism’s fitness to survive. The problem is that the founders of modern Western states framed our laws in deference to the significance and rights of the individual; yet, as the Church continues to fade in public significance, policy makers continue to ignore individual sensibilities in favor of generalized notions that consider the health of an increasingly centralized state.
The Church has been hard pressed to defend its doctrine that the world and humanity proceed from a divine creator. Unfortunately, the Church has further diminished its public esteem by putting forth insufficient creationist and intelligent-design arguments, which scientists have exposed as mere tautologies, that is, redundant and dogmatic statements that present no derived or predictive evidence.
Still, it is little known that science itself has failed to realize its goal of reducing all known phenomena to a natural, mechanical, and predictive explanation. The Achilles’ heel of contemporary science is that scientists cannot objectively observe subatomic phenomena without involving themselves in their analysis by mathematically normalizing the phenomena against the classically described world that we observe. This process of normalization, which employs Max Planck’s constant, Albert Einstein decried as falling short of truly objective science, even though the normalizing process has been highly fruitful in its applications to computer and nuclear science. Nor have scientists been able to discern the cause of the Big Bang or the manner in which energy emanates in discrete, quantized forms to bring about the uniform world that we observe. Because the Church has failed to answer these questions, science remains the sole authority.
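For readers unfamiliar with the constant in question, the short illustration below (standard physics, offered only as a gloss on the paragraph above) shows what Planck’s constant of proportionality does: it ties a light wave’s frequency to the discrete quantum of energy that the wave can deliver.

```python
# Planck's relation E = h * f: energy is exchanged only in integer multiples
# of this quantum, which is the "constant of proportionality" referred to above.
H = 6.626e-34   # Planck's constant, J*s
C = 2.998e8     # speed of light, m/s

wavelength_nm = 500.0                      # green light, as an example
frequency_hz = C / (wavelength_nm * 1e-9)
energy_j = H * frequency_hz
print(f"f = {frequency_hz:.3e} Hz, E = {energy_j:.3e} J per photon")
```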
The Landscape of Truth's third chapter, entitled Immanuel’s Law, answers these questions handily. The chapter describes how modern science came into being; describes the Church’s weakness; offers an objective explanation for the free being of humanity’s spirit and soul; and offers an objective and derivative explanation for the three persons of God.
Creator-Design vs Natural-Evolution Approach to Discovering Consciousness in the Brain
Published November 3, 2022 - thelandscapeoftruth.com
Factionalism’s Threat to Democracy as the People Lose a Common Sense of Morality
Any astute student of American history cannot deny that Christianity has provided a moral background that influenced cultural norms for all Americans, whether individual Americans have been Christian or not. For this reason, many find a recent report by the Pew Research Center alarming. In the report, entitled “Modeling the Future of Religion in America”, the Pew Research Center concludes that “If recent trends in religious switching continue, Christians could make up less than half of the U.S. population within a few decades.”
The report projects that by the year 2070 many Americans will identify as religiously unaffiliated. However, since Americans value the freedom of religion, the report, on its face, does not uncover any threat to the American way of life: the ability of people to choose whether they want to be affiliated with a religious institution has been safeguarded as a core constitutional right since the nation’s founding. To Americans, freedom from religion is as important as freedom of religion.
A truly astute student of American history would also recognize that the careful balance between religious and non-religious Americans’ acceptance of one another’s beliefs is the fabric of the nation. Even the framers of the United States Constitution reflected this balance: some framers were agnostic and some were devout Christians; yet they all agreed that a literate populace with strong religious faith renders a common sense of morality, the necessary glue holding a democratic republic together. In point of fact, the framers understood that the populace’s apprehension of a universal morality is the buttress against the greatest threat to any true democracy: factionalism, which induces government, commercial, and even religious powers to pursue their own self-interests despite the greater good of all. When political factions weaponize government institutions against one another, without regard to a common understanding of the nation’s constitutional law, democracy falls. Indeed, as we see the Church lose its public influence, we see factionalism weaken American democracy.
The Public Threat of Scientific Advancement without Clear Moral Oversight
The political balance between Christian and non-religious groups held until the end of the Cold War, when America’s dominant Christian culture encountered competing cultures. The post-Cold War period ushered in a global economy, and a resulting global multicultural community formed. The mass immigration of non-Protestant peoples then changed the cultural background of America. And facilitating multiculturalism, the internet’s World Wide Web emerged with its prolific social media.
But while the political balance persisted between Christian and non-religious groups, the applied sciences proved to be the mediator between the people as a whole, even though the sciences seemed to undermine the justification for religious faith altogether. From the very dawn of the United States of America, advances in the sciences seemed not only to invalidate religious belief in God’s existence, but also to invalidate the belief that we possess a free soul with the capacity to make decisions independent of environmental and genetic impressions. First, just before the signing of the US Constitution, the German philosopher Immanuel Kant seemingly demonstrated that theologians and philosophers cannot make logical statements that prove the existence of God or of a unified, independent self: Kant showed that such theological statements are not suitable scientific hypotheses from which one can test and make predictions about material behavior that qualify as law. Several decades later, Charles Darwin published his theory of evolution, which eventually fostered the opinion that all living beings derived from natural processes, without regard to a divine creator. Soon after, physicists discovered that energy itself is exchanged only in discrete quanta, with the effect that subatomic particles have only statistical appearances within spacetime. Thus quantum physicists now conclude that spacetime itself has a statistical foundation, so that the truth concerning anything cannot be known with utter specificity.
Undeniably, contemporary science has profoundly benefited modern society with technological advancements, even though the secular framing of science diminishes the efficacy of religious belief. Even so, scientific advancements without the general public’s universal moral oversight often become the public’s greatest threat. For example, domestically, mass murders with advanced assault weapons at public schools and events occur more frequently; internationally, the proliferation of nuclear weaponry among rogue nations advances unabated. Also, away from the general public’s eye, government-funded research into deadly viruses continues, despite the risk of exposing the public to new pandemics. And lastly, artificial intelligence (A.I.) advances in new military, commercial, and medical technology, regardless of the risk that A.I. will amplify nuclear, conventional-weapon, biological, and privacy threats.
The Needed Scientific Proof for God’s Existence (the Creator-design approach to determining consciousness’ origin in the brain)
As we consider these dire threats to modern democratic society, which increasingly lacks the general public’s ethical oversight, we may take comfort from the fact that the advent of artificial intelligence (A.I.) forces scientists, A.I. engineers, and philosophers alike to confront a poignant fact: to mimic the natural intelligence of humans and animals, A.I. developers must frame the criteria for natural intelligence in terms of purpose, causality, and logical consequences in order to design the mechanical operations of artificial intelligence. In other words, the development of A.I. forces many to reconsider whether the mechanism of consciousness is the effect of an intelligent designer (God) or the effect of random processes that yield no ethical criteria by which to conduct our knowledge and oversee the technological instruments that our knowledge creates.
Thus, let us quickly highlight the two most recent opposing approaches that seek to answer whether God created natural intelligence or evolutionary processes gave rise to it. The first approach, a Creator-design approach, is by an aerospace mechanical designer, Derick M. Fuller, who named the approach Immanuel’s Law, meaning that the natural laws of physics are a logical expression of God, who is the basis of all things. The designer published the approach in the third chapter of a doctrinal treatise, the Landscape of Truth, in 2014; he produced an online video presentation in October 2020, followed by an open letter to cognitive neuroscientists in March 2022.
True to his profession, the designer began by framing the mechanical system that consciousness requires: he considered the physical requirements that support the basic structure of cognition and the nature of the environment the system must operate upon. Recognizing how philosophers since Plato have developed the philosophical discipline of epistemology to define consciousness, he understood that conscious-cognition’s basic structure is a capacity to apprehend universal perceptions of the environment and to ground those perceptions into categorical concepts (judgements) per the unified sense of one’s state of being. To identify a physical system corresponding with conscious-cognition’s fundamental operation of rationalizing universal perceptions, he knew that he must identify how the brain interacts with the physical system that proves to be the substrate upon which all other physical systems emerge. In this way, he could readily distinguish how the brain models magnitudinous relationships as categorical concepts of the real world’s physical systems.
Knowing his physics, the designer understood that the mechanical basis of the material universe is a mathematical constant of proportionality: the constant governs energy transfer, normalizing the statistical nature of subatomic particles into the macroscopic materials and extreme magnitudes of the universe that the brain enables consciousness to observe. However, knowing also that the brain cannot actually store the universe’s extreme magnitudes but can only model them correlatively, the designer, Fuller, recognized that the mechanical system equating to consciousness must be a series of renormalizing processes that likewise normalize the subatomic world’s statistical nature into scaled, proportional magnitudes that mimic the real world. For example, the designer held that for the brain to model the appearing world, it must perform an initial normalizing process to distinguish scaled magnitudes that equate to cognitive percepts; that the brain must then perform an augmented normalizing process that correlates the scaled magnitudes into a constant of proportionality, grounding perceptions as concepts per one’s personal state of being; and, lastly, that the brain must persist in matching and retaining these ongoing renormalizing processes, which amounts to conscious will itself.
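The toy sketch below is purely illustrative and is not Fuller’s published model; it only gestures at the generic idea the paragraph attributes to the brain, namely rescaling raw readings of very different magnitudes onto one shared proportional scale so that they can live within a single internal model.

```python
# Illustrative toy only, not Fuller's model: rescale noisy samples from two
# "senses" of wildly different magnitude onto one shared proportional scale.
import random

def normalize(samples, scale=1.0):
    """Map raw samples onto a 0..scale range proportional to their spread."""
    lo, hi = min(samples), max(samples)
    span = (hi - lo) or 1.0
    return [scale * (s - lo) / span for s in samples]

light = [random.gauss(5e14, 1e13) for _ in range(5)]   # optical frequencies, Hz
sound = [random.gauss(440.0, 15.0) for _ in range(5)]  # acoustic frequencies, Hz

# After rescaling, both sets occupy the same 0..1 proportional range.
print([round(x, 3) for x in normalize(light)])
print([round(x, 3) for x in normalize(sound)])
```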
To observe and demarcate the brain’s mechanical operation of renormalizing statistically based sensory input into the magnitudinous models of conscious-cognition, the designer, Fuller, identified the brain’s underlying power source: essentially, that which survives the brain’s tendency toward entropy, the mechanical breakdown that manifests as disease. In this way he could map out the entire mechanical system. The obvious choice for the brain’s power source was the brain’s electromagnetic (EM) waves, which the electro-chemical exchanges of the brain’s neurons generate. Each tree-shaped neuron connects to other neurons and exchanges electro-chemical signals. The neuronal net thereby generates particular electromagnetic fields, as each neuron, neuronal layer, and electro-chemical signal bears a unique composition that renders specific functions to each part of the brain.
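As a rough illustration of why a field generated by many signalling neurons is more integrated than any single exchange, the sketch below (toy amplitudes, with frequencies chosen from the conventional EEG alpha, beta, and gamma bands; not measured data) simply superposes several sinusoidal contributions into one net waveform:

```python
# Toy illustration: electromagnetic contributions add linearly, so the net
# field at any instant is the sum of the individual neuronal contributions.
import math

def net_field(t, sources):
    """Sum the sinusoidal contribution of each (amplitude, frequency_hz, phase) source."""
    return sum(a * math.sin(2 * math.pi * f * t + p) for a, f, p in sources)

# Three hypothetical sources oscillating at alpha-, beta-, and gamma-band rates.
sources = [(1.0, 10.0, 0.0), (0.5, 25.0, 1.0), (0.2, 40.0, 2.0)]
samples = [net_field(ms / 1000.0, sources) for ms in range(0, 100, 10)]
print([round(s, 3) for s in samples])
```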
Currently, with no further understanding beyond the brain’s neuronal structure, cognitive neuroscientists can only point to brain regions that are active during cognitive behavior: their natural-evolution approach to framing the brain’s workings leaves them with no understanding of how the brain’s functioning actually amounts to consciousness. Many scientists therefore hold, as Immanuel Kant did, that scientific knowledge of the self cannot be attained. Some go so far as to believe that consciousness is an epiphenomenon, a mirage, of neuronal processes.
In contrast, the mechanical designer’s approach, Immanuel’s Law, entails philosophy’s strong epistemological structure, which reinterprets the brain’s workings and closes the mind-body explanatory gap by reconciling Christian theology, Western philosophy, and contemporary science to define the mechanical workings of consciousness. For example, Immanuel’s Law observes that the brain’s initial renormalizing process occurs at the cerebral cortex, where the cortex’s neuronal layers systematize sensory input into magnitudinous concepts. Next, Immanuel’s Law observes that the personalizing renormalizing process occurs at the thalamus, near the brain’s center, as the cortex-thalamus circuit transfers the cortex’s EM waves to the thalamus, where that region obtains the constant of proportionality. Finally, Immanuel’s Law observes that the matching-and-retention renormalizing process occurs as memory centers, like the hippocampus, reconcile ongoing sensory input with existing impressions upon the neuronal net. The mechanical designer calls this whole circuit, which obtains consciousness’ constant of proportionality, a dynamic reference frame: the first-person magnitudinous sense of self. This standing on one’s being physically corresponds with an intra-dimensional experience of spacetime that seamlessly reconciles past, present, and forecasted perceptions, because the dynamic reference frame grasps the background (that is, energy’s proportionality) as it reconstitutes, representationally, the normalized world that itself stands upon statistically based subatomic interactions.
The mechanical designer Derick M. Fuller’s design approach works because it simply seeks to reverse engineer a purposeful design. The designer establishes that the brain’s operation is by design because the electro-chemical exchange, the integrating of the EM waves through renormalizing processes, and the ultimate obtaining of a constant of proportionality all stand upon quantum mathematical relationships that preexist the physical worlds that emerge upon quantum proportionalities. The preexisting mathematical scheme matches the Greek philosophers’ logos supposition of a prime mover initiating all things; and, of course, it matches Christian theology’s description of God the Father creating all things through God’s Logos, the second person of the Trinity. In short, the design approach reestablishes the moral and ethical foundations of Western civilization, from which the general public can scrutinize ongoing technological and social innovations:
To be specific, the design approach pursues the sense of purposefulness that results from the logical consequences of any given physical system’s operation. From that sense of purposefulness, society can theologically and philosophically structure an ethical compass; and from the logical consequences of the system, society can derive hypotheses that scientists can falsify, that is, test and potentially prove false.
Undeniably, when understood correctly, the mechanical designer Derick M. Fuller’s model shows striking promise. Yet in order for the designer to prove that his model of the mechanism of consciousness is true, scientists must subject the model, Immanuel’s Law, to the scrutiny of the scientific method. In other words, scientists must test Immanuel’s Law as a hypothesis to see whether it makes valid predictions, whereby all would recognize Immanuel’s Law as exemplifying true laws of physics. Still, though Immanuel’s Law has yet to undergo scientific scrutiny, the recent natural-evolution approach to determining consciousness’ origin in the brain independently reaches nearly identical conclusions as the mechanical designer, Fuller; the natural-evolution approach therefore vindicates Immanuel’s Law, though Immanuel’s Law points to far broader theological, philosophical, and political consequences.
Thus, to vindicate either the design approach or the natural-evolution approach to determining consciousness’ origin in the brain, we must establish that the authors of either approach appropriately apply the scientific method, which depends upon the integrity of the observer, that is, the observer’s capacity to acquire objective knowledge. Indeed, the question of the integrity of the observer has haunted practitioners of the scientific method since the applied sciences of the 16th through 19th centuries began to outpace philosophical institutions in gathering knowledge that we have yet to comprehend philosophically. The German philosopher Immanuel Kant conceded that the design approach to understanding the universe is necessary in the effort to secure rational understanding, though Kant unfortunately concluded that we are unable to secure an objective understanding of one’s personhood or of God. And so, despite the design approach’s utility in acquiring rational understanding, the natural-evolution approach has by far been the context in which scientists conceive the universe’s workings. Because scientists lack philosophical definition, the true natures of conscious-cognition and the universe have remained cryptic and mysterious at best.
You see, the design approach expects purpose, even a design intent, that the designer can gainfully reconcile with Western philosophy’s descriptions of the structure of conscious-cognition, which produces our knowledge. The natural-evolution approach also recognizes the structure of cognition and even the physics of the brain and the environment; however, the neuroscientists who employ it do not consider the advent of the brain’s capacity for conscious-cognition to be the outcome of purposeful intent. Rather, they consider the advent of conscious-cognition to be a consequence of the evolved fitness of biological systems that adapt to their environments. These scientists therefore expect the emerging material world and biological systems to be so meandering that the advent of conscious-cognition is merely a fortuitous outcome. In the end, the natural-evolution approach stymies knowledge because the scientists do not expect purposeful structures that directly facilitate our cognitive faculties.
For example, two key pioneers of modern physics, Albert Einstein and Max Planck, revered the utility of the design approach to understanding the universe. Both expected beauty and elegance as they fathomed how to unpack the workings of the universe. First, Planck discovered energy’s discrete constant of proportionality. Later, when quantum physicists discovered that subatomic behavior is statistical as a consequence of energy’s discrete nature, Einstein objected to physicists incorporating themselves (the observers) into their observations by mathematically factoring in Planck’s constant of proportionality in order to reconcile the subatomic world’s statistical behavior with the macroscopic world’s normal, uniform behavior. Einstein understood that the physicists lacked a philosophical understanding of the subatomic world’s behavior; he recognized that by incorporating themselves into their observations, they compromised their objective ability to employ the scientific method.
In direct response to Albert Einstein’s observation, the mechanical designer’s model, Immanuel’s Law, holds that cognition itself stands upon our incorporating our perspectives into our environments by generating magnitudinous models, which are synonymous with cognitive concepts, through renormalizing processes. To be specific, the designer, Fuller, kept in mind that energy’s discrete proportionality forms the basis of the chemical bonds and compounds that regulate energy’s emission and absorption, ultimately normalizing the forces of the appearing world; therefore, Fuller understood that the brain must also quantify discrete magnitudinous representations, per a constant of proportionality, in order to retain conceptions of the appearing world. In this way, the mechanical designer, Fuller, closed the mind-matter explanatory gap and audaciously reconciled Christian theology and Western philosophy with modern physics.
The natural-evolution approach to determining consciousness’ origin in the brain
At this point, we have sufficiently considered the mechanical designer’s creator-design approach to determining conscious-cognition in the brain; however, to recognize the merits of the creator-design approach, we must now assess the natural-evolution approach. Since the natural-evolution approach reaches similar material conclusions as the mechanical designer, our assessment will be brief: we will only highlight how the natural-evolution approach fails to establish the integrity of the observer, in that it entirely ignores Albert Einstein’s warning to physicists that our science is unsound unless we philosophically grasp how to maintain objectivity while reconciling the statistical behavior of the subatomic world with the uniform world that we observe. The cognitive neuroscientists who champion the natural-evolution approach frame their hypotheses in the context of the 19th-century classical physics in which Charles Darwin framed the theory of evolution; evolutionary theory therefore does not encompass the early 20th-century quantum-physics understanding of the subatomic world. And so, though the following natural-evolution approach reaches similar material conclusions as the mechanical designer, we shall briefly delineate how the approach falls apart: framing their hypotheses in the evolutionary context does not give the neuroscientists the philosophical (or even epistemological) framework for their hypotheses to predict outcomes that correspond with observations of the actual nature of conscious-cognition. To be specific, we shall see that the approach ultimately fails because the classical-physics-based theory of evolution cannot reconcile with the neuronal network’s quantum-physics-based operations.
Furthermore, to demonstrate the contrast between the creator-design and natural-evolution approaches to distinguishing conscious-cognition’s origin in the brain, we chose the most recent work representing the decades-old development of the natural-evolution approach. The paper we chose is a contemporaneous work, which the authors fortuitously published in July 2022, several months after the mechanical designer, Fuller, distributed his open letter to cognitive neuroscientists in March 2022.
The paper representing the natural-evolution approach is entitled “Qualia and Phenomenal Consciousness Arise From the Information Structure of an Electromagnetic Field in the Brain”; the work can be found on the NIH National Library of Medicine website.
Meeting the criteria for a true scientific paper that presents logically formed hypotheses that others can test and verify, the authors of the natural-evolution paper present a series of hypotheses that seek to answer four main questions, which the authors deem necessary for determining how the brain obtains conscious-cognition. The first question asks what constitutes the physical substrate of subjective, phenomenal consciousness (i.e., P-consciousness). The second asks whether the substrate is generated in a particular part of the brain or throughout the entirety of the body. The third asks where qualia, the actual qualitative senses (i.e., the “like-thisness”), arise. And the fourth asks what differentiates P-consciousness electromagnetic (EM) fields from other naturally occurring electromagnetic fields.
To answer the first question (what is consciousness’ physical substrate), the authors hypothesize that the brain’s electromagnetic (EM) waves, which moving electrical currents generate as neurons fire in synchrony, are the physical substrate of consciousness. For reasons similar to the mechanical designer’s, the authors reach this conclusion because the physical characteristics of the EM waves correspond with conscious-cognition’s ability to store and transmit information almost instantly: the authors note that the particularities of the EM fields directly reflect the particularities of each charged neuronal circuit that generates the field, and that the EM fields are more integrated than the electro-chemical exchanges throughout the neuronal network. For this reason, the authors note how the EM waves superimpose their more generalized structure upon the neuronal network.
In short, the premise the authors establish in their initial hypothesis is that the complexity and integration of neuronal networks enable the resulting EM waves to generalize sensory inputs and manifest consciousness. In deference to this first hypothesis, the authors answer the second question by further hypothesizing that consciousness appears in the thalamus because of the complexity of its neuronal structure. Recall that the mechanical designer also considers the thalamus to be the location of conscious-cognition, but for more critical reasons than neuronal complexity: the designer recognizes the need for the brain to have a processing capacity corresponding with the manner in which quantized energy regulates the world with normalized degrees and demarcations. For this reason, we noted how the designer concludes that the thalamus fits the circumstance that enables the brain to perform a personalized renormalizing operation. Interestingly, like the mechanical designer, the authors also ponder whether the brain’s neuronal network reflects quantum-like operations, as they deduce that conscious percepts occur at the collapse of the EM wave-functions where the EM waves interface with one another; however, as we shall shortly note, their natural-evolution context prevents the authors from fully recognizing the implications of the brain’s performing quantum-like calculations.
Hence, noting the brain’s quantum-like calculations only as a subject to consider later, the authors confidently step ahead to hypothesize an answer to the third question they deem necessary for determining the origin of consciousness in the brain: where do qualia (consciousness’ qualitative senses) arise? To answer it, the authors hypothesize that the particularity of each sensory neuron’s resulting EM field carries particular information about the sensed experience. The authors then postulate that the thalamus’ EM fields simply emulate the initial sensory fields and so render a subjective experience. Of course, we saw that the mechanical designer reaches similar material conclusions, but with mechanical and epistemological understanding.
At the close of the paper, we finally begin to see the natural-evolution approach fall apart as the authors seek to answer the fourth question (what differentiates “conscious” EM fields from natural EM fields). The authors hypothesize that the complexity of the thalamic EM fields supervenes over the less complex contributing sensory fields in a manner that renders a best-estimate generalization of the information from those contributing fields. Though confident in their answers, the authors recognize, to their credit, that their hypothesis of complexity manifesting consciousness falls short of a full explanation, since the less complex neuronal systems of animals manifest consciousness as well. What is more, they also recognize that their work does not answer what many cognitive neuroscientists consider the hard questions, such as why the brain’s integrated neuronal network interprets photons of particular wavelength-frequencies as color, and why it interprets some wavelength-frequencies as sound pitch. Thus, at the last, the authors empathize with the many cognitive neuroscientists who consider consciousness an epiphenomenon (a mirage), since many brain functions that instruct the body occur regardless of consciousness.
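The hard question the authors leave open can be stated concretely: physics hands the brain only numbers, a wavelength in nanometres or a frequency in hertz, and a lookup such as the sketch below (conventional approximate color bands and equal-tempered note names, used here only for illustration) labels those numbers without explaining why they are experienced as redness or as a particular pitch.

```python
# Illustrative sketch, not from the paper: boundaries and note names are
# conventional approximations. Physics supplies the number; the felt color
# or pitch is the open "hard question".
import math

def color_band(wavelength_nm: float) -> str:
    """Label a visible wavelength with its conventional color band."""
    bands = [(450, "violet"), (495, "blue"), (570, "green"),
             (590, "yellow"), (620, "orange"), (750, "red")]
    for upper, name in bands:
        if wavelength_nm <= upper:
            return name
    return "outside the visible range"

def nearest_note(frequency_hz: float) -> str:
    """Name the equal-tempered note closest to a frequency (A4 = 440 Hz)."""
    names = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]
    semitones = round(12 * math.log2(frequency_hz / 440.0))
    return names[semitones % 12]

print(color_band(650))    # "red", but only as a label, not the felt quale
print(nearest_note(262))  # "C", likewise only a label
```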
Thus, let us quickly highlight the two most recent opposing approaches that seek to answer the question of whether God created natural intelligence or evolutionary processes gave rise to natural intelligence: the first approach, a Creator’s design approach, is by an aerospace mechanical designer, Derick M. Fuller, who named the approach Immanuel’s Law, meaning the natural laws of physics are a logical expression of God, who is the basis of all things. Moreover, the designer published the approach in the third chapter of a doctrinal treatise, the Landscape of Truth, in 2014; and he produced an online video presentation in October 2020, followed by an open letter to cognitive neuroscientists in March 2022.
True to his profession, the designer initially framed the necessary mechanical system that consciousness requires: he considered the physical requirements that support the basic structure of cognition and the nature of the environment that the system must operate upon. Recognizing how philosophers, since Plato, have developed the philosophical discipline of epistemology to define consciousness; he understood that conscious-cognition’s basic structure is a capacity to apprehend universal perceptions of the environment and thereby ground those perceptions into categorical concepts (judgements) per the unified sense of one’s state of being. And so, he understood that to distinguish a physical system that corresponds with conscious-cognition’s fundamental operation of rationalizing universal perceptions; he knew that he must distinguish how the brain interacts with the physical system that proves to be the substrate that all other physical systems emerge upon. Like so, he knew that he could readily distinguish how the brain models magnitudinous relationships as categorical concepts of the real world’s physical systems.
Knowing his physics, the designer understood that the mechanical basis of the material universe is a mathematical constant of proportionality: the constant governs energy transfer to normalize the statistical nature of subatomic particles into the macroscopic materials and extreme magnitudes of the universe that the brain enables consciousness to observe. However, also knowing that the brain cannot actually store but only correlatively model the universe’s extreme magnitudes; the designer, Fuller, recognized that the mechanical system equating consciousness must be a series of renormalizing processes that likewise normalize the subatomic world’s statistical nature unto scaled proportional magnitudes that mimic the real world. For example, the designer knew that for the brain to model the appearing world, it must perform an initial normalizing process to distinguish scaled magnitudes that equate cognitive percepts; moreover, he understood that the brain must sequentially perform an augmented normalizing process that correlates the scaled magnitudes into a constant of proportionality that equates a grounding of perceptions as concepts, per one’s personal state of being. And lastly, he realized that the brain must persist with a matching and retention of ongoing renormalizing processes that amounts to conscious will, itself.
To observe and demarcate the brain’s mechanical operation of renormalizing statistically based sensor-input into the magnitudinous models of conscious-cognition; the designer, Fuller, distinguished the brain’s underlying power source: he essentially distinguished that which survives the brain’s tendency toward entropy, that is, the brain’s mechanical breakdown that manifests as disease. In this way, he could map out the entire mechanical system. The designer’s obvious choice for the brain’s power source was the brain’s electromagnetic (EM) waves, which the brain’s neuron-cells’ electro-chemical exchanges generate. You see, each tree-like shaped neuron connects to other neurons and exchanges electro-chemical signals. Thereby, the neuronal net generates particular magnetic fields; as each neuron, neuronal layer, and electro-chemical signaling bears unique composition that renders to each part of the brain specific functions.
Currently, having no further understanding other than knowing the brain’s neuronal structure, cognitive neuroscientists can only point to brain regions that are active during cognitive behavior: the neuroscientists’ natural-evolution approach to framing the brain’s workings leaves them with no understanding of how the brain’s functioning actually amounts to consciousness. Many scientists, therefore, believe as Immanuel Kant held, that scientific knowledge of the self cannot be known. Some even go to the extent of believing that consciousness is an epiphenomenon, a mirage, of neuronal processes.
In contrast, the mechanical designer’s approach, Immanuel’s Law, entails philosophy’s strong epistemological structure that reinterprets the brain’s workings and closes the mind-body explanatory gap, by reconciling Christian theology, Western philosophy, and contemporary science to define the mechanical workings of consciousness. For example, Immanuel’s Law observes that the brain’s initial renormalizing process occurs at the cerebral cortex; where the cortex’s neuronal layers systematize sensory input into magnitudinous concepts. Next, Immanuel’s Law observes that the personalizing renormalizing process occurs at the thalamus, near the brain’s center, as the cortex-thalamus circuit transfers the cortex’s EM waves to the thalamus where the thalamus region obtains the constant of proportionality. Then finally, Immanuel’s Law observes that the matching and retention renormalizing process occurs as memory centers, like the hippocampus, reconcile ongoing sensory input with existing impressions upon the neuronal net. In fact, the mechanical designer calls the whole circuit, which obtains consciousness’ constant of proportionality, a dynamic reference frame: the first-person magnitudinous sense of self, which is a standing on one’s being that physically corresponds with an intra-dimensional experience of spacetime that seamlessly reconciles past, present, and forecasted perceptions, due to the dynamic reference frame’s grasping of the background—that is, energy’s proportionality—as the reference frame reconstitutes, representationally, the normalized world that again stands upon statistically based subatomic interactions.
The mechanical designer, Derick M. Fuller’s, design approach works because the approach simply seeks to reverse engineer a purposeful design; moreover, the designer establishes that the brain’s operation is by design because the electro-chemical exchange; the integrating of the EM waves through renormalizing processes; and the ultimate obtaining of a constant of proportionality all stand upon quantum mathematical relationships that preexist the physical worlds that emerge upon quantum proportionalities. The preexisting mathematically scheme matches Greek philosophers’ logos supposition of a prime mover, initiating all things; and of course the preexisting mathematical scheme matches Christian theology’s description of God the Father’s creating all things through God’s Logos, the second person of the Trinity. In short, the design approach reestablishes the moral and ethical foundations of Western civilization, from which the general public can scrutinize ongoing technological and social innovations:
To be specific, the design approach pursues the sense of purposefulness that results from the logical consequences of any given physical system’s operation. Then from the sense of purposefulness, society can theologically and philosophically structure an ethical compass; moreover, from the logical consequences of the system, society can derive hypotheses that scientists can falsify, that is, prove true or false.
Undeniably, when understood correctly, the mechanical designer, Derick M. Fuller’s, model shows striking promise. Yet, in order for the mechanical designer to prove that his model for the mechanism of consciousness is true; scientists must subject the designer’s model, Immanuel’s Law, to the scrutiny of the scientific method. In other words, scientists must test Immanuel’s Law as a hypothesis, in order to see if the hypothesis makes valid predictions; whereby all would recognize Immanuel’s Law as exemplifying true laws of physics. Still, though Immanuel’s Law has yet to undergo scientific scrutiny, the recent natural-evolution approach to determining consciousness’ origin in the brain independently reaches nearly identical conclusions as the mechanical designer, Fuller; therefore, the natural-evolutionary approach vindicates, Immanuel’s Law, though Immanuel’s Law points to far more reaching theological, philosophical, and political consequences.
Thus, to vindicate either the design approach or the natural-evolution approach to determining consciousness’ origin in the brain, we must establish that the authors of either approach appropriately apply the scientific method; which depends upon the integrity of the observer, in regards to the observer’s capacity to acquire objective knowledge. Indeed, the question of the integrity of the observer has haunted the practitioners of the scientific method, since the applied sciences of the 16th through 19th began to outpace philosophical institutions in gathering such knowledge that we have yet to comprehend philosophically. The German philosopher Immanuel Kant conceded that the design approach to understanding the universe is necessary in the effort to secure rational understanding, though Kant unfortunately concluded that we are unable to secure an objective understanding of one’s personhood or God. And so, despite the design approach’s utility in acquiring rational understanding, the natural-evolution approach has by far been the context in which scientists conceive the universe’s workings. And so, since scientists lack philosophical definition, the true natures of conscious-cognition and the universe have remained cryptic and mysterious at best.
You see, the design approach expects purpose, even a design intent that the designer can gainfully reconcile with Western philosophy’s descriptions of the structure of conscious-cognition, which produces our knowledge. In contrast, the natural-evolution approach also recognizes the structure of cognition and even the physics of the brain and the environment; however, the neuroscientists, who employ the natural-evolution approach, do not consider the advent of the brain’s capacity for conscious-cognition to be the outcome of purposeful intent. Rather, the neuroscientists consider the advent of conscious-cognition to be a consequence of the evolved fitness of biological systems that adapt to their environments. Therefore, scientists expect the emerging material world and biological systems to be meandering to the extent that they believe that the advent of conscious-cognition is merely a fortuitous outcome. In the end, the scientists’ natural-evolution approach stymies knowledge because the scientists do not expect purposeful structures that directly facilitate our cognitive faculties.
For example, two key pioneers of modern physics, Albert Einstein and Max Planck, revered the utility of the design approach to understanding the universe. Both expected beauty and elegance as they fathomed how to unpack the workings of the universe. First, Planck discovered energy’s discrete constant of proportionality. Later when quantum physicists discovered that subatomic behavior is statistical, as a consequence of energy’s discrete nature; Einstein objected to physicists incorporating themselves (the observers) into their observations by their mathematically factoring Planck’s proportional constant, in order to reconcile the subatomic world’s statistical behavior with the macroscopic world’s normal, uniform behavior. Einstein understood that the physicists lacked a philosophical understanding of the subatomic world’s behavior; therefore, he recognized that by their incorporating themselves into their observations, they compromised their objective ability to employ the scientific method.
In direct response to Albert Einstein’s observation, the mechanical designer’s model, Immanuel’s Law, understands that cognition itself stands upon our incorporating our perspectives into our environments through the generating of magnitudinous models, which are synonymous with cognitive concepts, through renormalizing processes. To be specific, the designer, Fuller, kept in mind that energy’s discrete proportionality forms the basis of the chemical bonds and compounds that regulate energy’s emission and absorption, ultimately to normalize the forces of the appearing world; therefore, Fuller understood that the brain must also quantify discrete magnitudinous representations, per a constant of proportionality, in order to retain conceptions of the appearing world. Like so, the mechanical designer, Fuller, closed the mind-matter explanatory gap and so audaciously reconciled Christian theology and Western philosophy with modern physics.
The natural-evolution approach to determining consciousness’ origin in the brainAt this point, we have sufficiently considered the mechanical designer’s creator-design approach to determining conscious-cognition in the brain; however, to recognize the merits of the creator-design approach, we must now assess the natural-evolution approach. But since the natural-evolution approach reaches similar material conclusions as the mechanical designer, our assessment will be very brief: we will only highlight how the natural-evolution approach fails to establish the integrity of the observer, in regards to the natural-evolution approach’s entirely ignoring Albert Einstein’s warning to physicists that our science is unsound unless we philosophically grasp how to maintain objectivity while reconciling the statistical behavior of the subatomic world with the uniform world that we observe. You see, the cognitive neuroscientists who champion the natural-evolution approach frame their hypotheses in the context of 19th Century classical physics in which Charles Darwin framed the theory of evolution; therefore, evolutionary theory does not encompass the early 20th Century quantum physics understanding of the subatomic world. And so, though the following natural-evolution approach reaches similar material conclusions as the mechanical designer, we shall here briefly delineate how the approach falls apart: we shall see how their framing their hypotheses in the evolutionary context does not yield the philosophical (or even epistemological) framework for their hypotheses to predict outcomes that correspond with observations, in regards to the actual nature of conscious-cognition. To be specific, we shall see that their approach ultimately fails because the classical physics based theory of evolution cannot reconcile with the neuronal network’s quantum physics based operations.
Furthermore, to demonstrate the contrast between the creator-design and the natural-evolution approaches to distinguishing conscious-cognition’s origin in the brain; we chose the most recent work that represents the decades old development of the natural-evolution approach. Moreover, the paper that we chose is a contemporaneous work, which the authors fortuitously published in July 2022, several months after the mechanical designer, Fuller, distributed his open letter to cognitive neuroscientists, in March 2022.
The paper, representing the natural-evolution approach, is entitled “Qualia and Phenomenal Consciousness Arise From the Information Structure of an Electromagnetic Field in the Brain”; moreover, the work can be found on the NIH National Library of Medicine website.
Recognizing the criteria for being a true scientific paper that presents logically formed hypotheses that others can test and verify, the authors of the natural-evolution paper present a series of hypotheses that seek to answer four main questions, which the authors deem to be necessary for determining how the brain obtains conscious-cognition: the first question seeks to answer what constitutes the physical substrate for subjective, phenomenal, consciousness (i.e., P-consciousness). The second question seeks a determination concerning whether the substrate is generated in a particular part of the brain or the entirety of the body. The third question seeks to determine where does qualia, the actual qualitative sense (i.e., the like thisness), arise. And finally the fourth question seeks to determine what differentiates P-consciousness electromagnetic (EM) fields from other naturally occurring electromagnetic fields.
To answer the first question (i.e., what is consciousness’ physical substrate), the authors hypothesize that the brain’s electromagnetic (EM) waves, which moving electrical currents generate as synchronic neurons fire, are the physical substrate to consciousness. For similar reasons as the mechanical designer, the authors reach their conclusion because the physical characteristics of the EM waves correspond with conscious-cognitions ability to store and transmit information, almost instantly: the authors note that the particularities of the EM fields directly reflect the particularities of each charged neuronal circuit that generates the field; moreover, the authors note that the EM fields are more integrated than the electro-chemical exchanges throughout the neuronal network. For this reason, the authors note how the EM waves superimpose their more generalized structure upon the neuronal network.
In short, the premise that the authors establish in their initial hypothesis is that the complexity and integration of neuronal networks enable the resulting EM waves to generalize sensory inputs and manifest consciousness. And so, in deference to this their first hypothesis, the authors of the natural-evolution approach answer the second question by further hypothesizing that consciousness appears in the thalamus because of its neuronal structure’s complexity. Recall that the mechanical designer also considers the thalamus as being the location for conscious-cognition but for more critical reasons than neuronal complexity: remember, the designer recognizes the need for the brain to have a processing capacity that corresponds with the manner in which quantized energy regulates the world with normalized degrees and demarcations. For this reason, we noted how the designer concludes that the thalamus fits the circumstance that enables the brain to perform a personalized renormalizing operation. And so, interestingly, like the mechanical designer, the authors also ponder whether the brain’s neuronal network reflects quantum-like operations, as the authors deduce that conscious percepts occur at the collapse of the EM wave-functions where the EM waves interface with one another; however, as we shall shortly note, their natural-evolution context prevents the authors from fully recognizing the implications of the brain’s performing quantum-like calculations.
Hence, only recognizing the brain’s quantum-like calculations to be a subject that they should consider later, the authors of the natural-evolution approach confidently step ahead to hypothesize an answer to the third question, which they deem to be necessary for determining the origin of consciousness in the brain. Again, the third question seeks to determine where do qualia (that is, consciousness’ qualitative senses) arise. To answer the third question, the authors simply hypothesize that the particularity of each sensory neuron’s resulting EM field carries particular information of the sensed experience. And so, the authors postulate that the thalamus’ EM fields simply emulate the initial sensory fields and so render a subjective experience. Of course, we saw that the mechanical designer reaches similar material conclusions but with mechanical and epistemological understanding.
At the close of the paper, we finally see the natural-evolution approach fall apart as the authors seek to answer the fourth question (what differentiates “conscious” EM fields from natural EM fields). They hypothesize that the complexity of the thalamic EM fields supervenes over the less complex contributing sensory fields, in a manner that renders a best-estimate generalization of the information from those contributing fields. Though confident in their answers, the authors, to their credit, recognize that their hypothesis of complexity manifesting consciousness falls short of a full explanation, since the less complex neuronal systems of animals manifest consciousness as well. What is more, they recognize that their work does not answer what many cognitive neuroscientists consider the hard questions: why does the brain’s integrated neuronal network interpret photons of certain naturally occurring wavelengths and frequencies as color, and why does it interpret some frequencies as sound pitch? Thus, at the last, the authors sympathize with the many cognitive neuroscientists who consider consciousness an epiphenomenon (a mirage), since many brain functions that instruct the body occur regardless of consciousness.
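To make the “hard question” concrete, the sketch below (our own illustration, not drawn from the paper) maps a wavelength to its conventional color name and a frequency to its nearest musical pitch; such lookup tables record the physical correlation, yet they say nothing about why the correlated experience feels like anything at all.

```python
import math

# Conventional boundaries of the visible spectrum, in nanometres.
COLOR_BANDS = [(380, 450, "violet"), (450, 495, "blue"), (495, 570, "green"),
               (570, 590, "yellow"), (590, 620, "orange"), (620, 750, "red")]

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def color_name(wavelength_nm):
    """Return the conventional color label for a visible wavelength."""
    for lo, hi, name in COLOR_BANDS:
        if lo <= wavelength_nm < hi:
            return name
    return "outside the visible range"

def nearest_pitch(freq_hz):
    """Return the nearest equal-tempered pitch name (A4 = 440 Hz)."""
    midi = round(69 + 12 * math.log2(freq_hz / 440.0))
    return NOTE_NAMES[midi % 12] + str(midi // 12 - 1)

print(color_name(520))     # -> green
print(nearest_pitch(440))  # -> A4
```

The very adequacy of such a table as a description is the point: it merely restates the correlation that the hard question already grants.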
Conclusion
The authors of the natural-evolution approach (like the many contemporary scientists who retreat to the belief that consciousness is an epiphenomenon) do a disservice not only to society but also to other proponents of the scientific method. First, their dismissing consciousness as an epiphenomenon dismisses the meaning, purposefulness, and truths that motivate human aspiration. Indeed, a description of the societal fallout that results from the scientific community’s indifference to human meaning is beyond the scope of our current discussion. However, one must understand that dismissing consciousness as an epiphenomenon directly undermines the integrity of the observer who seeks to employ the scientific method: a method that stands upon the conviction that we have the philosophical capacity to frame logical hypotheses from our self-conscious perspectives and to build knowledge upon observable predictions.
As we have mentioned, Albert Einstein expressed his grave concern over quantum physicists’ failing to establish the objective integrity of the observer, as the physicists factored their statistical computations into their observations of the subatomic world’s statistical nature. Understandably, the quantum physicists grappled with the realization that an exact knowledge of the subatomic world cannot be known; however, as contemporary scientists embrace the belief that consciousness is an epiphenomenon, they inadvertently call into question our capacity to make observations objectively and thereby employ the scientific method, altogether.
The commentary of the 20th-century German philosopher Martin Heidegger is most instructive here. Heidegger criticized modern physicists for casting their theories and hypotheses in what he described as atomism, which is to prioritize incremental aspects of observed phenomena within spacetime without considering the priority of the whole system that entails the observed. Essentially, Heidegger recognized that today’s scientists ignore the millennia-old rationalist-versus-empiricist debate, that is, whether we acquire knowledge through the rational means of logically comprehending whole systems or through what we empirically observe. Heidegger recognized that contemporary scientists fail to regard the dynamic nature of our cognitive experience: his remarkable work understands that one cannot separate the stream of consciousness, that is, one cannot separate a present (present-at-hand) awareness within spacetime from the affectedness of past and expected awareness. And so Heidegger’s work concludes that to identify the physical substrate of conscious-cognition, one must identify how the substrate gives conscious-cognition the capacity to stand upon its being in correspondence to the world that conscious-cognition becomes aware of.
The theory of evolution is above all an incremental and empiricist concept; therefore, the natural-evolution approach inhibits a full perspective for scientists who seek to determine the origin of consciousness in the brain. For example, when the authors of the example paper conclude that consciousness occurs at the collapse of integrative and complex wave functions, they ignore certain quantum-based realities that not only make the electromagnetic fields’ integrative relationships possible but also make the resulting conscious-cognition dependent upon those quantum relationships rather than upon the integrative electromagnetic fields, as the mechanical designer concludes. Physicists Max Planck and Albert Einstein overturned the classical-physics understandings that frame the theory of evolution: Planck and Einstein demonstrated that materials interface and exchange forces based upon mathematically quantifiable frequencies of discrete energy. Previously, classical physicists assumed that observable things like volume, intensity, or duration caused the universe’s force exchanges; however, contemporary physicists soon found that the discrete, quantifiable nature of energy mathematically characterizes the demarcations and measures of the universe’s forces, whereas non-discrete and infinite energies would render the universe an amalgamated singularity of nothingness.
And so, the brilliance that we find in the mechanical designer’s conclusion that conscious-cognition stands upon a renormalizing substrate is the following: by distinguishing that conscious-cognition corresponds with a renormalizing process that reconciles quantized energy’s subatomic statistical relationships with the uniform material relationships appearing within spacetime, the mechanical designer, Fuller, grasps a mechanical operation that corresponds with the structure of knowledge. As an outcome, within the context of the operational expectancies of the designer’s mechanical model, scientists can employ the scientific method to test and perfect our understanding of brain function.
For example, recall from above that the designer, Fuller, distinguishes the mechanical operation by showing how the brain’s neuronal network integrates the brain’s EM fields to model and correlate the sensed magnitudes of the material world in a way that grasps the world’s proportional transfer of forces. Remember, the designer in this way distinguishes that as the brain independently grasps the material world’s proportional constant, the brain grasps conscious-cognition, which then reimagines the world per its state of being.
The benefit to society that the mechanical designer’s model, Immanuel’s Law, brings is that it reconciles science with Western philosophy and theology, to the end that the model yields an ethical framework for advancing science. In point of fact, Immanuel’s Law’s description of the brain’s renormalizing process matches Heidegger’s description of consciousness as a self-aware stand upon one’s being (Dasein), whereby one grasps the background upon which all things unfold within spacetime. Likewise, Immanuel’s Law’s description matches Immanuel Kant’s description of the self as a grounding of concepts per one’s grasp of space and time. And because the classical-physics-based theory of evolution cannot account for the brain’s quantum-based operations (since the non-dimensional quantum-based operations enable the macroscopic evolutionary processes within the spacetime dimensions rather than being an outcome of those processes), Immanuel’s Law theologically demonstrates that the appearing universe stands upon the priority of a logic-based divine expression (God’s Logos), from which we can philosophically frame the emergence of our knowledge structure and ethical understanding.
Howbeit, Immanuel’s Law’s theological consequences are far beyond the scope of our discussion. Suffice it to say that the natural-evolution approach, as demonstrated by the example paper, holds no mechanical content from which we may glean philosophical or ethical understanding. The authors’ conclusion that consciousness occurs at the collapse of integrative and complex wave functions yields no mechanical operations that we can assess as contributing to or diminishing the structure of knowledge. Again, one must note that the authors’ conclusions maintain a classical-physics worldview in which volume, intensity, and duration effect force exchanges. In simpler terms, their conclusions raise the following questions: at what point in the neuronal networks’ increasing complexity does consciousness occur? What is the exact operation that the increasing complexity must achieve to establish consciousness?
Decidedly, the natural-evolution approach to determining conscious-cognition’s origin in the brain fails. The correct response for us is to advance the mechanical designer’s model, Immanuel’s Law, for the betterment of society.
We Need Your Support
We must again emphasize that we do not mean to single out the authors of the example paper cited above. Nor are we entirely dismissing certain observable conclusions of the theory of evolution, upon which the authors’ paper stands. We fully recognize that biological species adapt to their environment over long periods of time; however, we understand that evolutionary theory describes a macroscopic effect and not necessarily the cause of our coming into being. And so, we are sure that the authors of the paper and the institutions that have supported their work are consummate professionals with brilliant understandings of their craft. In fact, we are sure that if the authors could see beyond the narrow scope of evolution and recognize the wide scope of the mechanical-design approach, they would more than likely elaborate Immanuel’s Law far beyond the humble mechanical designer’s capacity.
But until such a time, we need your support in promoting the mechanical designer Derick M. Fuller’s work, Immanuel’s Law. The funding for advertising and promoting Immanuel’s Law comes from sales of the doctrinal treatise that presents Immanuel’s Law in its third chapter: “The Landscape of Truth: An Orthodox Understanding of the Biblical Testaments for the True Worshippers”.
Please purchase a copy from any online retailer or find a link here on thelandscapeoftruth.com. The Landscape of Truth is a systematic theology of the Holy Bible, designed for Church Bible studies and Church institutes. The Landscape adheres to the Creeds of Christendom and maintains a mainstream view of Reformed Theology, from the Protestant Reformation.
For more information about Immanuel’s Law, itself, please view the promotional video (Immanuel’s Law: the World’s First Scientifically Falsifiable Proof for the Existence of God and the Human Soul) on YouTube, which you may also find here on our website. Also, find the mechanical designer’s open letter to cognitive neuroscientists.
The Mechanism of Consciousness and its Qualia from an Aerospace Mechanical Designer (An Open Letter)
Published March 29, 2022 - thelandscapeoftruth.com
Greetings,
Throughout my 30-year career as an aerospace mechanical designer, I have come to recognize that mechanical application (the producing of mechanisms to assist us in interacting with our environment) is the greatest scientific proof that we philosophically comprehend the immediate environmental domain our mechanisms operate upon. However, I also recognize that as engineers continue the quest to test and establish scientific theories, they face increasing difficulties in devising technology for testing apparatuses, as well as in securing material and monetary resources. As a consequence, I have observed that prospective project teams face increasing difficulties in conveying the importance of their work to potential funders and the general public. Having recognized these challenges, I have acquired an acute appreciation for the importance of being able to frame theories philosophically: gaining an epistemological understanding of what information the theories produce for the general public, as well as an ontological understanding of the theories in regards to the mechanical relationship between their distinct parts.
With that said, I understand that a major obstacle to achieving a full philosophical conception of proposed projects is the reality that philosophical concepts have long since lost their capacity for scientific verification, ever since the German philosopher Immanuel Kant’s work, the Critique of Pure Reason, demonstrated that conceptions of the self or metaphysical understandings of God’s creative work are impossible to prove objectively within space and time. And so, as theories become increasingly harder to demonstrate or prove, scientific research increasingly advances as a ship without a rudder.
Thankfully, however, I have grown in appreciation of the fact that the discipline of cognitive neuroscience proves itself the most capable discipline to rescue our philosophical speculations with hard science. I believe that neuroscientists could demonstrate that cognitive neuroscience should be considered the leading science if and only if they can publicly prove their theories by demonstrating mechanical application: essentially, they must demonstrate that they have found the mechanism of consciousness. Indeed, only by demonstrating a scientifically falsifiable mechanism for consciousness (the most valuable thing to everyone) can cognitive neuroscience capture the general public’s regard for the discipline’s being essential to science.
And so, for your consideration, I would like to offer what I have determined to be the mechanism of consciousness, again from the design perspective of an aerospace mechanical designer. To begin, however, I must emphasize that cognitive neuroscience relies heavily upon envisioning how the process of natural selection developed the brain, as the fitness of neurological systems adapted to seemingly random processes. Yet, while I recognize the impact of the natural-selection process, I must underscore the fact that Charles Darwin envisioned the theory of evolution against the backdrop of a Newtonian, classical-physics understanding of the world, which had no notion of the world’s true statistical nature, as quantum mechanics now reveals. Therefore, I understand that merely envisioning how the brain adapted to random processes does not reveal the brain’s necessary capacity to statistically model the world in order to present consciousness’ uniform and normalized perceptual concepts.
I am sure that you are acutely aware that a multitude of pseudo-scientific theories of consciousness exploit the mysterious nature of quantum physics; however, by stressing only a mechanical model that we can observe, we can bypass the nonsense and grasp a sound understanding of the world that the brain and the nervous system interface with, in order for the brain to achieve and maintain consciousness.
So, to distinguish a straightforward and sober understanding of the quantum implications that we must factor into our understanding of how the brain interfaces with the world, we first have to distinguish the accompanying philosophical building blocks that the mechanism of consciousness must entail: the mechanism must integrate the brain and its nervous system to establish a unified perception of the self, as the mechanism rationalizes conceptions of the immediately perceived environment within perceived universals that transcend the immediate perceptions. In other words, the mechanism must resolve the ancient question of universals, that is, how the mind grasps universal occurrences in the environment from which it can construct knowledge through analysis, categorization, and typification: the building blocks of consciousness. Of course, Plato attempted to resolve the problem of universals with his description of how we grasp perfect ethereal ideals through reason; Immanuel Kant attempted to describe how we grasp an apperception of space and time upon which we ground perceptions; and phenomenologists like Martin Heidegger imagined that we apprehend a background (Dasein) through which we integrate prior, at-hand, and ongoing perceptions within space and time. Even the designers of artificial intelligence struggle with the problem of universals, as they seek to employ “deep learning” neural networks to simulate the precognitive knowledge base of humans.
Keeping our pursuit of the mind’s grasping of universals in mind, the key principles that we must note from quantum mechanics are the following. First, for the converging and expanding forces of energy to extend spacetime, energy must be discrete and permeate in distinct values, because absolute energy would constitute a singularity. Second, we can only statistically locate the exact position and momentum of subatomic matter within the atomic configuration, and we can only statistically locate the exact position and momentum of subatomic matter that interfaces with the atom, because specific locations would imply a linear graduation that would cumulatively undermine the a priori distinct definitions of allowable energy levels that enable ordered interactions. Hence, definitive measurements yield only nonsensical infinities. As an outcome, any direct observation of a subatomic particle renders the accompanying alternate quantum state unobservable; therefore, lastly, the priority of the natural laws that govern quantum interactions, and the mathematical invariance that governs and normalizes those interactions through the exchange of forces (like the exchange of photons between electrons), are what the brain must statistically associate with to model conceptions of the normalized world. In short, the brain’s capacity to grasp the invariant constants of proportionality that coordinate and normalize the discrete and distinct manifestations of energy is the means for the brain to grasp universals, as I shall briefly describe in identifying the mechanism of consciousness:
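As a concrete anchor for the discreteness claim, consider a standard textbook calculation (offered here only as an illustration of the Planck-Einstein relation, not as a part of Immanuel’s Law itself): a photon’s energy comes only in the quantum E = h·f, and a bound electron can exchange photons only at the discrete gaps between its allowed energy levels.

```python
# Planck-Einstein relation E = h * f, and hydrogen's discrete energy levels
# E_n = -13.6 eV / n^2 (the Bohr-model values) -- standard physics, used here
# purely to illustrate energy's discreteness.
PLANCK_H = 6.626e-34            # joule-seconds
EV_PER_JOULE = 1.0 / 1.602e-19  # conversion from joules to electron-volts

def photon_energy_ev(freq_hz):
    """Energy of a single photon of the given frequency, in electron-volts."""
    return PLANCK_H * freq_hz * EV_PER_JOULE

def hydrogen_level_ev(n):
    """Allowed (discrete) energy of hydrogen's n-th level, in electron-volts."""
    return -13.6 / n**2

green_photon = photon_energy_ev(5.4e14)                     # about 540 THz, green light
lyman_alpha = hydrogen_level_ev(2) - hydrogen_level_ev(1)   # n=2 -> n=1 gap

print(f"green photon energy : {green_photon:.2f} eV")   # about 2.23 eV
print(f"hydrogen n2-n1 gap  : {lyman_alpha:.2f} eV")     # about 10.20 eV
```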
Currently, cognitive neuroscience is adept at mapping the brain regions that enable sense-perceptions like touch, facial recognition, sight, and hearing; however, the discipline does not account for the fact that, in the real world, great variations of energy and its magnitudes constitute what the brain seeks to mimic. Therefore, to envision how the brain mechanically produces sense-perceptions of the real world as it achieves the mechanism of consciousness, we must keep in mind that the brain has to capture, isolate, reconcile, and statistically generate calibrated and recalibrated conceptual relationships of magnitudes within ordered spacetime intervals. At the same time, we must understand that the brain must reconcile all of this within an invariant relationship, a constant of proportionality between the conceptual magnitudes. Like so, we will understand how the brain achieves a normalized perception of the real world.
We call our falsifiable hypothesis Immanuel’s Law: it is an epistemological understanding of, first, how the world is invariantly expressed to us in a manner that enables us to reconcile instances of phenomena within universal constants of proportionality and, second, how the brain apprehends that universal proportionality. To this end, Immanuel’s Law entails four principles: first, cognition results from the brain’s mechanical operation that semi-autonomously employs its nervous system to perform a renormalizing process that seeks to engage the body’s environment (an environment that results from invariant force exchanges that normalize energy’s quantum indeterminacy); second, the brain’s mechanical operation entails an initial renormalizing process; third, a personalizing renormalizing process; and fourth, a matching and retention renormalizing process that constitutes consciousness itself.
The essential points of how Immanuel’s Law hypothesizes the mechanism of consciousness are as follows: the input of the mechanism proceeds from the simple reflex arcs of sensory neurons throughout the body. The signals then travel through the autonomic area of the brain stem unto the brain’s cortex, where the initial renormalizing process occurs: rather than executing only simple electro-chemical reflex arcs, the cortex executes more complex electro-chemical exchanges that bear a plasticity that isolates and captures instances of magnitudes, while statistically capturing the potentiality of like ongoing instances within the discrete domains that the particularity of the cellular structures allows. To be specific, the isolating of discrete magnitudes occurs as a result of the particular architecture of neuron cells, the neurotransmitters between the cells, and the specialized neuronal cell layers within the cortex. And the capturing of discrete magnitudes (by the particular neuron cells, neurotransmitters, and the cortex’s specialized layers) occurs as the neurons’ electrochemical exchanges generate electromagnetic fields that consist of particle-waves, which exchange force-carrying photons. And so, the electromagnetic fields have both quantum and macroscopic properties, to the end that one cannot argue that the quantum effects of the brain are merely localized.
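One way to picture this initial renormalizing step in the abstract is the following minimal sketch. It is my own toy illustration of running statistical normalization, under the assumption that “isolating and capturing magnitudes” can be caricatured as re-expressing each raw sensory magnitude against a running estimate of the magnitudes seen so far; it is not the letter’s specification of the brain’s actual operation.

```python
import math

# Toy sketch: each raw sensory magnitude is re-expressed relative to a running
# statistical estimate of the magnitudes seen so far, so that wildly different
# input scales land on one common, normalized scale.
class RunningNormalizer:
    def __init__(self, rate=0.1):
        self.rate = rate          # how quickly the running estimate adapts
        self.mean = 0.0
        self.var = 1.0

    def normalize(self, magnitude):
        # Update the running estimates, then express the input in units of the
        # estimated spread (a crude stand-in for a "proportional" code).
        self.mean += self.rate * (magnitude - self.mean)
        self.var += self.rate * ((magnitude - self.mean) ** 2 - self.var)
        return (magnitude - self.mean) / math.sqrt(self.var + 1e-9)

touch = RunningNormalizer()
for raw in [0.2, 0.3, 5.0, 0.25, 0.22]:      # an outlier amid small magnitudes
    print(round(touch.normalize(raw), 2))
```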
Next, the personalizing renormalizing process begins as the thalamic circuit possesses complementary neuronal structures akin to the cortex’s structures. As a result, the complementary neural-layer structures of the thalamic circuit and the cortex apprehend constants of proportionality through their respective electromagnetic fields. And so, the matching and retention renormalizing process ensues as existing cortical and thalamic proportional neuronal allotments either assimilate new sensory inputs or adapt to them. As a final outcome, during the brain-mechanism’s attempts to adapt new sense impressions to the neuronal nets’ proportional allotments, the brain’s mechanism is instantly translated into a dynamic reference frame, that is, the neuronal correlate of consciousness that actively seeks to assimilate ongoing sense-perceptions unto its a priori sense of the universality of proportional wholes.
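Likewise, the matching and retention step can be caricatured (again, my own analogy rather than the mechanism itself) as an assimilate-or-adapt rule: a new normalized magnitude either nudges an existing stored prototype that it closely matches, or is retained as a new prototype to which the system must adapt.

```python
# Toy assimilate-or-adapt rule (an analogy, not Immanuel's Law's mechanism):
# a new value is either folded into the nearest stored prototype or retained
# as a new prototype when nothing matches closely enough.
def match_and_retain(prototypes, value, tolerance=0.5, learn_rate=0.2):
    if prototypes:
        nearest = min(prototypes, key=lambda p: abs(p - value))
        if abs(nearest - value) <= tolerance:
            # Assimilate: pull the matching prototype toward the new input.
            idx = prototypes.index(nearest)
            prototypes[idx] = nearest + learn_rate * (value - nearest)
            return prototypes
    # Adapt: no close match, so retain the input as a new prototype.
    prototypes.append(value)
    return prototypes

store = []
for v in [1.0, 1.1, 4.0, 0.9, 4.2]:
    store = match_and_retain(store, v)
print([round(p, 2) for p in store])   # two clusters of retained prototypes
```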
And so, Immanuel’s Law concludes that the haecceity of consciousness (its qualia, its like-thisness) is a magnitudinous sense bearing the mechanistic physicality of a dynamic reference frame; that is, not merely a classical Cartesian reference frame or an Einsteinian relativistic reference frame, but a dynamic participant in the fluid unpacking of the normalized emergence of spacetime.
Much more can be said about the content of Immanuel’s Law, especially in regards to its scientific, theological, and physical implications. Indeed, evolutionary theory describes how, over long periods of time, species grow in sophistication through adaptive processes according to their fitness; however, evolutionary theory cannot explain how the brain anticipates the modal effects of processes that occur on the subatomic level, with no definitive spacetime values. Nonetheless, the mechanism that Immanuel’s Law describes is entirely falsifiable; therefore, many supporters of science’s pursuit of objectivity will welcome its perspective. The general public, with its faith-based leanings, will embrace the theory as well. For example, Immanuel’s Law’s ultimate conclusions correspond with the Greek philosophers’ conception of a first-cause prime mover that initiates the world logically. Likewise, Immanuel’s Law corresponds with Christian theology’s description of God’s absolute ontological being, YHWH (I am), expressing the world through a Logos (a logical expression). Immanuel’s Law even agrees with the German philosopher Immanuel Kant’s concession that there is at least epistemological utility in imagining that an ordered world proceeds from the designs of a supernatural creator. Whatever your personal perspective, please consider Immanuel’s Law’s merit for the betterment of all. You can find more information by searching for the Immanuel’s Law promotional video on YouTube (see section VII at 1 hr 6 min). The video is audaciously titled “Immanuel’s Law: the World’s First Scientifically Falsifiable Proof for the Existence of God and the Human Soul”.
Otherwise, the doctrinal treatise can be found at online retailers under the title “The Landscape of Truth: An Orthodox Understanding of the Biblical Testaments for the True Worshippers”.
Thank you,
Derick M. Fuller
The Need for a Scientific Understanding of the Human Spirit’s Preeminence: The Need to Confront the Ethical Challenge of Introducing Artificial Intelligence into the Global Economy’s Arms and Technology Race
Published April 29, 2018 – thelandscapeoftruth.com
In the mid-20th Century, moral questions concerning our nation’s defense, foreign, and economic policies seemed crystal clear. The propriety of the policies especially seemed clear in regards to our nation’s defending Western democracy against the spread of totalitarian socialism, as instigated by the Soviet Union.
Since the Protestant Reformation and the discovery of the New World, commercial opportunities have empowered European commoners to form democratic states that ensure individual freedoms in a free market. Western democracies have for the most part safeguarded citizens’ rights. To date, Western governments routinely address the ethical concerns of emerging technologies and commercial development, and they guard against economic exploitation.
For the people’s interests, Western governments have successfully balanced political platforms that encourage strong government intervention into the economy with political platforms that relegate the government to playing merely an oversight role. Defending the people’s political choices, Western governments have successfully defended their democracies against regimes that not only force government control over their economies, but also seek to impose the regimes’ powers over vulnerable sovereign states. And so, Western governments’ greatest ethical concern, in regards to the free market’s technological propagation, is the proliferation of nuclear technology amongst autocratic socialist regimes that routinely threaten the sovereignty of other nations.
Banding together their military forces and economies, Western nations peacefully met the nuclear threat of the Soviet Union. They isolated the Soviet Union economically in a so-called Cold War. Then after the socialist economy and government of the Soviet Union fell, Western nations promoted the global economy through free trade deals. Like so, Western nations effectively pacified the autocratic nations by making the autocracies’ economic gains more beneficial than the autocracies’ hostile military gains.
The moral questions concerning our nation’s defense, foreign, and economic policies seemed clear during the Cold War; however, they are increasingly unclear in the post-Cold War global era. For example, Western leaders believed that trading with autocratic nations would inspire those nations to democratize because of their economic gains. Yet the Western leaders failed to see that the autocratic nations would rather maintain political stability than risk the social turmoil that often accompanies the free market’s cycles: they fear the turmoil that usually results from a nation’s inability to ensure constant economic growth for all. The autocratic nations especially avoid liberating their peoples, because the autocracies benefit from the cheap labor of the oppressed. As an outcome, Western nations’ entering free trade agreements with non-democratic nations inadvertently sanctions Western nations’ democratic policies and non-Western nations’ oppressive policies alike. And so, unlike past generations, the current post-Cold War millennial generation stands without moral clarity: the generation often lacks the moral compass to make ethical decisions about the many technological innovations that affect society as a whole.
Though post-Cold War multinational forces restrain autocratic nations from overthrowing vulnerable nations, the proliferation of nuclear armament remains a threat for the millennial generation to face. Further complicating policy decisions is the proliferation of artificial intelligence (AI) without an ethical determination of how AI technology poses unforeseen risks to national defense, human health, labor, and even the commoners’ hard-fought political enfranchisement.
To understand the significance of artificial intelligence (AI), we have only to underscore what sets it apart from other inventions. Many abilities set human nature apart from the animal kingdom; however, the two key abilities that stand out are humanity’s capacity to reason and humanity’s capacity to produce tools that enable it to overcome its environment. Arguably, AI is the greatest tool that humanity has produced: every tool, be it a hand-held tool or an automated tool, requires the skilled decision making and oversight of the person who employs it; AI, however, gives the tool itself a decision-making capacity. AI is a computerized machine that mimics human cognition by appearing to perceive its environment as it learns and solves problems within a set domain.
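As a trivial illustration of what “learning and solving problems within a set domain” amounts to (our own toy example, not a description of any particular AI system), the snippet below learns a single numeric threshold from labeled examples and thereafter makes its own classification decisions within that narrow domain:

```python
# Toy "set domain" learner: find a temperature threshold that separates the
# labeled training examples, then decide on new cases without human oversight.
samples = [(15.0, "ok"), (18.0, "ok"), (30.0, "alarm"), (35.0, "alarm")]

# Learn: place the threshold midway between the highest "ok" reading and the
# lowest "alarm" reading seen in the training data.
highest_ok = max(t for t, label in samples if label == "ok")
lowest_alarm = min(t for t, label in samples if label == "alarm")
threshold = (highest_ok + lowest_alarm) / 2.0

def decide(temperature):
    """The tool's own decision, valid only inside its narrow training domain."""
    return "alarm" if temperature >= threshold else "ok"

print(threshold)       # 24.0
print(decide(22.0))    # ok
print(decide(28.0))    # alarm
```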
As with the employment of any tool, the employment of AI requires ethical oversight because benefits and dangers arise with the utilization of all tools. Our distant ancestors either decided to use stone flakes as domicile instruments, or decided to use them as spear and arrow heads for weaponry. More recent ancestors at times either decided to transform plowshares into swords, or decided to transform swords into plowshares. Our current ethical dilemma arises from the fact that a rogue nation that has the capacity to produce nuclear energy may easily decide to use its nuclear capacity to produce nuclear weapons.
The essential benefit that artificial intelligence presents is as follows: AI offers labor or services as an auxiliary to, or augmentation of, the human capacity to offer the same labor or services, especially in regards to the degree to which it can offer them. AI often performs under physical circumstances that inhibit or exhaust human performance. And so, the benefits of AI are far reaching, notably in targeted retail options as well as targeted media and telecommunications services, for example, smart televisions and cellular phones. AI’s performing services in the targeted media and telecommunications capacity amounts to the real-time offering of knowledge from, as it were, expert panels standing ready to serve any consumer per that consumer’s immediate need. AI in robotics offers even more impressive benefits, as seen in unmanned space exploration, manufacturing, transportation, and military operations.

Because artificial intelligence stands upon its capacity to mimic human cognition in decision making, the danger that AI poses lies in its limited capacity to achieve the human level of abstract reasoning: AI excels in quantitative decision making but falls short in qualitative decision making. In other words, AI cannot effectively weigh acceptable against unacceptable risks per an abstract sense of worth, as the sketch below illustrates; therefore, the continued employment of AI during instances when unacceptable risks are evident results in moral compromises.

For example, many think that the only moral question in increasing AI’s presence in manufacturing is the risk of losing low-skilled workers’ vital manufacturing jobs. Many express concern that crucial Western manufacturing jobs will vanish, as previous manufacturing jobs have vanished to autocratic nations that economically exploit their laborers. Howbeit, the true moral question directly concerns our trusting a vital part of our economy to the limited decision-making capacity of artificial intelligence. Having an interest in career advancement, human laborers play a critical role as they routinely spy out ways to improve products and production. Human laborers also routinely function as whistle blowers for the general public when products are faulty or when companies operate corruptly. Thus, performing with the critical judgment that AI lacks, human laborers play a significant role in advancing economic innovation and better working environments for the general public.
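To make the quantitative-versus-qualitative distinction concrete, here is a deliberately crude sketch (our own illustration, not a description of any deployed system) of a purely quantitative policy that picks the action with the lowest expected numerical cost; whatever cannot be expressed as a number, such as the abstract worth of what is put at risk, never enters the calculation:

```python
# A purely quantitative decision rule: choose the action whose expected
# numerical cost is lowest. Anything without a number attached (dignity,
# trust, the worth of a life) is invisible to this calculation.
actions = {
    "proceed":  [(0.95, 0.0), (0.05, 100.0)],   # (probability, cost) outcomes
    "hold off": [(1.00, 8.0)],                  # certain, modest cost
}

def expected_cost(outcomes):
    """Probability-weighted cost of one action's possible outcomes."""
    return sum(p * cost for p, cost in outcomes)

best = min(actions, key=lambda name: expected_cost(actions[name]))
for name, outcomes in actions.items():
    print(f"{name}: expected cost {expected_cost(outcomes):.1f}")
print("chosen:", best)   # 'proceed' (5.0) beats 'hold off' (8.0)
```

A human decision maker, by contrast, may refuse the “proceed” option outright once its rare outcome is judged unacceptable in kind rather than merely costly in degree.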
We may find a litany of examples demonstrating how full reliance upon artificial intelligence’s limited capacity for labor or services entails adverse or even moral consequences. For instance, engineers can program autonomous cars to recognize and avoid stray animals crossing roadways; however, only human reasoning enables a driver to take greater care, and accept greater risk, after distinguishing an encounter with a small puppy on a misadventure from a cunning, agile fox of the same size. Indeed, our human appreciation for the life of an adorable puppy is almost instinctive; however, our true concern is the risk to human life that AI’s limited capacity poses. Military conflict continues to be the greatest threat to human life, especially when the conflict involves nuclear weapons, and the introduction of AI to weaponry exponentially increases the threat. Engineers can program an autonomous drone to return fire when hostile forces fire upon it; however, only a human commander can risk using nonlethal force in deference to diplomacy after distinguishing that the hostile forces are unwilling participants who fire amiss.
Our ultimate concern about fully relying upon artificial intelligence’s limited capacity is AI’s proliferation among autocratic nations who will more than likely ignore the risks. Though the free Western nations triumphed in the Cold War’s nuclear standoff with the Soviet Union, the increasing introduction of artificial intelligence into military weaponry heightens the threat of an uncontrollable war like never before.
Proponents of artificial intelligence believe that they can address AI’s unacceptable risks by programming ethical determinations into AI’s limited decision making capacity. AI developers draw inspiration from cognitive neuroscientists. The neuroscientists have successfully employed a systems approach to mapping the brain’s neurological production of the excitatory and inhibitory chemicals that produce awareness states, emotions, and the other cognitive functions that render conscious decision making. Of particular interest to AI developers is the effort to determine how the neurological processes produce cognition’s predisposition to ascertain spatial awareness of object-property relationships: the predisposition enables conscious predictions; moreover, the predisposition secures an intuitive sense of worth, which limits endless data mining.
For the people’s interests, Western governments have successfully balanced political platforms that encourage strong government intervention into the economy with political platforms that relegate the government to playing merely an oversight role. Defending the people’s political choices, Western governments have successfully defended their democracies against regimes that not only force government control over their economies, but also seek to impose the regimes’ powers over vulnerable sovereign states. And so, Western governments’ greatest ethical concern, in regards to the free market’s technological propagation, is the proliferation of nuclear technology amongst autocratic socialist regimes that routinely threaten the sovereignty of other nations.
Banding together their militaries and economies, Western nations met the Soviet Union’s nuclear threat peacefully, isolating it economically in what became known as the Cold War. After the Soviet Union’s socialist economy and government collapsed, Western nations promoted a global economy through free trade agreements. In this way, they sought to pacify the remaining autocratic nations by making economic gains more attractive than hostile military gains.
The moral questions surrounding our nation’s defense, foreign, and economic policies seemed clear during the Cold War; in the post-Cold War global era they have grown increasingly murky. For example, Western leaders believed that trading with autocratic nations would inspire them to democratize as their economies grew. Those leaders failed to see that autocratic nations would rather preserve political stability than risk the social turmoil that often accompanies the free market’s cycles, the turmoil that typically follows a nation’s inability to ensure constant economic growth for all. Autocratic nations especially avoid liberating their peoples, because the regimes profit from the cheap labor of the oppressed. As a result, when Western nations enter free trade agreements with non-democratic nations, they inadvertently sanction democratic and oppressive policies alike. And so, unlike past generations, the post-Cold War millennial generation stands without moral clarity: it often lacks the moral compass to make ethical decisions about technological innovations that affect society as a whole.
Though post-Cold War multinational forces restrain autocratic nations from overthrowing vulnerable ones, the proliferation of nuclear armament remains a threat that the millennial generation must face. Further complicating policy decisions is the proliferation of artificial intelligence (AI) without any ethical determination of the unforeseen risks that AI technology poses for national defense, human health, labor, and even the common people’s hard-fought political enfranchisement.
To understand the significance of artificial intelligence (AI), we need only underscore what sets it apart from other inventions. Many abilities set human nature apart from the animal kingdom, but two stand out: humanity’s capacity to reason and humanity’s capacity to produce tools that allow it to overcome its environment. Arguably, AI is the greatest tool humanity has produced. Every other tool, whether handheld or automated, requires the skilled decision making and oversight of the person who employs it; AI gives the tool itself a decision-making capacity. AI is a computerized machine that mimics human cognition by appearing to perceive its environment as it learns and solves problems within a set domain.
As with any tool, the employment of AI requires ethical oversight, because every tool brings both benefits and dangers. Our distant ancestors could use stone flakes as household implements or as spearheads and arrowheads for weaponry. More recent ancestors at times beat plowshares into swords, and at other times swords into plowshares. Our current dilemma follows the same pattern: a rogue nation with the capacity to produce nuclear energy may just as easily decide to use that capacity to produce nuclear weapons.
The essential benefit of artificial intelligence is this: AI offers labor or services that augment the human capacity to provide the same labor or services, often to a degree humans cannot match, and it frequently performs under physical circumstances that inhibit or exhaust human performance. The benefits of AI are therefore far reaching, notably in targeted retail as well as targeted media and telecommunications services such as smart televisions and cellular phones. In that media and telecommunications capacity, AI amounts to the real-time offering of knowledge from, as it were, expert panels standing ready to serve any consumer according to that consumer’s immediate need. AI in robotics offers even more impressive benefits, as seen in unmanned space exploration, manufacturing, transportation, and military operations. Because artificial intelligence rests upon its capacity to mimic human cognition in decision making, the danger it poses lies in its limited capacity for human-level abstract reasoning: AI excels in quantitative decision making but falters in qualitative decision making. In other words, AI cannot effectively weigh acceptable against unacceptable risks according to an abstract sense of worth; therefore, continuing to employ AI where unacceptable risks are evident results in moral compromises. For example, many think that the only moral question raised by AI’s growing presence in manufacturing is the loss of low-skilled workers’ vital manufacturing jobs. Many fear that crucial Western manufacturing jobs will vanish, as earlier manufacturing jobs vanished to autocratic nations that economically exploit their laborers. However, the deeper moral question concerns entrusting a vital part of our economy to the limited decision-making capacity of artificial intelligence. Having an interest in career advancement, human laborers routinely spot ways to improve products and production; they also function as whistleblowers for the general public when products are faulty or companies operate corruptly. Thus, exercising the critical judgment that AI lacks, human laborers play a significant role in advancing economic innovation and better working conditions for the general public.
A litany of examples demonstrates how full reliance upon artificial intelligence’s limited capacity for labor or services entails adverse and even morally significant consequences. Engineers can program autonomous cars to recognize and avoid stray animals crossing roadways; however, only human reasoning lets a driver take greater care, and greater risk, once the human distinguishes a small puppy on a misadventure from a cunning, agile fox of the same size. Our appreciation for the life of an adorable puppy is almost instinctive; our deeper concern, though, is the risk to human life that AI’s limited capacity poses. Military conflict remains the greatest threat to human life, especially when the conflict involves nuclear weapons, and the introduction of AI into weaponry multiplies the threat. Engineers can program an autonomous drone to return fire when hostile forces fire upon it; however, only a human commander can risk using nonlethal force in deference to diplomacy after discerning that the hostile forces are unwilling participants firing amiss.
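To make the limitation concrete, the following minimal Python sketch caricatures the kind of purely quantitative avoidance rule described above. The labels, thresholds, and distances are our own hypothetical illustrations rather than any manufacturer’s actual driving logic; the point is only that every quantity the rule weighs is measurable, while the worth of what was detected never enters the calculation.

```python
# A minimal, hypothetical sketch of a purely quantitative avoidance rule.
# Nothing here reflects a real driving stack; the names and thresholds are
# invented to illustrate the argument in the surrounding text.

from dataclasses import dataclass


@dataclass
class DetectedObject:
    label: str          # e.g. 'small_animal'; the system cannot say 'puppy' vs 'fox'
    confidence: float   # detection confidence, 0..1
    distance_m: float   # estimated distance in meters


def avoidance_action(obj: DetectedObject, speed_mps: float) -> str:
    """Choose an action from detection statistics alone.

    The rule weighs measurable quantities (confidence, distance, speed),
    but it has no notion of the worth of what was detected, so a puppy
    and a fox of the same size produce the same decision.
    """
    stopping_margin = speed_mps * 1.5          # crude reaction-distance estimate
    if obj.confidence > 0.8 and obj.distance_m < stopping_margin:
        return "brake"
    if obj.confidence > 0.5:
        return "slow_down"
    return "continue"


if __name__ == "__main__":
    puppy_or_fox = DetectedObject(label="small_animal", confidence=0.9, distance_m=12.0)
    # Prints "brake" regardless of which animal it actually is.
    print(avoidance_action(puppy_or_fox, speed_mps=15.0))
```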
Our ultimate concern about fully relying upon artificial intelligence’s limited capacity is AI’s proliferation among autocratic nations that will more than likely ignore the risks. Though the free Western nations prevailed in the Cold War’s nuclear standoff with the Soviet Union, the increasing introduction of artificial intelligence into military weaponry heightens the threat of an uncontrollable war as never before.
Proponents of artificial intelligence believe they can address AI’s unacceptable risks by programming ethical determinations into AI’s limited decision-making capacity. AI developers draw inspiration from cognitive neuroscientists, who have employed a systems approach to mapping the brain’s production of the excitatory and inhibitory chemicals that give rise to awareness states, emotions, and the other cognitive functions underlying conscious decision making. Of particular interest to AI developers is how these neurological processes produce cognition’s predisposition to form a spatial awareness of object-property relationships: that predisposition enables conscious prediction and, moreover, secures an intuitive sense of worth, which spares the mind from endless data mining.
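For readers unfamiliar with the neuroscience, the interplay of excitatory and inhibitory signals can be caricatured in a few lines of Python. The leaky integrate-and-fire toy below is a deliberately crude sketch with illustrative constants of our own choosing, not physiological measurements; it shows only how opposing inputs integrate into discrete firing events, not how awareness states or a sense of worth arise.

```python
# A toy sketch, not a claim about any specific neuroscience model: a single
# leaky integrate-and-fire neuron driven by excitatory and inhibitory inputs.
# All constants are illustrative rather than physiological.

def simulate_lif(excitatory, inhibitory, dt=1.0, tau=20.0, threshold=1.0):
    """Integrate excitatory (+) and inhibitory (-) input over time.

    Returns the time steps at which the membrane potential crosses the
    firing threshold and resets: a cartoon of how opposing chemical
    signals combine into discrete firing events.
    """
    v = 0.0
    spikes = []
    for t, (e, i) in enumerate(zip(excitatory, inhibitory)):
        dv = (-v + e - i) * (dt / tau)   # leak toward rest, push by net input
        v += dv
        if v >= threshold:
            spikes.append(t)
            v = 0.0                      # reset after a spike
    return spikes


if __name__ == "__main__":
    exc = [1.5] * 100                    # steady excitatory drive
    inh = [0.3] * 100                    # weaker inhibitory drive
    # Prints two evenly spaced spike times for this input.
    print(simulate_lif(exc, inh))
```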
Thus, the AI developers who pursue a learning and ethical capacity for AI understand that they must somehow mimic the way human learning and ethical determinations emerge from the brain’s intuitive ability to make predictions, as the brain inherently discerns worth within its spatial awareness of objects and their properties.
To mimic the neurological structuring that gives the brain its predictive capacity, artificial intelligence developers employ deep learning algorithms, an algorithm being simply a formal procedure for solving a problem. Programmers design ordinary algorithms to respond to direct input; they design deep learning algorithms to respond to data representations of statistically derived outcomes, and in this way the algorithms attempt to mimic a learning process. Programmers have demonstrated the usefulness of deep learning in technologies such as speech recognition; however, AI enhanced with deep learning algorithms remains far from even approximating human or animal intelligence, for developers have neither discovered nor mimicked the neurological mechanism that gives the brain its predictive capacity. Nonetheless, because of AI’s technological successes, the proliferation of AI technology proceeds despite unanswered ethical questions and unacceptable risks.
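The contrast between an ordinary algorithm and a learning algorithm can be made concrete with a small sketch. In the Python below, which uses hypothetical data, a rule-based function answers from a condition its programmer stated directly, while a one-parameter logistic model recovers the same boundary only by fitting labeled examples; real deep learning stacks many such learned layers, so this is the smallest possible case of the idea, not a description of any particular system.

```python
# A minimal sketch of the contrast drawn above, using hypothetical data: a
# hand-written rule responds to direct input, while a tiny learned model fits
# its parameters to statistical patterns in labeled examples.

import math
import random


def sigmoid(z: float) -> float:
    """Numerically stable logistic function."""
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)


def rule_based(x: float) -> int:
    """'Ordinary' algorithm: the programmer states the condition directly."""
    return 1 if x > 0.5 else 0


def train_logistic(samples, labels, lr=0.5, epochs=500):
    """'Learning' algorithm: fit a weight and bias to labeled examples."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = sigmoid(w * x + b)       # predicted probability that y == 1
            w += lr * (y - p) * x        # nudge parameters toward the data
            b += lr * (y - p)
    return lambda x: 1 if sigmoid(w * x + b) > 0.5 else 0


if __name__ == "__main__":
    random.seed(0)
    xs = [random.random() for _ in range(200)]
    ys = [1 if x > 0.5 else 0 for x in xs]           # the pattern to be learned
    learned = train_logistic(xs, ys)
    print(rule_based(0.7), learned(0.7))             # both print 1
    print(rule_based(0.2), learned(0.2))             # both print 0
```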
We do not blame the proponents of artificial intelligence for proceeding with the technology even though it fails to match our human capacity for inherent reasoning and value-based judgment. Since the dawn of Greco-Roman civilization, philosophers, scientists, and theologians have tried to define the extent to which humans have an innate capacity to conceive abstract representations of the world prior to actual experience. Greco-Roman philosophers pioneered early forms of representational government in response to philosophical considerations that either questioned individuals’ innate capacity to rule themselves or questioned the need for a centralized, undemocratic government to rule humankind’s animalistic nature. The 18th Century authors of modern government likewise devised Western constitutions by first considering humankind’s state of nature: John Locke (1632-1704) inspired the framers of the U.S. Constitution with the belief that humankind has an innate reasoning capacity, so the framers sought to limit the power of the federal government in deference to the innate liberty of the individual. In contrast, Jean-Jacques Rousseau (1712-1778) inspired the framers of the French Republic’s constitution with the belief that humankind has a brutish nature in need of restraint, so those framers sought to check the amassing of private wealth and political hegemony. Religious institutions likewise took up the question of humankind’s innate capacity for reasoning and judgment: excepting some Protestant Christians, all religions hold that individuals have the innate capacity to obtain salvation through religious works, to the extent that the religious judge others who do not participate in their practices; whereas Protestants of orthodox belief understand salvation not as innate but as a discriminating divine conferment, to the extent that they do not judge others, a cultural practice that has facilitated the rise of secular progressive government in the West. In short, our ability to invest AI with an ethical capacity depends upon our ability to discover and mimic humanity’s innate reasoning capacity: a discovery that philosophers, scientists, and theologians have yet to make.
The only way to discover the extent to which we can invest artificial intelligence with an ethical capacity by capturing humanity’s innate reasoning ability is to resolve the mind-matter explanatory gap, which poses two unresolved questions: first, what gives the mind the capability to stand apart from the world while generalizing perceptions according to its own state of being; and second, to what extent does the mind’s environment frame the mind’s perceptions. Together the two questions constitute the question of nature versus nurture.
Most thinkers throughout history believed that the mind-matter explanatory gap was unsolvable; however, we at thelandscapeoftruth.com understand that the answer is now straightforward and readily apparent. Ongoing scientific, philosophical, and theological advancements presented the means to answer the question in the early 20th Century, a critical period in which physicists discovered that energy exists only in discrete quantities (quanta). The discovery carried the profound consequence that the subatomic world never exists in exact states in spacetime; physicists found that they must employ a statistical process, which they term renormalization, to connect that indeterminate microscopic realm with the macroscopic world that we seemingly observe in continuity.
As often occurs in history, scientists, philosophers, and theologians overlook the profundity of new discoveries because they fail to appreciate the vital role that each of their respective disciplines plays in advancing our understanding of those discoveries. Scientists frequently ignore philosophers’ epistemological considerations of the innate structure of our knowledge and the objective knowability of what we observe. Philosophers, likewise, regularly fail to seek the scientific method’s falsifiable tests of their speculations. Both scientists and philosophers, moreover, routinely disregard theologians’ pursuit of cosmological truth, in which theologians go one step further than the philosophers’ consideration of our knowledge structure and the knowability of what we observe: instead of merely considering the nature of our innate knowledge structure, theologians consider the human worth of our minds’ liberty apart from the world; and instead of merely considering the knowability of what we observe, they consider the inherent meaning of our experiences. Yet though our theological understandings frame our constitutional conceptions of liberty and justice, along with many other ethical determinations, those understandings are not scientifically falsifiable. In this way, scientific advances like artificial intelligence stand without an ethical compass.
Some contemporary theories do try to explain the meaning of energy’s quantized nature and of the macroscopic world that we mathematically normalize in order to secure critical observations; however, these theories amount to science fiction because none of them is scientifically falsifiable. Because none has grasped the significance of our normalized apprehension of the world, mainstream scientists, philosophers, and theologians largely ignore it. As a consequence, cognitive neuroscientists do not consider the need to renormalize energy’s quantum indeterminacy when they seek to identify the mind’s rational capacity.
We entitled our website thelandscapeoftruth.com upon the premise that science, Western philosophy, and orthodox Christian theology have played correlating roles in advancing modernity’s Western-style democracy, free market, and liberal society, and that scientists, philosophers, and Christian theologians make correlating contributions to our understanding of newfound discoveries. By appreciating the joint contribution of science, philosophy, and theology, our landscape perspective stands better placed to recognize the profound implications of energy’s quantum indeterminacy and the consequent need for us to normalize the appearing world. In point of fact, we understand consciousness itself to be a natural normalizing process: consciousness subsumes various manifestations of energy, such as light, sound waves, and other magnitudinous relationships, under a continuous perception of spacetime. Thus, we hold that cognitive neuroscience’s systems approach to defining consciousness is flawed because it fails to recognize the renormalizing function of the brain and central nervous system.
To understand how the brain apprehends consciousness, we have developed a falsifiable model that demonstrates how the brain and the central nervous system (CNS) renormalize energy’s quantum indeterminacy to achieve consciousness’s unified perception of the world. Our model, which we have entitled Immanuel’s Law, reimagines cognitive neuroscience’s systems approach by envisioning how to account for the brain’s renormalizing process. First, the model draws from a general understanding of contemporary quantum physics to detail, loosely, what the renormalizing process entails. Second, it draws from philosophy’s epistemological considerations to describe the structure of knowledge to which the renormalizing process conforms. Third, it draws on our theological understanding of freedom to describe how the mind is both free from and contingent upon the world that it perceives. Finally, to validate these scientific, philosophical, and theological principles, the model achieves scientific falsifiability.
Our model, Immanuel’s Law, may be a generalization of the manner in which the brain and central nervous system (CNS) apprehend consciousness through a renormalizing process; even so, it secures an objective means of gauging the feasibility of AI developers’ efforts to mimic human cognition as they seek to invest AI with an ethical capacity. To describe how the brain and CNS achieve an ethical capacity, Immanuel’s Law offers a falsifiable description of two abilities that the brain and CNS achieve during their renormalizing process. First, Immanuel’s Law recognizes our ability to rationalize a sense of identity and self-worth regardless of particular physical circumstances, and moreover to confer that same sense of identity and worth upon others. Second, it recognizes our ability to employ circumstances as abstract metaphors of universal values that transcend the particular. Immanuel’s Law then describes how the brain and CNS achieve these two abilities during the renormalization process. By giving detailed, falsifiable descriptions of how the human brain and CNS achieve an ethical capacity, our model effectively serves as a gauge for AI developers who seek the same ethical capacity in AI technology. Simply put, Immanuel’s Law identifies the particular neurological processes that translate directly into the first-person sensibility and the resulting moral constructs that constitute the foundation and safeguard of civilization. If AI developers can program equal sensibilities into machines, the general public could rightly trust the technology with tasks that preserve the well-being of society; if they cannot, autocratic regimes could exploit the technology for purposes that threaten the free society.
Let us briefly assess the aim of artificial intelligence in terms of the true feasibility of its pursuit: to mimic human intelligence and endow machines with it for the benefit of society. Let us measure the goals of AI developers against what our model, Immanuel’s Law, details as the actual breadth of human intelligence that the brain and central nervous system enable. In this way we can reach a sober, objective understanding of AI’s limitations and potential dangers, and then demarcate AI’s societal benefits from its threats.
To establish a sober understanding of artificial intelligence’s scope, we must look past the unqualified public angst toward AI, a hysteria that grows with the technology’s proliferation. The unknown potential of AI, like no technology before it, uncovers public insecurity about humanity’s moral aptitude to remain caretaker of the world, its environment, and a prosperous free society. The widespread opinion that AI will soon surpass human intelligence reveals a growing belief that humanity, with all its social deficiencies, will justly yield to the subjugation of a superior intelligence. At the dawn of Western civilization, the pervasive understanding was that human reason would establish democracy and freedom, revealing an enlightened humanity as the rightful custodian of a vast world and its natural resources. Now, instead of the enlightened global society that classical philosophers sought, we experience the fallout of greed spurring inequitable global commerce: a rapacious commercial force that consumes natural resources and harms the environment. We see commercial greed compelling free nations to trade with totalitarian nations, integrating free societies with oppressive ones. We witness our sought-after liberty suffering in the end: cultural conflict strengthens centralized government, even as global commerce proliferates advanced weaponry and thereby strengthens totalitarian regimes. Thus, rather than a free, enlightened society with the moral aptitude to use AI for the betterment of the people, we possess a society that tends toward authoritarianism, one that could pervert the technology to oppress free peoples.
The growing concern over the potential misuse of artificial intelligence is valid; however, remarkable as AI is, the fear that it will even approximate the intelligence of the simplest self-aware animal is laughable. AI is nothing more than the conditional instructions of computer programs governing set feature-recognition domains: mathematical models of object-property relationships and/or database circumstances that sophisticated sensor technology feeds into the system while automated mechanisms respond. Because AI systems process and respond to circumstances in a manner that surpasses human capacity, AI gives the appearance of true autonomous intelligence. Yet AI is not an actual intelligent, self-aware agent, because it lacks the self-identity, the standing capacity, that appreciates novelty and universality beyond the object-property patterns its programmers designed it to recognize. AI therefore does not possess natural intelligence’s predictive capacity to see past its impoverished senses.
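A brief sketch illustrates what we mean by a set feature-recognition domain. The pattern templates below are hypothetical stand-ins for the learned features of a real system, but the boundedness is the same: whatever falls outside the programmed domain simply falls through unrecognized.

```python
# A minimal, hypothetical sketch of a bounded feature-recognition domain.
# Real systems use learned features rather than hand-written templates,
# but the recognition domain is fixed in the same way.

KNOWN_PATTERNS = {
    "stop_sign":  {"shape": "octagon", "color": "red"},
    "pedestrian": {"shape": "upright", "color": "varied"},
}


def recognize(features: dict) -> str:
    """Return the first programmed pattern whose features all match."""
    for label, template in KNOWN_PATTERNS.items():
        if all(features.get(key) == value for key, value in template.items()):
            return label
    return "unrecognized"   # anything outside the set domain falls through here


if __name__ == "__main__":
    print(recognize({"shape": "octagon", "color": "red"}))    # stop_sign
    print(recognize({"shape": "sphere", "color": "silver"}))  # unrecognized: no standing to interpret novelty
```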
The challenge in establishing actual artificial intelligence is to determine and match what enables natural intelligence’s self-identity: the unique standing capacity that persists regardless of the environment it perceives. Cognitive neuroscientists observe neuron configurations and highly organized, sequenced neuron firings that can accommodate innumerable novel sense impressions; many therefore believe that the highly structured neuronal network gives the epiphenomenal appearance that the brain possesses an independent standing, one that appreciates novelty and universality apart from sense impressions. And so, taking these sophisticated neuron configurations as the falsifiable explanation for how the brain grasps the epiphenomenon of unique standing, AI developers seek to give machines pattern-recognition capacities that mimic the complex neuronal configurations, which the developers likewise believe to be responsible for learning.
Highly adaptable neuron configurations may be a falsifiable explanation for the manner in which the brain appears to achieve natural intelligence’s self-identity and unique standing apart from the perceived world; however, the cognitive neuroscientists’ explanation does not remotely entail a falsifiable understanding of the states of intentionality that the brain achieves. They have not explained how the brain’s intentional states are unprovoked and proactive: the intentional states seem to apprehend the underlying physical relationships to which the impoverished, appearing object-property relationships adhere. Through its intentional states, natural intelligence then seems to treat all object-property relationships as tools for maintaining the a priori precept of self-identity, which proactively pursues novel outcomes in order to establish its perspectives universally. In essence, both cognitive neuroscientists and artificial intelligence developers fail to capture how intentionality constitutes the haecceity, the first-person “like-thisness”, of our conscious experiences: the veritable closing of the mind-matter explanatory gap.
In brief, because cognitive neuroscience offers no falsifiable explanation of intentionality, it ultimately fails to discover how the brain attains human consciousness. Simply pointing out how neuronal activity maps onto aspects of human cognition does not explain how electrochemical exchanges amount to consciousness; therefore, like theologians’ and philosophers’ unsubstantiated statements concerning intentionality, cognitive neuroscientists’ proclamations amount to tautological statements, the mere repetition of unprovable conclusions without falsifiable proof.
By showing how the brain and the central nervous system (CNS) renormalize energy’s quantum indeterminacy to achieve consciousness’s unified perception of the world, our model, Immanuel’s Law, reimagines cognitive neuroscience’s systems approach and establishes a falsifiable explanation of the brain’s intentional states. Artificial intelligence developers ultimately seek to benefit society; Immanuel’s Law seeks the same, aiming to establish human freedom by corroborating the theological, philosophical, and scientific understandings of humanity’s intentional experience.
We fully recognize that not all theologies, philosophies, and scientific postulations are valid, and that many have used theological, philosophical, and scientific conclusions to oppress society. Thus, as we detail in our overview, our site thelandscapeoftruth.com champions the New Testament doctrine of election by faith through grace alone, in defiance of the religious works that many have used to oppress or discriminate against people: through the influence of the Protestant Reformation, which championed that doctrine of “grace alone”, many Europeans and Americans came to accept the rule of a secular society and of science while respecting diverse beliefs. And so, to advance a responsible and orthodox approach to the Holy Bible, we have ensconced Immanuel’s Law in our doctrinal treatise, the Landscape of Truth, which seeks to educate laypeople about the liberating principles of the New Testament that facilitated the rise of the modern state.
To understand Immanuel’s Law and the theological, philosophical, and scientific principles it unites, we must first understand how it resolves the problem of tropes: an obscure problem that nevertheless lies at the heart of our most challenging theological and philosophical problems concerning epistemology and intentionality. The question of tropes likewise lies at the heart of science’s inability to resolve the mind-matter explanatory gap and, in consequence, of AI developers’ inability to capture true artificial intelligence.
Trope, a word deriving from the Greek tropos, meaning a turn or alteration, indicates a particular instance or characteristic of something universally recognized. The capacity to recognize tropes is a fundamental building block of intelligence; yet the trope concept is deeply problematic, because it implies that we have the innate capacity to apprehend underlying physical conventions in a way that subsumes contingent perceptions under universal conceptions, often prior to our full experience of the contingent events. The trope concept therefore invites a host of theological and philosophical questions: the question of a cosmological cause; the epistemological question concerning the basis of our knowledge structure and the manner in which our environment incites it; and the ontological question concerning the very nature of being and the consequent manner in which objects relate to one another.
To some extent, we gauge the effectiveness of Immanuel’s Law by how it resolves the impasse between, on the one hand, Christian theologians’ and Western philosophers’ descriptions of the way we apprehend trope perceptions under universal conceptions and, on the other, scientists’ criticism that those descriptions entail no falsifiable proof. First, in a straightforward manner, Christian theology addresses the cosmological cause of our ability to apprehend contingent perceptions under universal conceptions: the Old Testament describes God as YHWH, absolute ontological being. The New Testament then describes God as one with His Logos expression, who disseminates, that is, metes out, God’s absolute ontological being by physically expressing the worlds, and by expressing us in God’s ontological image, to the extent that we stand upon our being in the world, having the capacity to rationalize the world and propositionally “feel after Him” (Acts 17:27), God’s absolute being.
Next, the Greek philosopher Plato became the first notable thinker to detail how human reason inherently peers past the contingent and impoverished perceptions of the world to apprehend the universal ideals beyond the trope perceptions we experience. Aristotle, Plato’s intellectual successor, then became the first to articulate the categories underlying anything that human reasoning or proposition can conceive: building blocks such as substance, quantity, quality, and relation.
Later, elaborating upon Aristotle’s work, the 18th Century philosopher Immanuel Kant (1724-1804) concluded that our capacity to stand upon our being and rationalize the world rests upon our instinctive perception of space and time. Unlike other logicians and scholars of cognition, Kant did not merely assess our propositional conceptions with a predicate logic in an attempt to verify our impoverished perceptions of temporal experience; instead, he conceived a transcendental logic that stands upon a universal spatio-temporal scheme.
Finally, in the early 20th Century, the German philosopher Martin Heidegger (1889-1976) made an important observation: our capacity to apprehend a sense of spacetime, and thereby to rationalize impoverished perceptions of objects as tropes of universals, is an essential building block of intentionality itself. Heidegger’s work approximated the reality that apprehending spacetime gives us the capacity to stand upon our being, in the sense of apprehending the continuity of our identity as we witness the progression of experiences in spacetime. Likewise, Heidegger understood that standing upon our being inherently gives us the capacity to translate the impoverished phenomena that appear to us as tropes of our universal conception of each phenomenon’s continuity of identity, persisting despite the changes the phenomenon endures in spacetime.
Though scientists rightly observe that the theologians’ and philosophers’ descriptions of our inherent standing upon our being to apprehend universals are not falsifiable, scientists themselves necessarily employ universal conceptions in every scientific theory, as in the universal employment of typology. So as scientific theories become ever harder to test and verify because of technological limitations, the need for other means of verifying our universal conceptions becomes all the more apparent.
For instance, the advancement of artificial intelligence, especially the effort to equip AI with an ethical capacity, depends upon discovering how we grasp universals: it depends upon how much built-in “foreknowledge” architecture AI machines require in order to mimic humans’ and animals’ capacity to perceive universal relationships prior to experiencing them.
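One modest illustration of such built-in foreknowledge is the convolution used throughout machine vision: because one small filter is applied at every position, the architecture assumes, before seeing any data, that the same pattern may appear anywhere. The sketch below, with a hypothetical signal and filter, shows that assumption at work; whether such priors amount to the innate grasp of universals described above is, of course, the open question.

```python
# A hedged illustration of one kind of architectural "foreknowledge": a
# convolution applies the same small kernel at every position, so the
# assumption that a pattern may occur anywhere is built in before any data
# is seen. The signal and kernel values are hypothetical.

def convolve1d(signal, kernel):
    """Slide one shared kernel across the signal (a built-in spatial prior)."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]


if __name__ == "__main__":
    edge_detector = [-1.0, 1.0]                 # responds to a rise, wherever it occurs
    signal = [0, 0, 0, 1, 1, 1, 0, 0]
    # Prints a positive spike where the signal rises, no matter where the rise happens.
    print(convolve1d(signal, edge_detector))
```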
Overall, scientists can objectively describe, in mathematically appreciable terms, the executive commands that the brain and central nervous system issue to the body; however, like theologians and philosophers, they must resort to dogmatic, tautological determinations to describe how brain regions are responsible for the rationalizations of cognition, which apprehends universality prior to specific experiences.
To establish a falsifiable understanding of how brain regions apprehend cognition, Immanuel’s Law consists of three derivative statements that are formally synthetic a priori, logic-based propositions that are provable: that is, statements that unite independent principles to derive novel, objective conclusions standing independent of those principles. Immanuel’s Law therefore unites the pertinent theological and philosophical descriptions of intentionality with an independent description of the brain’s renormalization process in order to establish an objective understanding of human cognition.
Because Immanuel’s Law pursues falsifiable proof that reconciles theological, philosophical, and scientific understandings of our environment and the manner in which we perceive it, its full scope lies beyond this editorial, which addresses only the feasibility of artificial intelligence’s matching natural intelligence’s capacity to apprehend universal conceptions and thereby its ethical sensibility. We will therefore highlight only the chief points of Immanuel’s Law’s second derivative statement, which details the brain’s renormalizing process to demonstrate how the brain achieves consciousness and the consequent framing of universal conceptions over contingent trope perceptions. The first and third derivative statements pursue falsifiable proof for cosmological purposefulness and for systematic theology, respectively.
To begin, let us recall that physicists employ mathematical renormalization techniques to reconcile energy’s quantum indeterminacy with the uniform macroscopic world. To appreciate the renormalizing process, we must understand that energy persists as quantized, discrete particle-waves whose interactions have no certain position or momentum. For this reason, physicists must treat the uncertain interactions statistically, measured against a constant of proportionality, and in this way they reconcile uncertain microscopic interactions with the macroscopic world.
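For readers who want the uncontroversial numbers behind this summary, the short Python sketch below computes the energy of a quantum of light from Planck’s relation E = hf and the minimum momentum spread allowed by the uncertainty principle, Planck’s constant being the constant of proportionality in view. The computation illustrates only quantization and indeterminacy themselves, not our broader interpretive use of renormalization.

```python
# Two standard quantum relations, computed numerically: E = h * f for the
# energy of a single quantum, and dx * dp >= hbar / 2 for the limit on
# simultaneous position and momentum precision.

import math

H = 6.62607015e-34        # Planck constant, J*s (the constant of proportionality)
HBAR = H / (2 * math.pi)  # reduced Planck constant


def photon_energy_joules(frequency_hz: float) -> float:
    """Energy of a single quantum of light at the given frequency."""
    return H * frequency_hz


def minimum_momentum_spread(position_spread_m: float) -> float:
    """Smallest momentum uncertainty compatible with a given position uncertainty."""
    return HBAR / (2 * position_spread_m)


if __name__ == "__main__":
    print(photon_energy_joules(5.0e14))          # visible-light photon, about 3.3e-19 J
    print(minimum_momentum_spread(1.0e-10))      # confinement to an atom's width, about 5.3e-25 kg*m/s
```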
The growing concern over the potential misuses of artificial intelligence is valid; however, the fear that artificial intelligence will even approximate the intelligence of the simplest self-aware animal is laughable, though AI is remarkable. AI is nothing more than the conditional instructions of computer programs that govern the set feature recognition domains that are mathematical models of object-property relationships and or database circumstances: that is, relationships and circumstances that sophisticated sensor technology inputs into the AI system while automated mechanisms respond. Because artificial intelligence systems process and respond to circumstances in a manner that surpass the human capacity, AI gives the appearance of true autonomous intelligence; however, AI is not an actual intelligent self-aware agent, because AI does not have the self-identity, standing capacity that appreciates novelty and universality above the object-property patterns that its programmers designed the AI system to recognize. AI, therefore, does not possess natural intelligence’s predictive capacity that sees past its impoverished senses.
The challenge to establishing actual artificial intelligence is to determine and match what enables natural intelligence’s self-identity, that is, natural intelligence’s unique standing capacity that persists despite natural intelligence’s perceived environment. Cognitive neuroscientists observe neuron configurations and highly organized sequenced neuron firings that can accommodate an innumerable amount of novel sense impressions; therefore, many believe that the highly structured neuronal network gives the epiphenomenal appearance that the brain possesses an independent standing that appreciates novelty and universality apart from sense impressions. And so, having considered the sophisticated neuron configurations as the falsifiable explanation for how the brain grasps the epiphenomenon of unique standing, AI developers seek to give machines pattern recognition capacities that mimic the complex neuronal configurations, which again the developers believe to be responsible for learning.
The highly adaptable neuron configurations may be a falsifiable explanation for the manner in which the brain appears to achieve natural intelligence’s self-identity and unique standing apart from the world that we perceive; however, the cognitive neuroscientists’ explanation does not remotely entail a falsifiable understanding for the states of intentionality that the brain achieves. They have not understood how the brain’s intentional states are unprovoked and proactive: the brain’s intentional states seem to apprehend the underlining physical relationships that the appearing impoverished object-property relationships adhere to. Then through its intentional states, natural intelligence seems to translate all object-property relationships as tools to maintain the a priori precept of self-identity, which proactively pursues novel outcomes to establish its perspectives, universally. Essentially, both cognitive neuroscientists and artificial intelligence developers fail to capture how intentionality constitutes the haecceity, the first-person “like-thisness”, of our conscious experiences: the veritable closing of the mind-matter explanatory gap.
In brief, because cognitive neuroscience does not entail a falsifiable explanation for intentionality, cognitive neuroscience ultimately fails to discover how the brain obtains human consciousness. Cognitive neuroscientists’ simply pointing out how neuronal activity maps onto aspects of human cognition does not explain how electrochemical exchanges amount to consciousness; therefore, like theologians and philosophers’ unsubstantiated statements, concerning intentionality, cognitive neuroscientists’ proclamations amount to tautological statements, which are the mere repeating of unprovable conclusions without falsifiable proof.
By showing how the brain and the central nervous system (CNS) renormalizes energy’s quantum indeterminacy to apprehend consciousness’ unified perception of the world, our model Immanuel’s Law reimagines cognitive neuroscience’s systems approach by establishing a falsifiable explanation of the brain’s intentional states. Artificial intelligence developers ultimately seek to benefit society. Our model Immanuel’s Law seeks the same: Immanuel’s Law seeks to establish human freedom by corroborating the theological, philosophical, and scientific understandings of humanity’s intentional experience.
We fully recognize that not all theologies, philosophies, and scientific postulations are valid. Also, we recognize that many have used theological, philosophical, and scientific conclusions to oppress society. Thus, as we have detailed in our overview, our site thelandscapeoftruth.com champions the New Testament doctrine of election by faith through grace alone, unto the defying of the religious works that many have used to oppress or discriminate against people: we recognize that through the influence of the Protestant Reformation, which championed the New Testament doctrine of “grace alone”, many Europeans and Americans began to accept the rule of a secular society and science, while respecting diverse beliefs. And so, to advance a responsible and orthodox approach to the Holy Bible, we have ensconced Immanuel’s Law in our doctrinal treatise, the Landscape of Truth, which seeks to educate laypeople about the liberating principles of the New Testament that facilitated the rise of the modern state.
To understand Immanuel’s Law and the theological, philosophical, and scientific principles that Immanuel’s Law unites, we must initially understand how Immanuel’s Law resolves the problem of tropes: an obscure problem that nevertheless lies at the heart of our most challenging theological and philosophical problems, concerning epistemology and intentionality. As well, the question of tropes lies at the heart of science’s inability to resolve the mind-matter explanatory gap and artificial intelligence developers’ subsequent inability to capture true AI.
Trope, a word that derives from the Greek word tropos, meaning to alter or to turn, indicates an instance or characteristic of something that is universally recognized. The capacity to recognize tropes is a fundamental building block of intelligence; however, the trope concept is extremely problematic because the concept implies that we have the innate capacity to apprehend underlining physical conventions in a way that translates contingent perceptions under universal conceptions, often prior to our full experience of the contingent events. The trope concept, therefore, invites a host of theological and philosophical questions like the question of a cosmological cause and the epistemological question, concerning the basis of our knowledge structure and the manner in which our environment incites our knowledge structure; moreover, the trope concept invites the ontological question, concerning the very nature of being and the subsequent manner in which objects relate to one another.
To some extent, we frame the effectiveness of Immanuel’s Law by the manner in which it reconciles the impasse between, first, Christian theologians and Western philosophers’ descriptions of the way we apprehend trope perceptions under universal conceptions and, second, scientists’ criticism of the way that the theologians and philosophers’ descriptions entail no falsifiable proof. First, in a straightforward manner, the Christian theology of God tackles the cosmological cause of our ability to apprehend universal conceptions under contingent perceptions: the Old Testament describes God as YHWH, absolute ontological being. Then the New Testament describes God as being one with His Logos expression who disseminates, that is, metes out God’s absolute ontological being by physically expressing the worlds; as well as expressing us in God’s ontological image to the extent that we stand upon our being in the world by having the capacity to rationalize the world and propositionally “feel after Him (Acts 17:27)”, God’s absolute being.
Next, the Greek philosopher Plato became the first notable thinker to detail how human reason inherently peers past the contingent and impoverished perceptions of the world to apprehend the universal ideals beyond the trope perceptions that we experience. Afterward, Aristotle, Plato’s intellectual successor, became the first notable to articulate the building blocks of anything that human reasoning or proposition can conceive: such building blocks as substance, quantity, qualification; and relative.
Later, elaborating upon Aristotle’s work, the 18th Century philosopher Immanuel Kant (1724-1804) concluded that our capacity to stand upon our being and rationalize the world rests upon our instinctive perception of space and time. Unlike other logicians and scholars of cognition, Kant did not merely assess our propositional conceptions with a predicate calculus, in an attempt to verify our impoverished perceptions of temporal experiences. Instead, Kant conceived a transcendental logic that stands upon a universal spatial-temporal scheme.
Finally, in the mid-20th Century, the German philosopher Martin Heidegger (1889-1976) made an important observation: he understood that our capacity to apprehend a sense of spacetime and thereby rationalize impoverished perceptions of objects as being tropes of universals is an essential building block of intentionality, itself. Heidegger’s work approximated the reality that our capacity to apprehend spacetime gives us the capacity to stand upon our being, in terms of our apprehending the continuity of our identity, as we witness the progression of experiences in spacetime. Likewise, Heidegger understood that our standing upon our being inherently gives us the capacity to translate the impoverished phenomena that appears as tropes of our universal conception of the phenomena’s continuity of identity, persisting despite the changes that the phenomena endure in spacetime.
Though scientists must observe that the theologians and philosophers’ descriptions of our inherent standing upon our being to apprehend universals are not falsifiable, scientists themselves necessarily employ universal conceptions in all scientific theories, such as their universal employment of typology. So, as scientific theories become increasingly difficult to test and verify because of technological limitations, the necessity of establishing other means to verify our universal conceptions becomes all the more apparent.
For instance, we know that the advancement of artificial intelligence, especially in regard to equipping AI with an ethical capacity, depends upon the discovery of how we grasp universals: we know that AI’s advancement depends upon how much foreknowledge architecture AI machines require in order to mimic humans’ and animals’ capacity to perceive universal relationships prior to experiencing them.
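As a purely illustrative aside of our own (a minimal sketch in Python, not anything drawn from Immanuel’s Law or from any existing AI system; names such as builtin_same and TabularLearner are hypothetical), the question of foreknowledge architecture can be made concrete by contrasting two toy agents asked to judge the universal relation of sameness: one agent has the relation wired in prior to any experience, while the other can only memorize labeled examples and therefore has no judgment about pairs it has never encountered.

from itertools import product

def builtin_same(a, b):
    # Foreknowledge: the universal relation of sameness is wired in before any experience.
    return a == b

class TabularLearner:
    # No foreknowledge: this agent merely memorizes the labeled pairs it experiences.
    def __init__(self):
        self.memory = {}

    def train(self, labeled_pairs):
        for a, b in labeled_pairs:
            self.memory[(a, b)] = (a == b)  # labels supplied by the environment

    def predict(self, a, b):
        return self.memory.get((a, b))      # None when the pair was never experienced

training_items = ["red", "green", "blue"]
learner = TabularLearner()
learner.train(product(training_items, repeat=2))

print(builtin_same("violet", "violet"))     # True: generalizes to a novel item at once
print(learner.predict("violet", "violet"))  # None: the pair lies outside its experience

The contrast is deliberately crude, but it mirrors the question posed above: how much of such relational structure must be engineered into a machine before its experience begins, and how much can experience alone supply?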
Overall, in mathematically measurable terms, scientists can objectively describe the executive commands that the brain and central nervous system make to the body; however, like theologians and philosophers, scientists must resort to making dogmatic, tautological determinations to describe how brain regions are responsible for the rationalizations of cognition, which apprehends universality prior to specific experiences.
To establish a falsifiable understanding of how brain regions give rise to cognition, Immanuel’s Law consists of three derivative statements that are formally a priori synthetic, logic-based propositions that are provable: that is, statements that unite independent principles to derive novel and objective conclusions that stand independent of those principles. Immanuel’s Law, therefore, unites the pertinent theological and philosophical descriptions of intentionality with the independent description of the brain’s renormalization process, in order to establish an objective understanding of human cognition.
Because Immanuel’s Law pursues falsifiable proof that reconciles theological, philosophical, and scientific understandings of our environment and the manner in which we perceive it, the scope of Immanuel’s Law is beyond this editorial: the editorial pursues only the feasibility of artificial intelligence’s matching natural intelligence’s capacity to apprehend universal conceptions, and thereby ethical sensibility. We will therefore highlight only the chief points of Immanuel’s Law’s second derivative statement, which details the brain’s renormalizing process to demonstrate how the brain achieves consciousness and the consequent framing of universal conceptions over contingent trope perceptions. The remaining first and third derivative statements pursue falsifiable proof for cosmological purposefulness and systematic theology, respectively.
To begin, let us recall that physicists employ mathematical renormalizing techniques to reconcile energy’s quantum indeterminacy with the uniform macroscopic world. To appreciate the renormalizing process, we must further understand that energy persists as quantized and discrete particle-wave frequencies that interface in interactions of no certain position or momentum. For this reason, we recognize that physicists must renormalize the uncertain interactions statistically, as measured by a constant of proportionality. In this way, they reconcile uncertain microscopic interactions with the macroscopic world.
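As a brief aside of our own, and only a gloss rather than a statement of Immanuel’s Law, the “constant of proportionality” mentioned above is standardly Planck’s constant h, which ties a quantum’s energy E to its wave frequency ν, while the statistical treatment is standardly expressed through the Born rule, which converts a wave function ψ into probabilities that sum to one:

\[ E = h\nu, \qquad P(x) = |\psi(x)|^2, \qquad \int_{-\infty}^{\infty} |\psi(x)|^2 \, dx = 1 \]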
Because energy’s indeterminate quantum effects occur on an unobservable microscopic scale (to the extent that we seemingly cannot observe any brain function renormalizing quantum behavior with macroscopic impressions), Immanuel’s Law’s second derivative statement adapts existing electromagnetic theories of consciousness. Immanuel’s Law’s second derivative statement observes the fact that the brain’s electromagnetic waves are particle-waves: on a microscopic scale, the particle-waves directly interface with the action potential ions of the brain’s neuronal layers; while, on a macroscopic scale, the particle-waves synthesize the central nervous system’s circuits.
To highlight the chief aspects of Immanuel’s Law’s second derivative statement, we only have to note how the statement accomplishes two key objectives. For the first objective, Immanuel’s Law’s second statement upholds the scientific principle of maintaining falsifiable objectivity as the statement claims to close the mind-matter explanatory gap. As we have said, cognitive neuroscientists resort to making tautological, dogmatic declarations as they simply demarcate the brain regions that they observe to be active during the modes of cognition and states of awareness. In contrast, Immanuel’s Law envisages how the brain’s renormalizing processes map directly onto philosophers’ epistemological descriptions of cognition’s knowledge structure, as well as philosophers’ descriptions of the mind’s intentional states: that is, states that enable the mind to peer beyond what it immediately perceives. For instance, Immanuel’s Law understands that an initial renormalizing process must occur to enable unified perceptions that unite varied magnitudinous percepts within spacetime; moreover, Immanuel’s Law recognizes that a subsequent renormalizing process must occur to reduce ongoing environmental impressions unto personalized representations that facilitate probing bodily reactions. And so, by qualifying the manner in which the brain’s renormalizing processes become personalized, Immanuel’s Law’s second derivative statement identifies a physical system that stands out as temporally free, as the renormalizing processes give the system the a priori capacity to translate and anticipate relationships prior to experience.
Furthermore, by envisioning the brain’s renormalizing processes as the falsifiable means to close the mind-matter explanatory gap, Immanuel’s Law’s second derivative statement achieves its second key objective: Immanuel’s Law elevates philosophy’s epistemological studies to an objective standing equal to the methodical observations of the applied sciences. Likewise, Immanuel’s Law elevates Christian theology’s understandings regarding the supervenience and freedom of the human will to an equally objective standing with cognitive neuroscience’s descriptions of human and animal cognition. By returning certain philosophical and theological principles to equal and objective standing with scientific principles, Immanuel’s Law safeguards the capacity of Western democratic and Judeo-Christian values to continue their oversight of ongoing scientific and technological innovation such as artificial intelligence.
In chapter three of the doctrinal treatise, the Landscape of Truth, introductory material precedes Immanuel’s Law’s derivative statements. The introductory material details the impasse between Christian theology, Western philosophy, and science in regard to each discipline’s approach to the mind-matter explanatory gap. The chapter begins with a brief discussion of the problematic and non-falsifiable understandings of the nature of God that the remainder of the doctrinal treatise addresses and that Immanuel’s Law secures an objective understanding of. Afterward, the introductory sections describe the impasse’s cause as science’s inability to verify the existence of the qualitative sense that defines the temporally free and subjective mind. The introductory material notes that determining the nature of the qualitative mind is of chief importance to Church doctrine and philosophy because from the mind’s qualitative sense (qualia) arises the mind’s capacity to apprehend value and ethics; moreover, the material notes that determining the nature of the mind’s qualitative sense gives one the capacity to understand the epistemological structure of intelligence and how the mind qualifies trope instances under universals, prior to experience. The introductory material describes how science unsuccessfully employs observational strategies to reduce the qualitative mind to mechanical explanations that religious theories and unsubstantiated philosophies cannot exploit. To this end, the material notes how scientists employ emergence theory, the discipline of behaviorism, and cognitive neuroscience; moreover, the introductory material records how contemporary philosophers observe that the scientists’ observational strategies lack an epistemological understanding, as well as a phenomenological explanation, of the manner in which the mind experiences uniform senses of spacetime, which enable the mind to synthesize past, present, and future experience a priori.
Finally, the introductory sections describe how we have come to the current impasse that Immanuel’s Law resolves. The sections note that the dramatic successes of scientists’ employment of the scientific method to predict phenomena have advanced physics, biology, and technology to the extent that science’s inability to resolve the mind-matter explanatory gap suggests that a resolution to the problem is unrealistic or a fiction. In this manner, the introductory sections conclude that science’s successes leave us facing the challenging ethical questions surrounding the proliferation of perhaps the greatest technological innovation, artificial intelligence, without the traditional guidance of our Western democratic values and Judeo-Christian principles.
To demonstrate how Immanuel’s Law resolves the impasse between Christian theologians, philosophers, and scientists in their struggle to resolve the mind-matter explanatory gap, an illustration precedes Immanuel’s Law. The illustration depicts the limitations of the scientists’ mechanical description of an athlete’s cognitive experience as she commands her body during a sporting event. The illustration at first describes the body’s organization as a means to maintain thermodynamic states during changing environments. Then the illustration explains most of the athlete’s physical exertions as the consequence of simple regulatory circuits and reflex arcs; however, the illustration notes that the scientists’ falsifiable mechanical description falls apart as the scientists fail to reduce the athlete’s cognitive actions to observable mechanical operations. In fact, the illustration underscores the fact that the scientists simply demarcate the brain regions that operate during certain cognitive activity.
In response to the illustration, Immanuel’s Law’s second derivative statement demonstrates how the brain and central nervous system’s renormalizing processes present the falsifiable manner in which the brain’s neuronal layers magnitudinously encode the brain’s existing electromagnetic waves as representations of the body’s external environment. The second statement notes how a perturbation occurs that other brain regions maintain, to the extent that the statement identifies the system that corresponds to theologians and philosophers’ descriptions of human cognition.
Because this editorial cannot do justice to the breadth of Immanuel’s Law, we can only say that, in identifying a system that corresponds with consciousness without resorting to tautological, dogmatic statements, Immanuel’s Law’s second derivative statement identifies the following: Immanuel’s Law identifies the unique physicality that is temporally free but contingent upon the brain and central nervous system; thereby Immanuel’s Law identifies the physicality that corresponds to the mind’s qualitative capacity and the manner in which the mind supervenes upon and anticipates ongoing impressions. Also, the system explains how the mind grasps representational tropes under the a priori propensity to apprehend universals.
Immanuel’s Law’s second derivative statement, along with Immanuel’s Law’s first and third statements, identifies falsifiable explanations for the many capacities of human cognition; however, for the purposes of this editorial, we may conclude that artificial intelligence developers’ efforts to employ deep learning algorithms and other programming techniques to mimic natural intelligence are futile. The developers cannot possibly match the brain’s capacity to renormalize quantum systems to supervene upon ongoing environmental impressions with personalized representations. AI developers’ efforts to endow machines with the qualitative sensibilities of human ethics are, sadly, laughable; therefore, we conclude that AI technology must never lack extensive human oversight, as determined by the legal review of Western democratic legislatures. Our site, thelandscapeoftruth.com, will continue to argue for the oversight of AI in future editorials and articles.
In the late 19th Century and early 20th Century, the physicists Max Planck (1858-1947) and Albert Einstein (1879-1955) discovered that energy exists in discrete quantities of proportionality, existing as both a wave and particle. Soon after, other physicists like Niels Bohr (1885-1962), Erwin Schrödinger (1887-1961), and Werner Heisenberg (1901-1976) understood that, because of energy’s quantum nature, an observer can only determine the exact locations of subatomic elements statistically. In what physicists term the Copenhagen Interpretation, Bohr, Heisenberg, and others attempted to formalize the fact that scientists can only apprehend probabilities in the measurement of physical phenomena, because any actual measurement is impoverished by other potential measurements that energy quanta statistically allow.
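For reference, and again as our own aside rather than part of the editorial’s argument, the quantitative core of these discoveries can be summarized in two standard relations: de Broglie’s wave-particle relation, which assigns a wavelength λ to a particle of momentum p, and Heisenberg’s uncertainty principle, which expresses why position and momentum can only be assigned statistically:

\[ p = \frac{h}{\lambda}, \qquad \Delta x \, \Delta p \ \geq \ \frac{\hbar}{2} \]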
The Copenhagen Interpretation faced strong opposition from renowned physicists like Albert Einstein himself, who famously remarked that he was convinced that God does not play dice. Einstein also mocked the interpretation by asking whether anyone really believed that the moon exists only when someone looks at it.
To this very day, physicists remain in disagreement over the implications of quantum mechanics, that is, the probabilistic apparatus of mathematics that accounts for energy’s subatomic uncertainty in position and momentum. We, of course, maintain that the brain renormalizes the subatomic probabilities to behold a constant moon and the world that we perceive.
“I reaffirm once again that KAIST will not conduct any research activities counter to human dignity including autonomous weapons lacking meaningful human control,” declared an embattled Sung-Chul Shin, president of KAIST, one of South Korea’s top universities. The alarmed Shin was responding to an abruptly organized boycott of the university by top artificial intelligence researchers. The researchers decried the university’s decision to partner with a South Korean munitions maker to introduce AI technology for military use. The researchers feared that the university would develop AI weaponry without “meaningful human control” and thereby open Pandora’s Box, as despots acquired such weaponry to use against their citizenry. Shin responded that the university program would only pursue the development of AI to enhance command and control systems and navigation for unmanned undersea vehicles.
The questions that the researchers have left unanswered are what constitutes actual artificial intelligence and what oversight body should regulate its proliferation internationally. In early 2017, Congressman John K. Delaney launched an Artificial Intelligence (AI) Caucus for the 115th U.S. Congress, with the goal of informing the Congress of the implications of the technology militarily, economically, and domestically. Thus far, the Caucus has focused on how to exploit the technology for economic gains, further fomenting an AI development race with nations like autocratic China, which has announced plans to become a global leader in AI development. Thus far, no leadership exists to define and resolve the threat.
The end of the 18th Century saw the rise of the two renowned social contracts, the United States Constitution, which constrained government power in the interest of the individual citizen, and the French Declaration of the Rights of Man, which constrained individual citizens’ economic and political endeavors in the attempt to maintain equality for all. The two social contracts pioneered modern government to ensure freedom for the individual, regardless of class, kindred group, religious sect, and race. Since the founding of the social contracts, the American model for modern government has achieved greater success in championing the spread and security of democracy; whereas the French model, which fosters more centralized government, has often instigated political strife, leading to two World Wars amongst the Europeans who adopted the French model.
As detailed in our overview, our site, thelandscapeoftruth.com, supports the view that America’s chiefly Protestant Christian population has encouraged the libertarian and progressive culture that made the American model of democracy successful. Though our crediting Protestantism would seem to encourage the sectarianism that the social contracts seek to overcome, our site pinpoints the elements of Protestantism that are responsible for the population’s acceptance of secular government’s rule over diverse peoples who do not necessarily share the same beliefs. We underscore that the Protestant Reformation’s emphasis on the original Apostolic doctrines of grace and election by faith, regardless of religious works, kindred relation, social standing, and male or female sex, cultivated libertarian values agreeing with modern democracy, unlike other religious and political beliefs that foment sectarianism.
Our doctrinal treatise, the Landscape of Truth, seeks to educate laypeople with a systematic understanding of the biblical Testaments, demonstrating the manner in which the Testaments overcome the sectarian worldviews that have oppressed humanity since the dawn of civilization. Not only does the treatise educate laypeople on the evolution of Western philosophy and economy up to the rise of modernity; it also features Immanuel’s Law, which educates laypeople on the birth of modern science. The Landscape’s viewpoint is that a citizenry who sees the interrelationships of disciplines, philosophies, and cultures will make better-informed decisions in advancing governments to meet emerging threats, such as the ethical challenges surrounding artificial intelligence (AI).
Reflecting upon our lessons learned from Immanuel’s Law, which our doctrinal treatise the Landscape of Truth features, we understand that artificial intelligence is not even remotely intelligence. AI is merely a sophisticated tool; and, like all tools, AI is only as harmful as the intent of those who wield it. And so, the threat of AI derives from the lawmakers who foster the belief that all cultures are equal, be they facilitators of democracy and human freedom or not. Until Western lawmakers restrict the proliferation of dangerous technologies to non-democratic governments, with which we trade freely in the global economy, the threat of AI technology will increase. We here at thelandscapeoftruth.com will be there to observe.
Another Political Revolution Cannot Save Western Democracy’s Individual Liberties: Only A Revolution In Our Approach to Science Can
Published January 2, 2017 – thelandscapeoftruth.com
The nature of the West’s modern state is progressive and libertarian in the sense that the modern state seeks the equality and liberty of all individuals, regardless of an individual’s male or female sex, race, kindred group, and social standing. The commercializing and diversifying of Europe’s economies through the Europeans’ colonizing of the Americas made the modern state possible: Western European nations’ first venture into global trade, by establishing the American colonies and trading corporations, created vast sources of wealth and new political powers that did not depend upon agricultural land holdings. The new powers made the individual freedoms of the modern state possible because the new powers reorganized lands and capital in such a way that ownership became obtainable through individual wage employment. Previously, group-based clanships, family entitlements, religious institutes, and fraternal orders controlled lands and capital. Thus, the commercialized modern state began to employ its social contract with individual citizens in the continual effort to replace the citizen’s natural allegiances to the group-based belief systems of the communal orders that formerly owned the lands, capital, and other resources throughout history.
To address in particular the fact that political, economic, religious, and other fraternal orders continue to influence the commercial market’s resources, two competing visions of modern government arose to ensure the individual citizen’s liberty and economic viability. The ideals of the American Revolution and the resulting U.S. Constitution exemplify the first vision of modern government. The American founders drew inspiration from the preceding ideals of the Protestant Reformation. Then the founders employed a non-sectarian religious ethos to reconcile, first, an individual citizen’s natural need for kindred and group-based affiliation with, second, the necessary protection of what the founders considered to be an individual’s divinely endowed gifts of liberty, reason, and industry. The early American colonists were predominantly Protestant groups, all agreeing with the Protestant Reformation ideals, which held that God’s salvation is by grace alone and not by religious works, male or female sex, kindred group, and social standing. The colonists also cherished the ideal of universal Christian brotherhood and the New Testament edicts that commanded that believers should love all as themselves, even though not all will become Christians. Essentially, the founders gleaned from these liberal Protestant values that all could attain godly virtue and reason peacefully, while respecting an impartial secular government that protects individual liberty. And so, benefiting from the manner in which the populace’s diverse adherence to Judeo-Christian principles enabled the populace to self-govern and engage in the execution of government, the American founders established the U.S. Federal Government as subservient to the divine endowments of an individual citizen’s liberty. Excepting the American nation’s enslavement of African Americans, as well as the persecution of native peoples, the American republic realized its goal of liberating individual citizens.
The ideals of the French Revolution and the succeeding socialist revolutions, spanning the 19th century and early 20th century, exemplify the second vision of modern government. The revolutionary governments of Europe did not achieve the same social class cohesion that Protestant groups achieved in the United States of America. Revolutionary France, like succeeding European revolutionary governments, retained a negative impression of Europe’s centuries-old religious, economic, and social class wars. Like the founders of the American republic, the founders of the European republics recognized humankind’s capacity to achieve enlightenment through reason; however, unlike the optimistic view of the Americans, who subjugated government under the virtues that humankind naturally possesses, the founders of Europe’s new republics considered the kindred, religious, and factional strife of human history to issue from humankind’s brutish nature. With this view in mind, the founders of the European republics sought to protect individual liberties with social contracts that are suspicious of faction and concentrations of wealth. The European republics, therefore, subjugated all group-based orders under the sovereignty of centralized governments that aspire to consider all policies through the objective lenses of the physical and social sciences. And so, rather than employing a non-sectarian religious ethos to reconcile an individual’s want for liberty with an individual’s want for group-based affiliation, the European founders conceived of the suppression of all factions in the pursuit of an egalitarian society that often redistributes wealth through taxation and regulation.
The European socialists’ more scientific view of modern representational government may seem superior to the United States of America’s view; however, the European socialist approach falls short of modern government’s goal of guaranteeing individual liberty and economic viability to the extent that the United States’ model achieves. First, the socialist model has admirable aspirations in its attempt to protect individual citizens from being economically exploited by powerful political, business, fraternal, and religious groups. Nonetheless, the socialists’ egalitarian efforts to repress all powers also stifle the general public’s moral maturity. In other words, the socialists’ egalitarian approach fails to reward the personal merit, thrift, diligence, productivity, and patience that often justly result in the accrual of wealth and influence. At the same time, the socialist egalitarian approach often rewards corruption, slothfulness, and the excesses that often contribute to failed businesses and lifestyle choices. In the end, socialist programs often yield the effect of impressing upon the general public a sense of moral relativism and indifference: attitudes that paralyze the public from participating fully in democratic government. What is more, as the socialists stifle concentrations of wealth and power, the socialists’ egalitarian approach correspondingly fails to command or guarantee the ongoing innovations and new monies needed to secure livable wages for individual citizens. Eventually, heavily socialist regimes become unresponsive to an individual citizen’s daily needs, forcing the citizens to fall back on supporting the combative factions that the modern government arose to protect individual citizens from.
In their efforts to secure individual liberty and economic viability, both the American and European approaches to modern representational government have overcome great challenges, from modern government’s late 18th century inception to our current era of a globalized commercial market. Despite their past successes, both approaches to modern government currently falter: European colonialism and American venture capitalism have equipped non-Western nations with the industrial capacities that Western individual liberties and entrepreneurialism developed; however, the non-Western cultures still retain the group-based social prejudices, religious discrimination, and exploitive economic practices that persecute individuals because of their race, male or female sex, faith, and social standing. And so, both the American and European approaches to modern government falter because both fail to safeguard Western citizens’ liberty and economic vitality; the approaches fail to address three challenges that globalism presents:
First, Western governments fail to address the comparative economic advantage that non-Western cultures have as those cultures benefit from the slave-labor status of peoples whom they oppress: the non-Western cultures still oppress people because of their male or female sex, race, kindred group, social status, and faith. Second, Western governments fail to assimilate immigrants who still hold the cultural prejudices of their non-Western homelands. Third, the Protestant Church and the scientific community (the two institutions that have inspired the two respective philosophical approaches to modern government) have not established an objective understanding that distinguishes personal liberty and identity from environmental influences and behavioral traits. As a result, the Church and the scientific community have failed to equip the post-Cold War millennial generation with a decisive knowledge base that defends Western libertarian traditions against the non-libertarian cultures of the non-Western world.
Because of their upholding the doctrine of election by grace through faith (regardless of male or female sex, race, kindred group, social status, and religious works), Protestant Christians can lay rightful claim to fostering a liberal American environment where people of faith accept secular government and the individual freedom of conscience to choose one’s belief. Still, though Protestant Church doctrines may be more compatible with modern secular government, the Church has not established the preeminence of its doctrines over non-Western belief systems: like other belief systems, the Church has only employed tautological arguments (that is, arguments that only reemphasize positions without objective proof) to establish the existence of God and the individual souls whom He saves.
To make matters worse, modern science, featuring quantum physics and cognitive neuroscience, seems to disprove the possibility of either establishing a singular truth and existence of God or establishing the existence of a unified human soul. Quantum physics seems to disprove the possibility of establishing a singular truth that governs the universe, because quantum physics observes that energy manifests by distinct quantities of energy decaying from one magnitude to another: a process that defies the certainty of time and origin and a process that scientists must mathematically rationalize to define the seemingly normalized world that we observe. Furthermore, cognitive neuroscientists observe that what appear to us as our unified conscious minds are actually the mechanical collaboration of a multiplicity of brain regions that complexes of electro-neuron cells define.
What we may conclude is that modern science lacks an answer to what philosophers call the epistemological questions: questions that first ask how our perceptions interface with the world in a manner that justifies our understanding. The epistemological questions then ask what capacity our human faculties give us to affirm our knowledge. While scientists assume that their scientific method of proving hypotheses through critical observation settles the epistemological questions, the unanswered questions remain the heart of our theological, philosophical, scientific, and political disputes: the Church schisms that resulted from questions concerning Lord Jesus’ deity are a theological derivation of the epistemological questions, compelling theologians to consider our human capacity for virtue in relation to the necessity for God to confer Lord Jesus’ virtue upon us. Likewise, philosophers’ time-honored arguments concerning the human capacity to obtain reason and virtue, in relation to the supposed preexistence of a rational structure within nature, are derivations of the epistemological questions. So too is contemporary scientists’ expectation that we possess the reasoning capacity to apprehend natural laws with which to assess the world that we observe. Finally, viewing the political consequences of the unanswered epistemological questions, conservative and religious groups look on in horror as modern science’s unanswered questions compel legislatures and other jurists to reconsider what innate capacity defines an individual and his or her right to private property or need for government handouts. Lawmakers likewise reconsider an individual’s environmental influences as they weigh his or her culpability in committing crimes. To be sure, because of the Church’s weakness, the unknowns that contemporary science leaves society with shift modern government from the American libertarian approach (which relies upon individual endowments from a creator) unto the European approach, which relies upon a centralized government that attempts, without success, to suppress faction and concentrations of wealth.
The Church and our scientific institutions fail to establish objective evidence that justifies the Western constitutional tradition of protecting the individuality of a person and his or her liberty over environmental, kindred, religious, political, and other group-based influences. Nevertheless, the future of Western liberties does not have to be uncertain. The respective domains of theologians, philosophers, scientists, and jurists often prevent them from fully recognizing the crucial contributions that each domain offers to fulfill human knowledge. From a landscape perspective, thelandscapeoftruth.com recognizes the crucial contributions of not only science’s scientific method, but also theology’s metaphysical suppositions, philosophy’s epistemological inquiry, and political science’s legal examinations.
For example, from a landscape perspective, we observe that physicists do not even remotely recognize the profound implications of the mathematical procedures that the physicists employ to reconcile the uncertain states of the subatomic world with the normalized world that we observe: we observe that an understanding of the implications can directly give cognitive neuroscientists the means to distinguish the unified mind from the neurological complexes that contribute to it. We see that an understanding of the implications offers an objective means for Christian theologians to prove their metaphysical determinations; moreover, the implications readily offer the means to answer philosophers’ epistemological questions. Also, we observe that the implications offer political scientists the means to establish an individual’s legal standing. Finally, and of most import for our current discussion, we observe that an understanding of the implications (for Christian theology, Western philosophy, science, and Western liberties) offers the millennial generation the objective means to defend Western culture without resorting to bigotry against non-Western immigrants, who are in fact Western peoples’ human brethren. To this end, we frame an understanding of the implications of the physicists’ mathematical procedure in a theological, philosophical, and scientific judgment that bears the name Immanuel’s Law. Let’s take a look.
This editorial cannot entirely do justice to the scope of Immanuel’s Law: the editorial can only highlight Immanuel’s Law’s significance. So named from the Hebrew word Immanuel, meaning “God with us”, Immanuel’s Law is equally a theological, philosophical, and scientific judgment, because Immanuel’s Law directly secures the objective means to address the most pressing questions that concern each discipline: first, unlike previous creationist and intelligent design attempts, Immanuel’s Law does not resort to making tautological statements, which only reassert dogmatic claims that do not derive or make scientific predictions. Instead, Immanuel’s Law is a theological judgment that presents objective evidence for not only the existence of God, but also the existence of His three persons, in accordance with the ancient Creeds of Christendom. Second, rather than solely relying upon the semantics of logical forms to frame ideas and verify understanding, Immanuel’s Law is a philosophical judgment that obtains the objective means to resolve the epistemological questions concerning the structure and breadth of our understanding: Immanuel’s Law distinguishes our knowledge structure and how we interface with the most basic physicality of nature, in a manner that shows how we experience emergent phenomena in a way that justifies our beliefs. Third, rather than taking for granted that we can ascertain natural laws under which we can explain how physical phenomena emerge and operate, Immanuel’s Law obtains the objective means to describe the absolute state preceding the so-called Big Bang event, as well as describe why the emergent universe adheres to the contingencies and constants of the natural laws. As a proof, Immanuel’s Law obtains the objective means to describe how the emergent universe enables the structure of our perceptions. Also, Immanuel’s Law obtains the objective means to distinguish the unified conscious mind from the neurological systems that support it.
Immanuel’s Law frames the objective evidence that resolves theology, philosophy, and science’s epistemological problem in three of what logicians term a priori synthetic statements: that is, derivative statements that functionally unite independent principles to derive unique, verifiable outcomes that are not directly related to the independent principles. Expounding upon the Nobel Laureate Max Planck’s discovery that energy propagates in discrete quantities, Immanuel’s Law’s first derivative (a priori synthetic) statement resolves epistemology’s quest to establish the manner in which our emerging environments enable our conscious perceptions.
Immanuel’s Law’s initial derivative statement observes how only finite quantities of energy propagate after the Big Bang event, whereas a singularity of infinite energy preceded the event. And so, Immanuel’s Law’s initial derivative statement observes how, first, the quanta of energy and, second, the emergent world’s unique solid states stand as operatives of a functional expression from the preceding singularity of infinite energy. The initial statement of course notes how Christian theology recognizes the preceding infinite energy as the omnipotent attribute of the deity. While Immanuel’s Law employs its second derivative (a priori synthetic) statement as an ultimate proof for the first statement, Immanuel’s Law details supporting derivative statements. The supporting derivative statements show how the first statement’s functional expression demonstrates that causality is not linear, a reality that quantum physics itself exhibits. The supporting statements, in fact, demonstrate how physicists readily embrace the quantum reality of microscopic behavior, even as the physicists overlook the critical fact that they have the psychological disposition to frame their theories from an old worldview: a pre-quantum physics, normalized, linear perspective.
As an ultimate proof for Immanuel’s Law’s first derivative (a priori synthetic) statement, Immanuel’s Law’s second derivative statement resolves epistemology’s quest to establish how the brain and central nervous system apprehend the basic foundation of the conscious mind, which ongoing perceptions of its environment support. The second derivative statement takes direct aim at cognitive neuroscientists’ inability to understand how the electrochemical exchanges of the brain and central nervous system amount to the first-person sense of consciousness. The second statement recognizes that cognitive neuroscientists have successfully used imaging techniques and brain pathology studies to distinguish the brain regions that are responsible for sensory input and motor output, as well as for modes of cognition; however, the second statement recognizes that cognitive neuroscientists routinely dismiss the actual experience of the independent first-person sense as an epiphenomenon, that is, a mere fictional appearance or illusion of the self. In other words, the second statement recognizes that the cognitive neuroscientists’ bafflement leads them to hold that our conscious experience is merely the sophisticated outcome of environmental impressions: a belief that has devastating consequences culturally and constitutionally, as we have described in the introduction to this editorial.
Immanuel’s Law’s second derivative statement recognizes that many theorists continue to present brilliant theories that seek to discover the neuronal correlate of our first-person experience: the theories consider whether the electromagnetic fields in the brain perform the necessary unifying operation that corresponds to the first-person conscious sense; moreover, the theories explore the notion that the first-person correlate has a quantum-mechanical explanation, such as quantum entanglement. The second derivative statement demonstrates that because the theorists frame their electromagnetic and quantum-based theories of consciousness in an old worldview, that is, a pre-quantum-physics worldview, the theories fail to envision the scope of what a physical correlate of consciousness entails.
To resolve the problem of discovering the physicality that corresponds with our first-person conscious sense, Immanuel’s Law’s second derivative statement understands the following: in order to capture, physically, the way the conscious mind is both temporally free and contingently identity-dependent upon its physical environment, the brain and central nervous system that incite the mind must feature a normalizing operation that reconciles quantum behavior with the normalized world that the mind captures and represents. Inspired by computer engineers who regularly utilize silicon crystals to perform quantum-mechanical operations of conducting or inhibiting electric currents, the second statement discards cognitive neuroscientists’ systems approach to defining brain operations.
In other words, the statement discards the matter-of-fact diagrammatic labeling of which brain region is responsible for each cognitive and bodily operation. Instead, the second statement captures the actual first-person sense by envisioning how the brain and the central nervous system normalize sensory input into uniform representations that are real-time, virtual, magnitudinous representations of the environment that the body senses. In short, Immanuel’s Law’s second derivative statement demonstrates how the brain encodes electromagnetic waves in a manner that virtually represents the emergent world and renders consciousness as a magnitudinous sense that is contingently juxtaposed to the world that it intentionally queries.
By defining the brain and the central nervous system’s operations in terms of the way the brain encodes light to apprehend a normalized world, Immanuel’s Law apprehends supporting derivative (a priori synthetic) statements that dramatically further our theological, philosophical, and scientific understanding. For example, the second statement describes how raw sensory input routes through the brainstem to be diffused by adjacent brain regions until the circuit reaches the outer cortex. The second statement then describes the interplay between other brain regions that ultimately encode present electromagnetic rays to render the system that captures the first-person sense as a dynamic reference frame, as opposed to classical and Einsteinian physics’ supposition that we have relative and indefinite points of reference during our observations of the world. The second statement rather demonstrates how our cognitions actively work upon the world, continuously reconstructing and reimagining the world above and beyond what we immediately sense.
Even as Immanuel’s Law’s first statement describes the actual extension of the world, the second statement’s descriptions of consciousness’ normalized virtual representations vindicate the classical philosophers who envisioned how our knowledge structure stands upon the mind’s only producing representations of the world, without the mind’s immediately grasping the world. The second statement’s supporting derivative statements also describe the actual operation that grasps humanity’s language capacity: the supporting statements accomplish this by describing the operative demarcation between animal cognition and human cognition, which entails a further normalizing capacity that refines the whole dynamic circuit.
Finally, to resolve the problem of securing objective evidence for the Creeds of Christendom, Immanuel’s Law’s third derivative (a priori synthetic) statement distinguishes the physical correlations for God’s distinct persons, as well as for the animal spirit and the human soul. Foremost, the third derivative statement establishes that the dynamic reference frame, which the brain apprehends by encoding light, is spiritedness because of its temporal standing: that is, the dynamic reference frame does not depend upon any single circuit or sense impression. Instead, the dynamic reference frame depends upon its normalizing capacity to model its perspective relative to the emergent solid states that Immanuel’s Law’s first derivative statement defines as distinct operatives: operatives that functionally derive from the singularity preceding the propagation of energy quanta, which the so-called Big Bang initiated.
Following its description of the way the dynamic reference frame is spiritedness, Immanuel’s Law’s third statement describes the physical correlate of the human soul like so: the statement initially distinguishes the brain’s maintenance operations that occur after the brain encodes the electromagnetic waves that incite consciousness’ dynamic reference frame. Then the statement observes that these operations, occurring in brain regions like the neocortex and the memory centers, are not the direct product of sensory input circuits. Rather, the statement notes that the operations only serve to maintain the dynamic reference frame. And so, the statement demarcates the physical correlate of the human soul as the dynamic circuit’s capacity not only to generate ongoing virtual representations of the normalized world, but also to generate representations that appreciate the way the normalized world appears in a systematized manner, which incites our representations’ analogous nature. The statement observes that the new maintenance representations are a dynamic physicality that is temporally free with respect to immediate environmental impressions and the normalized world itself. The new representations only remain as an adjunct to the dynamic reference frame.
An important understanding that Immanuel’s Law’s second and third statements capture is the schematized way that the normalized world appears and empowers the human mind to generate analogous representations that enable the mind to objectify and envision the perspectives of others. Lacking an understanding of the genesis of the mind’s objective capacity, cognitive neuroscientists simply term this ability the mind’s intersubjective capacity. This ability gives humanity the means to frame experiences aesthetically, spiritually, and morally.
Immanuel’s Law’s second and third statements’ description of how the normalized world incites the brain’s capacity to generate normalized representations proves the first statement’s assertion that the appearing world is a functional expression. Immanuel’s Law recognizes that the appearing world features subatomic particle decay, planetary evolution, as well as geological and biological adaptive capacities. At the same time, Immanuel’s Law recognizes that contemporary scientists lack epistemological understanding; therefore, the scientists psychologically defer to an old worldview when they frame the appearing world’s evolving systems. The scientists fail to recognize that they observe the effect of a normalized world that appears to be a self-sufficient system, even though the appearing world is an asymmetric functional expression. As evidence, Immanuel’s Law’s second and third statements highlight the fact that the simplest animal dynamic reference frame is a system that rationalizes subatomic behavior, behavior that does not entail the certain position and momentum of the appearing normalized world. Immanuel’s Law, therefore, demonstrates the impossibility of an evolved biological appendage acquiring the capacity to rationalize subatomic behavior that cannot approximate the evolved certitude of any normalized, macroscopic appendage. Like so, Immanuel’s Law concludes that animal and human consciousness is not an evolutionary outcome. Further still, Immanuel’s Law demonstrates that the dynamic normalizing system that affords human consciousness the capacity to rationalize all physical systems transcends the scope of the theory of evolution: a theory that its author framed in an old worldview, prior to the rise of quantum physics, of which scientists continue to lack an epistemological understanding.
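For background, the indeterminacy referenced above is standard quantum physics rather than a claim unique to Immanuel’s Law: the Heisenberg uncertainty relation,

Δx · Δp ≥ ħ/2,

states that the position (x) and momentum (p) of a subatomic particle cannot both be fixed with arbitrary precision, where ħ is the reduced Planck constant. This is the sense in which microscopic behavior does not entail the certain position and momentum that characterize the macroscopic, normalized world.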
An astute reader of Immanuel’s Law may conclude that the first and second derivative (a priori synthetic) statements answer the epistemological questions that, first, seek the manner in which our environment facilitates our knowledge structure and, second, seek the neurological basis of the same. Yet, Immanuel’s Law’s third derivative statement settles the theological aspect of the epistemological questions, which ultimately seeks the fulfillment of our personal standing in our social and physical environments. To this end, the third statement employs the objective evidence that the first and second statements establish to prove the validity of orthodox Christianity, as affirmed by the Christian Creeds.
Immanuel’s Law’s theological descriptions stand well beyond the scope of this editorial; however, in regard to fulfilling epistemological knowledge, we may loosely note how the third statement demonstrates that the functional expression of the world stands synonymous with God’s expressed Word, whereas God the Son fulfills humanity’s first-person perspective and God the Holy Spirit fulfills humanity’s intersubjective perspective. The third statement ultimately demonstrates that an orthodox understanding of Christianity is an understanding that the salvation of the soul and its perspectives is the beatific work of God alone, even as Creation is a beatific expression that human works cannot attain: that is, regardless of one’s race, kindred group, male or female sex, religious works, or social standing.
In the end, Immanuel’s Law demonstrates that human beings and their capacity to desire the ultimate knowledge of justice and the right are not an accident of nature. To the contrary, Immanuel’s Law demonstrates that the capacity to desire liberty is a gift to the individual. Immanuel’s Law, therefore, does not merely establish the legal theories of the framers of Western liberties, who framed the liberties with no objective proof. With its objective clarity, Immanuel’s Law takes Western theology, philosophy, and science to another level.
Widespread economic opportunity continues to be a vital building block for the support of modern representational government. The opportunity for individual advancement based upon personal merit is essential to the welfare of the citizenry. For most of human history, whole group-based associations of people endured economic exploitation because of the lack of economic opportunity. Since the inception of modern representational government, economic downturns have tested the moral aptitude of both the American libertarian approach to government and the European egalitarian approach: economic downturns test the respective governments’ capacity to champion the welfare of the individual, because the downturns compel people to seek the security of group-based affiliations, regardless of the harm done to disenfranchised individuals.
For example, in the early 19th Century United States of America, the highly profitable cotton industry compelled southern states to employ the slave labor of African Americans. Likewise, in the latter half of 19th Century America, burgeoning industrial factories exploited the poor immigrant populations of the Irish, Italians, and Eastern Europeans. The founders of the United States of America drew libertarian inspiration from the nation’s primarily Reformed Protestant population, which had united American culture with a moral sense of brotherhood and charity, despite one’s individual belief or social standing; therefore, the Church stood well positioned to champion the abolition movement to free the slaves and eventually secure civil rights a century later. Furthermore, the Church stood well positioned to form charity agencies to better the lives of immigrants in the northern factory cities.
Unfortunately, European governments did not have the Protestant cultural ethos to unite the European peoples during economic downturns; therefore, factional strife took a more lethal form, as the Europeans’ more centralized, non-libertarian governments could all too easily turn industrial factories into military workhorses. Europeans multiplied colonies in the non-industrialized and non-Western world in order to exploit the respective nations’ resources and socially oppressed peoples. Soon the conflicting interests of European nations, which also experienced factional conflict within, resulted in two industrialized World Wars.
Fortunately, the industrial, military, and libertarian might of the United States of America intervened, saving Europe from the autocratic socialist and communist regimes that sought to control industry and rob European peoples of their constitutional rights. Not having an autocratic cultural tradition, the United States pioneered the establishment of the United Nations, with its Security Council; lent large sums of capital to financially broken European democracies; and finally provided a military umbrella to protect Western European nations as they integrated a common European market in the interests of peace.
Because of the weakening influence of American Churches, the West faces a challenge, as we noted above. The challenge that Western nations face today is that the safeguard of the American approach to government is transforming into the European socialist model: a model that is egalitarian to the extent that it does not reconcile the economically and culturally disenfranchised. Presently, Europe’s former colonies grow in industrial and military strength without due protection for individual liberty. The lethal power of nuclear weapons is within the reach of those who seek to oppress religious minorities, minority kindred groups, and lifestyles that are contrary to the oppressors’ non-Protestant, works-based religions.
A litany of social consequences results if the general public accepts the orthodox judgment of Immanuel’s Law: that is, Immanuel’s Law’s conclusion that consciousness is the dynamic perspective that the brain achieves after it normalizes quantum behavior by encoding the electromagnetic waves present in the brain. The least of these social consequences is the complete reconsideration of the questions of artificial intelligence and abortion:
Many ethical considerations are yet to be determined in regard to the supposed independent decision making of intelligent devices. Immanuel’s Law’s determination of what the physical makeup of intelligence entails establishes that what some consider to be intelligent devices are nothing more than programmed appendages that cannot make sentient decisions.
In regard to the question of when a fetus has the status of an unborn child, Immanuel’s Law’s determination is straightforward. Immanuel’s Law determines that the fetus becomes sentient the moment the fetus achieves an integrated neuronal circuit that can encode present electromagnetic brain waves. This capacity occurs relatively early in the pregnancy. We will address the issue of abortion in future editorials.
The orthodox judgment, Immanuel’s Law, is an integral part of the doctrinal treatise entitled the Landscape of Truth: An Orthodox Understanding of the Biblical Testaments for the True Worshippers. As stated in its introduction, the doctrinal treatise has two primary goals: to place a systematic understanding of the biblical Testaments into the hands of laypeople and to secure objective proof for the existence of God as defined in the New Testament scriptures and assented to in the Christian Creeds. Immanuel’s Law, therefore, fulfills the second goal of the work by securing objective evidence for the existence of God.
Immanuel’s Law takes up the better part of chapter three of the Landscape of Truth. The background and purpose for Immanuel’s Law begin at the section entitled “the Questions of Metaphysics and the Mind-Matter Explanatory Gap.” The background material proceeds by educating readers on the philosophical and scientific theories that factor into the question of God, human consciousness, and the human spirit: the topics cover cognitive neuroscience and scientific history, as well as other philosophical considerations. Though the reading may be challenging and may require an advanced reading level, the background material is necessary; moreover, the payoff is unparalleled in importance, as described above. The reading level required is not far more advanced than that required to appreciate this editorial.
On the whole, Immanuel’s Law resolves theological, philosophical, scientific, and political questions that thinkers have pondered and fought over for millennia. Suffice it to say that a lot of the pain experienced from cultural, religious, and political quarrels could have been avoided had we possessed this understanding earlier. To avoid the pending confrontations that will almost certainly grow as the West integrates non-Western cultures, we can at least consider the significance of Immanuel’s Law now.
In American Democracy, Presidents Still Cannot Escape the Reliance upon Invocations of God during Perilous Times, Rather than the Reliance upon Atheism and Science
Published July 13, 2015 – thelandscapeoftruth.com
Like the two sides of the nickel and the penny coins that respectively bear their facial profiles, Presidents Thomas Jefferson and Abraham Lincoln seemed to hold both religious and irreligious convictions. Both Presidents possessed the wisdom, foresight, and strength needed for them to shepherd a young American nation through perilous times. Since Presidents Jefferson and Lincoln’s times, both Christians and atheists have sought to associate their political convictions with the legacies of Jefferson and Lincoln; however, both legacies deny either Christians or atheists the sole claim. Presidents Jefferson and Lincoln left posterity without a grasp of the Presidents’ true convictions because Jefferson and Lincoln were both intellectuals who appreciated the critical observations of natural philosophy (the precursor to modern science), and both Jefferson and Lincoln harbored a skepticism that questioned whether the organized Church could encompass a true knowledge of creation. Even so, when faced with sublime national perils, both Jefferson and Lincoln forsook their sole reliance upon critical observation as the governors of their decisions. Indeed, both Presidents invoked a truth and justice that transcended their dire circumstance: Jefferson and Lincoln invoked God, liberally employing biblical rhetoric as the only capable articles that still give voice to their convictions.
Facing an uncertain future and the real potential of execution for his helping to lead the rebellion from Great Britain, Thomas Jefferson invoked the providence of God as the justification for the rebellion: in the American Declaration of Independence, Jefferson wrote, “We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty, and the pursuit of Happiness.”
At last coming face to face with the immense carnage that resulted from his decision to wage total civil war to save the union that the Framers of the Constitution initiated, Abraham Lincoln delivered the Gettysburg Address to honor the fallen soldiers: Lincoln said, “Four score and seven years ago our fathers brought forth on this continent a new nation, conceived in liberty, and dedicated to the proposition that all men are created equal . . . . we here highly resolve that these dead shall not have died in vain—that this nation, under God, shall have a new birth of freedom—and that government of the people, by the people, for the people, shall not perish from the earth.”
Not being the least bit outmatched by his distinguished predecessors, President Franklin Delano Roosevelt readily sought refuge in the invocation of God, as President Roosevelt addressed the nation to inform them that he had just sent their sons on a perilous mission to save the free world by attacking the beaches of Normandy, in an effort to retake the European continent from the tyranny of Nazi Germany. After a brief announcement of the uncertain mission, President Roosevelt asked the nation to join him in corporate prayer. Roosevelt began by saying, “Almighty God: Our sons, pride of our Nation, this day have set upon a mighty endeavor, a struggle to preserve our Republic, our religion, and our civilization, and to set free a suffering humanity . . .”
Recent demographic studies suggest that the younger citizens of the United States of America are increasingly turning away from organized religion, namely Christianity, America’s historically dominant faith. The question that many have pondered is whether successive generations of Americans will accept an openly professing atheist as President. Can atheists cultivate a cultural synthesis that preserves the liberties that modern states retain? In other words, can atheists’ utilitarian conceptions of ethics and morality inspire all segments of society to overlook personal ambition, kindred affinity, and personal welfare to secure the constitutional integrity of the state?
Obviously, many would answer yes to the question of whether atheists have the capacity to inspire citizens to uphold modern democracy and the modern state’s constitutional rights; however, when one considers the full implications of modern science’s conception of the advent of humanity in the universe, one might draw a different conclusion, especially if one pondered what the United States would look like if Christianity became as extinct as the worship of the Roman gods.
Modern science essentially views the advent of the reasoning animal, humanity, as the fortuitous outcome of natural selection that results from the fitness of a species to survive for successive generations. By following modern science’s conception of humanity to its logical conclusion, we understand that human reason’s conceptions of justice, equality, love, and empathy (the very conceptions that make civilization possible) are insubstantial, possessing only utilitarian usefulness.
In contrast, Christianity holds that the righteous Lord Jesus, the savior of the soul’s human reasoning, stands resurrected eternally to render justice, equality, empathy, and love as substantial, eternal states that exist beyond humanity’s temporal circumstances. Apostolic Christianity’s key theological understanding, which the Protestant Reformers grasped to nurture America’s democratic spirit, is the doctrine of God’s indiscriminate election of the saints, despite religious works, kindred relation, social standing, and male or female sex. For instance, the effect of the doctrine of election is such that true Christians understand and accept that even their dearest ones may or may not become Christians; that religious works do not esteem Christians over others; that temporal adherence to a secular state that secures religious liberty is necessary, until the Christ alone unites the Church; and that Christians gain personal affinity with an international Church body that transcends kindred affinity and personal ambition, rendering all equal under the Law of God.
A purely irreligious modern state, with no Christian cultural influence, cannot guarantee the constitutional liberties as envisioned by the Framers of the American Constitution, for three significant, interrelated reasons: first, an atheistic state that legislates purely by scientific injunction produces an educated elite who prove to be insensible to the peculiar concerns of the populace: an increasingly centralized and autocratic government results. Second, an atheistic state supports nihilistic moral principles that benefit the state above the people. Third, an atheistic state that legislates purely by scientific injunction questions the actual existence of personhood, to the point that the liberties under God, which the constitutional Framers conceived, stand in question.
Immanuel Kant (1724-1804), the famed German philosopher who seemingly demonstrated the impossibility of proving the existence of God, articulated a secular approach to ethics. Kant understood morality to stand upon the ability to universalize one’s decisions, so that people adhere to universal principles out of a sense of duty rather than out of personal feelings or emotions. Complementing Kant’s universalist conceptions of morality were thinkers like Jeremy Bentham, who sought to frame a moral code under the axiom that “it is the greatest happiness of the greatest number that is the measure of right and wrong.”
We see many examples of how the centralized state bureaucracy that pitilessly operates under secular notions of morality disenfranchises the sensibilities of many people. For instance, as part of an effort to shrink the size of the federal government, President Ronald Reagan discarded his predecessor’s efforts to continue funding federal mental health centers. As a result, the population of mentally ill homeless people rose.
In the 2010 Affordable Care Act, championed by President Barack Obama, we see a more recent example of a President making policy decisions based upon utilitarian principles that benefit the majority at the expense of a minority. The Affordable Care Act created an advisory board of 15 unelected bureaucrats who operate to secure savings within Medicare: in alarm, critics warn that the panel could effectively ration health care to seniors.
Beyond the comparatively minor ill effects of the utilitarian policy decisions that political leaders make today, we have seen the dire effects of modern autocratic governments that held atheistic views of personhood. We have witnessed how the atheistic Nazi party, which ruled Germany in the mid-20th Century, supported the evolutionary notion of eugenics, which compelled it to “purify” the German population by exterminating the Jewish population. We have also seen how the early atheistic government of the Soviet Union removed and starved entire ethnic groups, such as the Crimean Tatars, to secure the Soviet State from political discord.
Though we credit the cultural contributions of Protestant Christianity with providing the fertile ground upon which the Framers planted American representational government, the ancient Roman Republic and Empire constitute the legal forms that the Framers sought to emulate. Unfortunately, the Framers failed to respond effectively to the cause of Rome’s fall. As Rome incorporated the diverse peoples of the Mediterranean world, a multicultural people forsook Rome’s original religion and austere code of conduct. The cult of the Emperor arose, and a centralized, disaffected government arose with him. Stoicism, the ancient world’s utilitarian moral expression, also arose. Soon enough, the Empire fell as weakening social and cultural solidarity failed to buttress the Empire against the invading hordes of Germanic tribes.
The underlying reason for Christianity’s lost grasp upon modern America is the Church’s failed attempts to prove scientifically the existence of God and the substantiality of the biblical Testaments: thelandscapeoftruth.com will address this issue in subsequent articles, but for our current purposes, we must underscore that to protect the Church and the modern liberty of personhood that the Church’s doctrine of election inspires, the Church must objectively prove the actual existence of personhood under God. The vitality of the nation’s conservative and libertarian movements today stands upon a scientific establishment of personhood. Increasingly, federal courts and other institutions diminish personhood as being a conflation of behavioral responses that have no substantive standing.
The Landscape of Truth’s chief aim is to establish a systematic theology for laypeople by securing a scientific understanding of personhood and the three persons of God. The Landscape’s chapter three, entitled Immanuel’s Law, helps the reader to understand the basic conventions of natural philosophy, classical physics, and modern physics. The chapter demonstrates the shortfall of contemporary theologians’ failed attempts to prove the existence of God. Then the chapter and succeeding chapters demonstrably prove not only the being of God, but also the existence of our personhood and the manner in which we grow in our relationship with God throughout the advancement of civilization.
Today’s conservative movements and institutions are fighting an uphill battle that they will ultimately lose unless they scientifically establish the existence of personhood, upon which our constitutional rights stand, as conceived by our 18th-Century-minded Framers of the Constitution. Thelandscapeoftruth.com will address this issue more in future articles.
Americans Have Never Ranked High in Science Education; However, American Culture Has Advanced Technology like No Other Nation Before
Published August 24, 2015 – thelandscapeoftruth.com
It is the best job that he has ever had! Just ask him. He’s not shy: he will gladly tell you that he is glad to be employed. Due to the widespread effects of the 2013 United States Federal Government shutdown and NASA budget cuts, he, an aerospace mechanical designer, got laid off along with a lot of his coworkers. Facing the layoff, he cheered up his friends by saying that the good Lord would provide jobs for all. Soon, one by one, they called and congratulated one another as they all found new jobs. His turn came as he landed a job at the Johns Hopkins University Applied Physics Laboratory.
A thrilling situation he found himself in: one minute he is working on a solar probe; the next minute he is working on an instrument that takes samples from a comet; another minute he finds himself sketching concepts for future lunar landers. What more can a mechanical designer ask for?
The mechanical designer did not fully appreciate his new job, however, until he recognized the vital contributions that technicians and craftsmen make to render the scientific projects a reality. One day, sitting at his desk, he heard an electrician yell down to him from a ladder, congratulating the space department for the New Horizons spacecraft’s successful encounter with Pluto. The designer said that he had absolutely nothing to do with it because he was not employed at the Lab at that time. The electrician then said that he liked to think that he had a hand in making the mission successful: the electrician informed the designer that the liftoff of the spacecraft had a window of less than an hour to break through the clouds and make its trajectory; however, the power went off at the command and control center. The electrician explained that he and his crew were tasked to build generators to get the control center powered in order to make liftoff. The designer excitedly replied that the electrician had just as much cause to celebrate as the scientists, engineers, and designers.
The mechanical designer also gained a profound appreciation for other craftsmen, machinists, and technicians. The designer’s supervisors tasked the designer with constructing a full-scale satellite replica for integration and testing. The supervisors instructed the mechanical designer to press the techs hard to make schedule and budget. One day, as the designer arrived at a shop, the technicians expected the designer’s visit and pressing demands: they quickly enumerated the processes and checks that they had to perform. They took him to the relevant machine shops to witness the production in progress. Then, upon returning to view the structure being built, the techs asked the designer, “Do you want it right or fast? We just do it right!”
Satisfied as he left the shop, the mechanical designer heard the classic rock music crank back up. Turning around, the designer saw the foreman flapping his arms and shaking his belly, dancing to the tune “Play that Funky Music White Boy.” As the foreman screamed, “I like that song!” the designer laughed hard, thinking to himself that he never got to tell them that they were behind schedule. The next week, the foreman ran into the designer and the designer’s supervisors. The foreman told the supervisors how hard the designer had been working them, cracking the whip, he said! The supervisors looked at the designer, saying, “Very good, very good.” The foreman then walked away, winking at the designer.
True to their word, the technicians, machinists, and craftsmen made schedule. The designer was congratulated for the finished product. He again thought that it is the best job that he has ever had. Just ask him. He will tell you!
The continuance of short-term gains in the global economy demands advances in science and technology; however, the sustaining of economic prosperity and freedom depends upon the strength of traditional trades; the upward mobility of the working and middle classes; the vitality of a self-governing Judeo-Christian culture; and the safeguarding of Western liberties. At the signing of the U.S. Constitution, the newly established sovereign nation of the United States of America thrived in an agricultural economy, under which the framers of the Constitution defined America’s laws and edicts. Since America’s economic life revolved mostly around agricultural output, people made a living in professions that had endured throughout Europe’s agricultural history: people worked in husbandry, carpentry, maritime labor, banking, clergy work, accounting, investment, finance, education, and textiles. All Constitutional rights, in one way or another, reflected the people’s ability to own, retain, congregate upon, transfer, and govern private property, especially in landholding. The sovereignty of the U.S. Constitution and its Federal Government stood upon its jurisdiction over the identified national boundaries of the United States. Moreover, the strong presence of the Protestant Church cultivated liberal beliefs that allowed the people to self-govern by embracing social norms that prevented the need for government interference in private life: the Church championed the equality of all classes under God and public literacy for biblical education.
The modern market economy turns the founding agricultural principles of the American Constitution upside down. The great majority of employment no longer centers upon agriculture: private corporations hold most of the arable land, and the vast majority of the general public resides in urban regions or in areas that local governments zone as non-agricultural or residential. The market economy is an outcome of investments made by wealthy individuals or corporations that seek to increase their profits by exploiting marketable new sciences and technologies. Instead of resting upon ownership of land and other tangible goods, people's economic viability now rests upon ownership of commercial stocks, pensions, and personal liquidity. The problem is that investors increasingly pursue business ventures that do not affect the livelihood or productivity of all social strata, as the universal agricultural economy did. Also, new markets increasingly transcend national boundaries, to the effect that international business and governmental entities are emerging to challenge the sovereignty of historic nation-states. As the market marries the West with non-Western countries that do not retain Western constitutional liberties, Western citizens' constitutional rights hang in the balance while multinational businesses exploit the labor forces of non-Western peoples who have lower standards of living because they lack the same Western rights.
Though the United States has not ranked high in science education, the U.S. has led in science and technology, even surpassing other Protestant nations. America's innovative track record demonstrates that a nation's scientific prowess depends not only upon science education but also upon the free flow of ideas from an enfranchised populace that seeks to better its lot through invention.
The Protestant Church's vital role in securing an enfranchised and upwardly mobile population has gone unnoticed. Though its history has been tumultuous, the Protestant Church has championed the doctrine of election regardless of class, race, kinship, religious works, or sex; therefore, the Church has cultivated a culture that defies classism and religious elitism, essentially because of the Church's acceptance that not all will share the Christian faith. At the same time, the Church has solidified social norms to the extent that the public's ethical expectations have kept the excesses of government in check.
The technological inventions that American culture has produced are prolific. Besides smaller-scale inventions like the calculator, Americans have produced inventions that affect the productivity of all social strata: the refrigerator and air-conditioning; the transistor and the personal computer; and the airplane and the solar compass. Americans have also produced the assembly line and affordable luxury items like the automobile, forever changing the face of society.
In light of Americans' success despite their modest standing in science education, the challenge for scientists today is to demonstrate the practical effects of scientific education. Scientists must marry scientific education with a variety of trades, allowing innovation to arise from the people's fervor to perfect their trades and maximize their trades' social contribution. As well, scientists must be keenly mindful that scientific productivity springs from a free people; therefore, social and applied scientists must factor in safeguards for Western liberty as they consider the effects of newfound scientific understanding.
The Wright brothers, Orville (1871-1948) and Wilbur (1867-1912), embody American invention's springing from a Protestant culture that has nurtured the free exchange of ideas. The Wright brothers are credited with inventing the world's first successful airplane, an achievement that necessarily entailed fixed-wing adjustability. The sons of a Protestant bishop, they maintained a Christian work ethic. Their success was less an outcome of their scientific education than of their practical and diligent application of the scientific method, particularly their scrupulous recording of and response to observations.
The United States still remains a leader in technological innovation: in 2013, Bloomberg Rankings ranked America number one among fifty top nations in technological innovation. The difficulty with current innovative trends, however, is that the innovations do not affect all strata of society. Current economic trends show job growth in part-time employment, while the long-term wage-earning and retirement-bearing jobs of the past have not returned.
The purpose of this doctrinal treatise, the Landscape of Truth, is to rationalize the theological, scientific, economic, social, political, and personal relationships that comprise an individual's life-experience, in order to empower the individual with an understanding of his or her environment. The work accomplishes its ends by setting out a systematic theology of the biblical Testaments and by providing objective proof not only for the three persons of God but also for the existence of the human soul and spirit. The work demonstrates the liberating and oppressive aspects of human government throughout the biblical epic and beyond. In its third chapter, the Landscape gives the reader an understanding of modern science's origins and growth as it blossomed alongside the modern world's industrial and commercial economy.
Current developments show China, Russia, India, and Brazil enjoying economic growth; however, their growth has more to do with multinational corporations' exploitation of their cheap labor forces. The free Western nations will continue to lead in technological and scientific innovation. The challenge for Western nations is to ensure that those innovations benefit the free working and middle classes, in order to safeguard Western liberty itself.