All Photos Tagged combinatorial

Synthesize insights

Chance opportunism

Combinatorial creativity

I recently saw a TV programme about Route 66, the most famous road-trip itinerary in the world, and it came naturally to me to interpret those suggestions with this combinatorial photo, set in the nearby countryside on the border between Veneto and Friuli.

Mikhail Nekhemyevich Tal (9 November 1936 – 28 June 1992) was a Soviet-Latvian chess player and the eighth World Chess Champion. He is considered a creative genius and is widely regarded as one of the most influential chess players. Tal played in an attacking and daring combinatorial style. His play was known above all for improvisation and unpredictability. Vladislav Zubok said of him, "Every game for him was as inimitable and invaluable as a poem".

 

His nickname was "Misha", a diminutive for Mikhail, and he earned the nickname "The Magician from Riga". Both The Mammoth Book of the World's Greatest Chess Games and Modern Chess Brilliancies include more games by Tal than any other player. He also held the record for the longest unbeaten streak in competitive chess history with 95 games (46 wins, 49 draws) between 23 October 1973 and 16 October 1974, until Ding Liren's streak of 100 games (29 wins, 71 draws) between 9 August 2017 and 11 November 2018. In addition, Tal was a highly regarded chess writer.

 

Tal died on 28 June 1992 in Moscow, Russia.

High-resolution images emailed to you are for sale at $25 per image. If I print and send them, the charge is $40. If I send them framed (16x20), it is $200.

You can pay to my PayPal account.

 

a simple capture of a complex optical sculpture made of mirrors and thin neon lights...

 

A hypercube of dimension n has 2n "sides" (a 1-dimensional line has 2 end points; a 2-dimensional square has 4 sides or edges; a 3-dimensional cube has 6 2-dimensional faces; a 4-dimensional tesseract has 8 cells). The number of vertices (points) of a hypercube is 2^n (a cube has 2^3 = 8 vertices, for instance).

  

A simple formula to calculate the number of (n−2)-faces in an n-dimensional hypercube is: 2n^2 − 2n.

 

The number of m-dimensional hypercubes (just referred to as m-cubes from here on) on the boundary of an n-cube is

E(m, n) = 2^(n−m) × C(n, m) = 2^(n−m) × n! / (m! (n−m)!),

where C(n, m) is the binomial coefficient and n! denotes the factorial of n.

For example, the boundary of a 4-cube (n=4) contains 8 cubes (3-cubes), 24 squares (2-cubes), 32 lines (1-cubes) and 16 vertices (0-cubes).

  

This identity can be proved by a combinatorial argument: each of the 2^n vertices of the n-cube is a vertex of some m-dimensional boundary cube, and there are C(n, m) ways of choosing which m coordinate directions ("sides") span the subspace that the boundary cube lies in. But each boundary cube is counted 2^m times this way, since it has that many vertices, so we need to divide by this number. Hence the identity above.
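A quick sanity check of the formula and the 4-cube example above, as a minimal Python sketch (the helper name num_faces is just illustrative):

```python
from math import comb

def num_faces(n, m):
    """Number of m-dimensional faces on the boundary of an n-cube: 2^(n-m) * C(n, m)."""
    return 2 ** (n - m) * comb(n, m)

# The 4-cube example from the text: 16 vertices, 32 edges, 24 squares, 8 cubes.
print([num_faces(4, m) for m in range(4)])   # -> [16, 32, 24, 8]

# The (n-2)-face formula 2n^2 - 2n agrees with the general formula.
assert all(num_faces(n, n - 2) == 2 * n * n - 2 * n for n in range(2, 10))
```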

    

Combinatorial black and white.

One element of the combinatorial is missing ;-) Try to guess!

Sudoku (数独) (English pronunciation: /suːˈdoʊkuː/ soo-DOH-koo) is a logic-based, combinatorial number-placement puzzle. The objective is to fill a 9×9 grid with digits so that each column, each row, and each of the nine 3×3 sub-grids that compose the grid (also called "boxes", "blocks", "regions", or "sub-squares") contains all of the digits from 1 to 9. The puzzle setter provides a partially completed grid, which typically has a unique solution. Completed puzzles are always a type of Latin square with an additional constraint on the contents of individual regions.
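The row, column and box constraints are easy to state in code; here is a minimal Python sketch (mine, not part of the original caption) that checks whether a completed grid is a valid Sudoku solution:

```python
def is_valid_solution(grid):
    """Check a completed 9x9 grid: every row, column and 3x3 box must contain 1..9 exactly once."""
    digits = set(range(1, 10))
    rows = [set(row) for row in grid]
    cols = [set(col) for col in zip(*grid)]
    boxes = [
        {grid[r][c] for r in range(br, br + 3) for c in range(bc, bc + 3)}
        for br in (0, 3, 6) for bc in (0, 3, 6)
    ]
    return all(group == digits for group in rows + cols + boxes)
```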

 

The puzzle was popularized in 1986 by the Japanese puzzle company Nikoli, under the name Sudoku, meaning single number. It became an international hit in 2005.

 

Today's Bible Verse:

 

All Scripture is God-breathed and is useful for teaching, correcting and training in righteousness.

 

2 TIMOTHY 3:16

Who is each of us if not a combinatoria of experiences, information, readings, and imaginings? Every life is an encyclopedia, a library, an inventory of objects, a collection of styles, where everything can be constantly shuffled and reordered in every way possible.

Italo Calvino

I have an updated processing of this shot www.flickr.com/photos/combinatorial/4226387051/

 

My first photo to make Explore... Dec 16, 2007, #365

NOTE: this is a semi-log graph, so a straight line is an exponential; each y-axis tick is 100x. This graph covers a 100,000,000,000,000,000,000x improvement in computation/$.

 

I have color coded it to show the transition among the integrated circuit architectures. I also added the current NVIDIA workhorses — the A100 and H100. You can see how the mantle of Moore's Law has transitioned most recently from the GPU (green dots) to the ASIC (yellow and orange dots), and the H100 itself is a transitionary species — from GPU to ASIC, with 8-bit performance optimized for AI models. Remember, there are thousands of invisible dots below the frontier of humanity's capacity to compute (e.g., everything from Intel in the past 13 years).

 

Tesla DOJO's dominance should not be a surprise, as Intel ceded leadership to NVIDIA a decade ago, and further handoffs were inevitable. The computational frontier has shifted across many technology substrates over the past 120 years, most recently from the CPU to the GPU to ASICs optimized for neural networks (the majority of new compute cycles).

 

Of all of the depictions of Moore’s Law, this is the one (originally by Ray Kurzweil) that I find to be the most useful, as it captures what customers actually value — computation per constant dollar.

 

Humanity’s capacity to compute has compounded for as long as we can measure it, exogenous to the economy, and starting long before Intel co-founder Gordon Moore noticed a refraction of the longer-term trend in the belly of the fledgling semiconductor industry in 1965.

 

Why the transition within the integrated circuit era? Intel lost to NVIDIA for neural networks because the fine-grained parallel compute architecture of a GPU maps better to the needs of deep learning. There is a poetic beauty to the computational similarity of a processor optimized for graphics processing and the computational needs of a sensory cortex, as commonly seen in neural networks today. A custom chip (like the Tesla D1 ASIC) optimized for neural networks extends that trend to its inevitable future in the digital domain. Further advances are possible in analog in-memory compute, an even closer biomimicry of the human cortex. The best business planning assumption is that Moore’s Law, as depicted here, will continue for the next 20 years as it has for the past 120.

 

For those unfamiliar with this chart, here is a more detailed description:

 

Moore's Law is both a prediction and an abstraction

 

Moore’s Law is commonly reported as a doubling of transistor density every 18 months. But this is not something the co-founder of Intel, Gordon Moore, has ever said. It is a nice blending of his two predictions; in 1965, he predicted an annual doubling of transistor counts in the most cost effective chip and revised it in 1975 to every 24 months. With a little hand waving, most reports attribute 18 months to Moore’s Law, but there is quite a bit of variability. The popular perception of Moore’s Law is that computer chips are compounding in their complexity at near constant per unit cost. This is one of the many abstractions of Moore’s Law, and it relates to the compounding of transistor density in two dimensions. Others relate to speed (the signals have less distance to travel) and computational power (speed x density).
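To see how much the choice of doubling period matters, here is a back-of-the-envelope Python sketch (my illustration, not from the original essay):

```python
# Improvement over a decade implied by different doubling periods.
for months in (12, 18, 24):
    growth = 2 ** (120 / months)   # 120 months = 10 years
    print(f"doubling every {months} months -> roughly {growth:,.0f}x in 10 years")
# 12 months -> ~1,024x; 18 months -> ~102x; 24 months -> ~32x
```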

 

Unless you work for a chip company and focus on fab-yield optimization, you do not care about transistor counts. Integrated circuit customers do not buy transistors. Consumers of technology purchase computational speed and data storage density. When recast in these terms, Moore’s Law is no longer a transistor-centric metric, and this abstraction allows for longer-term analysis.

 

What Moore observed in the belly of the early IC industry was a derivative metric, a refracted signal, from a longer-term trend, a trend that begs various philosophical questions and predicts mind-bending futures.

 

Ray Kurzweil’s abstraction of Moore’s Law shows computational power on a logarithmic scale, and finds a double exponential curve that holds over 120 years! A straight line would represent a geometrically compounding curve of progress.

 

Through five paradigm shifts – such as electro-mechanical calculators and vacuum tube computers – the computational power that $1000 buys has doubled every two years. For the past 35 years, it has been doubling every year.

 

Each dot is the frontier of computational price performance of the day. One machine was used in the 1890 Census; one cracked the Nazi Enigma cipher in World War II; one predicted Eisenhower’s win in the 1956 Presidential election. Many of them can be seen in the Computer History Museum.

 

Each dot represents a human drama. Prior to Moore’s first paper in 1965, none of them even knew they were on a predictive curve. Each dot represents an attempt to build the best computer with the tools of the day. Of course, we use these computers to make better design software and manufacturing control algorithms. And so the progress continues.

 

Notice that the pace of innovation is exogenous to the economy. The Great Depression and the World Wars and various recessions do not introduce a meaningful change in the long-term trajectory of Moore's Law. Certainly, the adoption rates, revenue, profits and economic fates of the computer companies behind the various dots on the graph may go through wild oscillations, but the long-term trend emerges nevertheless.

 

Any one technology, such as the CMOS transistor, follows an elongated S-shaped curve of slow progress during initial development, upward progress during a rapid adoption phase, and then slower growth from market saturation over time. But a more generalized capability, such as computation, storage, or bandwidth, tends to follow a pure exponential – bridging across a variety of technologies and their cascade of S-curves.

 

In the modern era of accelerating change in the tech industry, it is hard to find even five-year trends with any predictive value, let alone trends that span the centuries. I would go further and assert that this is the most important graph ever conceived.

 

Why is this the most important graph in human history?

 

A large and growing set of industries depends on continued exponential cost declines in computational power and storage density. Moore's Law drives electronics, communications and computers and has become a primary driver in drug discovery, biotech and bioinformatics, medical imaging and diagnostics. As Moore's Law crosses critical thresholds, a formerly lab science of trial-and-error experimentation becomes a simulation science, and the pace of progress accelerates dramatically, creating opportunities for new entrants in new industries. Boeing used to rely on wind tunnels to test novel aircraft design performance. Ever since CFD modeling became powerful enough, design has moved to the rapid pace of iterative simulations, and the nearby wind tunnels of NASA Ames lie fallow. The engineer can iterate at a rapid rate while simply sitting at their desk.

 

Every industry on our planet is going to become an information business. Consider agriculture. If you ask a farmer in 20 years’ time about how they compete, it will depend on how they use information, from satellite imagery driving robotic field optimization to the code in their seeds. It will have nothing to do with workmanship or labor. That will eventually percolate through every industry as IT innervates the economy.

 

Non-linear shifts in the marketplace are also essential for entrepreneurship and meaningful change. Technology’s exponential pace of progress has been the primary juggernaut of perpetual market disruption, spawning wave after wave of opportunities for new companies. Without disruption, entrepreneurs would not exist.

 

Moore’s Law is not just exogenous to the economy; it is why we have economic growth and an accelerating pace of progress. At Future Ventures, we see that in the growing diversity and global impact of the entrepreneurial ideas that we see each year. The industries impacted by the current wave of tech entrepreneurs are more diverse, and an order of magnitude larger than those of the 90’s — from automobiles and aerospace to energy and chemicals.

 

At the cutting edge of computational capture is biology; we are actively reengineering the information systems of biology and creating synthetic microbes whose DNA is manufactured from bare computer code and an organic chemistry printer. But what to build? So far, we largely copy large tracts of code from nature. But the question spans across all the complex systems that we might wish to build, from cities to designer microbes, to computer intelligence.

 

Reengineering engineering

 

As these systems transcend human comprehension, we will shift from traditional engineering to evolutionary algorithms and iterative learning algorithms like deep learning and machine learning. As we design for evolvability, the locus of learning shifts from the artifacts themselves to the process that created them. There is no mathematical shortcut for the decomposition of a neural network or genetic program, no way to "reverse evolve" with the ease that we can reverse engineer the artifacts of purposeful design. The beauty of compounding iterative algorithms (evolution, fractals, organic growth, art) derives from their irreducibility. And it empowers us to design complex systems that exceed human understanding.

 

Why does progress perpetually accelerate?

 

All new technologies are combinations of technologies that already exist. Innovation does not occur in a vacuum; it is a combination of ideas from before. In any academic field, the advances today are built on a large edifice of history. This is why major innovations tend to be 'ripe' and tend to be discovered at nearly the same time by multiple people. The compounding of ideas is the foundation of progress, something that was not so evident to the casual observer before the age of science. Science tuned the process parameters for innovation, and became the best method for a culture to learn.

 

From this conceptual base comes the origin of economic growth and accelerating technological change, as the combinatorial explosion of possible idea pairings grows exponentially as new ideas come into the mix (on the order of 2^n possible groupings, per Reed's Law). It explains the innovative power of urbanization and networked globalization. And it explains why interdisciplinary ideas are so powerfully disruptive; it is like the differential immunity of epidemiology, whereby islands of cognitive isolation (e.g., academic disciplines) are vulnerable to disruptive memes hopping across, much like South America was to smallpox from Cortés and the Conquistadors. If disruption is what you seek, cognitive island-hopping is a good place to start, mining the interstices between academic disciplines.
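As a rough illustration of that combinatorial explosion (my sketch, not part of the original essay): pairwise idea pairings grow only quadratically, while the number of possible groupings of ideas grows like 2^n, the Reed's-Law-style scaling mentioned above.

```python
from math import comb

for n in (10, 20, 40):
    pairs = comb(n, 2)      # distinct idea pairings
    groupings = 2 ** n      # all possible subsets of ideas
    print(f"{n} ideas: {pairs} pairings, {groupings:,} possible groupings")
# 10 ideas: 45 pairings, 1,024 groupings
# 20 ideas: 190 pairings, 1,048,576 groupings
# 40 ideas: 780 pairings, 1,099,511,627,776 groupings
```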

 

It is the combinatorial explosion of possible innovation-pairings that creates economic growth, and it's about to go into overdrive. In recent years, we have begun to see the global innovation effects of a new factor: the internet. People can exchange ideas like never before. Long ago, people were not communicating across continents; ideas were partitioned, and so the success of nations and regions pivoted on their own innovations. Richard Dawkins states that in biology it is genes which really matter, and we as people are just vessels for the conveyance of genes. It's the same with ideas or "memes". We are the vessels that hold and communicate ideas, and now that pool of ideas percolates on a global basis more rapidly than ever before.

 

In the next 6 years, three billion minds will come online for the first time to join this global conversation (via inexpensive smartphones in the developing world). This rapid influx of three billion people to the global economy is unprecedented in human history, and so, too, will be the pace of idea-pairings and progress.

 

We live in interesting times, at the cusp of the frontiers of the unknown and breathtaking advances. But, it should always feel that way, engendering a perpetual sense of future shock.


poetic combinatorial

 

(La Màquina de Pensar)

Cyanotype, traditional iron salt party mix, combinatorially grappled in a head-shaped tub, brushed onto gelatin-sized vellum, subsequently exposed to Sol for an amount of time -- in the winter Texas air for ten minutes perhaps -- Finally, developed casually, while smoking, in water, vinegar, ammonia and tea-tannins.

  

His practice is that of an artist but also of a researcher or a heterotopologist, as defined by Foucault in his text "Of Other Spaces". This search for and construction of meaning in the 'liminal', or the 'in-between', leads him to produce automated collapsing sculptures, infinite animated images that loop back on themselves, and chimeras and linguistic accumulations of non-existent artistic movements.

Chimera is an installation made up of a set of bars which, every minute, randomly generates associations of prefixes with movements and political, artistic, economic and religious tendencies. Each bar can operate individually or as part of a programmed swarm. The combinatorial richness generates an infinity of propositions: existing, absurd, inventive, dark, anachronistic, utopian. Chimera questions the structural and temporal composition of language and the way it conditions the reading and partitioning of history, particularly art history. 'Chimera' aims to be a tool for opening up the creation of new movements and future ways of thinking, but it is also, conversely, a tool of critique tending towards a form of 'exhaustion' of the possible through the accelerated recycling of existing or past forms.

I took this photo of the latest hot lot of processor chips of various sizes at the spook shop summit (In-Q-Tel CEO Summit). Pretty shiny bling.

 

I am in the D-Wave board meeting now, and we just got a peek at next week's TIME Magazine cover (below). And it made the Charlie Rose show.

 

Here are some excerpts:

 

"The Quantum Quest for a Revolutionary Computer

 

The D-Wave Two is an unusual computer, and D-Wave is an unusual company. It's small, just 114 people, and its location puts it well outside the swim of Silicon Valley. But its investors include the storied Menlo Park, Calif., venture-capital firm Draper Fisher Jurvetson, which funded Skype and Tesla Motors. It's also backed by famously prescient Amazon founder Jeff Bezos and an outfit called In-Q-Tel, better known as the high-tech investment arm of the CIA. Likewise, D-Wave has very few customers, but they're blue-chip: they include the defense contractor Lockheed Martin; a computing lab that's hosted by NASA and largely funded by Google; and a U.S. intelligence agency that D-Wave executives decline to name.

 

The reason D-Wave has so few customers is that it makes a new type of computer called a quantum computer that's so radical and strange, people are still trying to figure out what it's for and how to use it. It could represent an enormous new source of computing power--it has the potential to solve problems that would take conventional computers centuries, with revolutionary consequences for fields ranging from cryptography to nanotechnology, pharmaceuticals to artificial intelligence.

 

That's the theory, anyway. Some critics, many of them bearing Ph.D.s and significant academic reputations, think D-Wave's machines aren't quantum computers at all. But D-Wave's customers buy them anyway, for around $10 million a pop, because if they're the real deal they could be the biggest leap forward since the invention of the microprocessor. …

 

Physicist David Deutsch once described quantum computing as "the first technology that allows useful tasks to be performed in collaboration between parallel universes." Not only is this excitingly weird, it's also incredibly useful. If a single quantum bit (or as they're inevitably called, qubits, pronounced cubits) can be in two states at the same time, it can perform two calculations at the same time. Two quantum bits could perform four simultaneous calculations; three quantum bits could perform eight; and so on. The power grows exponentially.

 

The supercooled niobium chip at the heart of the D-Wave Two has 512 qubits and therefore could in theory perform 2^512 operations simultaneously. That's more calculations than there are atoms in the universe, by many orders of magnitude. "This is not just a quantitative change," says Colin Williams, D-Wave's director of business development and strategic partnerships, who has a Ph.D. in artificial intelligence and once worked as Stephen Hawking's research assistant at Cambridge. "The kind of physical effects that our machine has access to are simply not available to supercomputers, no matter how big you make them. We're tapping into the fabric of reality in a fundamentally new way, to make a kind of computer that the world has never seen."
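(An aside, not part of the TIME excerpt: the 2^512 figure, and the comparison with the commonly cited ~10^80 atoms in the observable universe, are easy to check.)

```python
# 2^512 has 155 decimal digits, i.e. it is about 1.3 x 10^154,
# versus roughly 10^80 atoms in the observable universe.
n = 2 ** 512
print(len(str(n)))       # 155
print(n > 10 ** 80)      # True, by roughly 74 orders of magnitude
```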

 

Naturally, a lot of people want one. This is the age of Big Data, and we're burying ourselves in information-- search queries, genomes, credit-card purchases, phone records, retail transactions, social media, geological surveys, climate data, surveillance videos, movie recommendations--and D-Wave just happens to be selling a very shiny new shovel. "Who knows what hedge-fund managers would do with one of these and the black-swan event that that might entail?" says Steve Jurvetson, one of the managing directors of Draper Fisher Jurvetson. "For many of the computational traders, it's an arms race."

 

One of the documents leaked by Edward Snowden, published last month, revealed that the NSA has an $80 million quantum-computing project suggestively code-named Penetrating Hard Targets. Here's why: much of the encryption used online is based on the fact that it can take conventional computers years to find the factors of a number that is the product of two large primes. A quantum computer could do it so fast that it would render a lot of encryption obsolete overnight. You can see why the NSA would take an interest. …

 

For its first five years, the company existed as a think tank focused on research. Draper Fisher Jurvetson got onboard in 2003, viewing the business as a very sexy but very long shot. "I would put it in the same bucket as SpaceX and Tesla Motors," Jurvetson says, "where even the CEO Elon Musk will tell you that failure was the most likely outcome." By then Rose was ready to go from thinking about quantum computers to trying to build them--"we switched from a patent, IP, science aggregator to an engineering company," he says. Rose wasn't interested in expensive, fragile laboratory experiments; he wanted to build machines big enough to handle significant computing tasks and cheap and robust enough to be manufactured commercially. With that in mind, he and his colleagues made an important and still controversial decision.

 

Up until then, most quantum computers followed something called the gate-model approach, which is roughly analogous to the way conventional computers work, if you substitute qubits for transistors. But one of the things Rose had figured out in those early years was that building a gate-model quantum computer of any useful size just wasn't going to be feasible anytime soon. …

 

Adiabatic quantum computing may be technically simpler than the gate-model kind, but it comes with trade-offs. An adiabatic quantum computer can really solve only one class of problems, called discrete combinatorial optimization problems, which involve finding the best--the shortest, or the fastest, or the cheapest, or the most efficient--way of doing a given task.

 

This is great if you have a really hard discrete combinatorial optimization problem to solve. Not everybody does. But once you start looking for optimization problems, or at least problems that can be twisted around to look like optimization problems, you find them all over the place: in software design, tumor treatments, logistical planning, the stock market, airline schedules, the search for Earth-like planets in other solar systems, and in particular in machine learning.
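(Another aside from outside the quoted article: annealers of this kind are usually described as minimizing a quadratic binary objective, a QUBO. The sketch below, with a hypothetical brute_force_qubo helper, shows in plain Python what such an optimization problem looks like at toy scale; it is illustrative only and does not reflect D-Wave's actual programming interface.)

```python
from itertools import product

def brute_force_qubo(Q):
    """Minimize sum of Q[i, j] * x[i] * x[j] over binary vectors x by exhaustive search."""
    n = 1 + max(k for pair in Q for k in pair)
    best_x, best_e = None, float("inf")
    for x in product((0, 1), repeat=n):
        e = sum(coeff * x[i] * x[j] for (i, j), coeff in Q.items())
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e

# Toy problem: two options each carry a small reward, but picking both is penalized.
Q = {(0, 0): -1.0, (1, 1): -1.0, (0, 1): 2.0}
print(brute_force_qubo(Q))   # -> ((0, 1), -1.0): pick exactly one option
```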

 

Google and NASA, along with the Universities Space Research Association, jointly run something called the Quantum Artificial Intelligence Laboratory, or QuAIL, based at NASA Ames, which is the proud owner of a D-Wave Two. "If you're trying to do planning and scheduling for how you navigate the Curiosity rover on Mars or how you schedule the activities of astronauts on the station, these are clearly problems where a quantum computer--a computer that can optimally solve optimization problems--would be useful," says Rupak Biswas, deputy director of the Exploration Technology Directorate at NASA Ames. Google has been using its D-Wave to, among other things, write software that helps Google Glass tell the difference between when you're blinking and when you're winking.

 

Lockheed Martin turned out to have some optimization problems too. It produces a colossal amount of computer code, all of which has to be verified and validated for all possible scenarios, lest your F-35 spontaneously decide to reboot itself in midair. "It's very difficult to exhaustively test all of the possible conditions that can occur in the life of a system," says Ray Johnson, Lockheed Martin's chief technology officer. "Because of the ability to handle multiple conditions at one time through superposition, you're able to much more rapidly--orders of magnitude more rapidly--exhaustively test the conditions in that software." The company re-upped for a D-Wave Two last year.

 

Another challenge Rose and company face is that there is a small but nonzero number of academic physicists and computer scientists who think that they are partly or completely full of sh-t. Ever since D-Wave's first demo in 2007, snide humor, polite skepticism, impolite skepticism and outright debunkings have been lobbed at the company from any number of ivory towers. "There are many who in Round 1 of this started trash-talking D-Wave before they'd ever met the company," Jurvetson says. "Just the mere notion that someone is going to be building and shipping a quantum computer--they said, 'They are lying, and it's smoke and mirrors.'"

 

Seven years and many demos and papers later, the company isn't any less controversial. Any blog post or news story about D-Wave instantly grows a shaggy beard of vehement comments, both pro- and anti-. …

 

But where quantum computing is concerned, there always seems to be room for disagreement. Hartmut Neven, the director of engineering who runs Google's quantum-computing project, argues that the tests weren't a failure at all--that in one class of problem, the D-Wave Two outperformed the classical computers in a way that suggests quantum effects were in play. "There you see essentially what we were after," he says. "There you see an exponentially widening gap between simulated annealing and quantum annealing ... That's great news, but so far nobody has paid attention to it." Meanwhile, two other papers published in January make the case that a) D-Wave's chip does demonstrate entanglement and b) the test used the wrong kind of problem and was therefore meaningless anyway. For now pretty much everybody at least agrees that it's impressive that a chip as radically new as D-Wave's could even achieve parity with conventional hardware.

 

The attitude in D-Wave's C-suite toward all this back-and-forth is, unsurprisingly, dismissive. "The people that really understand what we're doing aren't skeptical," says Brownell. Rose is equally calm about it; all that wrestling must have left him with a thick skin. "Unfortunately," he says, "like all discourse on the Internet, it tends to be driven by a small number of people that are both vocal and not necessarily the most informed." He's content to let the products prove themselves, or not. "It's fine," he says. "It's good. Science progresses by rocking the ship. Things like this are a necessary component of forward progress."

 

Are D-Wave's machines quantum computers?

 

For now the answer is itself suspended, aptly enough, in a state of superposition, somewhere between yes and no. If the machines can do anything like what D-Wave is predicting, they won't leave many fields untouched. "I think we'll look back on the first time a quantum computer outperformed classical computing as a historic milestone," Brownell says. "It's a little grand, but we're kind of like Intel and Microsoft in 1977, at the dawn of a new computing era."

   

It's amazing that our biological bodies are seemingly built upon technology that has been passed to us through numerous replications of our DNA strands through each and every cell division ad infinitum. Through meiosis, mitosis, and it continues again.

 

There is this moment of rebirth when our genetic codex is melanged through combinatoric permutation after permuted combination in this grand scheme of survival, life, and existence. It is the greatest hedging of the bets done so for purely longer-term species survival, but is this concept of an individual that we adore so much just an elaborate illusion? Instead, are we really just infinitesimal parts of a whole, the huge composite structure of machinery that completes the circle of life, the way of the tao, the nature of the au naturel, the systemic mechanisms of divinity?

 

Are we just an experimental device, a mouthpiece for control over the dominion of evolutionary advantage such that our DNA is mixed into a combinatorial cocktail and then reborn anew time and time again for the pure fact of increasing the probability of species survival and thusly letting the natural forces at bay enhance and design our technology?

 

With all of this illusion and deceiving, it's hard to discern reality from dream, so it comes to light that we are a product of continuous being whereby there has been no end since the beginning. Is this so? And all this where the replication from one system to another has been so seamless that an illusion upon an illusion upon an illusion began to surface seemingly making us distinct individuals, when in fact we are but one grand individual?

 

There is no spoon, yet there is no divinity. I find this to be simultaneously true and false at the same time, which is blasphemy at its best. There isn't a spoon, it's all an illusion, but yet we do exist, we think, breathe, eat, and live, then where is the divinity? Is it all around us, including us, ourselves, our individualistic entities of existential being?

 

We are also not multiple beings, but one continuous breed of life that has been spawned, remixed, recoded, reconfigured time and time again just to maintain strength, vigor, and an edge on the competition in this jungle of an environment.

 

Ultimately, we are one. We are one being, one machine, one system, one divinity. We will not fully become aware of our supremeness nor immortality until we have reached our destination called destiny, but when we do reach it, we will be one, and with that oneness, we will be divine.

 

La machine parfaite est divinité (the perfect machine is divinity).

 

AWESOME when viewed in LIGHTBOX!!!!!

 


 

The Cloudscapes

(Cloudscapes - Digital Artwork Blog)

Syncretic Divine

(Geopolitics & Philosophy Blog)

Harmonic Future

(Electrosymphonic Music - Online Radio Station)

Abiogenesis - the atheist and evolutionist belief that life can spontaneously generate itself from sterile matter whenever environmental conditions are conducive, and that this actually happened on the early Earth.

 

Is it possible?

 

IMPOSSIBLE ACCORDING TO INFORMATION THEORY.

 

Three fundamentals are essential for the material universe to exist: matter - energy - information.

Obviously, all theories about how the universe operates, and its origins, must take account of all three. However, every evolutionary, origin of life hypothesis yet devised (primordial soup, hydrothermal vent, etc. etc.) concentrates on the chemistry/physics of life, i.e. the interaction of matter and energy.

Atheists and evolutionists have virtually ignored the essential role and origin of information. We should demand to know why, especially as we are told (through the popular media and education system) that an evolutionary, origin of life scenario should be regarded as irrefutable, scientific fact.

 

Atheists and evolutionists are well aware that the information required for life cannot just arise of its own accord in a primordial soup. So why do they usually omit this crucial fact from their origin of life story?

 

In order to store information, a storage code is required. Just as the alphabet and language is the code used to store information in the written word, life requires both the information itself, which controls the construction and operation of all living things, and the means of storing that information. DNA is the storage code for living things.

No evolutionary, origin of life hypothesis has ever explained either how the DNA storage system was formed, or how the information encoded within that DNA storage system originated. In fact, even to attempt to look for the origin of information in physical matter is to ignore the natural laws about information.

 

Information theory completely rules out the spontaneous generation of life from non-life.

Information theory tells us: ANY MODEL FOR THE ORIGIN OF LIFE BASED SOLELY ON PHYSICAL AND/OR CHEMICAL PROCESSES, IS INHERENTLY FALSE. And: THERE IS NO KNOWN LAW OF NATURE, NO KNOWN PROCESS AND NO KNOWN SEQUENCE OF EVENTS, WHICH CAN CAUSE INFORMATION TO ORIGINATE BY ITSELF IN MATTER… So information theory not only rules out all evolutionary hypotheses which cannot explain the origin of information in original life, it also rules out all evolutionary hypotheses which cannot explain the origin of the completely new, increasingly complex information which would be required to be added to a gene pool for progressive evolution to take place in existing life.

 

Because of their zealous and unshakable faith in Darwinian evolution, most evolutionists choose to ignore this. They simply refuse to face this most important question of all, where does the complex information essential for all life come from? The reason seems obvious, it is because there are only two answers which could be compatible with the evolution fable, both are unscientific nonsense which violate information theory. They are: 1. That information can just arise magically out of nowhere. OR 2. That the material universe is an intelligent entity, which can actually create information.

(See more on genetic information and the DNA code later on)

 

Verdict of science - abiogenesis is not possible.

 

IMPOSSIBLE ACCORDING TO THE LAW OF BIOGENESIS.

 

The Law of Biogenesis rules out the spontaneous generation of life from non-living matter under all known circumstances. All modern scientists now accept this well tested law as valid. It has never been falsified. In fact, the concept of medical sterilisation, hygiene & food preservation is wholly dependent on this law.

No sensible scientist would dare to claim that spontaneous generation of life ever happens in the world today, and there is no reason whatsoever to believe that this Law (like every natural law) is not always valid, in all places and at all times, within the material universe.

Yet, amazingly, because of their belief in biological evolution, evolutionists are quite prepared to flout this well-established Law and to resurrect the ancient belief in abiogenesis (life arising from non-life). Like latter-day advocates of the ancient Greek belief (that the goddess Gea could make life arise spontaneously from stones), evolutionists and atheists routinely present to the public, as a fact, the preposterous notion that original life on earth (and even elsewhere in the universe) just spontaneously generated itself from inert matter. Apparently, all that was required to bypass this well-established Law was a chance accumulation of chemicals in some alchemist's brew of 'primordial soup' combined with raw energy from the sun, lightning or geothermal forces. (Such is their faith in the creative powers of matter). They call this science? Incredible!

 

Verdict of science - abiogenesis is not possible.

 

IMPOSSIBLE ACCORDING TO THE SECOND LAW OF THERMODYNAMICS.

 

The second Law of Thermodynamics rules out the spontaneous generation of life from non-life as a chance event. Even if we ignore the above reasons why spontaneous generation of life is impossible, the formation and arrangement by chance of all the components required for living cells is also impossible. The arrangement of all the components within the simplest of living cells is extremely precise; these components cannot just arrange themselves by chance.

According to the Second Law of Thermodynamics, when left to themselves, things naturally become more disordered, rather than more ordered. Or in other words, things will naturally go to more probable arrangements and disorder is overwhelmingly more probable than order. Disorder actually increases with the passage of time and also with the application of raw (undirected) energy (for example, heat).

Yet we are repeatedly told the evolution fable, that the numerous components required to form a first, self-replicating, living cell just assembled themselves in precise order, by pure chance, over a vast period of time, aided by the random application of raw, undirected energy.

 

Verdict of science - abiogenesis is not possible.

 

IMPOSSIBLE ACCORDING TO THE LAW OF CAUSE AND EFFECT.

 

A fundamental principle of science is the law of cause and effect. It is a primary law of science, and the very basis of the scientific method.

The law of cause and effect tells us that an effect cannot be greater than its cause/s.

Life is not an intrinsic property of matter/energy - so it is beyond the capabilities of matter/energy to produce a property (life) it doesn't possess.

The interaction of matter and energy cannot produce an effect with properties extra and superior to its own properties, that would violate the law of cause and effect.

 

Can chemistry create biology - which has entirely different properties to its own?

Of course it can't.

Biology includes such properties as genetic information, the DNA code, consciousness and intelligence. To believe that chemistry can create biology - means believing that something inanimate can create additional, new properties that it doesn't possess. To exceed the limitations of its own properties would violate the law of cause and effect.

 

For matter/energy to be able to produce life whenever environmental conditions permit, it would have to be inherently predisposed to produce life.

It would have to embody an inherent plan/blueprint/instructions for life, as one of its properties. The inevitable question then has to be - where does an inherent predisposition for life come from? It can only signify the existence of purpose in the universe and that is something atheists could never accept.

A purpose, order or plan can only come from a planner or intelligent entity. So it is a catch 22 situation for atheists ... the atheist/ evolutionist belief in abiogenesis either violates the law of cause and effect, OR is an admission of purpose in the universe. It can only be one or the other. Atheists cannot possibly accept the existence of purpose in the universe, because that would be the end of atheism. So the atheist belief in abiogenesis violates the law of cause and effect.

 

Verdict of science - abiogenesis is not possible.

 

IMPOSSIBLE ACCORDING TO MATHEMATICS.

 

Even if we ignore the Law of Biogenesis, Information Theory and the Second Law of Thermodynamics (which all completely rule out the spontaneous generation of a living cell from non-living matter). Mathematical probability also rules out the spontaneous generation of life from non-living matter.

The laws of probability are summed up in the Law of Chance. According to this Law, when the odds against a chance event are 10 to the power of 15, the chance of that event happening is negligible on a terrestrial scale. At odds of 10 to the power of 50, there is virtually no chance, even on a cosmic scale. The most generous and favourable mathematical odds against a single living cell appearing in this way by chance are a staggering 10 to the power of 40,000. A more likely calculation would put the odds at an even more awesome 10 to the power of 119,850. Remember, odds of 10 to the power of 50 are sufficient to make an event virtually impossible (except, perhaps, by magic!!).

 

Verdict of science - abiogenesis is not possible

 

Fred Hoyle, The Big Bang in Astronomy, New Scientist 19 Nov 1981. p.526. On the origin of life in primeval soup.

“I don’t know how long it is going to be before astronomers generally recognise that the combinatorial arrangement of not even one among the many thousands of biopolymers on which life depends could have been arrived at by natural processes here on the Earth. Astronomers will have a little difficulty at understanding this because they will be assured by biologists that it is not so. The biologists having been assured in their turn by others that it is not so. The “others” are a group of persons who believe, quite openly, in mathematical miracles. They advocate the belief that tucked away in nature, outside of normal physics, there is a law which performs miracles.”

 

“Since science does not have the faintest idea how life on earth originated, it would only be honest to confess this to other scientists, to grantors, and to the public at large. Prominent scientists speaking ex cathedra, should refrain from polarising the minds of students and young productive scientists with statements that are based solely on beliefs.” Bio-informaticist, Hubert P. Yockey. Journal of Theoretical Biology [Vol 91, 1981, p 13].

 

Conclusion: Abiogenesis is impossible - it is just another atheist myth debunked by science.

 

Evolutionists and atheists are quite entitled to abandon the scientific method and all common sense by choosing to believe that all the necessary information for life can just appear in matter, as if by magic. They can also choose to believe that: the Laws of; Biogenesis, Mathematical Probability, Cause and Effect and Second Law of Thermodynamics, were all somehow magically suspended to enable their purported evolution of life from sterile matter to take place. They can believe whatever they like. But they have no right to present such unscientific, flights of fancy through the media and our education system, as though they are supported by science.

  

More about DNA and the origin of life.

 

The discovery of DNA should have sounded the death knell for evolution. It is only because atheists and evolutionists tend to manipulate and interpret evidence to suit their own preconceptions that makes them believe DNA is evidence FOR evolution.

 

It is clear that there is no natural mechanism which can produce constructional, biological information, such as that encoded in DNA.

Information Theory (and common sense) tells us that the unguided interaction of matter and energy cannot produce constructive information.

 

Do atheists/evolutionists even know where the very first, genetic information in the alleged Primordial Soup came from?

Of course they don't, but with the usual bravado, they bluff it out, and regardless, they rashly present the spontaneous generation of life as a scientific fact.

However, a fact, it certainly isn't .... and good science it certainly isn't.

 

Even though atheists/evolutionists have no idea whatsoever about how the first, genetic information originated, they still claim that the spontaneous generation of life (abiogenesis) is an established scientific fact, but this is completely disingenuous. Apart from the fact that abiogenesis violates the Law of Biogenesis, the Law of Cause and Effect and the Second Law of Thermodynamics, it also violates Information Theory.

 

Evolutionists/atheists have an enormous problem with explaining how the DNA code itself originated. However that is not even the major problem. The impression is given to the public by evolutionists that they only have to find an explanation for the origin of DNA by natural processes - and the problem of the origin of genetic information will have been solved.

That is a confusion in the minds of many people that evolutionists/atheists cynically exploit.

Explaining how DNA was formed by chemical processes, explains only how the information storage medium was formed, it tells us nothing about the origin of the information it carries.

 

To clarify this it helps to compare DNA to other information, storage mediums.

For example, if we compare DNA to the written word, we understand that the alphabet is a tangible medium for storing, recording and expressing information, it is not information in itself. The information is recorded in the sequence of letters, forming meaningful words.

You could say that the alphabet is the 'hardware' created from paper and ink, and the sequential arrangement of the letters is the software. The software is a mental construct, not a physical one.

The same applies to DNA. DNA is not information of itself; just like the alphabet, it is the medium for storing and expressing information. It is an amazingly efficient storage medium. However, it is the sequence or arrangement of the bases which is the actual information, not the DNA code.

So, if evolutionists are ever able to explain how DNA was formed by chemical processes, it would explain only how the information storage medium was formed. It will tell us nothing about the origin of the information it carries.

Thus, when atheists and evolutionists tell us it is only a matter of time before 'science' will be able to fill the 'gaps' in our knowledge and explain the origin of genetic information, they are not being honest. Explaining the origin of the 'hardware' by natural processes is an entirely different matter to explaining the origin of the software.

Next time you hear evolutionists/atheists skating over the problem of the origin of genetic information with their usual bluff and bluster, and parroting their usual nonsense about science being able to fill such gaps in knowledge in the future, don't be fooled. They cannot explain the origin of genetic information, and never will be able to. The software cannot be created by chemical processes or the interaction of energy and matter; it is not possible. If you don't believe that, then by all means put it to the test, by challenging any evolutionist to explain how genetic information (not DNA) can originate by natural means. I can guarantee they won't be able to do so.

 

Dr James Tour - 'The Origin of Life' - Abiogenesis decisively refuted.

youtu.be/B1E4QMn2mxk

 

FOUNDATIONS OF SCIENCE

The Law of Cause and Effect. Dominant Principle of Classical Physics. David L. Bergman and Glen C. Collins

www.thewarfareismental.net/b/wp-content/uploads/2011/02/b...

 

"The Big Bang's Failed Predictions and Failures to Predict: (Updated Aug 3, 2017.) As documented below, trust in the big bang's predictive ability has been misplaced when compared to the actual astronomical observations that were made, in large part, in hopes of affirming the theory."

kgov.com/big-bang-predictions

Professor Jeremy Sanders, FRS, Head of the 800 Committee, University of Cambridge.

 

He is also the Deputy Vice-Chancellor of the University.

 

AND, he is the Head of the School of Physical Sciences and a Fellow of Selwyn College.

 

AND, last month the Royal Society awarded him the Davy Medal for his pioneering contributions to several fields, most recently to the field of dynamic combinatorial chemistry at the forefront of supramolecular chemistry.

 

His research work: "We are interested in molecular recognition: Metal-ligand, pi-pi, donor-acceptor and hydrogen bonding interactions are used to create new supramolecular systems that may have useful recognition, catalytic or photophysical properties. Building blocks include peptides and metalloporphyrins, and products include macrocycles, nanotubes, rotaxanes and catenanes. For more detailed descriptions see our Group Web page ".

In: KAPPELMAYR, Barbara (Red.) (1995). Geïllustreerd handboek van de kunst. VG Bild-Kunst/De Hoeve, Alphen aan de Rijn. ISBN 90 6113 763 2

---

Pp. 874ff in: QUADRALECTIC ARCHITECTURE – A Panoramic Review by Marten Kuilman. Falcon Press (2011). ISBN 978-90-814420-0-8

 

quadralectics.wordpress.com/4-representation/4-2-function...

 

'Real' palaces were designed and constructed in Spain at about the same time as Palladio provided the Valmarana family with shelter in Italy. The Royal Palace of the Escorial is located some forty-five kilometers northwest of Madrid (Spain) at the rim of the Guadarrama Mountains. It appears as a great stone platform carved from the mountain, and its harmonizing with the landscape makes it a stone scape. It is reminiscent, according to George KUBLER (1982, p. 98), of certain Quattrocento paintings of ideal cities drawn with a single-point perspective in Renaissance Italy. He gives the panel painting 'A City Square', attributed to Luciano de Laurana, in the Walters Art Gallery in Baltimore, as an example.

 

The history of the Escorial has four distinct elements, which were planned by King Philip II (1527 – 1598) after he became King of Spain in 1556: 1. The initial purpose as a place to house the tombs of the dynasty, in particular his father Charles V, who was buried in Yuste; 2. The foundation of a monastery (with hospital buildings); 3. A basilica (with a dome); 4. A palace (with a library). These four intentions, which were brought forward more or less simultaneously, have aspects of higher division thinking, but the psychological setting of the King is hard to prove.

 

In the second half of the sixteenth century, Spain was at the height of its political power, covering the larger part of Europe when Philip II was King of Spain and Portugal, King of Naples, Duke of Milan, Ruler of the Spanish Netherlands, and King consort of England (as the husband of Mary I). It was, furthermore, a global player in the colonial expansions across the Atlantic.

 

King Philip II began his search for a foundation of a new monastery in 1558 – 1559. He called it San Lorenzo de la Victoria – referring to the victory in the battle of San Quintin (in northern France) on 10 August 1557, on the day of San Lorenzo. The King employed the help of the Jeronymite Order, but their suggestions and plan were about half the size of the cuadro (block), which was laid out in April 1562 in a location near El Escorial. The plan of the monastery, which was the first to be started, had a classical tetradic design.

 

George KUBLER (1982) mentioned three Jeronymite friars, who played a major role in the history of the construction of the Escorial. Juan de San Jeronimo was present from 1562 to 1591 as the chief accountant and most authoritative as a chronicler. Antonio de Villacastin was the Obrero mayor (chief workman) and Jose de Sigüenza wrote a history of the building by recording the progress of design and construction.

 

The official work started in 1563 with the intention of Philip II to bring the body of his father Charles V, the Emperor, who died in 1558, from Yuste to the new location. Philip had an interest in building matters, which only increased after his European tour at his father's command (1548 – 1551). The King visited England for the marriage to Queen Mary (1516 – 1558, also known as Bloody Mary, because she had three hundred religious dissenters burned at the stake) in July 1554. He was accompanied on that (politically inspired) trip by the architect and engineer Gaspar de Vega, who had to study foreign buildings and constructions, which could be useful in Spain. Vega returned overland and visited places like the Louvre, St. Germain-en-Laye and Fontainebleau.

 

The three main architects of the Escorial were Francisco de Villalpando, Juan Bautista de Toledo, and Juan de Herrera. The first named architect was originally a bronze worker, who translated Serlio. He was titled as a ‘geometer and architect’, which was the first official use of this term by a Spanish royal patron. His qualities as a humanist and theorist gained him (royal) recognition in the liberal art of architecture (KUBLER, 1982).

 

The second, Juan Bautista de Toledo, was appointed as an architect in 1559. He had been Michelangelo's assistant at St. Peter's from 1546 to 1548. His promotion turned into a personal tragedy when his wife, two daughters, and all his books and papers were lost when the ship that was to bring them from Naples to Spain sank. His appointment – after this event and as an outsider – was marred with conflicts and crises, but the King backed him until he died on 21 May 1567.

 

The third, Juan de Herrera, was an assistant of Toledo, appointed by the King in 1563 to check on the unpredictable authority of Toledo. He was appointed in 1576 as a royal architect – after years working in the background, with close ties to the King as Master of the Horse (1569 – 1577) and later (1579) as a court chamberlain.

 

The inactive year of Toledo’s death (1567) was followed two years later by an increase in activities. Flemish slaters expanded their trade after the work on the King's temporary dwelling La Fresneda was finished. The main staircase, which was the showpiece of the monastery, the roofing of the kitchen wing, and the paving made good progress. The cloister was finished in 1579 when the parapets were placed. The basilica started in 1574 and was finished in 1586.

 

The building of the fountain began in 1586, following the symbolism of the Garden of Eden, with four rivers watering Asia, Africa, Europe and America. The design had similarities with the Fons Vitae, also with four basins, at the Manga cloister of Santa Cruz in Coimbra (Portugal), built in 1533 – 1534.

 

The work on the actual royal dwelling (King’s House) in the northeast quadrant had begun in 1570 – 1572. It took nearly fifteen years until the court moved from their provisional quarters to the new accommodation in August 1585, but most of the palace and the college still had to be finished.

 

The library portico, which was part of Toledo’s ‘universal plan’, was only started when the construction of the palace, basilica, and college had ceased, and was finished in 1583. The hospital buildings (infirmary) were situated outside the main cuadro (of 1562) at the southwestern corner. Farm buildings, later known as La Compana, were also outside the monastery. The northern service buildings (casas de oficios) were mentioned in 1581. Fig. 727 shows the Escorial in a reconstruction of the situation in 1568.

 

The history of the Escorial entered a new phase after Philip died in September 1598. The complex was complete except for its initial purpose: the underground burial chamber intended for the tombs of the dynasty. The circular plan of the Panteón, initiated under Herrera’s direction, had four stairs and a light shaft. However, little work was done until 1617 – 1635, when G.B. Crescenzi altered the plan from circular to octagonal. After he died in 1635, the work was completed in 1654 by Fray Nicolas de Madrid (following Crescenzi’s plan). The crypt was described by Fray Francisco de los Santos as the Panteon. His book included all the rituals of transferring the royal bodies since 1586.

 

Several fires caused damage to the complex in later years. The first one happened in 1577 at the southwest tower. The most destructive fire took place on 7 June 1671, when the monastery roofs also burst into flames. Many manuscripts were destroyed. Some sixty years later, in 1731, fire broke out again at a chimney in the college. The Compana was destroyed in 1744, and the last great fires took place in 1763 and 1825.

 

A plague of termites threatened the building in 1953. This event sparked a restoration program instigated by the government. The crossing towers in the monastery and college were rebuilt in 1963. Their spires had been re-designed in a Baroque fashion by Bartolomé Zúmbigo in 1673, but were now changed back to the original layout given by Herrera in the last quarter of the sixteenth century. The result was an example of the use of two of the major elements of a quadralectic architecture: the octagonal roof fitted onto the square of the tower.

 

Characterization of the Escorial complex by art historians (like Nikolaus Pevsner) pointed to a classification as a ‘mannerist’ building. Mannerism is the term (from maniera) used for imitation and exaggeration of the work of the High Renaissance. Its severity and simplicity were associated in the first half of the twentieth century (mainly by German art historians) with puritanism and asceticism, like the character of Philip II himself. This perception was later challenged and even denied: ‘If psychic states and architectural forms were this closely related in the process of design, then architecture as a whole would long ago have been recognized as a dictionary of psychic attitudes’ (KUBLER, 1982; p. 126).

 

The plan of the Escorial near Madrid follows tetradic lines with a four-division in function (palace, college, monastery, and place of contemplation) organized around a church with a square ground plan.

 

Some observers pointed to Post-Reformation geomancy as initiating the design. Nigel PENNICK (1979) stated that ‘the Escorial at Madrid was built according to a Jesuit interpretation of the Vision of Ezekiel’. Others went further back and tried to find Renaissance ideas of magic underlying the design of the Escorial (TAYLOR, 1967). René Taylor wondered whether the courtier and ‘architect’ Herrera could not be ‘a Magus, a man deeply versed in Hermetism and occult lore, who by virtue of this was attached in a special way to the King?’

 

George Kubler (pp. 128 – 130) denied the view that the King and Herrera held occult views. He was able to show that the King did not sympathize with astrology and horoscopes. The court’s association with the mystic Ramon Lull (1232 – 1316) – the ‘Doctor illuminatus’ with his combinatorial method for categorizing all possible knowledge (see p. 780), but also with his intention to convert Muslims to Christianity – was purely academic, according to Kubler. It is regrettable that none of these authors makes any reference to a particular type of division thinking, which might elucidate labels like Mannerism, Puritanism, astrology, magic, etc.

---

Bibliography

 

KUBLER, George (1982). Building the Escorial. Princeton University Press, Princeton, New Jersey. ISBN 0-691-03975-5

 

PENNICK, Nigel (1979). The Ancient Science of Geomancy. Man in harmony with the earth. Thames and Hudson Ltd., London.

 

TAYLOR, René (1967). Architecture and Magic. Considerations on the Idea of the Escorial. Pp. 81 – 109 in: Essays in the History of Architecture Presented to Rudolf Wittkower. New York.

Abstract

This dissertation seeks to define the importance of John Dee’s interpretation of mediaeval and Renaissance esoterica regarding the contacting of daemons and its evolution into a body of astrological and terrestrial correspondences and intelligences that included a Biblical primordial language, or a lingua adamica. The intention and transmission of John Dee’s angel magic is linked to the philosophy outlined in his earlier works, most notably the Monas Hieroglyphica, and so this dissertation also provides a philosophical background to Dee’s angel magic. The aim of this dissertation is to establish Dee’s conversations with angels as a magic system that is a direct descendant of Solomonic and Ficinian magic with unique Kabbalistic elements. It is primarily by the Neoplatonic, Hermetic, Kabbalistic, and alchemical philosophy presented in the Monas Hieroglyphica that interest in Dee’s angel magic was transmitted through the Rosicrucian movement. Through Johann Valentin Andreae’s Chymische Hochzeit Christiani Rosencreutz anno 1459, the emphasis on a spiritual, inner alchemy became attached to Dee’s philosophy. Figures such as Elias Ashmole, Ebenezer Sibley, Francis Barret, and Frederick Hockley were crucial in the transmission of interest in Dee’s practical angel magic and Hermetic philosophy to the founders of the Hermetic Order of the Golden Dawn. The rituals of the Golden Dawn utilized Dee’s angel magic, in addition to creative Kabbalistic elements, to form a singular practice that has influenced Western esoterica of the modern age. This study utilizes a careful analysis of primary sources including the original manuscripts of the Sloane archives, the most recent scholarly editions of Dee’s works, authoritative editions of original documents linked to Rosicrucianism, and Israel Regardie’s texts on Golden Dawn practices.

 

Introduction

 

John Dee’s (1527-1609) conversations with angels have been the subject of scrutiny by various parties since their inception. Nobles were divided in their opinions of the supernatural. Dee and his notorious scryer, Edward Kelly, were praised, supported, threatened, or betrayed for their experiments in super-celestial magic; a kind of magic especially noted amongst detractors for its risk of contacting chthonic spirits. From the traditional Christian perspective, the summoning of angels has been suspect since the Middle Ages due to the biblical assertion that, whatever the entity’s own claims, a ‘demon’ may appear in the guise of an ‘angel’, especially those bearing non-traditional names (II Corinthians 11. 13-14). What made Dee capable of accepting this risk while expecting positive results?

Prior to his conversations with angels, Dee’s reputation was that of a learned man of the highest caliber. He had been offered the position of Court Mathematician by the kings and emperors of various countries after his lectures on Euclid at the University of Paris in 1550.2 His personal library’s vastness was well marked as the largest in all of England. 3 His comprehensive mastery of its contents and its ramshackle organization made his presence necessary in order to even navigate it.4 The quality of the library and its learned archivist were such that it was frequented by the leading lights of the day, including Queen Elizabeth herself.5 Why would such a man of such great erudition seemingly eschew reason, turn his back on his higher learning, instead attempting to receive the answers to his life’s scholarly inquiries from a crystal ball? In Dee’s final years and those following his death, the dangerous reputation of a magus dealing in super-celestial magic caught up with him. Despite Dee’s low reputation after his death, Johann Valentin Andreae (1586-1654) published his Rosicrucian work, Chymische Hochzeit Christiani Rosencreutz anno 1459 (or the Chemical Wedding; 1616),6 which featured Dee’s Monas Hieroglyphica on the invitation to an allegorical wedding that described the process of the inner alchemy of the human spirit (which will be further discussed later in this dissertation).7 Elias Ashmole (1617-1692) also made it his mission to collect Dee’s writings and corresponded with his son, Arthur Dee (1579-1651), with the intention of writing a biography on Arthur’s father, which was never completed. Méric Casaubon (1599-1671) used Dee’s journals to write the True & Faithful Relation (1659) that, at the time, seemed to seal Dee’s fate (despite Casaubon’s noting of and respect for his pious and fervent Christianity) as a deluded diabolist who had clearly overstepped the station of man in the spiritual hierarchy by attempting to directly contact and hold conversation with angels.

Frederick Hockley is thought to have been a member of the possibly spurious Society of Eight and possessed a great interest in Dee’s use of crystals to contact angels.10 Hockley and MacKenzie’s works and reputations were highly regarded by William Wynn Westcott who, alongside Samuel Liddel MacGregor Mathers and Robert Woodman, founded the Hermetic Order of the Golden Dawn in 1888.11 The Golden Dawn’s Second Order introduced its members to Dee’s Enochian tables and angel magic in the form of Book H12 and Enochian Chess.13

This dissertation shall attempt to treat the following questions: How did Dee’s philosophy and angel magic prove resilient enough to survive Casaubon’s damning persecution and persist into the modern era? What was the importance of Enochian angel magic to the Western esoteric traditions?

The first chapter, in two sections, will examine the sources of influence on John Dee’s angel magic. The first section will present the sources of Dee’s Hermetic philosophy that served as his rationale for his capability to perform theological magic; namely Marsilio Ficino, Giovanni Pico della Mirandola, and the Corpus Hermeticum and their reflections in Dee’s works. The second section will investigate the sources of practical magic that Dee used as inspiration for his own practice (directly or indirectly); namely Peter de Abano, Johannes Trithemius, Heinrich Agrippa Cornelius von Nettesheim, and the various pseudoepigraphic or authorless grimoires such as the Liber Juratus Honorii, Ars Paulina, Ars Almadel, Ars Notoria, and Arbatel de Magia Veterum, and others. The second chapter, in two sections, will examine the transmission of John Dee’s Hermetic philosophy after his death. The first section will present John Dee’s Hermetic and Apocalyptic philosophies as transmitted through the Rosicrucian writings of the Fama Fraternitatis, Confessio Fraternitatis, and the Chemical Wedding. The second section will investigate the transmission and revival of Dee’s practical magic through the fringe-Masonic societies; especially through Frederick Hockley. The third chapter will examine the transmission of Enochian angel magic within the Hermetic Order of the Golden Dawn and its direct descendent order, the Stella Matutina. The examination will include Book H, Enochian Chess, the connection of Enochian angel magic to spiritual alchemy, Robert Felkin’s usage of Dee’s angel magic within the Stella Matutina, and the reformation of the Stella Matutina into the Order of Smaragdum Thalasses; the Order of Smaragdum Thalasses being the last known Golden Dawn organization to have made use of Enochian angel magic.

Overall, this dissertation intends to illustrate the resilience and importance of John Dee’s philosophy and its transmission from his angelic conversations to the highly influential Hermetic Order of the Golden Dawn, and thus to the modern era.

 

Chapter 1: The Philosophy and Practice of John Dee’s Angel Magic

It might be so if madness were simply an evil; but there is also a madness which is a divine gift, and the source of the chiefest blessings granted to men. For prophecy is a madness, and the prophetess at Delphi and the priestesses at Dodona when out of their senses have conferred great benefits on Hellas, both in public and private life, but when in their senses few or none.1

In his outline of the history of magic and exaltation to the divine, Szönyi highlights the furies of Plato’s Phaedrus.2 In Phaedrus, Socrates praised the madness that comes as a gift from the Muses, which Szönyi equates to an occult knowledge only available to the ‘hypersensitive elect’. As mentioned before, Méric Casaubon praised John Dee’s Christian piety and goodness (though he also regarded Dee as deluded and a bit gullible) throughout the preface to his True & Faithful Relation.4 French neatly illustrated the fall of Dee’s reputation in the centuries after his death and showed how Casaubon’s perception of pious delusion was further degraded into ‘execrable insanity’ by Thomas Smith in his Vita Joannis Dee (1707).5 By the nineteenth century, the character of Dee had devolved from Casaubon’s misled, pious scholar to an immoral conjuror of spirits6 and a necromancer fit for sensationalist fiction.7 Calder aptly noted that the nineteenth century likely viewed all sixteenth-century science as ‘devil-ridden superstition’ and quoted a treatment of Dee by an anonymous writer in Blackwood’s Edinburgh Magazine (1842): ‘The majority of them were in all probability half mad and those who were whole mad of course set the fashion and were followed as the shining lights of the day.’ Regarding Dee in comparison to his assistant, Kelly, the article stated, ‘Dee was more respectable, because he was only half a rogue; the other half was made up of craziness.’9 Dee seemed to be possessed by this Platonic, divine madness and eschewed the orthodox Aristotelian assertion that science was to be the deduction of causal demonstrations on the basis of self-evident principles that could only be intuited and not demonstrated within a given discipline.11 The undercurrents of Neoplatonism that accepted magical practice within Arabic Aristotelianism provided a framework through which Neoplatonic philosophy, and thus Hermetic philosophy, could be combined to form a perspective that allowed the practice of magic to be considered a viable applied science. John Dee’s angelic conversations were not the casting off of his high learning, but the very application of it in a context of divine madness. The next section will examine the Hermetic background of Dee’s angel magic.

Ficino and Pico: The Hermetic Roots of Dee

This dissertation cannot effectively present Dee’s Hermetic philosophy without addressing Marsilio Ficino (1433-1499), the translator of the Corpus Hermeticum, and the author of De religione Christiana, De Triplici Vita, Libri Tres, Theologica Platonica, and Epistolae,13 and a densely annotated Omni Divini Platonis opera (1532), all of whose books sat on Dee’s shelves.14 In a time when the age of a work lent it greater authority,15 Ficino, and all other scholars of the Renaissance, believed Hermes Trismegistus to have been a very real figure and a pre-cursor to all Greek wisdom: Of the sources for his magic to which Ficino himself refers the most are the Asclepius and, of course, Plotinus. The Asclepius, like the Orphica, had great authority for Ficino because it was a work of Hermes Trismegistus, a priscus theologus even more ancient than Orpheus, indeed contemporary with Moses; Plotinus was merely a late interpreter of this antique Egyptian wisdom. Ficino applied the Hermetic writings as the basis of Neoplatonic philosophy. He believed the Plotinian lemma ‘De Favore Coelitus Hauriendo’ to be an expansion on the ability of man to create gods in the making of statues as described by Hermes in Asclepius 24 and 37.17 The similarities to Christianity present in Platonic and Neoplatonic texts assisted in their assimilation into Ficino’s theology18 and provided a fine vehicle for his Hermetic Christianity.19 While this section deals with the philosophy behind Dee’s angel magic, Ficino’s own theological magic is deeply rooted in his theological philosophy and must be examined.

Ficino’s Hermetic-Christian magic was transmitted through the Stoic and Aristotelian elements of the stellar influences on man,20 a philosophical framing of magic that Dee shared.21 Like the Greek sources it drew on, Ficino’s Christian super-celestial magic was ‘daemonic’ (not to be confused with the Christian invective ‘demonic’). As Ficino states: [...] every person has at birth one certain daemon, the guardian of his life, assigned by his own personal star which helps him to that very task to which the celestials summoned him when he was born. Therefore anyone having thoroughly scrutinized his own natural bent [...] by the aforesaid indicators will so discover his natural work as to discover at the same time his own star and daemon. Following the beginnings laid down by them, he will act successfully, he will live prosperously; if not, he will find fortune adverse and will sense that the heavens are his enemy.23

Furthermore: Now remember that you receive daemons or, if you will, angels, more and more worthy by degrees in accordance with the dignity of the professions, and still worthier ones in public government; but even if you proceed to these more excellent [levels], you can receive from your Genius and natural bent an art and a course of life neither contrary to, nor very unlike, themselves. Ficino’s cosmos are composed of a hierarchy of ‘good’ and ‘bad’ daemons assigned to the planets and the houses of the zodiac whom are responsible for communicating the will of the Anima Mundi to the inferior spheres. Ficino believed that through astrological interaction with nature, ‘celestial goods’ can descend to the pious magus’ ‘rightly prepared spirit’ to receive fuller gifts from beneficial daemons.26 Interestingly, Ficino outlines a talismanic imagery in order to connect with his astral daemons that is clearly influenced by the Picatrix.27 We shall use the planet Mercury as our example: For example, if anyone looks for a special benefit from Mercury, he ought to locate him in Virgo, or at least locate the Moon there in an aspect with Mercury, and then make an image out of tin or silver; he should put on it the whole sign of Virgo and its character and the character of Mercury. [...] The form of Mercury: a man sitting on a throne in a crested cap, with eagle's feet, holding a cock or fire with his left hand, winged, sometimes on a peacock, holding a reed with his right hand, in a multicolored garment. The Picatrix states the following of the stones proper to each planet and the formation of figures:

Of the metals, Mercury has quicksilver and part of tin and glass, and of stones it has emerald and all stones of this type has part of azumbedich. [...] The image of Mercury according to Hermes is the image of a man with a rooster on his head, sitting in a throne; his feet look like those of an eagle and in the palm of his left hand he has fire and under his feet are the signs stated before. This is its form. Dee’s magical practice likewise exhibited angels that corresponded to the planets through the metals associated with them30 and the respective days of the week.31 However, Dee owes much of the structure of his seals and talismans to Giovanni Pico, discussed later in this section.

Supplied with the basis of ancient, newly unearthed lore anterior to the Neoplatonists and Arabic astrological magic, Ficino’s theology was drawn from this long-forgotten, secret wisdom worthy of the title prisca theologia (Ficino’s idea of a primordial faith from which all faiths stem).32 33 The next section of this chapter will address in detail just how influential the quest for a singular, united faith was to Dee. In 1614, a mere six years after Dee’s death, a long debate on the authenticity of the Corpus Hermeticum’s antiquity came to an end. Isaac Casaubon (1559-1614), Méric Casaubon’s father, correctly identified the Corpus Hermeticum as having been written in the second and third centuries C.E.34 Still, the Hermetic (and intrinsically Platonic and Neoplatonic)35 influences on the culture and science of the Renaissance and the Enlightenment —while controversial36— are arguably visible.

The importance of the blend of Neoplatonic and Aristotelian philosophy that amalgamated the Great Chain of Being as represented by Ficino (further supported by Johannes Trithemius and Heinrich Cornelius Agrippa, discussed later) cannot be overlooked. The Great Chain of Being as a concept predates Greek thought and was vitally important in the forging of cosmologies. As Lovejoy and Szönyi both pointed out, Proclus used Cicero to succinctly summarize the idea and metaphor of the Great Chain of Being connecting all things to God: Since, from the Supreme God Mind arises, and from Mind, Soul, and since this in turn creates all subsequent things and fills them all with life, and since this single radiance illumines all and is reflected in each, as a single face might be reflected in many mirrors placed in a series; and since all things follow in continuous succession, degenerating in sequence to the very bottom of the series, the attentive observer will discover a connection of parts, from the Supreme God down to the last dregs of things, mutually linked together without a break. And this is Homer’s golden chain, which God, he says, bade hang down from heaven to earth.

The Hermetica alone supplies no means through which to interact with the entities above Man in this Great Chain, and so Ficino developed his methods from Arabic and mediaeval medicine, matter theory, physics, and metaphysics, all based upon his studies in Neoplatonism.43 Copenhaver gives special attention to Proclus in the formation of Ficino’s magic, an idea further acknowledged and corroborated by Clulee and Szönyi. The most significant connection of Neoplatonism to the Hermetica is Proclus’ statement ‘Thus all things are full of gods [...]. The authorities on the priestly art have thus discovered how to gain the favor of powers above, mixing some things together and setting others apart in due order.’ Ficino thought this to be Hermes Trismegistus’ understanding of the cosmos as relayed by Proclus, as exemplified in Asclepius in Hermes’ discourse on the ensouled gods created by man in the forms of statues. Thus, man can form a way to interact with intermediary entities by creating the images of gods. Proclus suggested the practice of a ceremonial magic in mentioning that through consecrations and divine services practitioners could achieve ‘association with the [daemons], from whom they returned forthwith to actual works of the gods’. Ficino derived the natural ingredients of his magic from Proclus’ De Sacrificio,50 which he included in his De Vita:

Under the Solar star, that is Sirius, they set the Sun first of all, and then Phoebean daemons, which sometimes have encountered people under the form of lions or cocks, as Proclus testifies, then similar men and Solar beasts, Phoebean plants then, similarly metals and gems and vapor and hot air. By a similar system they think a chain of beings descends by levels from any star of the firmament through any planet under its dominion. If, therefore, as I said, you combine at the right time all the Solar things through any level of that order, i.e., men of Solar nature or something belonging to such a man, likewise animals, plants, metals, gems, and whatever pertains to these, you will drink in unconditionally the power of the Sun and to some extent the natural power of the Solar daemons.51

Ficino clearly felt the weight of what he perceived as a monumental discovery of a tradition of theology and philosophy that had remained unbroken from Hermes to Plato.52 The assertions of a world full of gods by Hermes, the Stoics,53 Plato, and the Neoplatonists clearly impressed themselves on Ficino, but, with the further connection of Arabic medicine and Hermes’ fortunate student being none other than Asclepius (the Greek god of medicine of healing), it seems a matter of course that so pious and learned a theologian would craft a magical system when it was so neatly assembled before him. One question remained: how does one make this daemonic, astrological magic compliant with Christianity? Dee faced a similar question in his conversations with angels, though Ficino chose a much different solution.

Where Ficino drew on nature to connect with the planetary daemons, Dee drew on the planetary daemons to connect with nature.54 All of Dee’s sigils, talismans, and orations came from the angels themselves in compliance, rather than reliance, with esoteric literature available to him.55 It seemed Dee believed he had found a path that reconciled celestial magic with Christianity more aptly than Ficino’s daemonic astrology; a path less ‘daemonic’ and more ‘angelic’.

Ficino relied on the ancient Christian authority of Lactantius (c. 240-320). Lactantius, a Christian apologist, utilized Hermes Trismegistus’ Asclepius in reconciliation with Christianity as the ‘original faith of mankind’ in his work Divinae Institutiones (304-313).56 While this text is not a directly supportive work of Hermeticism,57 it shows a precedent for Hermetic philosophy to be used as a method of reconciling differing patterns of belief. Ficino found this argument a viable counterbalance to St. Augustine of Hippo’s (354-430) objection to Asclepius in Book VIII of De civitate Dei (415-417).58 Ficino also found Lactantius’ argument in support of his idea of the prisca theologia.59 These arguments linking Christianity to Hermeticism are certainly felt in Dee’s reworking of grimoire magic into a profoundly Christian, prayer-based practice at its inception.60

Plato’s key role in Ficino’s cosmology also necessitated a Christian sanitization. Here again, we find Plato’s four furies, the ‘divine madnesses’, but combined with the theology of the Christian Pseudo-Dionysius the Areopagite, wherein each madness (prophetic, religious-mystical, poetical, and love) brings the aspirant closer to unity with God.61

In the Propaedeumata Aphoristica (1558), Dee seems to have agreed with Ficino on the stars indeed having powers that mankind can benefit from, but through the use of mirrors rather than the agency of daemons.62 Clulee compares the Propaedeumata to Dee’s Monas Hieroglyphica (1564) stating that where the Propaedeumata presents man’s interaction with the cosmos as a mechanically physical fact, the Monas sought to illustrate the power of symbols over that which the symbols represent.63

Thus, Dee more clearly illustrates his acceptance of Ficino’s Neoplatonic-Hermetic theological philosophy within the Monas.64 In the Neoplatonic paradigm, Calder underlines Proclus (and ancient mathematicians such as Theon and Nicomachus) as a figure of important influence on Dee’s philosophy in the Monas Hieroglyphica in terms of the notion of One, or Unity.65 Proclus posed a problem wherein the One, or God, can only be approached by analogy or negation and supplies the analogy that ‘[t]he One is like the sun’s light which illuminates the world and radiates far and wide while it remains undiminished at its source’.66 Dee seems supremely confident of his attempt to communicate the One in a single symbol rife with countless analogies:

Though I call it hieroglyphic, he who has examined its inner structure will grant that all the same there is [in it] an underlying clarity and strength almost mathematical, such as is rarely applied in [writings on] matters so rare. Or is it not rare, I ask, that the common astronomical symbols of the planets (instead of being dead, dumb, or, up to the present hour at least, quasi-barbaric signs) should have become characters imbued with immortal life and should now be able to express their especial meanings most eloquently in any tongue and to any nation?67

The recent scholarly opinion regarding the Hermetic element of Dee’s philosophy as illustrated in the Monas is unified and agreed upon by Walton, Clulee, Szönyi, and Harkness68 in the following:

Since the Creator made the whole cosmos, not with hands but by the Word, understand that he is present and always is, creating all things, being one alone, and by his will producing all beings.69

Ficino’s reconciliation of his philosophy, magic, and Christianity was highly formative to Dee’s justifications for his questionably heretical angelic conversations. However, Dee also incorporated Kabbalistic elements Ficino eschewed. Ficino’s friend, Giovanni Pico della Mirandola, artfully reconciled Kabbalah with Platonic and Hermetic philosophy, as well as Christianity.70 The connection of the divinity of the cosmos and man’s ability to connect with it through images is granted new depths when combined with the power of names presented in practical Kabbalah, as written by Johannes Reuchlin (1455-1522), and further linked with Hermeticism and Christianity through Pico. Pico’s contribution to the Hermetic-Kabbalistic philosophy most certainly piqued Dee’s interest, as exemplified in his Hermetic-Christian definition of the ‘real Cabbala’ in his Monas Hieroglyphica.

 

It is fascinating and highly relevant to this essay that Pico proclaimed Ramon Llull’s works, or the Ars Raymundi, to be Kabbalistic.72 Ramón Llull (1232/3-1316) channeled the idea of the Great Chain of Being in his assertion of the capacity of man to ascend the scala naturae, or the ladder of nature, through intellectual contemplation.73 Llull used the combination of a series of nine letters (B, C, D, E, F, G, H, I, and K) representing ‘absolute attributes’, to which nine relations, nine questions, nine subjects, nine virtues, and nine vices were added.74 75 The resulting number of binary combinations was calculated to be 17,804,320,388,674,561, which Llull explored with the use of geometrical figures meant to enumerate the terms and generate combinatorial pairings of the aspects of reality.76 The acceptance of pseudo-Llullian alchemical and Kabbalistic works as authentic in conjunction with his mystic, mathematical diagrams only served to make the Ars Raymundi all the more appealing to Dee.77 Pico argues that Llull’s usage of combining letters of the Hebrew alphabet was not unlike Kabbalistic techniques78 and relied on Llull’s Ars Combinatoria for his own system.79
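As a purely illustrative aside (not from the dissertation), the lowest layer of Llull's combinatorial art, the unordered pairings of his nine letters, is easy to enumerate in a few lines of Python; the larger combination counts quoted above are left untouched here:

from itertools import combinations

# Llull's nine letters standing for the 'absolute attributes'
letters = ["B", "C", "D", "E", "F", "G", "H", "I", "K"]

pairs = list(combinations(letters, 2))   # the binary pairings shown on Llull's figures
print(len(pairs))                        # 36 unordered pairs
print(pairs[:3])                         # [('B', 'C'), ('B', 'D'), ('B', 'E')]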

Regarding Pico’s own system, in his Nine Hundred Theses (1486), he succinctly states his thoughts on Kabbalah and Platonism: That which among the Cabalists is called <[...] Metatron> is without doubt that which is called Pallas by Orpheus, the paternal mind by Zoroaster, the son of God by Mercury, wisdom by Pythagoras, the intelligible sphere by Parmenides.80

He then addresses Kabbalah and Christianity:

11>7. No Hebrew Cabalist can deny that the name Jesus, if we interpret it following the method and principles of the Cabala, signifies precisely all this and nothing else, that is: God the Son of God and the Wisdom of the Father, united to human nature in the unity of assumption through the third Person of God, who is the most ardent fire of love.81

Pico’s clear devotion to Hermetic philosophy was illustrated in the dedication of ten theses to ‘Mercury Trismegistus’ that explicated man’s connection to a living nature, and thus to a God who is present in that life.82 Pico clearly believed in not merely the syncretism of faiths, but the reconciliation of seemingly disparate religious, philosophical, and cultural paradigms.

Johannes Reuchlin boldly deepened the connections between Kabbalah and Christianity in a time when Judaism was defined as a form of Satanism, perhaps even if unwitting.83 Pico’s Theses inspired Reuchlin to write De Verbo Mirifico (1494) in defense of Pico, and the central work on Christian Kabbalah, De Arte Cabalistica (1517).84 In De Verbo Mirifico, Reuchlin presented what he believed to be the reality and name of the Christian God made known through the Son in the pentagrammaton, the five lettered name he believed to signify Jesus Christ.85 De Verbo Mirifico was listed in Dee’s catalogue and it is quite likely Dee was familiar with its material based on the tone of his magical practices86 and some of the aphorisms in the Propaedeumata Aphoristica.87 Through Pico and Reuchlin, the idea that the presence of God existed in images was expanded to include names of power.88 This presentation of the Kabbalah in a Christian, magical context was a crucial element to Dee’s practice.89

The encoding of the Sigillum Dei Aemeth,90 the Kings and Princes of the Heptarchia Mystica, and the divine names of the nations of the world and the angels overseeing them in the Liber Scientiae Auxilii all go to great lengths to identify the names of the angels.91 Dee presumably considered the use of these names crucial to contacting the angels in order to achieve divine understanding related to their offices, though there are no existing records of Dee ever using the names and orations described in the aforementioned books in such a way.

The significant link between Pico and Dee was the transmission of the combined Hermetic, Kabbalistic, and Platonic ideas through Agrippa’s De Occulta Philosophia Libri Tres (1533), especially in regards to the threefold world (elementary, celestial, and intellectual/supercelestial)92 93 that Dee presents in his Mathematicall Praeface to the Elements of Geometrie of Euclid of Megara (1570). Dee utilized this threefold world as the basis of his supercelestial magic dealing with ‘intelligences’ or angels.94 His treatment of the threefold world in the Mathematical Preface follows:

All thinges which are, & haue beyng, are found vnder a triple diuersitie generall. For, either, they are demed Supernaturall, Naturall, or, of a third being [...] which, by a peculier name also, are called Thynges Mathematicall.95

The linkage between the emanations of God in Neoplatonism influencing Kabbalistic works has been conjectured, but regardless of such a connection,96 the theological philosophies seemed to have been more separated by the cultures that espoused them than by the actual contents of their literature.97 The inclusion of Kabbalah into the Neoplatonic and Hermetic philosophy under the auspices of a deeper Christianity influenced Dee’s thought, and eventually his magical practice. This will be evidenced and examined in greater depth in the following section treating his angelic conversations.

 

www.academia.edu/921740/Enochian_Angel_Magic_From_John_De...

I just did a fireside chat with Stanford President John Hennessy and Eric Schmidt (who is now on his way to LA to announce the Android music deal).

 

Brilliant minds with a similar longing for a respect for data and truth.

 

Here are some of the questions I prepared that we did not get to as even bigger issues loomed in their minds:

 

On the topic of meaningful innovation — where does it come from, how can we foster it, what can we learn over time about the process of innovation vs. the product of innovation (e.g., tuning the parameters of communication and team size vs. target setting and visionary leadership).

 

The topics could naturally turn to globalization and competitiveness - the fractal fates of people, companies and nations. Do they embrace the primary vectors of change and growth or retreat to atavistic comforts? For how long can someone opt out of progress and still catch up? In an era of exponential change, the sea change of history has become the drumbeat of decades... with an ever-quickening cadence.

 

I am personally very interested in the dynamics of accelerating technological change and the societal implications on the education imperative (and adult reeducation imperative, as careers no longer last a lifetime) and the rich-poor gap in modern economies like the U.S. (network effects -> power law in income distribution).

 

I am also interested in disruptive entrepreneurship, the change agents of society. To the extent that all good ideas are combinations of prior ideas (Stuart Kauffman, Matt Ridley, Kevin Kelly), the combinatorial explosion of possibility space may explain accelerating change, and the disruptive power of interdisciplinary idea-pairings could be compared to the differential immunity of epidemiology (islands of cognitive isolation — a.k.a. academic disciplines — are vulnerable to disruptive memes much like South America was to smallpox from Cortés and the Conquistadors). If disruption is what you seek, cognitive island-hopping is a good place to start, mining the interstices between academic disciplines.

 

When we consider the combinatorial explosion of possibly interacting ideas as the fountainhead of innovation, it not only creates the economy and explains accelerating change, it also subsumes biological evolution (raising the primary vector of progress to a higher level of abstraction) and nurtures a rational optimism for the future.

 

And some quotes from my talk this morning:

 

“All technologies are combinations of technologies that already exist.” — Brian Arthur

• Combinatorial Explosion (explains accelerating change in technology)

• Creates Economy

 

“Science quickly became the greatest tool for making new things the world has ever seen. Science was in fact a superior method for a culture to learn.” — Kevin Kelly

 

“The average standard of living in London went up 50% from the time of Pericles to 1820. It went up another 50% in one lifespan from 1820 to 1865, and we saw the power of the Industrial Revolution. And now, the standard of living goes up 50% every five years in China.” — Larry Summers

 

“Throughout history, the engine of human progress has been the meeting and mating of ideas to make new ideas. It is our habit of trade, idea-sharing and specialization that has created the collective enterprise which set human living standards on a rising trend. The human race will prosper mightily in the years ahead, because ideas are having sex with each other as never before.” — Matt Ridley

 

• Urbanization (cities are more innovative per capita)

• Interdisciplinary Disruption (differential immunity is a benefit for disruptors)

• Globalization (global idea sex facilitated by the Internet. Unveiling pockets of isolation)

            

“Computing is undergoing the most remarkable transformation since the invention of the PC. The innovation of the next decade is going to outstrip the innovation of the past three combined.”

– Intel CEO Paul Otellini, Sept. ‘11

 

Donald Judd. 1928-1994. Yellow Wallpiece. 1987. Grenoble. Musée des Beaux Arts.

 

Bref résumé de la Notice : L'art minimaliste a été ainsi défini par Donald Judd en 1965 "Il n'y a plus ni peinture ni sculpture mais des objets situés dans l'espace réel du spectateur".

Les quatre boites superposées par deux sont inscrites dans un carré au centre duquel l'espace laissé libre prend la forme d'une croix. L'intérieur de chaque caisson est occupé par des panneaux en contre-plaqué dont le nombre et la position offrent différents points de vue. Yellow Wallpiece constitue un tout dans l'espace dont chaque élément préserve son autonomie.

 

L'Art de la Notice explicative a remplacé l'art de peindre !

 

Le commentaire du guide du musée est encore plus brillant.

"Judd invite à percevoir les relations entre les différents éléments (notions de sérialité, de variations, de combinatoire)...."

Le livre laisse aussi parler l'auteur : "mon travail est à tort considéré comme objectif et impersonnel". Bien sûr c'est un artiste incompris ! "Je m'intéresse d'abord et surtout à la relation que je peux avoir avec le monde de la nature, tout entier et sous toutes ses formes. Cet intérêt - un intérêt profond- inclut aussi bien ma propre existence que l'existence de chaque chose et aussi l'espace et le temps qui sont créés par les choses qui existent. L'art imite cette création....."

 

Le livre du musée nous précise que Donald Judd s'est fait connaître d'abord comme critique d'art avant de l'être comme peintre. C'est un parcours exemplaire dans l'Art Contemporain. Il est en effet absolument nécessaire, indispensable, de savoir discourir sur l'oeuvre, bien avant de savoir en créer une !

 

Brief summary of the notice: Minimalist art was thus defined by Donald Judd in 1965: "There is no longer painting or sculpture, but objects situated in the real space of the viewer."

The four boxes, stacked in pairs, are inscribed in a square at the center of which the free space takes the form of a cross. The interior of each box is occupied by plywood panels whose number and position offer different points of view. Yellow Wallpiece constitutes a whole in space in which each element preserves its autonomy.

 

The art of the explanatory notice has replaced the art of painting!

  

The commentary in the museum's guidebook is even more brilliant.

"Judd invites us to perceive the relationships between the different elements (notions of seriality, of variation, of combinatorics)...."

The book also lets the artist speak: "My work is wrongly considered objective and impersonal." Of course, he is a misunderstood artist! "I am interested first and above all in the relationship I can have with the world of nature, as a whole and in all its forms. That interest - a deep interest - includes my own existence as well as the existence of every thing, and also the space and time that are created by the things that exist. Art imitates this creation....."

The museum's book tells us that Donald Judd first became known as an art critic before becoming known as a painter. It is an exemplary career path in Contemporary Art. It is indeed absolutely necessary, indispensable, to know how to discourse on a work well before knowing how to create one!

   

I don't love make-up. I prefer a woman without make-up. I can only like make-up if I cannot perceive it.

 

In the same vein, I rarely allow myself to go beyond certain limits when processing a photo; I could define that limit as "one should not perceive the trick". This is an exception to that rule, of course*. I wanted to explore this beautiful Medici lion further - how artificial lighting played with its surfaces - so I decided to go well beyond my usual limits and post this experiment in Rodilius.

  

* One could wonder "What about your texturization work?". It is not a bad question, and a tantalizing starting point for a reflection on the matter.

Texturization is quite another thing, to me. The heart of the matter is: Rodilius is an effect, i.e. a set of mathematical functions with a user interface allowing the user to change several parameters of the functions themselves. The result of the processing depends partly upon the processed image, and mostly upon how the parameters are set. The result of this kind of processing is essentially deterministic - in principle, one could produce a (vast) combinatorial catalogue of the results of Rodilius processing upon an image by varying the parameters one by one. And this reasoning is valid for all the effects one can find in raster graphics editors like Photoshop, The Gimp, Corel PhotoPaint, etc.
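To make the "effect as a deterministic, parameterized function" idea concrete, here is a minimal sketch in Python; the filter and its parameters are invented for illustration and have nothing to do with the actual Rodilius algorithm:

import numpy as np

def toy_effect(image, strength=0.5, radius=3):
    """A made-up 'effect': the same image and parameters always give the same result,
    so varying the parameters step by step would enumerate a catalogue of outcomes."""
    out = image.astype(float)
    for _ in range(radius):   # crude neighbour-averaging pass
        out = (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
               np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4.0
    return (1 - strength) * image + strength * out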

 

Texturization is a kind of processing more akin to an art of its own. How can the mood or general atmosphere of a photo change owing to a texturization processing? Who can say! It is left entirely to the creativity of the artist. The result of texturization depends mostly upon the chosen texture/textures, the modes of blending, the choice of masking parts of a texture to cancel its effects on part of the image... This is clearly a creative process in the most traditional meaning.

There are even those who texturize other people's photos and think of the texturized image as a work of theirs - which is not so strange, if one considers the ancient and perfectly legitimate practice of parody in music. A composer could take a theme from one of her previous works or from another composer and rework it in a completely different way, typically by a different kind of contrapuntal elaboration. Bach and Palestrina, to give a couple of very big names, usually practised parody; but here I would give as an example a case from my experience as a singer. In Vivaldi's Gloria in D major RV 589 the last section, Cum Sancto Spiritu, which I have had the lucky chance to sing, is a parody of the corresponding section of Giovanni Maria Ruggieri's 1708 Gloria. Vivaldi's elaboration of Ruggieri's theme results in a beautiful, powerful fugue a 4 which can make you weep with joy when you are immersed in its texture as a singer. There is simply no way to claim that Vivaldi "copied" or "stole" or "plagiarized" Ruggieri, since Vivaldi's elaboration of the theme is an entirely original creation.

I think that the French secular song L'homme armé holds the record for the most parodied melody: over 40 compositions, mostly Masses, have been derived from its tune in the Renaissance period; and every one of these compositions is an individual work with a life of its own.

So I regard texturization as an art akin to parody in music; and I think that one should not define it simply as a way of "processing" an image, but rather as a "re-creation" of the image itself.

 

 

You can find details about the sculpture and the HDR processing of the original image in Beware of the kitten

Rings of function

Rich structure theory

Combinatorial decompositions

 

The Silicon Graphics head in my office was my muse. I just finished reading a fascinating summary by Lin & Tegmark of the tie between the power of neural networks / deep learning and the peculiar physics of our universe. The mystery of why they work so well may be resolved by seeing the resonant homology across the information-accumulating substrate of our universe, from the base simplicity of our physics to the constrained nature of the evolved and grown artifacts all around us. The data in our natural world is the product of a hierarchy of iterative algorithms, and the computational simplification embedded within a deep learning network is also a hierarchy of iteration. Since neural networks are symbolic abstractions of how the human cortex works, perhaps it should not be a surprise that the brain has evolved structures that are computationally tuned to tease apart the complexity of our world.

 

Does anyone know about other explorations into these topics?

 

Here is a collection of interesting plain text points I extracted from the math in Lin & Tegmark’s article:

 

"The exceptional simplicity of physics-based functions hinges on properties such as symmetry, locality, compositionality and polynomial log-probability, and we explore how these properties translate into exceptionally simple neural networks approximating both natural phenomena such as images and abstract representations thereof such as drawings. We further argue that when the statistical process generating the data is of a certain hierarchical form prevalent in physics and machine-learning, a deep neural network can be more efficient than a shallow one. Various “no-flattening theorems” show when these efficient deep networks cannot be accurately approximated by shallow ones without efficiency loss."

 

This last point reminds me of something I wrote in 2006: "Stephen Wolfram’s theory of computational equivalence suggests that simple, formulaic shortcuts for understanding evolution (and neural networks) may never be discovered. We can only run the iterative algorithm forward to see the results, and the various computational steps cannot be skipped. Thus, if we evolve a complex system, it is a black box defined by its interfaces. We cannot easily apply our design intuition to the improvement of its inner workings. We can’t even partition its subsystems without a serious effort at reverse-engineering." — 2006 MIT Tech Review

 

Back to quotes from the paper:

Neural networks perform a combinatorial swindle, replacing exponentiation by multiplication: if there are say n = 10^6 inputs taking v = 256 values each, this swindle cuts the number of parameters from v^n to v×n times some constant factor. We will show that the success of this swindle depends fundamentally on physics: although neural networks only work well for an exponentially tiny fraction of all possible inputs, the laws of physics are such that the data sets we care about for machine learning (natural images, sounds, drawings, text, etc.) are also drawn from an exponentially tiny fraction of all imaginable data sets. Moreover, we will see that these two tiny subsets are remarkably similar, enabling deep learning to work well in practice.
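A quick back-of-the-envelope check of the quoted numbers (plain Python, nothing from the paper itself):

import math

n, v = 10**6, 256
print(n * math.log10(v))   # log10(v**n): v**n has roughly 2.4 million digits, hopelessly large
print(v * n)               # v*n = 256,000,000 parameters: merely large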

 

Increasing the depth of a neural network can provide polynomial or exponential efficiency gains even though it adds nothing in terms of expressivity.

 

Both physics and machine learning tend to favor Hamiltonians that are polynomials — indeed, often ones that are sparse, symmetric and low-order.

 

1. Low polynomial order

For reasons that are still not fully understood, our universe can be accurately described by polynomial Hamiltonians of low order d. At a fundamental level, the Hamiltonian of the standard model of particle physics has d = 4. There are many approximations of this quartic Hamiltonian that are accurate in specific regimes, for example the Maxwell equations governing electromagnetism, the Navier-Stokes equations governing fluid dynamics, the Alfvén equations governing magnetohydrodynamics and various Ising models governing magnetization — all of these approximations have Hamiltonians that are polynomials in the field variables, of degree d ranging from 2 to 4.

 

2. Locality

One of the deepest principles of physics is locality: that things directly affect only what is in their immediate vicinity. When physical systems are simulated on a computer by discretizing space onto a rectangular lattice, locality manifests itself by allowing only nearest-neighbor interaction.
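As a toy illustration of what locality buys (a sketch of mine, a 1-D nearest-neighbour Ising-style energy rather than anything taken from the paper), the number of interaction terms grows only linearly with the lattice size because each site talks only to its neighbours:

import numpy as np

def local_energy(spins, J=1.0, h=0.0):
    # spins: array of +/-1; only nearest-neighbour pairs interact
    return -J * np.sum(spins[:-1] * spins[1:]) - h * np.sum(spins)

print(local_energy(np.array([1, 1, -1, 1])))   # -(1 - 1 - 1) = 1.0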

 

3. Symmetry

Whenever the Hamiltonian obeys some symmetry (is invariant under some transformation), the number of independent parameters required to describe it is further reduced. For instance, many probability distributions in both physics and machine learning are invariant under translation and rotation.

 

Why Deep?

What properties of real-world probability distributions cause efficiency to further improve when networks are made deeper? This question has been extensively studied from a mathematical point of view, but mathematics alone cannot fully answer it, because part of the answer involves physics. We will argue that the answer involves the hierarchical/compositional structure of generative processes together with the inability to efficiently “flatten” neural networks reflecting this structure.

 

A. Hierarchical processes

One of the most striking features of the physical world is its hierarchical structure. Spatially, it is an object hierarchy: elementary particles form atoms which in turn form molecules, cells, organisms, planets, solar systems, galaxies, etc. Causally, complex structures are frequently created through a distinct sequence of simpler steps.

 

We can write the combined effect of the entire generative process as a matrix product.

 

If a given data set is generated by a (classical) statistical physics process, it must be described by an equation in the form of [a matrix product], since dynamics in classical physics is fundamentally Markovian: classical equations of motion are always first order differential equations in the Hamiltonian formalism. This technically covers essentially all data of interest in the machine learning community, although the fundamental Markovian nature of the generative process of the data may be an inefficient description.
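A minimal sketch of the "generative process as a matrix product" point (toy 2-state transition matrices invented here, not taken from the paper):

import numpy as np

M1 = np.array([[0.9, 0.2],    # step 1 of the hierarchy (columns are probability vectors)
               [0.1, 0.8]])
M2 = np.array([[0.7, 0.5],    # step 2
               [0.3, 0.5]])

p0 = np.array([1.0, 0.0])     # initial distribution
print(np.allclose(M2 @ (M1 @ p0), (M2 @ M1) @ p0))   # True: the whole process is one matrix product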

 

Summary

The success of shallow neural networks hinges on symmetry, locality, and polynomial log-probability in data from or inspired by the natural world, which favors sparse low-order polynomial Hamiltonians that can be efficiently approximated. Whereas previous universality theorems guarantee that there exists a neural network that approximates any smooth function to within an error ε, they cannot guarantee that the size of the neural network does not grow to infinity with shrinking ε or that the activation function σ does not become pathological. We show constructively that given a multivariate polynomial and any generic non-linearity, a neural network with a fixed size and a generic smooth activation function can indeed approximate the polynomial highly efficiently.
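In the spirit of that constructive claim (a sketch of my own, not code from the paper), the product x*y, the basic building block of any polynomial, can be approximated by just four neurons using a generic smooth activation whose second derivative at 0 is non-zero; softplus is assumed here, and the error vanishes as the scale lambda goes to 0:

import numpy as np

def softplus(u):
    return np.log1p(np.exp(u))

def approx_product(x, y, lam=1e-2):
    s = lambda u: softplus(lam * u)
    curvature = 0.25   # softplus''(0) = 1/4
    return (s(x + y) + s(-x - y) - s(x - y) - s(-x + y)) / (4 * curvature * lam**2)

print(approx_product(0.3, -0.7), 0.3 * -0.7)   # approximately -0.21 in both cases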

 

The success of deep learning depends on the ubiquity of hierarchical and compositional generative processes in physics and other machine-learning applications.

 

And thanks to Tech Review for the pointer to this article:

 

Quantum annealing

Quantum physics-based metaheuristic for optimization problems


Quantum annealing (QA) is an optimization process for finding the global minimum of a given objective function over a given set of candidate solutions (candidate states), by a process using quantum fluctuations.[1] Quantum annealing is used mainly for problems where the search space is discrete (combinatorial optimization problems) with many local minima, such as finding the ground state of a spin glass or solving QUBO problems, which can encode a wide range of problems like Max-Cut, graph coloring, SAT or the traveling salesman problem.[2] The term "quantum annealing" was first proposed in 1988 by B. Apolloni, N. Cesa Bianchi and D. De Falco as a quantum-inspired classical algorithm.[3][4] It was formulated in its present form by T. Kadowaki and H. Nishimori in 1998,[5] though an imaginary-time variant without quantum coherence had been discussed by A. B. Finnila, M. A. Gomez, C. Sebenik and J. D. Doll in 1994.[6]
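As an illustration of the kind of objective mentioned above (a toy example of mine, not from the article), Max-Cut on a 3-node triangle can be written as a QUBO matrix and solved by brute force:

import itertools
import numpy as np

edges = [(0, 1), (1, 2), (0, 2)]                 # a triangle graph
n = 3
Q = np.zeros((n, n))
for i, j in edges:                               # encode "minimize -cut(x)" as a QUBO
    Q[i, i] -= 1
    Q[j, j] -= 1
    Q[i, j] += 2

def qubo_energy(x, Q):
    return x @ Q @ x                             # binary x, upper-triangular Q

best = min(itertools.product([0, 1], repeat=n),
           key=lambda x: qubo_energy(np.array(x), Q))
print(best, -qubo_energy(np.array(best), Q))     # any 2-vs-1 split cuts 2 edges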

 

Quantum annealing starts from a quantum-mechanical superposition of all possible states (candidate states) with equal weights. Then the system evolves following the time-dependent Schrödinger equation, a natural quantum-mechanical evolution of physical systems. The amplitudes of all candidate states keep changing, realizing a quantum parallelism, according to the time-dependent strength of the transverse field, which causes quantum tunneling between states or essentially tunneling through peaks. If the rate of change of the transverse field is slow enough, the system stays close to the ground state of the instantaneous Hamiltonian (also see adiabatic quantum computation).[7] If the rate of change of the transverse field is accelerated, the system may leave the ground state temporarily but produce a higher likelihood of concluding in the ground state of the final problem Hamiltonian, i.e., Diabatic quantum computation.[8][9] The transverse field is finally switched off, and the system is expected to have reached the ground state of the classical Ising model that corresponds to the solution to the original optimization problem. An experimental demonstration of the success of quantum annealing for random magnets was reported immediately after the initial theoretical proposal.[10] Quantum annealing has also been proven to provide a fast Grover oracle for the square-root speedup in solving many NP-complete problems.[11]

 

Comparison to simulated annealing

Quantum annealing can be compared to simulated annealing, whose "temperature" parameter plays a similar role to quantum annealing's tunneling field strength. In simulated annealing, the temperature determines the probability of moving to a state of higher "energy" from a single current state. In quantum annealing, the strength of transverse field determines the quantum-mechanical probability to change the amplitudes of all states in parallel. Analytical[12] and numerical[13] evidence suggests that quantum annealing outperforms simulated annealing under certain conditions (see Heim et al[14] and see Yan and Sinitsyn[15] for a fully solvable model of quantum annealing to arbitrary target Hamiltonian and comparison of different computation approaches).
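For contrast, here is a purely classical simulated-annealing sketch (illustrative only; the random problem instance and the schedule are arbitrary choices of mine) in which the falling temperature T plays the role that the decaying transverse field plays in quantum annealing:

import numpy as np

rng = np.random.default_rng(0)
n = 12
Q = rng.normal(size=(n, n))
Q = (Q + Q.T) / 2                                # random symmetric QUBO

def energy(x):
    return x @ Q @ x

x = rng.integers(0, 2, size=n)
for T in np.geomspace(2.0, 0.01, 5000):          # slowly lowered "temperature"
    i = rng.integers(n)
    x_new = x.copy()
    x_new[i] ^= 1                                # flip one bit
    dE = energy(x_new) - energy(x)
    if dE < 0 or rng.random() < np.exp(-dE / T): # thermal acceptance rule
        x = x_new
print(x, energy(x))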

 

Quantum mechanics: analogy and advantage

[Figure: a simple analogy of the difference between simulated annealing and quantum annealing. Quantum annealing (blue line) tunnels through barriers in the energy landscape to reach the global minimum, giving it a performance advantage over simulated annealing (magenta line) on certain hard optimization problems.]

The tunneling field is basically a kinetic energy term that does not commute with the classical potential energy part of the original glass. The whole process can be simulated in a computer using quantum Monte Carlo (or other stochastic technique), and thus obtain a heuristic algorithm for finding the ground state of the classical glass.

 

In the case of annealing a purely mathematical objective function, one may consider the variables in the problem to be classical degrees of freedom, and the cost functions to be the potential energy function (classical Hamiltonian). Then a suitable term consisting of non-commuting variable(s) (i.e. variables that have non-zero commutator with the variables of the original mathematical problem) has to be introduced artificially in the Hamiltonian to play the role of the tunneling field (kinetic part). Then one may carry out the simulation with the quantum Hamiltonian thus constructed (the original function + non-commuting part) just as described above. Here, there is a choice in selecting the non-commuting term and the efficiency of annealing may depend on that.

 

It has been demonstrated experimentally as well as theoretically that quantum annealing can outperform thermal annealing (simulated annealing) in certain cases, especially where the potential energy (cost) landscape consists of very high but thin barriers surrounding shallow local minima.[16] Since thermal transition probabilities (proportional to $e^{-\Delta/(k_{B}T)}$, with $T$ the temperature and $k_{B}$ the Boltzmann constant) depend only on the height $\Delta$ of the barriers, for very high barriers it is extremely difficult for thermal fluctuations to get the system out of such local minima. However, as argued earlier in 1989 by Ray, Chakrabarti & Chakrabarti,[1] the quantum tunneling probability through the same barrier (considered in isolation) depends not only on the height $\Delta$ of the barrier but also on its width $w$, and is approximately given by $e^{-\sqrt{\Delta}\,w/\Gamma}$, where $\Gamma$ is the tunneling field.[17] This additional handle through the width $w$, in the presence of quantum tunneling, can be of major help: if the barriers are thin enough (i.e. $w \ll \sqrt{\Delta}$), quantum fluctuations can surely bring the system out of the shallow local minima. For an $N$-spin glass, the barrier height $\Delta$ becomes of order $N$. For a constant value of $w$ one gets an annealing time $\tau$ proportional to $e^{\sqrt{N}}$ (instead of $\tau$ proportional to $e^{N}$ for thermal annealing), while $\tau$ can even become $N$-independent for cases where $w$ decreases as $1/\sqrt{N}$.[18][19]
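To see why the width matters, a quick numeric comparison of the two expressions above (arbitrary illustrative units, not measured data) shows thermal escape collapsing for a tall barrier while tunneling escape stays appreciable once the barrier is thin:

```python
import math

# Toy comparison of the two escape probabilities quoted above: thermal
# activation depends only on the barrier height Delta, while tunneling also
# depends on the barrier width w. All numbers are illustrative.
Delta, kT = 100.0, 1.0       # tall barrier, modest temperature
Gamma = 1.0                  # tunneling-field strength

p_thermal = math.exp(-Delta / kT)

for w in (10.0, 1.0, 0.1):   # from wide to thin barrier
    p_tunnel = math.exp(-math.sqrt(Delta) * w / Gamma)
    print(f"w = {w:4}:  thermal ~ {p_thermal:.1e},  tunnel ~ {p_tunnel:.1e}")
```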

 

It is speculated that in a quantum computer, such simulations would be much more efficient and exact than that done in a classical computer, because it can perform the tunneling directly, rather than needing to add it by hand. Moreover, it may be able to do this without the tight error controls needed to harness the quantum entanglement used in more traditional quantum algorithms. Some confirmation of this is found in exactly solvable models.[20][21]

 

Timeline of ideas related to quantum annealing in Ising spin glasses:

 

1989 Idea was presented that quantum fluctuations could help explore rugged energy landscapes of the classical Ising spin glasses by escaping from local minima (having tall but thin barriers) using tunneling;[1]

1998 Formulation of quantum annealing and numerical test demonstrating its advantages in Ising glass systems;[5]

1999 First experimental demonstration of quantum annealing in LiHoYF Ising glass magnets;[22]

2011 Superconducting-circuit quantum annealing machine built and marketed by D-Wave Systems.[23]

D-Wave implementations

Further information: D-Wave Systems § Computer systems, and D-Wave Two

 

Photograph of a chip constructed by D-Wave Systems, mounted and wire-bonded in a sample holder. The D-Wave One's processor is designed to use 128 superconducting logic elements that exhibit controllable and tunable coupling to perform operations.

In 2011, D-Wave Systems announced the first commercial quantum annealer on the market by the name D-Wave One and published a paper in Nature on its performance.[23] The company claims this system uses a 128 qubit processor chipset.[24] On May 25, 2011, D-Wave announced that Lockheed Martin Corporation entered into an agreement to purchase a D-Wave One system.[25] On October 28, 2011 University of Southern California's (USC) Information Sciences Institute took delivery of Lockheed's D-Wave One.

 

In May 2013, it was announced that a consortium of Google, NASA Ames and the non-profit Universities Space Research Association purchased an adiabatic quantum computer from D-Wave Systems with 512 qubits.[26][27] An extensive study of its performance as quantum annealer, compared to some classical annealing algorithms, is available.[28]

 

In June 2014, D-Wave announced a new quantum applications ecosystem with computational finance firm 1QB Information Technologies (1QBit) and cancer research group DNA-SEQ to focus on solving real-world problems with quantum hardware.[29] As the first company dedicated to producing software applications for commercially available quantum computers, 1QBit's research and development arm has focused on D-Wave's quantum annealing processors and has demonstrated that these processors are suitable for solving real-world applications.[30]

 

With demonstrations of entanglement published,[31] the question of whether or not the D-Wave machine can demonstrate quantum speedup over all classical computers remains unanswered. A study published in Science in June 2014, described as "likely the most thorough and precise study that has been done on the performance of the D-Wave machine"[32] and "the fairest comparison yet", attempted to define and measure quantum speedup. Several definitions were put forward as some may be unverifiable by empirical tests, while others, though falsified, would nonetheless allow for the existence of performance advantages. The study found that the D-Wave chip "produced no quantum speedup" and did not rule out the possibility in future tests.[33] The researchers, led by Matthias Troyer at the Swiss Federal Institute of Technology, found "no quantum speedup" across the entire range of their tests, and only inconclusive results when looking at subsets of the tests. Their work illustrated "the subtle nature of the quantum speedup question". Further work[34] has advanced understanding of these test metrics and their reliance on equilibrated systems, thereby missing any signatures of advantage due to quantum dynamics.

 

There are many open questions regarding quantum speedup. The ETH reference in the previous section is just for one class of benchmark problems. Potentially there may be other classes of problems where quantum speedup might occur. Researchers at Google, LANL, USC, Texas A&M, and D-Wave are working to find such problem classes.[35]

 

In December 2015, Google announced that the D-Wave 2X outperforms both simulated annealing and Quantum Monte Carlo by up to a factor of 100,000,000 on a set of hard optimization problems.[36]

 

D-Wave's architecture differs from that of traditional quantum computers. It is not known to be polynomially equivalent to a universal quantum computer and, in particular, cannot execute Shor's algorithm, because Shor's algorithm requires precise gate operations and quantum Fourier transforms, which are currently unavailable in quantum annealing architectures and call for a universal quantum computer.[37] During the Qubits 2021 conference held by D-Wave, it was announced[38] that the company is developing its first universal quantum computers, capable of running Shor's algorithm in addition to other gate-model algorithms such as QAOA and VQE.

 

"A cross-disciplinary introduction to quantum annealing-based algorithms"[39] presents an introduction to combinatorial optimization (NP-hard) problems, the general structure of quantum annealing-based algorithms and two examples of this kind of algorithms for solving instances of the max-SAT (maximum satisfiable problem) and Minimum Multicut problems, together with an overview of the quantum annealing systems manufactured by D-Wave Systems. Hybrid quantum-classic algorithms for large-scale discrete-continuous optimization problems were reported to illustrate the quantum advantage.[40][41]

I recently uncovered a trippy little piece I wrote on constructive constructions for the creatives at ARUP:

 

Evolving Cities and Culture

 

Innovation is critical to economic growth, progress, and the fate of the planet. Yet, it seems so random. But patterns emerge in the aggregate, and planners and politicians may be able to promote innovation and growth despite the overall inscrutability of this complex system.

 

One emergent pattern, spanning centuries, is that the pace of innovation is perpetually accelerating, and it is exogenous to the economy. Rather, it is the combinatorial explosion of possible innovation-pairings that creates economic growth. And that is why cities are the crucible of innovation.

 

Geoffrey West of the Santa Fe Institute argues that cities are an autocatalytic attractor and amplifier of innovation. People are more innovative and productive, on average, when they live in a city because ideas can cross-pollinate more easily. Proximity promotes propinquity and the promiscuity of what Matt Ridley calls “ideas having sex”. This positive network effect drives another positive feedback loop - by attracting the best and the brightest to flock to the salon of mind, the memeplex of modernity.

 

Cities are a structural manifestation of the long arc of evolutionary indirection, whereby the vector of improvement has risen steadily up the ladder of abstractions from chemicals to genes to systems to networks. At each step, the pace of progress has leapt forward, making the prior vectors seem glacial in comparison – rather we now see the nature of DNA and even a neuron as a static variable in modern times. Now, it’s all about the ideas - the culture and the networks of humanity. We have moved from genetic to memetic evolution, and much like the long-spanning neuron (which took us beyond nearest neighbor and broadcast signaling among cells) ushering in the Cambrian explosion of differentiated and enormous body plans, the Internet brings long-spanning links between humans, engendering an explosion in idea space, straddling isolated pools of thought.

 

And it’s just beginning. In the next 10 years, four billion minds will come online for the first time to join this global conversation (via Starlink broadband satellites).

 

But why does this drive innovation and accelerating change? Start with Brian Arthur’s observation that all new technologies are combinations of technologies that already exist. Innovation does not occur in a vacuum; it is a combination of ideas from before. In any academic field, the advances today are built on a large edifice of history. This is the foundation of progress, something that was not so evident to the casual observer before the age of science. Science tuned the process parameters for innovation, and became the best method for a culture to learn.

 

From this conceptual base comes the origin of economic growth and accelerating technological change, as the combinatorial explosion of possible idea pairings grows exponentially as new ideas come into the mix (on the order of 2^n possible groupings, per Reed’s Law). It explains the innovative power of urbanization and networked globalization. And it explains why interdisciplinary ideas are so powerfully disruptive; it is like the differential immunity of epidemiology, whereby islands of cognitive isolation (e.g., academic disciplines) are vulnerable to disruptive memes hopping across, much like South America was to smallpox from Cortés and the Conquistadors. If disruption is what you seek, cognitive island-hopping is a good place to start, mining the interstices between academic disciplines.
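A quick back-of-the-envelope calculation makes the scale of that combinatorial explosion concrete (simple counting, not an economic model): pairwise combinations of n ideas grow only quadratically, while the possible sub-groupings counted by Reed's Law grow exponentially.

```python
from math import comb  # Python 3.8+

# Pairings of n ideas grow as C(n, 2); possible sub-groupings (Reed's Law)
# grow as 2**n - n - 1 (all subsets minus singletons and the empty set).
for n in (10, 20, 50, 100):
    pairs = comb(n, 2)
    groups = 2 ** n - n - 1
    print(f"n = {n:3d}:  pairings = {pairs:>6,d}   groupings ~ {groups:.2e}")
```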

 

So what evidence do we have of accelerating technological change? At Future Ventures, we see it in the diversity and quality of the entrepreneurial ideas arriving each year across our global offices. Scientists do not slow their thinking during recessions.

 

For a good mental model of the pace of innovation, consider Moore’s Law in the abstract – the annual doubling of compute power or data storage. As Ray Kurzweil has plotted, the smooth pace of exponential progress spans from 1890 to today, across countless innovations, technology substrates, and human dramas — with most contributors completely unaware that they were fitting to a curve.

 

Moore’s Law is a primary driver of disruptive innovation – such as the iPod usurping the Sony Walkman franchise – and it drives not only IT and communications, but also now genomics, medical imaging and the life sciences in general. As Moore’s Law crosses critical thresholds, a formerly lab science of trial and error experimentation becomes a simulation science and the pace of progress accelerates dramatically, creating opportunities for new entrants in new industries. And so the industries impacted by the latest wave of tech entrepreneurs are more diverse, and an order of magnitude larger — from automobiles and rockets to energy and chemicals.

 

At the cutting edge of computational capture is biology; we are actively reengineering the information systems of biology and creating synthetic microbes whose DNA was manufactured from bare computer code and an organic chemistry printer. But what to build? So far, we largely copy large tracts of code from nature. But the question spans across all the complex systems that we might wish to build, from cities to designer microbes, to computer intelligence.

 

As these systems transcend human comprehension, will we continue to design them or will we increasingly evolve them? As we design for evolvability, the locus of learning shifts from the artifacts themselves to the process that created them. There is no mathematical shortcut for the decomposition of a neural network or genetic program, no way to "reverse evolve" with the ease that we can reverse engineer the artifacts of purposeful design. The beauty of compounding iterative algorithms (machine learning, evolution, fractals, organic growth, art) derives from their irreducibility.

 

And what about human social systems? The corporation is a complex system that seeks to perpetually innovate. Leadership in these complex organizations shifts from direction setting to a wisdom of crowds. And this “process learning” is a bit counterintuitive to some alpha leaders: cognitive diversity is more important than ability, disagreement is more important than consensus, voting policies and team size are more important than the coherence or comprehensibility of the decisions, and tuning the parameters of communication (frequency and fanout) is more important than charisma.

 

The same could be said for urban planning. How will cities be built and iterated upon? Who will make those decisions and how? We are just starting to see the shimmering refractions of the hive mind of human culture, and now we want to redesign the hives themselves to optimize the emergent complexity within. Perhaps the best we can do is set up the grand co-evolutionary dance and listen carefully for the sociobiology of supra-human sentience.

 

-----------

I first brainstormed about reinventing construction with Astro Teller and Sebastian Thrun when they were forming Google X and looking for the largest markets in the world that look ripe for disruption from advancing information technology and machine learning. The $10 trillion spent each year on buildings certainly qualified, and the global construction industry is growing from 13% of the entire global economy to 15% in 2020. Helix.re became the first Google X spinout, taking a data and software-driven approach to building design and optimization.

 

I have been playing around with the lightroom beta. This is a re-processing of a picture I originally posted 2 years ago.

 

I think in the original I over-cooked the white balance and exposure and prefer this more subtle treatment. I also adjusted the crop. Let me know which you think is better. Oh, you've got to view this on black...

 

Tic-tac-toe

 

From Wikipedia:

Tic-tac-toe, also called noughts and crosses, hugs and kisses, and many other names, is a pencil-and-paper game for two players, O and X, who take turns marking the spaces in a 3×3 grid, usually with X going first. The player who succeeds in placing three respective marks in a horizontal, vertical, or diagonal row wins the game.

Players soon discover that best play from both parties leads to a draw. Hence, tic-tac-toe is most often played by young children; when they have discovered an unbeatable strategy they move on to more sophisticated games such as dots and boxes. This reputation for ease has led to casinos offering gamblers the chance to play tic-tac-toe against trained chickens - though the chicken is advised by a computer program.

The simplicity of tic-tac-toe makes it ideal as a pedagogical tool for teaching the concepts of combinatorial game theory and the branch of artificial intelligence that deals with the searching of game trees. It is straightforward to write a computer program to play tic-tac-toe perfectly, to enumerate the 765 essentially different positions (the state space complexity), or the 26,830 possible games up to rotations and reflections (the game tree complexity) on this space.
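As a small illustration of how straightforward that is, here is a minimal Python sketch (one of many possible formulations) that searches the full game tree with negamax and confirms that perfect play from both sides ends in a draw.

```python
from functools import lru_cache

# Exhaustive game-tree search: tic-tac-toe is a draw under best play.
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
        (0, 3, 6), (1, 4, 7), (2, 5, 8),
        (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Game value (+1 win, 0 draw, -1 loss) for `player`, who is to move."""
    w = winner(board)
    if w:
        return 1 if w == player else -1   # in legal play the winner just moved
    if " " not in board:
        return 0                          # board full: draw
    opponent = "O" if player == "X" else "X"
    best = -1
    for i, cell in enumerate(board):
        if cell == " ":
            child = board[:i] + player + board[i + 1:]
            best = max(best, -value(child, opponent))
    return best

print("value of the empty board for X:", value(" " * 9, "X"))  # 0 means draw
```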

The first known video game, OXO (also known as Noughts and Crosses, 1952) for the EDSAC computer, played perfect games of tic-tac-toe against a human opponent.

One example of a Tic-Tac-Toe playing computer is the Tinkertoy computer, developed by MIT students, and made out of Tinker Toys. It only plays Tic-Tac-Toe, and has never lost a game. It is currently on display at the Museum of Science, Boston.

 

Cyanotype, traditional iron salt party mix, combinatorially grappled in a head-shaped tub, brushed onto gelatin-sized vellum, subsequently exposed to Sol for an amount of time -- in the winter Texas air for ten minutes perhaps -- Finally, developed casually, while smoking, in water, vinegar, ammonia and tea-tannins.

Astro Teller, grandson of the hydrogen bomb and Moonshot maven, introducing me at Google. The video of my talk just went up.

---------------

ABSTRACT

 

Many of the interesting challenges in computer science, nanotechnology, and synthetic biology entail the construction of complex systems. As these systems transcend human comprehension, will we continue to design them or will we increasingly evolve them? As we design for evolvability, the locus of learning shifts from the artifacts themselves to the process that created them. There is no mathematical shortcut for the decomposition of a neural network or genetic program, no way to "reverse evolve" with the ease that we can reverse engineer the artifacts of purposeful design. The beauty of compounding iterative algorithms (evolution, fractals, organic growth, art) derives from their irreducibility.

 

Google itself is a complex system that seeks to perpetually innovate. Leadership in complex organizations shifts from direction setting to a wisdom of crowds. The role of upper management is to tune the parameters of communication. Leaders can embrace a process that promotes innovation with emergent predictability more than they can hope to dictate the product of innovation itself.

 

Innovation is critical to economic growth, progress, and the fate of the planet, yet it seems so random. While innovation may appear inscrutable at the atomic level, patterns emerge in the aggregate nonetheless. A critical pattern, spanning centuries, is that the pace of innovation is perpetually accelerating, and it is exogenous to the economy. Rather, it is the combinatorial explosion of possible innovation-pairings that creates economic growth.

---------------

(more on the dichotomy of design and evolutionary search, organizational optimization, and innovation)

 

I arranged the talk to overlap with a SFI brain spa @ Google. Some quotes from that event (without attribution per Chatham House rule):

 

“Unlimited power limits intellectual parsimony.”

 

“With machine learning, we are creating electronic savants. They are happy in a high-dimensional space. They have no desire to reduce. What we want is electronic Keplers that can recognize the ellipse, not savants that can force fit a heliocentric model.”

 

“The target of evolution can’t be more complex than the selection pressure itself. If you can come up with the selection pressure, you might as well design it.”

 

“I don’t think there is any natural process that is incompressible. It’s not random.”

 

[I disagree with the premise of those last two quotes]

Which will come first? Green vs. Grey, as they say.

 

Thanks to Ariel Poler for hosting a SF Salon on the subject with Erik Torenberg of Village Global and Silicon Foundry.

 

I think we will build a superhuman AGI before we understand our own brain well enough to radically improve it or upload it to a silicon substrate. The complex creations of iterative algorithms (like evolution and deep learning) are inherently inscrutable. It is easier to push evolution forward than to reverse engineer the products of evolution.

 

We are in the middle of a sea change in how the vanguard of engineering will be done. Building complex systems that exceed human understanding is more like parenting than programming. The locus of learning shifts from end products to the process of their creation. An ever-growing percentage of software will be grown and an ever-growing percentage of compute will run on infrastructure that resembles the brain (massively parallel, fine grained architectures with in-memory compute and a growing focus on the memory and interconnect elements). This is the path to AGI, IMHO.

 

I’ve been working with a neural plasticity company for 14 years now (Posit Science). One of my concerns with uploading is the extreme plasticity of the sensory cortex and the recruitment of neighboring regions in the face of external changes (like phantom limb pain in amputees). Cut and paste of brain state to a foreign substrate may require a deep understanding of the analog domain, where structural topology and functional spike train variation is immense (there are over 300 types of neurons in neocortex that are structurally and electrically different. And each neuron has ~200 ion channels from a pool of 20-40 variations). Furthermore, our mostly 2D silicon substrates lack the interconnect density for a direct map of the synaptic fan-out of the brain. Without a deep understanding of what elements can be ignored or abstracted, a simulation of brain function explodes in combinatorial complexity.

 

Going back a decade, in talks about AI futures, I was fond of advising to “augment early and often.” I worry that people want to believe in extreme augmentation and uploading, not because it is likely, but because it offers a mental model for “humanity” maintaining the mantle of supremacy, perpetually perched at the pinnacle of evolution. The idea that evolution will eventually progress way beyond us is hard to internalize. We seek transcendence, as the antidote for obsolescence.

 

I’ll be brainstorming more about storming the brain this evening at a follow up salon.

 

My 2006 musings on these topics.

Atheist myths debunked - Abiogenesis - the spontaneous generation of life from sterile matter.

 

Abiogenesis - the atheist and evolutionist belief - that life can spontaneously generate itself from sterile matter, whenever environmental conditions are conducive .... And the belief that this actually happened in the early Earth.

 

Is it possible?

 

IMPOSSIBLE ACCORDING TO INFORMATION THEORY.

Three fundamentals are essential for the material universe to exist: matter - energy - information.

 

Obviously, all theories about how the universe operates, and its origins, must take account of all three. However, every evolutionary, origin of life hypothesis yet devised (primordial soup, hydrothermal vent, etc. etc.) concentrates on the chemistry/physics of life, i.e. the interaction of matter and energy.

 

Atheists and evolutionists have virtually ignored the essential role and origin of information. We should demand to know why? Especially as we are told (through the popular media and education system) that an evolutionary, origin of life scenario, should be regarded as irrefutable, scientific fact.

 

Atheists and evolutionists are well aware that the information required for life cannot just arise of its own accord in a primordial soup. So why do they usually omit this crucial fact from their origin of life story?

 

In order to store information, a storage code is required. Just as the alphabet and language is the code used to store information in the written word, life requires both the information itself, which controls the construction and operation of all living things, and the means of storing that information. DNA is the storage code for living things.

 

No evolutionary, origin of life hypothesis has ever explained either how the DNA storage system was formed, or how the information encoded within that DNA storage system originated. In fact, even to attempt to look for the origin of information in physical matter is to ignore the natural laws about information.

 

Information theory completely rules out the spontaneous generation of life from non-life.

 

Information theory tells us: ANY MODEL FOR THE ORIGIN OF LIFE BASED SOLELY ON PHYSICAL AND/OR CHEMICAL PROCESSES, IS INHERENTLY FALSE. And: THERE IS NO KNOWN LAW OF NATURE, NO KNOWN PROCESS AND NO KNOWN SEQUENCE OF EVENTS, WHICH CAN CAUSE INFORMATION TO ORIGINATE BY ITSELF IN MATTER… So information theory not only rules out all evolutionary hypotheses which cannot explain the origin of information in original life, it also rules out all evolutionary hypotheses which cannot explain the origin of the completely new, increasingly complex information which would be required to be added to a gene pool for progressive evolution to take place in existing life.

 

Because of their zealous and unshakable faith in Darwinian evolution, most evolutionists choose to ignore this. They simply refuse to face this most important question of all, where does the complex information essential for all life come from? The reason seems obvious, it is because there are only two answers which could be compatible with the evolution fable, both are unscientific nonsense which violate information theory. They are: 1. That information can just arise magically out of nowhere. OR 2. That the material universe is an intelligent entity, which can actually create information.

(See more on genetic information and the DNA code later on)

 

Verdict of science - abiogenesis is not possible.

 

IMPOSSIBLE ACCORDING TO THE LAW OF BIOGENESIS.

The Law of Biogenesis rules out the spontaneous generation of life from non-living matter under all known circumstances. All modern scientists now accept this well tested law as valid. In fact, the whole concept of medical sterilisation, hygiene & food preservation is totally dependent on this law.

 

No sensible scientist would dare to claim that spontaneous generation of life ever happens in the world today, and there is no reason whatsoever to believe that this Law (like every natural law) is not always valid, in all places and at all times, within the material universe.

 

Yet, amazingly, in order to support biological evolution, evolutionists are quite prepared to flout this well-established Law and to resurrect the ancient belief in abiogenesis (life arising from non-life). Like latter-day advocates of the ancient Greek belief (that the goddess Gea could make life arise spontaneously from stones), evolutionists and atheists routinely present to the public the preposterous notion that original life on earth (and even elsewhere in the universe) just spontaneously generated itself from inert matter. Apparently, all that was required to bypass this well-established Law was a chance accumulation of chemicals in some alchemist’s-type brew of ‘primordial soup’ combined with raw energy from the sun, lightning or geothermal forces. (Such is their faith in the creative powers of matter). They call this science? Incredible!

 

Verdict of science - abiogenesis is not possible.

 

IMPOSSIBLE ACCORDING TO THE SECOND LAW OF THERMODYNAMICS.

The Second Law of Thermodynamics rules out the spontaneous generation of life from non-life as a chance event. Even if we ignore the above reasons why spontaneous generation of life is impossible, the formation and arrangement by chance of all the components required for living cells is also impossible. The arrangement of all the components within the simplest of living cells is extremely precise; these components cannot just arrange themselves by chance.

 

According to the Second Law of Thermodynamics, when left to themselves, things naturally become more disordered, rather than more ordered. Or in other words, things will naturally go to more probable arrangements and disorder is overwhelmingly more probable than order. Disorder actually increases with the passage of time and also with the application of raw (undirected) energy (for example, heat).

 

Yet we are repeatedly told the evolution fable, that the numerous components required to form a first, self-replicating, living cell just assembled themselves in precise order, by pure chance, over a vast period of time, aided by the random application of raw, undirected energy.

 

Verdict of science - abiogenesis is not possible.

 

IMPOSSIBLE ACCORDING TO THE LAW OF CAUSE AND EFFECT.

A fundamental principle of science is the law of cause and effect. It is a primary law of science, and the very basis of the scientific method.

 

The law of cause and effect tells us that an effect cannot be greater than its cause/s.

 

Life is not an intrinsic property of matter/energy - so it is beyond the capabilities of matter/energy to produce a property (life) it doesn't possess.

 

The interaction of matter and energy cannot produce an effect with properties extra and superior to its own properties, that would violate the law of cause and effect.

 

Can chemistry create biology - which has entirely different properties to its own?

Of course it can't.

 

Biology includes such properties as genetic information, the DNA code, consciousness and intelligence. To believe that chemistry can create biology - means believing that something inanimate can create additional, new properties that it doesn't possess. To exceed the limitations of its own properties would violate the law of cause and effect.

 

For matter/energy to be able to produce life whenever environmental conditions permit, it would have to be inherently predisposed to produce life.

 

It would have to embody an inherent plan/blueprint/instructions for life, as one of its properties. The inevitable question then has to be - where does an inherent predisposition for life come from? It can only signify the existence of purpose in the universe and that is something atheists could never accept.

 

A purpose, order or plan can only come from a planner or intelligent entity. So it is a catch 22 situation for atheists ... the atheist/ evolutionist belief in abiogenesis either violates the law of cause and effect, OR is an admission of purpose in the universe. It can only be one or the other. Atheists cannot possibly accept the existence of purpose in the universe, because that would be the end of atheism. So the atheist belief in abiogenesis violates the law of cause and effect.

 

Verdict of science - abiogenesis is not possible.

 

IMPOSSIBLE ACCORDING TO MATHEMATICS.

Even if we ignore the Law of Biogenesis, Information Theory and the Second Law of Thermodynamics (which all completely rule out the spontaneous generation of a living cell from non-living matter). Mathematical probability also rules out the spontaneous generation of life from non-living matter.

 

The laws of probability are summed up in the Law of Chance. According to this Law, when odds against a chance event are 10 to the power of 15, the chance of that event happening are negligible on a terrestrial scale. At odds of 10 to the power of 50, there is virtually no chance, even on a cosmic scale. The most generous and favourable, mathematical odds against a single living cell appearing in this way by chance are a staggering 10 to the power of 40,000. A more likely calculation would put the odds at an even more awesome 10 to the power of 119,850. Remember odds of 10 to the power of 50 is sufficient to make an event virtually impossible (except, perhaps, by magic!!).

 

Verdict of science - abiogenesis is not possible

 

Fred Hoyle, The Big Bang in Astronomy, New Scientist 19 Nov 1981. p.526. On the origin of life in primeval soup.

“I don’t know how long it is going to be before astronomers generally recognise that the combinatorial arrangement of not even one among the many thousands of biopolymers on which life depends could have been arrived at by natural processes here on the Earth. Astronomers will have a little difficulty at understanding this because they will be assured by biologists that it is not so. The biologists having been assured in their turn by others that it is not so. The “others” are a group of persons who believe, quite openly, in mathematical miracles. They advocate the belief that tucked away in nature, outside of normal physics, there is a law which performs miracles.”

 

“Since science does not have the faintest idea how life on earth originated, it would only be honest to confess this to other scientists, to grantors, and to the public at large. Prominent scientists speaking ex cathedra, should refrain from polarising the minds of students and young productive scientists with statements that are based solely on beliefs.” Bio-informaticist, Hubert P. Yockey. Journal of Theoretical Biology [Vol 91, 1981, p 13].

 

Conclusion: Abiogenesis is impossible - it is just another atheist myth debunked by science.

 

Evolutionists and atheists are quite entitled to abandon the scientific method and all common sense by choosing to believe that all the necessary information for life can just appear in matter, as if by magic. They can also choose to believe that: the Laws of; Biogenesis, Mathematical Probability, Cause and Effect and Second Law of Thermodynamics, were all somehow magically suspended to enable their purported evolution of life from sterile matter to take place. They can believe whatever they like. But they have no right to present such unscientific, flights of fancy through the media and our education system, as though they are supported by science.

 

More about DNA and the origin of life.

The discovery of DNA should have been the death knell for evolution. It is only because atheists and evolutionists tend to manipulate and interpret evidence to suit their own preconceptions that makes them believe DNA is evidence FOR evolution.

 

It is clear that there is no natural mechanism which can produce constructional, biological information, such as that encoded in DNA.

 

Information Theory (and common sense) tells us that the unguided interaction of matter and energy cannot produce constructive information.

 

Do atheists/evolutionists even know where the very first, genetic information in the alleged Primordial Soup came from?

 

Of course they don't, but with the usual bravado, they bluff it out, and regardless, they rashly present the spontaneous generation of life as a scientific fact.

However, a fact, it certainly isn't .... and good science it certainly isn't.

 

Even though atheists/evolutionists have no idea whatsoever about how the first, genetic information originated, they still claim that the spontaneous generation of life (abiogenesis) is an established scientific fact, but this is completely disingenuous. Apart from the fact that abiogenesis violates the Law of Biogenesis, the Law of Cause and Effect and the Second Law of Thermodynamics, it also violates Information Theory.

 

Evolutionists/atheists have an enormous problem with explaining how the DNA code itself originated. However that is not even the major problem. The impression is given to the public by evolutionists that they only have to find an explanation for the origin of DNA by natural processes - and the problem of the origin of genetic information will have been solved.

 

That is a confusion in the minds of many people that evolutionists/atheists cynically exploit.

 

Explaining how DNA was formed by chemical processes, explains only how the information storage medium was formed, it tells us nothing about the origin of the information it carries.

 

To clarify this it helps to compare DNA to other information, storage mediums.

 

For example, if we compare DNA to the written word, we understand that the alphabet is a tangible medium for storing, recording and expressing information, it is not information in itself. The information is recorded in the sequence of letters, forming meaningful words.

 

You could say that the alphabet is the 'hardware' created from paper and ink, and the sequential arrangement of the letters is the software. The software is a mental construct, not a physical one.

 

The same applies to DNA. DNA is not information of itself, just like the alphabet it is the medium for storing and expressing information. It is an amazingly efficient storage medium. However, it is the sequence or arrangement of the amino acids which is the actual information, not the DNA code.

 

So, if evolutionists are ever able to explain how DNA was formed by chemical processes, it would explain only how the information storage medium was formed. It will tell us nothing about the origin of the information it carries.

 

Thus, when atheists and evolutionists tell us it is only a matter of time before 'science' will be able to fill the 'gaps' in our knowledge and explain the origin of genetic information, they are not being honest. Explaining the origin of the 'hardware' by natural processes is an entirely different matter to explaining the origin of the software.

 

Next time you hear evolutionists/atheists skating over the problem of the origin of genetic information with their usual bluff and bluster, and parroting their usual nonsense about science being able to fill such gaps in knowledge in the future, don't be fooled. They cannot explain the origin of genetic information, and never will be able to. The software cannot be created by chemical processes or the interaction of energy and matter; it is not possible. If you don't believe that, then by all means put it to the test by challenging any evolutionist to explain how genetic information (not DNA) can originate by natural means. I can guarantee they won't be able to do so.

 

Atheists often argue that, because the Earth is an open system, the energy from the Sun can overcome the problem of entropy and enable the increase in complexity that the origin of life requires - but that is clearly erroneous.

We can see entropy happening here and now, it happens everyday on Earth.

We are living in the OPEN system of the Earth, and yet we are well aware of entropy.

We see that the Sun does not halt or reverse entropy, in fact we see the opposite.

The raw energy and heat from the Sun, unless harnessed, does damage, things all around us obey the law - they deteriorate, rot, erode and decay, they do not naturally improve.

If you paint your house, the Sun, and the weather effects caused by the Sun, will eventually damage the paintwork, it will crack and peel after a few years. The hotter the Sun (the greater the energy input) the quicker it will happen.

Secondly, even if it were true that in an open system things can defy the law of entropy, natural laws are laws for the whole universe, and the universe, as a whole, is a closed system.

 

So what can we deduce from this?

Can the effects of entropy ever be reversed or halted? Obviously, when you paint your house, you are reversing the bad effects of entropy for a short period, but you have to keep doing it; it is not permanent. Moreover, the energy you are using to repair and temporarily reverse the effects of entropy is directed and guided by your skill and intelligence.

The atheist argument about the Earth being an open system is clearly not a valid one.

 

There are only 2 ways the effects of entropy can be temporarily decreased, halted or reversed by an input of energy. That is:

1. A directive means guiding the energy input.

OR,

2. A directive or conversion mechanism possessed by the recipient of the energy to utilise it in a constructive way.

 

For their argument to be valid, atheists would have to explain what it is that guides or directs the energy from the Sun to enable it to perform the task of creating order from disorder in the so-called primordial soup. And they are unable to do so.

 

Evolutionism: The Religion That Offers Nothing.

www.youtube.com/watch?v=znXF0S6D_Ts&list=TLqiH-mJoVPB...

  

FOUNDATIONS OF SCIENCE

The Law of Cause and Effect. Dominant Principle of Classical Physics. David L. Bergman and Glen C. Collins

www.thewarfareismental.net/b/wp-content/uploads/2011/02/b...

 

"The Big Bang's Failed Predictions and Failures to Predict: (Updated Aug 3, 2017.) As documented below, trust in the big bang's predictive ability has been misplaced when compared to the actual astronomical observations that were made, in large part, in hopes of affirming the theory."

kgov.com/big-bang-predictions

  

Opening on Amazon:

 

All people can create value—but for that to happen, we need to develop a people-centered, rather than a task-centered, economy. Today, we are very far from that. According to Gallup, of the five billion people on this planet aged fifteen or older, three billion work in some way. Most of them want full-time jobs, but only 1.3 billion have them. Of these, only 13 percent are fully engaged in their work, giving and receiving its full value. This terrible waste of human capacity and mismanagement of people’s desire to create value for each other is more than just very bad business. It is an insult to ourselves and to all human beings.

 

CHAPTER 5. Accelerating Towards a Jobless Future:

The Rise of the Machine and the Human Quest for Meaningful Work by Steve Jurvetson and Mo Islam

 

A New Paradigm

 

Let’s go far enough in the future where no one will debate the sweeping transition of time. There are infinite possible paths to this distant future, but we can imagine reasonable endpoints. This future will look like much of human history prior to the industrial and agricultural revolutions, where serfs and slaves did most of the labor-intensive work in the city-state economies. But while we hope the arc of the moral universe continues to bend towards justice, there will be a new paradigm in the master-and-slave relationship between man and machine. The slaves of the future will be our machines.

 

There won’t be many jobs in the sense that we think of them for most people today. Machines will take over mechanically repetitive tasks. Humans will ever only need to do this type of work if they choose to, but they will not provide the most efficient means to complete these tasks. Even highly skilled workers, such as engineers, doctors, and scientists, will have their professions disrupted by automation and artificial intelligence. We will automate engineering, we will automate diagnosis, and we will automate discovery of scientific principles. In this future, where the marginal cost of labor is zero and where companies have reached new bounds of profit maximization, both the microeconomics of individual companies and the macroeconomics of the global economy will be completely upended. Maslow’s hierarchy of needs—food, shelter, health care, education—will be free for everyone forever. We won’t need to work to achieve the basic building blocks of sustainable civilization. The only important human need that will be amplified in this distant future even more than it is now is the desire for meaning.

 

Humanity’s Compounding Capacity to Compute

 

First, we will lay a framework for understanding why we believe this is a possible future. We are already on the trajectory to get us there—we have been since the dawn of the industrial age. Humanity’s capacity to compute has been constantly compounding. Incredibly, it can be explained through a simple and elegant model that, at first glance, may seem narrow in its explanatory power, but that tells a much deeper story. That model to describe this macrotrend begins with Moore’s Law. Moore’s Law is commonly reported as a doubling of transistor density every eighteen months. But unless you work for a chip company and focus on fab-yield optimization, you do not care about the transistor counts that Gordon Moore originally wrote about. When recast as a computational capability, Moore’s Law is no longer a transistor-centric metric.

 

What Moore observed in the belly of the early integrated circuit industry was a derivative metric, a refraction of a longer-term trend, a trend that begs various philosophical questions and predicts mind-bending futures. Ray Kurzweil’s abstraction of Moore’s Law shows computational power on a logarithmic scale and finds a double exponential curve that holds over 110 years! A straight line would represent a geometrically compounding curve of progress.

 

Figure 1: Ray Kurzweil’s abstraction of Moore’s Law. Each dot is a computer. (older version)

 

Through five paradigm shifts—such as electromechanical calculators and vacuum tube computers—the computational power that $1,000 buys has doubled every two years. For the past thirty years, it has been doubling every year.
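The arithmetic implied by those doubling rates is easy to check (illustrative compounding only, not data read off the Kurzweil chart):

```python
# Fold improvement implied by a fixed doubling period.
def fold_improvement(years, doubling_period_years):
    return 2 ** (years / doubling_period_years)

print("110 years at a 2-year doubling:", f"{fold_improvement(110, 2):.1e}x")
print(" 30 years at a 1-year doubling:", f"{fold_improvement(30, 1):.1e}x")
```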

 

Each dot is the frontier of computational price performance of the day. One machine was used in the 1890 census; one cracked the Nazi Enigma cipher in World War II; one predicted Eisenhower’s win in the 1956 presidential election. Many of them can be seen in the Computer History Museum. Each dot represents a human drama. Prior to Moore’s seminal paper in 1965, which presented what later became known as Moore’s Law, none of them even knew they were on a predictive curve. Each dot represents an attempt to build the best computer with the tools of the day. Of course, we use these computers to make better design software and manufacturing control algorithms. And so the progress continues.

 

Notice also that the pace of innovation is exogenous to the economy. The Great Depression and the world wars and various recessions do not introduce a meaningful change in the long-term trajectory of Moore’s Law. Certainly, the adoption rates, revenues, profits, and economic fates of the computer companies behind the various dots on the graph may go through wild oscillations, but the long-term trend emerges nevertheless.

 

In the modern era of accelerating change in the tech industry, it is hard to find even five-year trends with any predictive value, let alone trends that span the centuries. We would go further and assert that this is the most important graph ever conceived, and this is why it is so important as a foundation for understanding the future. We humans, regardless of external factors such as war, disease, and failing economies, have over vast periods of time doubled our capabilities to produce new technologies to propel us forward.

 

Accelerating Technological Progress

 

Moore’s law has set the bar for the accelerating pace of computation and innovation. How can we expect it to keep accelerating to get even faster now to the distant future we describe? All new technologies are combinations of technologies that already exist. Innovation does not occur in a vacuum; it is a combination of ideas from before. In any academic field, the advances today are built on a large edifice of history. This is why major innovations tend to be “ripe” and tend to be discovered at nearly the same time by multiple people. The compounding of ideas is the foundation of progress, something that was not so evident to the casual observer before the age of science. Science tuned the process parameters for innovation and became the best method for a culture to learn.

 

From this conceptual base comes the origin of economic growth and acceleration of technological change, as the combinatorial explosion of possible idea pairings grows exponentially as new ideas come into the mix, as dictated by Reed’s Law. It explains the innovative power of urbanization and networked globalization. And it explains why interdisciplinary ideas are so powerfully disruptive; it is like the differential immunity of epidemiology, whereby islands of cognitive isolation (e.g., academic disciplines) are vulnerable to disruptive memes hopping across them, in much the same way that South America was vulnerable to smallpox from Cortés and the Conquistadors. If disruption is what you seek, cognitive island hopping is good place to start, mining the interstices between academic disciplines.

 

It is the combinatorial explosion of possible innovation-pairings that creates economic growth, and it is about to go into overdrive. In recent years, we have begun to see the global innovation effects of a new factor: the Internet. People can exchange ideas as never before. Long ago, people were not communicating across continents; ideas were partitioned, and so the success of nations and regions pivoted on their own innovations. Richard Dawkins states that in biology it is genes which really matter, and we as people are just vessels for the conveyance of genes. It is the same with ideas or “memes.” We are the vessels that hold and communicate ideas, and now that pool of ideas percolates on a global basis more rapidly than ever before.

 

Rise of the Machines

 

Moore’s Law provides the model for us to understand humanity’s continuous compounding capacity to compute—with that we have accelerating technological progress driven by the combinatorial explosion of new ideas by ever-increasing sub-groups of cognitively diverse people becoming connected. However, the ramifications of this longer-term trend will start to become apparent in the very short term. We believe the greatest disruptor for job displacement caused by this accelerating innovation is the self-driving car.

 

In five years, it will be clear that the debate about the rise of the autonomous vehicle will have ended. Everyone will realize its ubiquity, especially as the first city pilots with autonomous vehicles begin rolling out. The Google car has already driven over a million miles without causing an accident. Automotive original equipment manufacturers and new companies are investing massive amounts of capital and engineering manpower to get to market with fully (Level 4) autonomous cars. The commercialization path of these self-driving cars, whether through an Uber-like on-demand service or through direct sales to consumers, is less important than the enormous impact they will have on the global job market. Using global employment data from the International Labour Organization (ILO), we find that by 2019, 5.7 percent of global employment will be in the transport, storage, and communication sector (See Figure 2). Moreover, the distribution of employment status data shows us that globally more than 60 percent of all workers lack any kind of employment contract, with most of them engaged in unpaid or family work in the developing world (See Figure 3). We find that, of workers worldwide who have a paid full-time job (excluding temporary workers), almost 20 percent drive as their form of employment today!

 

And autonomous vehicles are only the tip of the iceberg. As these systems transcend human comprehension, we will shift from traditional engineering to evolutionary algorithms and iterative learning algorithms such as deep learning and machine learning. While these techniques are powerful, the locus of learning shifts from the artifacts themselves to the process that created them. The beauty of compounding iterative algorithms (evolution, fractals, organic growth, art) derives from their irreducibility. And it empowers us to design complex systems that exceed human understanding, which we increasingly need to do at the cutting edge of software engineering. This process presents a plausible path to general artificial intelligence, or what Ray Kurzweil and others refer to as “strong A.I.” Danny Hillis summarizes succinctly in the conclusion from his programming primer The Pattern on the Stone: “We will not engineer an artificial intelligence; rather we will set up the right conditions under which an intelligence can emerge. The greatest achievement of our technology may well be creation of tools that allow us to go beyond engineering—that allow us to create more than we can understand.” Once we build these systems that surpass human understanding and that may even surpass human intelligence, the number of jobs that will be overhauled is unbounded—leading us to a future where no one will have to work.

 

Figure 2: Employment growth by sector, in which transport is one of the fastest growing.

 

Figure 3: Distribution of employment status, showing that only 40 percent of people have full-time jobs

 

Meaningful Work

 

Moore’s Law will drive human innovation forward and the collective global intelligence will create new forms of super artificial intelligence that can surpass human capabilities. This will completely disrupt our notion of jobs. Work is now the very thing that powers our global economy. But what happens when it no longer has to? Or at least, when most humans are no longer the aggregate primary drivers of global work, how will we find meaning in our lives? This existential phenomenon is one that will completely turn the current debate about the race against the machine on its head: the debate will no longer be about machines taking human jobs but instead about humans needing meaning in their work, even though it may no longer be for employment. The nature of jobs as we think about them today will dramatically change in the future, but humans will retain their thirst for deriving purpose from their actions. This is already becoming a major focus for employers now, as millennials entering the job market are interested in more than just salary, benefits, and job security to satisfy their work expectations. They want to be a part of something larger, to fulfill a mission that can really change the world. As we look to this distant future where employment isn’t necessary for most humans, finding meaning through non-traditional forms of work, whether hobbies, research, or entertainment will become paramount to sustaining a thriving civilization.

An elderly couple, apparently on a pilgrimage, decides to make camp to wash their gabbeh. It is at this moment that the one who will become the story's delegated narrator appears: Gabbeh, who tells the couple about the trials she had to go through to win the love of her beloved horseman, since her father opposed the marriage. With an epic structure, in the Brechtian manner, the discourse is displaced back in time; the film shows the different anecdotes that, we suppose, the old couple had to live through in order to arrive at the present with which the film opens. Through direct cuts, giving way to a voice-over, we enter the life of this community from the protagonist's point of view. The portrait, which allows the narrator to raise the image to the status of documentary - we see how the men shear the sheep, how the women dye the wool, how the carpets are woven across the different episodes - also allows him to articulate, through the images, a story as such. The film tries to place itself at the moment from which those scattered impressions are gathered into a structure that supports them and that can account for an existence: the moment at which everything that had seemed inconsistent takes form, in this specific case the instant in which this couple recognizes itself, but which can be likened to the moment in which a community inscribes itself as such. To account for this - and here it would enter into an axis of oppositions with certain narrative models and the theories that describe them - the accent is placed not on orality but on the images. And not only on images as something the cinematographic apparatus can reproduce, but on those images that have been reduced to a residual place, such as, for example, the motifs of the gabbeh. By appealing to the cinematographic, this return raises the question of what cinema, in its preoccupation with technological development and its eagerness to become an interchangeable object in the logic of consumption, is leaving behind: the possibility of working with blocks of movement-duration. Far from melancholy, and close to the paradox of Achilles and the tortoise, the accent falls on the resources given by the device as such: the possibilities of framing, of the size and duration of the shots, of light and shadow and, fundamentally, of color. If in The Silence (1998) the concern was with the expressive possibilities of sound, and therefore of hearing as a sensory faculty, here the emphasis is on the visual. Through an austere mise en scène, whose locations are natural spaces and whose actors are members of the tribe itself, the film reinforces the idea that a cinema far removed from the way blockbusters tell their stories is still possible - possible insofar as neither a multimillion-dollar budget nor the latest camera model is needed in order to tell a story. In this respect, it is enough to recall the sequence in which Gabbeh's uncle teaches the children of the tribe the primary colors and their combinatorial possibilities: blue like the sky, yellow like the sunflowers; blue with yellow makes green, like the pastures. As in a magic show, the teacher grabs a piece of sky, of pasture, of flower, and, from off-screen, brings the color to his pupils: us. The invitation, more than to didacticism, is to surprise and to homage.
To Méliès, for example, and the allusion to the origins of cinema does not end there. The images, especially the long shots of landscapes - together with the phrases that float over the text: "life is a color", "man is a color", "love is a color"... and death is also a color, but black - are fascinating. And that fascination does not exclude the narrator. As in The Passenger (1975) by Antonioni, the camera settles on landscapes that cannot be attributed, as a point of view, to any character in the story, and it remains there, attentive. In the words of the film's director of photography: "It was this nature that invited us to place the camera in one setting or another, no matter where!" To conclude, as we keep insisting with respect to this New Second Iranian Cinema, every new film enriches the world. It expands it insofar as it shows that, in the age of blockbusters and of apocalyptic predictions about the future of cinema, there are still images to explore. As an old Iranian proverb, floating over the film, says: "Seeing is the same as looking."

 

Una pareja de ancianos, en aparente peregrinación, decide acampar para limpiar su gabbeh. Es en ese momento donde aparece la que será, de ahora en más narradora delegada del relato: Gabbeh, para contarle a la pareja las peripecias por las que tuvo que atravesar para conseguir el amor de su amado jinete pues su padre se oponía al matrimonio. Con una estructura épica, a la manera brechtiana, el discurso es desplazado atrás en el tiempo, el film muestra las diferentes anécdotas que, suponemos, la pareja de viejos tuvo que atravesar para poder anclar en el presente que da comienzo al film. A través del corte directo y dejando paso a una voz over, accedemos a la vida de esta comunidad desde el punto de vista de la protagonista. El retrato, que le permite al narrador elevar la imagen al estatuto del documental -vemos cómo los hombres esquilan las ovejas, cómo las mujeres tiñen la lana, cómo se tejen las alfombras a través de los distintos episodios- también le permite articular, a través de las imágenes, una historia en tanto tal. El film intenta colocarse en ese momento a partir del cual aquellas impresiones dispersas se actualizan en una estructura que las soporta y que puede dar cuenta de un existir. En ese momento en que todo aquello que aparecía como inconsistente toma forma, en este caso específico el instante en que esta pareja se reconoce, pero que puede asimilarse al momento en el cual una comunidad se inscribe como tal. Para dar cuenta de esto -y aquí entraría en eje de oposiciones con ciertos modelos narrativos y teorías que dan cuenta de los mismos- el acento no está puesto en la oralidad sino en las imágenes. Y no sólo en las imágenes en tanto posibilidad de reproducirlas del aparato cinematográfico, sino en esas imágenes que han quedado reducidas a un lugar remanente como, por ejemplo, los motivos de los gabbeh.Apelando a lo cinematográfico, este volver instaura la pregunta por aquello que, en su preocupación por el desarrollo tecnológico y su afán de transformarse en un objeto intercambiable en la lógica del consumo, el cine va dejando atrás: la posibilidad de contar con bloques de movimiento-duración. Lejos de la melancolía, cerca de la paradoja de Aquiles y la tortuga, el acento se pone en los recursos que vienen dados en el dispositivo como tal: la posibilidad del encuadre, del tamaño y la duración de los planos, de las luces y las sombras y, fundamentalmente, los colores. Si en El Silencio (1998) la preocupación pasaba por las posibilidades expresivas del sonido y por tanto del oído en tanto facultad sensible, aquí el énfasis está puesto en lo visual. A través de una puesta en escena austera, cuyas locaciones son espacios naturales y los actores integrantes de la misma tribu, el film refuerza aquello de que aún es posible un cine que se aleje de la forma de narrar de las superproducciones. Posible en la medida en que no se necesitan presupuestos millonarios, ni el último modelo de cámara para contar. En este sentido, basta recordar la secuencia en que el tío de Gabbeh les enseña a los chicos de la tribu los colores primarios y sus posibilidades combinatorias: Azul como el del cielo, amarillo como el de los girasoles. Azul con amarillo es igual a verde, como el de los pastos. Como en un espectáculo de magia, el maestro agarra un pedazo de cielo, de pasto, de flor y del fuera de campo les acerca el color a sus alumnos: nosotros. La invitación, más que al didactismo, es a la sorpresa y al homenaje. 
Como por ejemplo a Meliès, y no se agotaría allí la alusión a los orígenes del cine. Las imágenes, especialmente a través de los planos generales de paisajes -conjuntamente con las frases que flotan sobre el texto: "la vida es color", "el hombre es color", " el amor es color"... y la muerte también es color, pero negro- fascinan. Y esta fascinación no excluye al narrador. Al igual que en El Pasajero (1975) de Antonioni, la cámara se posa en esos paisajes que en tanto referencia a un punto de vista no pueden ser adjudicados a ningún personaje de la historia, y atenta permanece allí. En palabras del director de fotografía del film: Era esa naturaleza la que nos invitaba a colocar la cámara a tal o tal encuadre, sin importar dónde!" Para terminar, como venimos insistiendo respecto de este Segundo Nuevo Cine Iraní, cada nuevo film enriquece al mundo. Lo amplía en la medida en que da cuenta de que en la era de las superproducciones y de las predicciones apocalípticas respecto del futuro del cine, todavía hay imágenes para explorar. Como dice, flotando sobre el film, un viejo proverbio iraní: "Ver no es lo mismo que mirar".

  

Each of us submitted an essay on innovation and growth in advance for the Gruter Institute Conference on Growth. I’ll append mine below.

 

(photo by John Chisholm. More below).

 

Discussion ensued over lunch, and one of my favorite authors, Matt Ridley wrote a summary for the WSJ “Why Can't Things Get Better Faster (or Slower)?”

 

------------------------------------

Innovation and Growth — Evolving Cities and Culture

By Steve Jurvetson

 

Innovation is critical to economic growth, progress, and the fate of the planet. Yet, it seems so random. But patterns emerge in the aggregate, and planners and politicians may be able to promote innovation and growth despite the overall inscrutability of this complex system. To tap the wisdom of crowds, we should shift the locus of learning from products to process. Leadership is not spotting the next growth industry, but tuning the parameters of human communication.

 

One emergent pattern, spanning centuries, is that the pace of innovation is perpetually accelerating, and it is exogenous to the economy: the economy does not drive innovation. Rather, it is the combinatorial explosion of possible innovation-pairings that creates economic growth. And that is why cities are the crucible of innovation.

 

Geoffrey West of the Santa Fe Institute argues that cities are an autocatalytic attractor and amplifier of innovation. People are more innovative and productive, on average, when they live in a city because ideas can cross-pollinate more easily. Proximity promotes propinquity and the promiscuity of what Matt Ridley calls “ideas having sex”. This positive network effect drives another positive feedback loop - by attracting the best and the brightest to flock to the salon of mind, the memeplex of modernity.

 

Cities are a structural manifestation of the long arc of evolutionary indirection, whereby the vector of improvement has risen steadily up the ladder of abstractions, from chemicals to genes to systems to networks. At each step, the pace of progress has leapt forward, making the prior vectors seem glacial in comparison – so much so that we now treat DNA and even the neuron as static variables in modern times. Now it’s all about the ideas - the culture and the networks of humanity. We have moved from genetic to memetic evolution, and much like the long-spanning neuron (which took us beyond nearest-neighbor and broadcast signaling among cells) ushered in the Cambrian explosion of differentiated and enormous body plans, the Internet brings long-spanning links between humans, engendering an explosion in idea space and straddling isolated pools of thought.

 

And it’s just beginning. In the next 10 years, three billion minds will come online for the first time to join this global conversation (Diamandis).

 

But why does this drive innovation and accelerating change? Start with Brian Arthur’s observation that all new technologies are combinations of technologies that already exist. Innovation does not occur in a vacuum; it is a combination of ideas from before. In any academic field, the advances today are built on a large edifice of history. This is the foundation of progress, something that was not so evident to the casual observer before the age of science. Science tuned the process parameters for innovation, and became the best method for a culture to learn.

 

From this conceptual base come the origins of economic growth and accelerating technological change, as the combinatorial explosion of possible idea pairings grows exponentially as new ideas come into the mix (on the order of 2^n possible groupings, per Reed’s Law). It explains the innovative power of urbanization and networked globalization. And it explains why interdisciplinary ideas are so powerfully disruptive; it is like the differential immunity of epidemiology, whereby islands of cognitive isolation (e.g., academic disciplines) are vulnerable to disruptive memes hopping across, much like South America was to smallpox from Cortés and the Conquistadors. If disruption is what you seek, cognitive island-hopping is a good place to start, mining the interstices between academic disciplines.
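To make the combinatorial claim concrete, here is a minimal Python sketch (illustrative only, not from the essay) contrasting pairwise idea connections with the Reed's-Law count of possible groupings, 2^n - n - 1, as the pool of ideas grows. Each new idea roughly doubles the space of possible combinations, which is the sense in which progress compounds.

```python
from math import comb

def idea_pairs(n):
    """Pairwise combinations of n ideas (Metcalfe-style growth, ~n^2)."""
    return comb(n, 2)

def idea_groupings(n):
    """Reed's-Law count: every subset of two or more ideas, 2^n - n - 1."""
    return 2**n - n - 1

for n in (10, 20, 40, 80):
    print(f"{n:3d} ideas -> {idea_pairs(n):10,d} pairs, "
          f"{idea_groupings(n):28,d} possible groupings")
```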

 

So what evidence do we have of accelerating technological change? At DFJ, we see it in the diversity and quality of the entrepreneurial ideas arriving each year across our global offices. Scientists do not slow their thinking during recessions. For a good mental model of the pace of innovation, consider Moore’s Law in the abstract – the annual doubling of compute power or data storage. As Ray Kurzweil has plotted, the smooth pace of exponential progress spans from 1890 to 2012, across countless innovations, technology substrates, and human dramas — with most contributors completely unaware that they were fitting to a curve.

 

Moore’s Law is a primary driver of disruptive innovation – such as the iPod usurping the Sony Walkman franchise – and it drives not only IT and communications, but also now genomics, medical imaging and the life sciences in general. As Moore’s Law crosses critical thresholds, a formerly lab science of trial and error experimentation becomes a simulation science and the pace of progress accelerates dramatically, creating opportunities for new entrants in new industries. And so the industries impacted by the latest wave of tech entrepreneurs are more diverse, and an order of magnitude larger — from automobiles and rockets to energy and chemicals.

 

At the cutting edge of computational capture is biology; we are actively reengineering the information systems of biology and creating synthetic microbes whose DNA was manufactured from bare computer code and an organic chemistry printer. But what to build? So far, we largely copy large tracts of code from nature. But the question spans across all the complex systems that we might wish to build, from cities to designer microbes, to computer intelligence.

 

As these systems transcend human comprehension, will we continue to design them or will we increasingly evolve them? As we design for evolvability, the locus of learning shifts from the artifacts themselves to the process that created them. There is no mathematical shortcut for the decomposition of a neural network or genetic program, no way to "reverse evolve" with the ease that we can reverse engineer the artifacts of purposeful design. The beauty of compounding iterative algorithms (evolution, fractals, organic growth, art) derives from their irreducibility. (My Google Tech Talk goes into some detail on the dichotomy of design and evolution).
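As a toy illustration of the "locus of learning is the process" point, here is a minimal evolutionary loop in Python. The fitness function, population size, and mutation rate are arbitrary stand-ins (my assumptions, not anything from the original text): the evolved bitstring works, but nothing in the artifact documents why; the knowledge lives in the selection-and-mutation process that produced it.

```python
import random

random.seed(0)

def fitness(genome):
    # An arbitrary, opaque scoring function standing in for "the environment".
    return sum((i % 2 == bit) * (i + 1) for i, bit in enumerate(genome))

def mutate(genome, rate=0.02):
    # Flip each bit with small probability.
    return [bit ^ (random.random() < rate) for bit in genome]

def evolve(pop_size=60, genome_len=64, generations=300):
    population = [[random.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 5]            # truncation selection
        population = [mutate(random.choice(parents)) for _ in range(pop_size)]
    return max(population, key=fitness)

best = evolve()
print("best fitness found:", fitness(best))
```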

 

The corporation is a complex system that seeks to perpetually innovate. Leadership in these complex organizations shifts from setting direction to harnessing the wisdom of crowds. And the process learning is a bit counterintuitive to some alpha leaders: cognitive diversity is more important than ability, disagreement is more important than consensus, voting policies and team size are more important than the coherence or comprehensibility of the decisions, and tuning the parameters of communication (frequency and fanout) is more important than charisma.

 

The same could be said for urban planning. How will cities be built and iterated upon? Who will make those decisions and how? We are just starting to see the shimmering refractions of the hive mind of human culture, and now we want to redesign the hives themselves to optimize the emergent complexity within. Perhaps the best we can do is set up the grand co-evolutionary dance, and listen carefully for the sociobiology of supra-human sentience.

I am at the TTI/Vanguard Next conference (agenda), with a sophisticated audience of tech executives from around the world. Of the topics I covered, the Q&A interest focused on iterative algorithms that will create an AI that exceeds human intelligence, much like biological evolution. (video)

 

Here are some of the related bullets from my slides:

 

Reed's Law applies to combinations of ideas as well as self-forming groups. It's the combinatorial explosion in the mating pool of ideas that creates perpetually accelerating progress.

 

Evolutionary algorithms allow us to build complex systems that exceed human understanding (synthetic biology, AI, innovative organizations), but there are some limitations to this approach:

 

• Subsystem Inscrutability

- Black box defined by its interfaces

- No “reverse evolution” (You can't run that algorithm backwards)

 

• No simple shortcuts across the iterations

- Simulation ~ Reality

- Beauty from irreducibility

 

• Locus of Learning is Process, not Product

 

• Robust, within co-evolutionary islands

 

“The greatest achievement of our technology may well be the creation of tools that allow us to go beyond engineering – that allow us to create more than we can understand.” — Danny Hillis

 

“We actually think quantum machine learning may provide the most creative problem-solving process under the known laws of physics.” — Google Blog

 

AI implications:

• Cut & Paste Portability?

 

• Locus of learning: Process, not Product

- Would we bother to reverse engineer?

- No hard take off?

 

• Co-evolutionary islands

- accustomed environment (differential immunity)

 

• Path dependence

- algorithm survival

- AI = Alien Intelligence defined by sensory I/O

  

Accelerating Technological Change

- Interdisciplinary Renaissance

- IT innervates $T markets

- More Black Swans

- Perpetual driver of disruption

==> Virtuous cycle for entrepreneurs

==> a great time for the new

  

Comments from others that followed:

 

“The majority of financial reports are now compiled by machines, not people.”

 

“A lot of the great data scientists are born in Russia, and they have the attributes of creativity, tenacity and an ability to code.”

 

“When we asked 1000 people on Mechanical Turk to flip a coin, we got 65% heads, 28% tails, and 7% typos. Many of them clearly did not actually flip a coin.”

 

“Imagine the sociological impact of crowdsourcing – what if you could create IBM for an afternoon and then disperse it? We might get cyber-Taylorism if we don’t think about doing it right.”

 

“Competition will be critical to the wisdom of crowds.”

 

Combinatorial Creativity: “Combinatorial search spaces are vast and the fastest supercomputers can not penetrate too deeply into them. Nevertheless, they may be able to penetrate several levels deeper than any person can, and thereby find superb creative acts that mankind did not or could not think of.”

  

Pointer to CHM video on the history of AI.

 

Photos by Ed Jay

sunset in the basement ruins of the Abbaye of #Cluny

The current Forbes cover reminds me of the longevity confab that Joon Yun pulled together yesterday. I pulled slides together just prior to going on stage, and in retrospect, I might have titled it as I do here for the photo caption. Here’s a FB video from the audience, starting one minute in. Thanks to Asa for the photo.

 

Here are some notes from longevitycrossroads.org, taken before the fire marshal kicked us out.

 

Joon: “any event that becomes a fire hazard is probably worth doing. I have to imagine that that was on the mind of the Burning Man founders.”

 

Elizabeth Blackburn, President, Salk Institute & Nobel Prize winner for discovering the molecular nature of telomeres:

 

Yeast cells divide only 25 times and then stop. Why? What happens? Catastrophic systems failure.

 

Telomere tips protect the cell’s DNA. When they wear down, the code will no longer replenish the cell. These cells become like little rotten apples, spitting out inflammatory chemicals. If you clear these undead cells out, mice stay healthy. This is what underlies senescence.

 

Our germ line cells know how to generate a fresh new baby with extended telomeres. So, there is hope.

 

The longest living human kept smoking well past 110 years old. She only stopped when she could no longer see well enough to light a cigarette. We don’t know how long we could live.

 

Dr. Eric Verdin, CEO of the Buck Institute:

 

How aging research will disrupt medicine

C. elegans: a DAF-2 gene modification doubles lifespan (from a 21-day baseline). The same DAF-2 modification plus removal of the gonad -> 6x lifespan. (For those squirming in their seats, we’re not thinking about removing gonads in humans.) Caloric restriction in mice -> dramatic lifespan extension.

 

Okinawa, Japan: People there have the longest life expectancy and the most centenarians. Local saying: “Eat only until 80% full.” Self-imposed caloric restriction.

Life expectancy drops from 82 to 65 years when Okinawans live in Brazil.

TOR and insulin signaling: related to caloric restriction

Today: focused on diseases and organs

Future: preventative, reparative, centered on aging pathways, multi-organ

 

Kicking off the D-Wave Board meeting over lunch today at Goldman Sachs… with fresh news from Google: they demonstrated the use of D-Wave’s quantum computer to deliver photo-driven search (and to improve on classical machine learning).

 

Here is a summary from the Google Research blog:

 

“Many Google services we offer depend on sophisticated artificial intelligence technologies such as machine learning or pattern recognition. If one takes a closer look at such capabilities one realizes that they often require the solution of what mathematicians call hard combinatorial optimization problems. It turns out that solving the hardest of such problems requires server farms so large that they can never be built. A new type of machine, a so-called quantum computer, can help here.

 

Today, at the Neural Information Processing Systems conference (NIPS 2009), we show the progress we have made. We demonstrate a detector that has learned to spot cars by looking at example pictures. It was trained with adiabatic quantum optimization using a D-Wave C4 Chimera chip. There are still many open questions but in our experiments we observed that this detector performs better than those we had trained using classical solvers running on the computers we have in our data centers today. Besides progress in engineering synthetic intelligence we hope that improved mastery of quantum computing will also increase our appreciation for the structure of reality as described by the laws of quantum physics.”
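For readers curious what "training a detector with adiabatic quantum optimization" reduces to, here is a heavily simplified, purely classical sketch. The general idea shown here (selecting a binary subset of weak classifiers by minimizing training error plus a sparsity penalty, a QUBO-style objective) follows the published approach only loosely; the toy data, the exact objective, and the brute-force solver are my illustrative assumptions. The annealer's role is to search the 2^K binary space when K is far too large for brute force.

```python
from itertools import product
import numpy as np

rng = np.random.default_rng(0)

# Toy data: N labeled examples and K weak classifiers with outputs in {-1, +1}.
N, K, lam = 200, 8, 0.05
y = rng.choice([-1, 1], size=N)          # ground-truth labels
H = rng.choice([-1, 1], size=(N, K))     # weak-classifier predictions
H[:, 0] = y                              # plant one genuinely useful classifier

def cost(w):
    """Squared error of the binary-weighted voting ensemble plus a sparsity penalty."""
    w = np.asarray(w)
    votes = H @ w / max(w.sum(), 1)
    return float(np.mean((votes - y) ** 2) + lam * w.sum())

# Exhaustive search over all 2^K binary weight vectors -- the combinatorial
# piece a quantum annealer is meant to take over at much larger K.
best = min(product([0, 1], repeat=K), key=cost)
print("selected classifiers:", best, " cost:", round(cost(best), 3))
```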

I experimented with a mid-length exposure here, as it was a blend of day and night shots.

 

OK, ok, enough about rockets.

 

I need to head out and try to think like a techonomist.

 

When we consider the combinatorial explosion of possibly interacting ideas as the fountainhead of innovation, it not only creates the economy and explains accelerating change, it also subsumes biological evolution (raising the primary vector of progress to a higher level of abstraction) and nurtures a rational optimism for the future.

 

(blending Adam Smith, Matt Ridley, Richard Dawkins, Ray Kurzweil & Brian Arthur)

  

And stitched together by some fine dinner conversation with Matt Ridley (just before his TED Talk):

 

“Self-sufficient is another way of saying impoverished.”

 

“Innovation = ideas having sex.”

 

“There is literally nobody on the planet who knows how to make a computer mouse.”

Combinatorial exploration of a leather stereotype.

Homage to Tom, thinking of Andy.

This is a study tool that I came up with while thinking about zonohedra a while back. Just as planar rhombus tilings can be generated by grids of lines (see this article for more info), convex zonohedra seem to have a relationship with great circles placed on a sphere. I've no idea what makes it work, but just by playing around with a racquetball and some rubber bands, I made some pretty cool shapes and generated a (hopefully complete) list of all combinatorially distinct 6-sided and 12-sided convex zonohedra, so I'm pretty sure it works.
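As a side note on the counts involved (my own addition, not part of the original post): for a convex zonohedron built from n generators in general position, every face is a parallelogram and the face, edge, and vertex counts follow directly from the pairs of generators, which is why 3 generators give the 6-sided family and 4 generators give the 12-sided family. A minimal sketch:

```python
def zonohedron_counts(n):
    """Face/edge/vertex counts for a convex zonohedron with n generators
    in general position (no two parallel, no three in a common plane)."""
    faces = n * (n - 1)            # two parallelogram faces per pair of generators
    edges = 2 * n * (n - 1)
    vertices = n * (n - 1) + 2     # consistent with Euler's formula V - E + F = 2
    return faces, edges, vertices

for n in (3, 4, 5):
    f, e, v = zonohedron_counts(n)
    print(f"{n} generators -> {f} faces, {e} edges, {v} vertices")
# 3 generators -> 6 faces (parallelepipeds); 4 -> 12 faces (e.g. the rhombic dodecahedron)
```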

 

The reason I'm bringing it up now, instead of waiting until I finish folding the model and do a writeup, is that this maquette also happens to be useful in an explanation I posted in response to a question by Byriah Loper.

My article is on p.36 of the Computer History Museum Core.

 

Moore's Law is both a prediction and an abstraction

 

The popular perception of Moore’s Law is that computer chips are compounding in their complexity at near constant per unit cost. This is one of the many abstractions of Moore’s Law, and it relates to the compounding of transistor density in two dimensions. Others relate to speed (the signals have less distance to travel) or computational power (speed x density).

 

Unless you work for a chip company and focus on fab-yield optimization, you do not care about transistor counts. Integrated circuit customers do not buy transistors. Consumers of technology purchase computational speed and data storage density. When recast in these terms, Moore’s Law is no longer a transistor-centric metric, and this abstraction allows for longer-term analysis.

 

What Moore observed in the belly of the early IC industry was a derivative metric, a refracted signal, from a longer-term trend, a trend that begs various philosophical questions and predicts mind-bending futures.

 

Humanity’s compounding capacity to compute.

 

Ray Kurzweil’s abstraction of Moore’s Law shows computational power on a logarithmic scale, and finds a double exponential curve that holds over 110 years! A straight line would represent a geometrically compounding curve of progress.

 

[see graph in first comment below]

 

Through five paradigm shifts – such as electro-mechanical calculators and vacuum tube computers – the computational power that $1000 buys has doubled every two years. For the past 30 years, it has been doubling every year.
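A quick back-of-the-envelope calculation of what those doubling rates imply (the rates come from the text; the arithmetic below is just illustration):

```python
def growth_factor(years, doubling_time_years):
    """How much more computation a fixed budget buys after steady doubling."""
    return 2 ** (years / doubling_time_years)

print(f"20 years at a 2-year doubling time: ~{growth_factor(20, 2):,.0f}x")
print(f"30 years at a 1-year doubling time: ~{growth_factor(30, 1):,.0f}x")
```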

 

Each dot is the frontier of computational price performance of the day. One machine was used in the 1890 Census; one cracked the Nazi Enigma cipher in World War II; one predicted Eisenhower’s win in the 1956 Presidential election. Many of them can be seen in the Computer History Museum.

 

Each dot represents a human drama. Prior to Moore’s first paper in 1965, none of them even knew they were on a predictive curve. Each dot represents an attempt to build the best computer with the tools of the day. Of course, we use these computers to make better design software and manufacturing control algorithms. And so the progress continues.

 

Notice that the pace of innovation is exogenous to the economy. The Great Depression and the World Wars and various recessions do not introduce a meaningful change in the long-term trajectory of Moore’s Law. Certainly, the adoption rates, revenue, profits and economic fates of the computer companies behind the various dots on the graph may go through wild oscillations, but the long-term trend emerges nevertheless.

 

Any one technology, such as the CMOS transistor, follows an elongated S-shaped curve of slow progress during initial development, upward progress during a rapid adoption phase, and then slower growth from market saturation over time. But a more generalized capability, such as computation, storage, or bandwidth, tends to follow a pure exponential – bridging across a variety of technologies and their cascade of S-curves.
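A small numerical sketch of that claim (the generation spacing, ceilings, and growth rates below are made up for illustration): each technology follows its own logistic S-curve, yet the frontier across successive generations traces an approximately exponential path.

```python
import numpy as np

def s_curve(t, midpoint, ceiling, rate=1.5):
    """One technology's logistic adoption curve: slow start, rapid rise, saturation."""
    return ceiling / (1.0 + np.exp(-rate * (t - midpoint)))

t = np.linspace(0, 30, 301)
# Successive generations: each arrives ~5 "years" later and saturates ~10x higher.
generations = [s_curve(t, midpoint=5 * g, ceiling=10.0 ** g) for g in range(1, 6)]
frontier = np.maximum.reduce(generations)   # best capability available at each time

for year in range(0, 31, 5):
    print(f"t = {year:2d}   frontier capability ~ {frontier[year * 10]:.3g}")
# On a log scale the frontier is close to a straight line: exponential growth
# bridging a cascade of individual S-curves.
```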

 

In the modern era of accelerating change in the tech industry, it is hard to find even five-year trends with any predictive value, let alone trends that span the centuries. I would go further and assert that this is the most important graph ever conceived.

 

Why is this the most important graph in human history?

 

A large and growing set of industries depends on continued exponential cost declines in computational power and storage density. Moore’s Law drives electronics, communications and computers, and has become a primary driver in drug discovery, biotech and bioinformatics, medical imaging and diagnostics. As Moore’s Law crosses critical thresholds, a formerly lab science of trial-and-error experimentation becomes a simulation science, and the pace of progress accelerates dramatically, creating opportunities for new entrants in new industries. Boeing used to rely on wind tunnels to test the performance of novel aircraft designs. Ever since CFD modeling became powerful enough, design has moved to the rapid pace of iterative simulation, and the nearby wind tunnels of NASA Ames lie fallow. The engineer can iterate rapidly while simply sitting at a desk.

 

Every industry on our planet is going to become an information business. Consider agriculture. If you ask a farmer in 20 years’ time about how they compete, it will depend on how they use information, from satellite imagery driving robotic field optimization to the code in their seeds. It will have nothing to do with workmanship or labor. That will eventually percolate through every industry as IT innervates the economy.

 

Non-linear shifts in the marketplace are also essential for entrepreneurship and meaningful change. Technology’s exponential pace of progress has been the primary juggernaut of perpetual market disruption, spawning wave after wave of opportunities for new companies. Without disruption, entrepreneurs would not exist.

 

Moore’s Law is not just exogenous to the economy; it is why we have economic growth and an accelerating pace of progress. At Future Ventures, we see it in the growing diversity and global impact of the entrepreneurial ideas we encounter each year. The industries impacted by the current wave of tech entrepreneurs are more diverse, and an order of magnitude larger, than those of the 90’s — from automobiles and aerospace to energy and chemicals.

 

At the cutting edge of computational capture is biology; we are actively reengineering the information systems of biology and creating synthetic microbes whose DNA is manufactured from bare computer code and an organic chemistry printer. But what to build? So far, we largely copy large tracts of code from nature. But the question spans across all the complex systems that we might wish to build, from cities to designer microbes, to computer intelligence.

 

Reengineering engineering

 

As these systems transcend human comprehension, we will shift from traditional engineering to evolutionary algorithms and iterative learning algorithms like deep learning and machine learning. As we design for evolvability, the locus of learning shifts from the artifacts themselves to the process that created them. There is no mathematical shortcut for the decomposition of a neural network or genetic program, no way to "reverse evolve" with the ease that we can reverse engineer the artifacts of purposeful design. The beauty of compounding iterative algorithms (evolution, fractals, organic growth, art) derives from their irreducibility. And it empowers us to design complex systems that exceed human understanding.

  

Why does progress perpetually accelerate?

 

All new technologies are combinations of technologies that already exist. Innovation does not occur in a vacuum; it is a combination of ideas from before. In any academic field, the advances today are built on a large edifice of history. This is why major innovations tend to be 'ripe' and tend to be discovered at nearly the same time by multiple people. The compounding of ideas is the foundation of progress, something that was not so evident to the casual observer before the age of science. Science tuned the process parameters for innovation, and became the best method for a culture to learn.

 

From this conceptual base come the origins of economic growth and accelerating technological change, as the combinatorial explosion of possible idea pairings grows exponentially as new ideas come into the mix (on the order of 2^n possible groupings, per Reed’s Law). It explains the innovative power of urbanization and networked globalization. And it explains why interdisciplinary ideas are so powerfully disruptive; it is like the differential immunity of epidemiology, whereby islands of cognitive isolation (e.g., academic disciplines) are vulnerable to disruptive memes hopping across, much like South America was to smallpox from Cortés and the Conquistadors. If disruption is what you seek, cognitive island-hopping is a good place to start, mining the interstices between academic disciplines.

 

It is the combinatorial explosion of possible innovation-pairings that creates economic growth, and it’s about to go into overdrive. In recent years, we have begun to see the global innovation effects of a new factor: the internet. People can exchange ideas like never before. Long ago, people were not communicating across continents; ideas were partitioned, and so the success of nations and regions pivoted on their own innovations. Richard Dawkins states that in biology it is genes that really matter, and we as people are just vessels for the conveyance of genes. It’s the same with ideas, or “memes”. We are the vessels that hold and communicate ideas, and now that pool of ideas percolates on a global basis more rapidly than ever before.

 

In the next 6 years, three billion minds will come online for the first time to join this global conversation (via inexpensive smart phones in the developing world). This rapid influx of three billion people into the global economy is unprecedented in human history, and so too will be the resulting pace of idea-pairings and progress.

 

We live in interesting times, at the cusp of the frontiers of the unknown and breathtaking advances. But, it should always feel that way, engendering a perpetual sense of future shock.

The image shows a 3D rendering (Imaris software) of a live confluent culture of NIH-3T3 cells obtained using confocal microscopy. The cells were co-transduced with five Lentiviral Gene Ontology (LeGO) vectors expressing the fluorescent proteins Cerulean (blue), EGFP (green), Venus (yellow), tdTomato (magenta), or mCherry (red) to provide combinatorial colors for progeny tracking. Groups of nearby cells of the same color descended from the same stem cells.

 

Credit: Daniela Malide, Jean-Yves Metais, Cynthia Dunbar, National Institutes of Health

 

www.cellimagelibrary.org/images/44151
