All Photos Tagged deeplearning

❤️🍀🌺💛 AI-created images blend imagination with precision — a fusion of human vision and machine mastery, where dreams take form in light and texture. They’re beauty born from code, yet filled with soul.❤️🍀🌺💛

 

✨ Thank you so much for taking the time to view and comment on my AI art! Your support means a lot.

 

🎨 I really appreciate all the feedback and encouragement—your words inspire me to create more.

 

🙏 Grateful for everyone who stopped by, liked, and shared your thoughts. It keeps me motivated!

 

💫 Thanks for being part of this creative journey. Your comments make it even more rewarding.


An experiment with deep-learning AI image generation reveals a striking truth: if the AI starts by messaging me with what looks like bottled fish and other biological material, no wonder we meatbags will all end up canned.


 

© Dan McCabe

 

A Deep Dream Generator rendition of my "Hot Rod - 1941 Willys Coupe". The original can be found here:

flic.kr/p/KhQZ3h

 

© Dan McCabe

 

Processing my "Knotty Mask" with Deep Dream Generator.

 

You can find the original here:

flic.kr/p/29J4Ww4

 

When I look at the original photo, I see a single face. In this image, I see a number of facial parts (eyes, mouth, etc.), but no single face.

Computer technology has advanced to the point that many members of the public possess cyberbrains, technology that allows them to interface their biological brain with various networks. The level of cyberization varies from simple minimal interfaces to almost complete replacement of the brain with cybernetic parts.

© Dan McCabe

 

Another Deep Dream Generator image, this one of my "Zaappp!!!!" photo:

flic.kr/p/ZkGMbC

Made with Deep Dream Generator based on my own photos as a seed image.

It's around noon and the sky looks like this. The red sun is caused by Saharan dust lifted by a hurricane and carried toward part of Europe.

The Silicon Graphics head in my office was my muse. I just finished reading a fascinating summary by Lin & Tegmark of the tie between the power of neural networks / deep learning and the peculiar physics of our universe. The mystery of why they work so well may be resolved by seeing the resonant homology across the information-accumulating substrate of our universe, from the base simplicity of our physics to the constrained nature of the evolved and grown artifacts all around us. The data in our natural world is the product of a hierarchy of iterative algorithms, and the computational simplification embedded within a deep learning network is also a hierarchy of iteration. Since neural networks are symbolic abstractions of how the human cortex works, perhaps it should not be a surprise that the brain has evolved structures that are computationally tuned to tease apart the complexity of our world.

 

Does anyone know about other explorations into these topics?

 

Here is a collection of interesting plain text points I extracted from the math in Lin & Tegmark’s article:

 

"The exceptional simplicity of physics-based functions hinges on properties such as symmetry, locality, compositionality and polynomial log-probability, and we explore how these properties translate into exceptionally simple neural networks approximating both natural phenomena such as images and abstract representations thereof such as drawings. We further argue that when the statistical process generating the data is of a certain hierarchical form prevalent in physics and machine-learning, a deep neural network can be more efficient than a shallow one. Various “no-flattening theorems” show when these efficient deep networks cannot be accurately approximated by shallow ones without efficiency loss."

 

This last point reminds me of something I wrote in 2006: "Stephen Wolfram’s theory of computational equivalence suggests that simple, formulaic shortcuts for understanding evolution (and neural networks) may never be discovered. We can only run the iterative algorithm forward to see the results, and the various computational steps cannot be skipped. Thus, if we evolve a complex system, it is a black box defined by its interfaces. We cannot easily apply our design intuition to the improvement of its inner workings. We can’t even partition its subsystems without a serious effort at reverse-engineering." — 2006 MIT Tech Review

 

Back to quotes from the paper:

Neural networks perform a combinatorial swindle, replacing exponentiation by multiplication: if there are, say, n = 10^6 inputs taking v = 256 values each, this swindle cuts the number of parameters from v^n to v×n times some constant factor. We will show that the success of this swindle depends fundamentally on physics: although neural networks only work well for an exponentially tiny fraction of all possible inputs, the laws of physics are such that the data sets we care about for machine learning (natural images, sounds, drawings, text, etc.) are also drawn from an exponentially tiny fraction of all imaginable data sets. Moreover, we will see that these two tiny subsets are remarkably similar, enabling deep learning to work well in practice.
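The scale of that swindle is easy to check with a few lines of arithmetic — a back-of-the-envelope sketch of my own, not the paper's exact accounting:

```python
import math

# Back-of-the-envelope check of the "combinatorial swindle":
# a generic lookup table over n inputs with v values each needs ~v**n
# parameters, while a neural network needs on the order of v*n.
v, n = 256, 10**6  # e.g. a megapixel grayscale image

table_params_log10 = n * math.log10(v)  # log10 of v**n (far too big to store)
network_params = v * n

print(f"lookup table: ~10^{table_params_log10:.0f} parameters")
print(f"neural net:   ~{network_params:,} parameters")
```

A table with ten-to-the-millions entries versus a few hundred million parameters — that is the gap physics has to explain.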

 

Increasing the depth of a neural network can provide polynomial or exponential efficiency gains even though it adds nothing in terms of expressivity.

 

Both physics and machine learning tend to favor Hamiltonians that are polynomials — indeed, often ones that are sparse, symmetric and low-order.

 

1. Low polynomial order

For reasons that are still not fully understood, our universe can be accurately described by polynomial Hamiltonians of low order d. At a fundamental level, the Hamiltonian of the standard model of particle physics has d = 4. There are many approximations of this quartic Hamiltonian that are accurate in specific regimes, for example the Maxwell equations governing electromagnetism, the Navier-Stokes equations governing fluid dynamics, the Alfvén equations governing magnetohydrodynamics and various Ising models governing magnetization — all of these approximations have Hamiltonians that are polynomials in the field variables, of degree d ranging from 2 to 4.

 

2. Locality

One of the deepest principles of physics is locality: that things directly affect only what is in their immediate vicinity. When physical systems are simulated on a computer by discretizing space onto a rectangular lattice, locality manifests itself by allowing only nearest-neighbor interaction.
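As a toy illustration of locality (my own sketch, not from the paper), a nearest-neighbor Ising-style energy on a 1D lattice shows how the Hamiltonian becomes a sum of purely local terms:

```python
import numpy as np

# Locality on a lattice: a nearest-neighbor Ising-style Hamiltonian.
# Each spin interacts only with its immediate neighbors, so the energy
# is a sum of local terms rather than a function of all spin pairs.
def ising_energy(spins, J=1.0):
    """H = -J * sum_i s_i * s_{i+1} for a 1D chain of +/-1 spins."""
    spins = np.asarray(spins)
    return -J * np.sum(spins[:-1] * spins[1:])

print(ising_energy([1, 1, 1, 1]))    # aligned chain: -3.0
print(ising_energy([1, -1, 1, -1]))  # alternating chain: +3.0
```

With N spins the energy has only N-1 terms, instead of the ~N² terms an all-to-all coupling would require — the same parameter reduction that convolutional networks exploit.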

 

3. Symmetry

Whenever the Hamiltonian obeys some symmetry (is invariant under some transformation), the number of independent parameters required to describe it is further reduced. For instance, many probability distributions in both physics and machine learning are invariant under translation and rotation.

 

Why Deep?

What properties of real-world probability distributions cause efficiency to further improve when networks are made deeper? This question has been extensively studied from a mathematical point of view, but mathematics alone cannot fully answer it, because part of the answer involves physics. We will argue that the answer involves the hierarchical/compositional structure of generative processes together with the inability to efficiently “flatten” neural networks reflecting this structure.

 

A. Hierarchical processes

One of the most striking features of the physical world is its hierarchical structure. Spatially, it is an object hierarchy: elementary particles form atoms which in turn form molecules, cells, organisms, planets, solar systems, galaxies, etc. Causally, complex structures are frequently created through a distinct sequence of simpler steps.

 

We can write the combined effect of the entire generative process as a matrix product.

 

If a given data set is generated by a (classical) statistical physics process, it must be described by an equation in the form of [a matrix product], since dynamics in classical physics is fundamentally Markovian: classical equations of motion are always first-order differential equations in the Hamiltonian formalism. This technically covers essentially all data of interest in the machine learning community, although the fundamental Markovian nature of the generative process of the data may be an inefficient description.
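The matrix-product view can be sketched in a few lines — a minimal illustration of my own, assuming a two-step Markov chain with made-up transition matrices:

```python
import numpy as np

# A hierarchical generative process as a matrix product: each step of a
# Markov chain multiplies the state distribution by a transition matrix,
# so the combined effect of the whole process is a single matrix product.
M1 = np.array([[0.9, 0.1],
               [0.2, 0.8]])   # step 1 transitions (rows sum to 1)
M2 = np.array([[0.7, 0.3],
               [0.5, 0.5]])   # step 2 transitions

p0 = np.array([1.0, 0.0])     # initial distribution

# Applying the steps one at a time...
p2_stepwise = (p0 @ M1) @ M2
# ...equals applying the product of the matrices once.
p2_combined = p0 @ (M1 @ M2)

print(np.allclose(p2_stepwise, p2_combined))  # True
```

Each layer of a deep network can mirror one such step, which is why the network's hierarchy can match the generative hierarchy of the data.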

 

Summary

The success of shallow neural networks hinges on symmetry, locality, and polynomial log-probability in data from or inspired by the natural world, which favors sparse low-order polynomial Hamiltonians that can be efficiently approximated. Whereas previous universality theorems guarantee that there exists a neural network that approximates any smooth function to within an error ε, they cannot guarantee that the size of the neural network does not grow to infinity with shrinking ε or that the activation function σ does not become pathological. We show constructively that given a multivariate polynomial and any generic non-linearity, a neural network with a fixed size and a generic smooth activation function can indeed approximate the polynomial highly efficiently.
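The constructive argument rests on a multiplication gadget: four neurons with any generic smooth activation whose second derivative at zero is nonzero can approximate a product arbitrarily well. Here is a minimal sketch of that gadget — my own illustration using softplus as the activation (the choice is an assumption; any σ with σ''(0) ≠ 0 works):

```python
import math

def softplus(u):
    """A generic smooth nonlinearity with nonzero second derivative at 0."""
    return math.log1p(math.exp(u))

def approx_product(x, y, a=1e-3):
    # Four "neurons" approximate multiplication: by Taylor expansion,
    # s(u) + s(-u) = 2*s(0) + s''(0)*u**2 + O(u**4), so differencing two
    # such pairs at u = a*(x+y) and u = a*(x-y) isolates the x*y cross term.
    sig2 = 0.25  # softplus''(0)
    s = (softplus(a * (x + y)) + softplus(-a * (x + y))
         - softplus(a * (x - y)) - softplus(-a * (x - y)))
    return s / (4 * a * a * sig2)

print(approx_product(3.0, 7.0))  # approximately 21.0
```

Since polynomials are built from sums and products, stacking this gadget approximates any multivariate polynomial with a fixed-size network, which is the flavor of result the authors prove.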

 

The success of deep learning depends on the ubiquity of hierarchical and compositional generative processes in physics and other machine-learning applications.

 

And thanks to Tech Review for the pointer to this article:

 

Scala eXchange 2016, Thursday, 8th - Friday, 9th December at Business Design Centre, London. skillsmatter.com/conferences/7432-scala-exchange-2016#pro.... Images copyright www.edtelling.com


Here it is on Soundcloud.

 

From interviewer Rainer Sternfeld: This is the 20th episode – as you know, every tenth episode we make is a special where I talk to someone who is of Estonian descent yet doesn’t speak the seemingly unintelligible language, or is a big friend of Estonia who is contributing to the success of Estonia.

 

We’re recording this on March 24, 2017, and my guest today is an Estonian-American polymath, a world-renowned venture capitalist, and Estonia’s first e-resident outside Europe – Steve Jurvetson. In his day job, he invests in bold human endeavors in quantum computing, deep learning, electric cars, rockets, synthetic biology, genomics, robotics, and other areas.

 

In this podcast you’ll hear us cover a wide variety of brain-stimulating topics:

 

His technology-infused, Estonian-subtext upbringing in Arizona

How chip design and computing are undergoing a fundamental shift driven by biomimicry

Why learning 9 programming languages is not as hard as learning 9 human languages, and the advice he gives to young people starting out in technology

How he thinks about the future of humanity in light of the accelerating rich-poor gap and automation, and why robots, not humans, will be the slaves

and his thoughts on why Estonia is competitive on the world stage.

Fasten your seatbelts!

 

Quotes

“If you didn’t understand evolution, and somebody explains it to you, you have to take your ego down a notch. You have to say: “Wait a minute. So humanity is not the endpoint of purposeful design? Wait – we’re just kind of an accident?!””

 

“I think we are currently in the middle of a major renaissance in how we do computation and how we actually think of engineering in general. I think it is shifting profoundly, almost as profoundly as when we first came up with the concept of the scientific method as a way to accumulate knowledge as a species over time. Something as profound is happening in the field of machine intelligence.”

 

“What fascinates me is that humanity’s capacity to compute has compounded over 120 years and across multiple technology modes, including mechanical devices. The main takeaway for me, which is so powerful, is that there is a reflection here of a huge phenomenon, even bigger than computers themselves: humanity’s information reserve — our knowledge, our learning — is compounding.”

 

“In terms of advice, first of all, I think that everyone should learn computer science. Do it young, do it early, do it often. Most importantly, I would encourage people, once they have had any taste of CS, to force themselves to play around with neural networks, whatever they will call it in the future. The core of it is neural networks patterned on the brain.”

 

“It sort of clicked for me that there are power laws in income, and there are power laws in the number of companies that succeed in information-age businesses. As businesses succeed, they become information-centric and global, and it tends to be a winner-takes-all dynamic. Couple that with the notion that I strongly believe every business becomes an information business over time, just at different rates of speed. … The worries around AI should be centered on the concentration of power, and I think OpenAI is spot on to say: let’s look to Google — should one company be that powerful?”

 

from Memokraat
