@jesse_hoogland According to Tegmark, the data-generating Hamiltonian is (almost always) simple to represent due to fundamental properties of physics like locality and symmetry https://t.co/wo7qKJisNq (And those hold in our reality because of... anthropics
@AliOilhan @Salih_Durhan @EMehmetKiral @UlusanZeynel @SefikaKuzgun https://t.co/thluEnE4nN One of the countless papers looking for an answer to why generalization is even possible; it concludes that the universe prefers simple functions and is hierarchical. …
@LauraDeming Insofar as 1) humans are ‘harmonic computers’, and 2) harmonics and resonance are simply manifestations of symmetry dynamics, it’s expected we’ll find such physical laws intuitive. See also Lin, Tegmark, & Rolnick on neural nets as natural
@BasedBeffJezos @bayeslord the real question is how you perform affine transformations and non-linear operators on vectors with the hardware. i suppose that would be the secret sauce. anyways, this is a neat & relevant paper by Tegmark before he went
@quantian1 lol, so true. :) Reminds me of this paper from Max Tegmark and gang: https://t.co/2ST7tpr4zS
Max Tegmark in 2016 made a case for why deep neural networks “work” — problems we care about are based on compositions of simple, local, symmetric distributions drawn from the physics of the natural world. https://t.co/HxGDdiGyV8
@desipsmith Some of us felt similarly about hype-language in another of Tegmark’s neural net papers https://t.co/ecLKeg0KzY
@tylerraye @KordingLab @tyrell_turing This is one I had in mind, though I'm totally agnostic about it, just remembered the issue of the structure of the data being front and center. https://t.co/qUWBxN9zzO
@docente_errante ok, found it https://t.co/lIL0t8dETl
https://t.co/jx60HtY9rr probably should have read this years ago
@MSlapik @tegmark Check out the abstract! https://t.co/KHxYDXJ6oc
RT @cosmicfibretion: @realGeorgeHotz @block_saw AI and human intelligence could belong to the same universality class. They're abstract ext…
@cosmicfibretion @realGeorgeHotz @block_saw Check out the abstract! https://t.co/KHxYDXJ6oc
@realGeorgeHotz @block_saw AI and human intelligence could belong to the same universality class. They're abstract extensions of the human mind. https://t.co/e5HRL9g0Kr https://t.co/tjxQMhVZJZ https://t.co/otlcLz0But
@IgMosqueira Check out the abstract! https://t.co/KHxYDXJ6oc
@markproxy @pkghosh99 @jachiam0 @GaryMarcus In (most) conventional ML and DL/NN, polynomials are used to obtain the analytic surface (see https://t.co/0kSIJwV84R) whereas grammar is ignored; LLMs only implicitly encode a grammar used during generation.
@KiroSasuke @ilyasut I believe everyone is aware of the classic Hornik paper: https://t.co/SYqHLWFw0v Here is another one which says something more specific about "reversing the generative process": Why does deep and cheap learning work so well? https://t
@linasvepstas @IntuitMachine Check out the abstract! https://t.co/KHxYDXJ6oc
@IntuitMachine This one: Henry W. Lin, Max Tegmark, and David Rolnick "Why does deep and cheap learning work so well?" 3 Aug 2017 https://t.co/0BVMEfr65W
@ameasure @DrewBHeadley @EdwardRaffML @micahgoldblum Check out the abstract! https://t.co/KHxYDXJ6oc
@davidwhogg @ylecun what did you think of this one https://t.co/6z7sjQPyYz
I think one clue toward resolving a question that has been nagging me since reading the paper mentioned above might be to read the paper at this link. It tries to understand the mechanism behind deep learning's success using information theory and the renormalization group. https://t.co/7o0nuDzUIn
@asdf506f @svpino Check out the abstract! https://t.co/KHxYDXJ6oc
@svpino Even if it is possible to approximate any function with a single-layer (flat) network (by the Kolmogorov theorem), the cost of this approximation could be ridiculous. For example, multiplication of n variables requires 2^n neurons in the hidden layer of a flat network
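(A minimal numerical sketch, my own illustration rather than a verbatim excerpt from the tweet or the paper: the four-neuron multiplication trick Lin, Tegmark & Rolnick use to show why depth beats width here. The softplus choice, the function names, and the `scale` parameter are assumptions made for the sketch.)

```python
import math

def softplus(x):
    # Smooth non-linearity with a non-zero second derivative at the origin.
    return math.log1p(math.exp(x))

# softplus''(0) = sigmoid'(0) = 1/4
SIGMA_PP0 = 0.25

def approx_multiply(u, v, scale=0.01):
    """Approximate u*v with four softplus 'neurons': shrink the inputs so the
    quadratic Taylor term dominates, then undo the scaling on the output."""
    u, v = u * scale, v * scale
    m = (softplus(u + v) + softplus(-u - v)
         - softplus(u - v) - softplus(v - u)) / (4.0 * SIGMA_PP0)
    return m / scale ** 2

print(approx_multiply(3.0, -2.0))  # ~ -6.0
print(approx_multiply(7.0, 5.0))   # ~ 35.0
```

The paper's point, echoed in the tweet above, is that multiplying n inputs to comparable accuracy with a single hidden layer takes on the order of 2^n such units, while a deep network that combines pairs layer by layer needs only about 4n neurons.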
RT @asdf506f: It's interesting why deep learning networks (DNN) are so good, especially for a certain class of tasks (classification and extrapol…
It's interesting why deep learning networks (DNN) are so good, especially for a certain class of tasks (classification and extrapolation of images). There are technical answers to this, such as that the multi-layer structure gives a "good" class of approximable functions
Relevant: https://t.co/of37kBggtv
@nabavinoushin Check out the abstract! https://t.co/KHxYDXJ6oc
Why does #deep and #cheap learning work so well? https://t.co/7Ssp4ow3ER
Some claim Deep Learning is just a black box and is not grounded in theory. This is clearly not a fair statement. This is (old) work from Lin et al. showing a profound connection between Deep Learning and the renormalization group in statistical physics. htt
Henry W. Lin and Max Tegmark, Why does deep and cheap learning work so well?, arXiv:1608.08225 https://t.co/huKmMklmno
RT @saqibali_ca: [1608.08225] Why does deep and cheap learning work so well? https://t.co/dYAK1u35YU H/T: @andypotato
[1608.08225] Why does deep and cheap learning work so well? https://t.co/dYAK1u35YU H/T: @andypotato
Why does #deeplearning work so well? Because the universe is low-order polynomial. So inspired by this article from physicists. Feels like this may also explain the recently discovered double descent phenomenon https://t.co/eZNlficQMs
@benlandis Neural nets and differential equations may work well because of symmetries and locality in physics and global dependencies in biology. Limited to certain complexity classes. https://t.co/DXxWN8UFVM 2/2
@MatjazLeonardis @dela3499 @ernsterlanson @imothyjones ...Why Does Deep and Cheap Learning Work So Well? https://t.co/B8yt6DaoUa
RT @TalKachman: for the deep learning fans, please please read the paper "Why does deep and cheap learning work so well?" https://t.co/TunS…
for the deep learning fans, please please read the paper "Why does deep and cheap learning work so well?" https://t.co/TunSebNl5Z
@jacobmbuckman @yudapearl re: NN prefer the functions prevalent in nature @tegmark & @david_rolnick wrote paper on this: https://t.co/UMgJdNGfWU
@LauraDeming Occam's Razor / Minimum Kolmogorov/algorithmic Complexity. Higher-order polynomials are more complex, and algorithmic compression/simplicity is the name of the game. There's also an interesting hypothesized link between this phenomenon and neu
@DaleInnis BTW, here's the kinds of things I'm looking for. https://t.co/WsQiH9jAJF https://t.co/UW8J6F8DMY
@IntuitMachine As you can tell from my Twitter handle, I like that paper. Another one you might want to look at (and references) : https://t.co/B65RgxzrLl
@nabes82 https://t.co/dnypTnfY5Q — the correspondence between the renormalization group and RBMs https://t.co/0rXr6LFMym — the information bottleneck principle https://t.co/pGKpY3Y3HU https://t.co/hxll3O53O5 — the correspondence between tensor networks and RBMs, the holographic principle 🐈
Why does deep and cheap learning work so well? https://t.co/8bMwoWPHjJ
RT @cbennett10: Why does #deeplearning work so well? A #physics inspired approach by Lin & Tegmark [PDF] https://t.co/ivIk3r2Rs8 #ml #co…
@SaudenoBR There are mathematical and physical justifications: https://t.co/8Y8pfNeSmz
RT @peio: Physics offers an explanation for why neural networks come close to a universal solution for problems that mathematically ought to b…
Physics offers an explanation for why neural networks come close to a universal solution for problems that mathematically ought to be much more complex: https://t.co/toUY54gRq1 It also reminded me of how Feynman talks about the differences between physics and mathematics: https://t.
Interesting paper... Happy New Year! https://t.co/eDIas4hOCY
Why does deep and cheap learning work so well? https://t.co/CFkBvulY7d
RT @PeterCochrane: How many times in the history Maths and Physics have we discovered domains where complexity in one dimension translates…
[1608.08225] Why does deep and cheap learning work so well? https://t.co/Go1NuAVuQv
How many times in the history Maths and Physics have we discovered domains where complexity in one dimension translates into simplicity in another....and now in Deep Learning Neural Nets! https://t.co/iIdA8A9laK https://t.co/DM8KbTmmcP
'We explore how properties frequently encountered in physics such as symmetry, locality, compositionality, and polynomial log-probability translate into exceptionally simple neural networks.' https://t.co/GGub3lGVpj
Really interesting article on the correlation between deep learning and physics. Why does deep and cheap learning work so well? https://t.co/ANVd2XsDSQ
@davidmanheim @reasonisfun @DavidDeutschOxf @_FitCrit @dchackethal @dela3499 I don't think this is a good way to think of ML models due to the no free lunch theorem. There are no universal models of ML. However, have you seen: https://t.co/kWTzyieTW1
Why does deep and cheap learning work so well? https://t.co/crvhrA3Dvg
Does this idea explain the claim of DNNs fitting even random noise and still generalizing well? In any case, the paper needs to be read https://t.co/0NjUhpgz3N
Neural networks do not need to approximate an infinitude of possible mathematical functions but only a tiny subset of the simplest ones https://t.co/YeYNnwwsRT https://t.co/KrRszkvJKo
Why Does Deep and Cheap Learning Work So Well? #deeplearning https://t.co/h6yBT8NzLL https://t.co/op4P5od6nm
Best Sunday morning literature in a long time, answering my question of why Deep Learning actually works so well... thanks @MIT and @Harvard! #AI #DeepLearning https://t.co/nl3z1kd7OO
RT @danicuki: Why does deep and cheap learning work so well? Properties frequently encountered in physics such as symmetry, locality, compo…
RT @Avatrode: Tesla’s neural network makes Autopilot better with every update. An article from MIT helped me understand why. “Deep learni…
Why does deep and cheap learning work so well? #NeuralNetworks #MachineLearning https://t.co/NhSY4WUsOT
Tesla’s neural network makes Autopilot better with every update. An article from MIT helped me understand why. “Deep learning ... has helped dramatically improve... visual object recognition…” https://t.co/WtnEH61uF6 @tesla @elonmusk @Teslarati
Why does deep and cheap learning work so well? #deeplearning #neuralnetwork #cheaplearningwork https://t.co/uDAlJFPio5
@sigfpe @peligrietzer Btw I think the paper I linked in that old thread (now inaccessible due to my account being banned) was this one https://t.co/zg8UWhk9ia
This is cool indeed. A good read that sheds some light on the applicability of ML to physical problems - https://t.co/voSB27A6yb
@rctatman Totally on board with you! One possibly interesting additional line of evidence/reasoning: a physics-based analysis of why deep learning even works for what it works for: https://t.co/YjoNLeHNWC
Why does deep and cheap learning work so well? Properties frequently encountered in physics such as symmetry, locality, compositionality, and polynomial log-probability translate into exceptionally simple neural networks: https://t.co/3zvDyIbHBC
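(A hedged illustration, not a quote from the abstract above: what "polynomial log-probability" means in this setting. The Ising-like form below is a standard example chosen for the sketch.)

```latex
% The data-generating distribution is written with a Hamiltonian H(x) = -ln p(x) + const,
% and physics-derived Hamiltonians are typically polynomials of very low degree:
\[
  p(\mathbf{x}) \;\propto\; e^{-H(\mathbf{x})},
  \qquad
  H(\mathbf{x}) \;=\; h_0 + \sum_i h_i x_i + \sum_{i<j} J_{ij}\, x_i x_j + \cdots
\]
% Locality sets most couplings J_{ij} to zero (only neighbouring sites interact),
% and symmetry ties many of the surviving coefficients together, so H -- and hence a
% network that represents it -- needs comparatively few parameters.
```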
Why does deep learning even work? Because the universe (and contents within) is deeply hierarchical, and so are deep neural networks. (submitted by /u/nickbluth2) https://t.co/EqMF6qRIhC
RT @pdhsu: Why does deep learning work so well? According to Max Tegmark, you might instead call it "cheap" learning that gains efficiency…
@EricTopol @geoffreyhinton @nxthompson @Google @Synced_Global @tonypeng_Synced Neural nets reduce computationally to nested systems of differential equations (backprop). This works well because of symmetries and locality in physics and global dependencies
@Daichi__Konno Thank you, this article is really interesting! Incidentally (this may stray a bit from the discussion here), Max Tegmark discusses deep learning from the standpoint of "why does it work so well even though the number of parameters is so *small*?", which is also fascinating. https://t.co/W0hvhwg2l1
RT @tripdancer0916: Friston et al.'s hierarchical variational-Bayes inference model... couldn't it be seen as a refinement of this figure from Tegmark's deep learning paper (https://t.co/W0hvhwg2l1)? https://t.co/Wr35XHh…
Friston et al.'s hierarchical variational-Bayes inference model... couldn't it be seen as a refinement of this figure from Tegmark's deep learning paper (https://t.co/W0hvhwg2l1)? https://t.co/Wr35XHhpNk
Cut the theatrics! --> https://t.co/vgN0YZdxJL
Check out this quote. "https://t.co/3fLwOnesOd" https://t.co/veunUmSsYS
This is where we see mud thrown around the irreducible complexity of neural nets (https://t.co/gTR97vzIWj), human agency etc... I suspect it also reignites the debate between CompSci and Social Scientists, those who believe in the uniqueness of humanity an
An interesting paper when considering the necessity or desirability of interpretability for AI ethics
100%
Why does deep and cheap learning work so well? Henry W. Lin (Harvard), Max Tegmark (MIT), David Rolnick (MIT) https://t.co/OgQJYAyxKB
RT @fmkz___: Deep learning works in the first place because the things it targets are physically simple https://t.co/OWaAdrlc2x
Deep learning works in the first place because the things it targets are physically simple https://t.co/OWaAdrlc2x
Why does deep and cheap learning work so well? https://t.co/0ccMk9h0Xv
Reading https://t.co/WfdWOPln7b , https://t.co/ZkXl50rMjL , and https://t.co/CGThAlyyBg Lin et al: "Physical Hamiltonians are sparse and low-order; DL working has something to do with this." Mehta et al: "You can renormalize RBMs exactly like the definit
Our universe can be described by polynomial functions of low order and that’s the reason deep learning works and works well. From “Why does deep and cheap learning work so well?” (https://t.co/pBsX1sM1zQ)
renormalization group and DL: https://t.co/ckDMcDKqNk https://t.co/rtmdrRxnLW https://t.co/IfFT5P2bZQ
@zzznah @hardmaru Why does deep and cheap learning work so well? (2017) (Lin, @tegmark, Rolnick) https://t.co/Y5lShFd0gi The most interesting explanation I've seen so far, from a physics perspective. For reasons we don’t fully understand, the laws of ph
RT @tripdancer0916: @brenazmsc This one from Max Tegmark argues from the direction that machine learning works because "the world is described this way (= these universal properties hold)" https://t.co/W0hvhwg2l1
@brenazmsc This one from Max Tegmark argues from the direction that machine learning works because "the world is described this way (= these universal properties hold)" https://t.co/W0hvhwg2l1
Great read on Deep Neural Networks and the power of evolution. Check it out: "Evolution has somehow settled on a brain structure that is ideally suited to teasing apart the complexity of the universe" Link: https://t.co/94MIr305rl Ref: https://t.co/s0