#CUDAN2023 @schichmax follows up by emphasizing the importance of interdisciplinary collaboration. The last graphs are from this recent EPJDS paper of ours btw: https://t.co/oSlZtOKZ9j https://t.co/hiXtWe02Xi
.@schichmax mentioned this @epj_ds paper as one of his favorite papers in his interview with @suneman at @IC2S2 #IC2S2 https://t.co/Kfo0FHtLp7
@skalskip92 Off the top of my head:
- normalize by nearest n pixels (keep aspect ratio, not square)
- it's very low res, blurs
- but most importantly: compression of just the img is an ok baseline, but what works better is a transform+compression ensemble
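The transform+compression ensemble idea above can be sketched roughly as follows. This is a minimal stand-in, not the paper's exact pipeline: grayscale images are plain 2D lists, zlib is the compressor, and the particular transforms (downsampling, thresholding, horizontal differencing) are illustrative choices of my own.

```python
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size relative to original size (lower = more redundant)."""
    return len(zlib.compress(data, 9)) / max(len(data), 1)

def transforms(img):
    """Yield byte serialisations of the image under a few simple transforms.
    `img` is a 2D list of 0-255 grayscale values; these transforms are
    toy stand-ins for real image transforms (blur, rescale, edge filters)."""
    yield bytes(p for row in img for p in row)              # identity
    yield bytes(p for row in img[::2] for p in row[::2])    # 2x downsample
    yield bytes(255 if p > 127 else 0
                for row in img for p in row)                # threshold
    yield bytes((row[i] - row[i - 1]) % 256
                for row in img for i in range(1, len(row))) # horiz. difference

def ensemble_vector(img):
    """Feature vector: one compression ratio per transform."""
    return [compression_ratio(t) for t in transforms(img)]

# toy demo: a flat image vs. a more complex gradient pattern
flat_img = [[128] * 64 for _ in range(64)]
grad_img = [[(x * y) % 256 for x in range(64)] for y in range(64)]

v_flat = ensemble_vector(flat_img)
v_grad = ensemble_vector(grad_img)
```

The resulting vectors can then feed any standard classifier or distance-based comparison; the flat image compresses far better than the gradient under every transform, which is exactly the signal the ensemble captures.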
@goodside Dropping this here as well - we recently used an analogous approach for images, but didn't do direct comparisons to ML/computer vision models due to different focus - now I'm thinking it would be cool if somebody did try! https://t.co/rGCxpFhkhq
@skalskip92 @goodside It does: https://t.co/rGCxpFhkhq (We didn't do explicit comparison to other methods as that was not the focus, but it works pretty ok for various culturally complex classification tasks)
@quasimondo @SchmidhuberAI Compression has a lot of potential for research; in our aesthetic complexity paper we showed that compression can classify art style, authorship etc https://t.co/5nPSSskAuR
@ZhiyingJ @LukeGessler Very cool! We recently found something similar for images; in our case we were more interested in exploring a multidim compression/complexity space, but also ran a few downstream classification tasks where compression (esp w img transformations) …
Simple zip compression outperforms word embeddings & huge pretrained BERTs on text classification! (by @ZhiyingJ) We were wondering if that'd work for text too when writing our aesthetic complexity paper, where we showed compression can classify art style, authorship etc
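The text-classification result referenced here pairs gzip-compressed lengths (normalized compression distance) with a k-nearest-neighbour classifier. A minimal 1-NN sketch under that reading, with toy data and helper names of my own:

```python
import gzip

def clen(s: str) -> int:
    """Length of the gzip-compressed byte string."""
    return len(gzip.compress(s.encode("utf-8")))

def ncd(a: str, b: str) -> float:
    """Normalized compression distance: small when the two strings share
    a lot of substructure that the compressor can reuse."""
    ca, cb, cab = clen(a), clen(b), clen(a + " " + b)
    return (cab - min(ca, cb)) / max(ca, cb)

def classify(query: str, train: list[tuple[str, str]]) -> str:
    """1-nearest-neighbour under NCD; train is [(text, label), ...]."""
    return min(train, key=lambda tl: ncd(query, tl[0]))[1]

train = [
    ("the cat sat on the mat and the small cat purred on the warm mat", "animals"),
    ("stock prices fell sharply as the markets reacted to interest rate news", "finance"),
]
# query shares many substrings with the "animals" example, so its NCD
# to that example is lower
label = classify("a small cat slept on the warm mat near the cat", train)
```

No training, no embeddings: the compressor itself supplies the similarity measure, which is what makes the result surprising as a baseline against pretrained models.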
Compression ensembles quantify aesthetic complexity and the evolution of visual art. EPJ Data Sci. (2023). https://t.co/gR0CAg7bUw
RT @schichmax: :)
:)
Such a cool use of PCA (and other viz tech) in art by @schichmax and colleagues! We did something related, applied to Roman wall paintings, in @kmacclin's as-yet-unpublished dissertation "Computationally Modeling Roman Domestic Art and Architecture". https:
super 🤩
Compression ensembles quantify aesthetic complexity and the evolution of visual art https://t.co/BLbIb6sqRF