Unthinking Photography - Latest Content
https://unthinking.photography
Kirby
Thu, 04 Apr 2024 00:00:00 +0100

Unthinking Photography is an online resource that explores photography's increasingly automated, networked life.

The Objections of Lady Lovelace: Diffusion Models and the Synthetic Muse https://unthinking.photography/articles/the-objections-of-lady-lovelace Thu, 04 Apr 2024 00:00:00 +0100
On The Fermentation of Digital Media https://unthinking.photography/articles/fermentation-of-digital-media Wed, 03 Apr 2024 00:00:00 +0100
Ways of Machine Seeing (Teachers Activities) https://unthinking.photography/imgexhaust/ways-of-machine-seeing-teachers-activities Wed, 03 Apr 2024 00:00:00 +0100
Models All The Way Down https://unthinking.photography/imgexhaust/models-all-the-way-down Mon, 01 Apr 2024 00:00:00 +0100
Elemental Computing https://unthinking.photography/imgexhaust/elemental-computing Mon, 18 Mar 2024 00:00:00 +0000
Is this the Middle East? https://unthinking.photography/imgexhaust/is-this-the-middle-east Mon, 18 Mar 2024 00:00:00 +0000
Activism https://unthinking.photography/articles/activism-ii Tue, 27 Feb 2024 00:00:00 +0000
Alternative Networks https://unthinking.photography/articles/alternative-networks Tue, 27 Feb 2024 00:00:00 +0000
NUCA Camera https://unthinking.photography/imgexhaust/nuca-camera Sat, 27 Jan 2024 00:00:00 +0000
Perfectly Clear: Interview with Jessica Wilson https://unthinking.photography/articles/interview-with-jessica-wilson Fri, 26 Jan 2024 00:00:00 +0000
Paragraphica https://unthinking.photography/imgexhaust/paragraphica Mon, 18 Dec 2023 00:00:00 +0000
Philosophy of the Digital Image https://unthinking.photography/imgexhaust/philosophy-of-the-digital-image Mon, 04 Dec 2023 00:00:00 +0000
Nightshade https://unthinking.photography/imgexhaust/nightshade Mon, 06 Nov 2023 00:00:00 +0000
Wonder3D https://unthinking.photography/imgexhaust/wonder3d Sat, 04 Nov 2023 00:00:00 +0000
Abundance https://unthinking.photography/articles/abundance Thu, 26 Oct 2023 00:00:00 +0100
Activism https://unthinking.photography/articles/activism Thu, 26 Oct 2023 00:00:00 +0100
Communities https://unthinking.photography/articles/communities Thu, 26 Oct 2023 00:00:00 +0100
History https://unthinking.photography/articles/history Thu, 26 Oct 2023 00:00:00 +0100
Glance Back https://unthinking.photography/imgexhaust/glance-back Wed, 18 Oct 2023 00:00:00 +0100
Seeing Infrastructure https://unthinking.photography/imgexhaust/seeing-infrastructure Tue, 17 Oct 2023 00:00:00 +0100
Aesthetics Wiki https://unthinking.photography/imgexhaust/aesthetics-wiki Tue, 26 Sep 2023 00:00:00 +0100
The Performativity of Ground-Truth Data https://unthinking.photography/articles/the-performativity-of-ground-truth-data Wed, 06 Sep 2023 00:00:00 +0100
Nouf.io https://unthinking.photography/imgexhaust/nouf-io Thu, 17 Aug 2023 00:00:00 +0100
DOT cam https://unthinking.photography/imgexhaust/dot-cam Fri, 11 Aug 2023 00:00:00 +0100
Atlas of the Cloud https://unthinking.photography/imgexhaust/atlas-of-the-cloud Thu, 10 Aug 2023 00:00:00 +0100
New Images: The Ecological Footprint Of Photography https://unthinking.photography/imgexhaust/new-images-the-ecological-footprint-of-photography Wed, 09 Aug 2023 00:00:00 +0100
Candy-Glazed Eyes of Haunted Machines https://unthinking.photography/imgexhaust/candy-glazed-eyes-of-haunted-machines Tue, 08 Aug 2023 00:00:00 +0100
History and environmental impact of digital image formats https://unthinking.photography/articles/history-and-environmental-impact-of-digital-image-formats Fri, 04 Aug 2023 00:00:00 +0100
Machine Unlearning https://unthinking.photography/imgexhaust/machine-unlearning Tue, 01 Aug 2023 00:00:00 +0100
Classifying Humans https://unthinking.photography/imgexhaust/classifying-humans Tue, 25 Jul 2023 00:00:00 +0100
Inside the AI factory https://unthinking.photography/imgexhaust/inside-the-ai-factory Thu, 20 Jul 2023 00:00:00 +0100
First generation women workers in Indian towns and villages employed as annotators https://unthinking.photography/imgexhaust/first-generation-women-workers-in-indian-towns-and-villages-employed-as-annotators Tue, 11 Jul 2023 00:00:00 +0100
Self-driving cars are here and they’re watching you https://unthinking.photography/imgexhaust/self-driving-cars-are-here-and-they-re-watching-you Tue, 04 Jul 2023 00:00:00 +0100
Is AI Killing the Stock Industry? A Data Perspective https://unthinking.photography/imgexhaust/is-ai-killing-the-stock-industry Tue, 27 Jun 2023 00:00:00 +0100
Infant Starter Pack https://unthinking.photography/imgexhaust/infant-starter-pack Mon, 26 Jun 2023 00:00:00 +0100
World Imagining Game https://unthinking.photography/commissions/between-worlds Fri, 23 Jun 2023 00:00:00 +0100
A Vernacular of File Formats https://unthinking.photography/imgexhaust/a-vernacular-of-file-formats Mon, 19 Jun 2023 00:00:00 +0100
Nephogram https://unthinking.photography/imgexhaust/https-andresgaleano-eu-es-nephogram-app Fri, 09 Jun 2023 00:00:00 +0100
Our smartphone cameras are not ready for the effects of climate change https://unthinking.photography/imgexhaust/our-smartphone-cameras-are-not-ready-for-the-effects-of-climate-change Fri, 09 Jun 2023 00:00:00 +0100
Prompt-to-Prompt Image Editing with Cross-Attention Control https://unthinking.photography/imgexhaust/prompt-to-prompt-image-editing-with-cross-attention-control Tue, 30 May 2023 00:00:00 +0100
Drag Your GAN https://unthinking.photography/imgexhaust/drag-your-gan Mon, 29 May 2023 00:00:00 +0100
AI machines aren’t ‘hallucinating’. But their makers are https://unthinking.photography/imgexhaust/ai-machines-aren-t-hallucinating-but-their-makers-are Mon, 08 May 2023 00:00:00 +0100
“You can’t look into my eyes.” The Aesthetics of Small-File Cinema https://unthinking.photography/articles/you-cant-look-into-my-eyes-the-aesthetics-of-small-file-cinema Thu, 04 May 2023 00:00:00 +0100
Archive of Lost Mothers https://unthinking.photography/imgexhaust/archive-of-lost-mothers Thu, 20 Apr 2023 00:00:00 +0100
High-resolution image reconstruction with latent diffusion models from human brain activity https://unthinking.photography/imgexhaust/high-resolution-image-reconstruction-with-latent-diffusion-models-from-human-brain-activity Tue, 18 Apr 2023 00:00:00 +0100
Latent Imaging and Imagining https://unthinking.photography/imgexhaust/latent-imaging-and-imagining Tue, 18 Apr 2023 00:00:00 +0100
On camera - Pocket guide to surveillance in the urban habitat https://unthinking.photography/imgexhaust/pocket-guide-to-surveillance-in-the-urban-habitat Tue, 18 Apr 2023 00:00:00 +0100
How to see like a machine https://unthinking.photography/imgexhaust/how-to-see-like-a-machine Wed, 12 Apr 2023 00:00:00 +0100
Delete By Default https://unthinking.photography/articles/delete-by-default Tue, 11 Apr 2023 00:00:00 +0100
Tamagotchi Pix https://unthinking.photography/imgexhaust/tamagotchi-pix Sun, 09 Apr 2023 00:00:00 +0100
Open-licensed Image Dataset of Surveillance Tech from EFF https://unthinking.photography/imgexhaust/open-licensed-image-dataset-of-surveillance-tech-from-eff Sun, 26 Mar 2023 00:00:00 +0000
Various and Casual Occursions https://unthinking.photography/imgexhaust/various-and-casual-occursions Wed, 22 Mar 2023 00:00:00 +0000
Opting out is not enough https://unthinking.photography/imgexhaust/opting-out-is-not-enough Tue, 14 Mar 2023 00:00:00 +0000
Samsung moon "space zoom" shots are fake https://unthinking.photography/imgexhaust/samsung-moon-space-zoom-shots-are-fake Tue, 14 Mar 2023 00:00:00 +0000
Memo Akten on the TikTok beauty filter [thread] https://unthinking.photography/imgexhaust/memo-akten-on-the-tiktok-beauty-filter-thread Thu, 09 Mar 2023 00:00:00 +0000
ChatGPT Is a Blurry JPEG of the Web https://unthinking.photography/imgexhaust/chatgpt-is-a-blurry-jpeg-of-the-web Mon, 27 Feb 2023 00:00:00 +0000
AI filmmaking with Runway ML https://unthinking.photography/imgexhaust/ai-filmmaking-with-runway-ml Thu, 23 Feb 2023 00:00:00 +0000
Screen Walks - Surveillance Playlist https://unthinking.photography/imgexhaust/screen-walks-surveillance-playlist Thu, 23 Feb 2023 00:00:00 +0000
Critical Topics: AI Images https://unthinking.photography/imgexhaust/critical-topics-ai-images Wed, 22 Feb 2023 00:00:00 +0000
Screen Walks - Intimacy Playlist https://unthinking.photography/imgexhaust/screen-walks-intimacy-playlist Thu, 16 Feb 2023 00:00:00 +0000
Do you speak English? I'm a Street Photographer https://unthinking.photography/imgexhaust/do-you-speak-english Wed, 15 Feb 2023 00:00:00 +0000
Imagewashing https://unthinking.photography/imgexhaust/imagewashing Wed, 15 Feb 2023 00:00:00 +0000
Small File Photo Festival https://unthinking.photography/commissions/small-file-photo-festival Sat, 28 Jan 2023 00:00:00 +0000
Sensing Worlds, a conversation with Jennifer Gabrys https://unthinking.photography/articles/a-coversation-with-jennifer-gabrys Tue, 17 Jan 2023 00:00:00 +0000
Screen Walks - Ecology Playlist https://unthinking.photography/imgexhaust/screen-walks-ecology-playlist Thu, 15 Dec 2022 00:00:00 +0000
Ten Years Of Image Synthesis https://unthinking.photography/imgexhaust/ten-years-of-image-synthesis Thu, 10 Nov 2022 00:00:00 +0000
Screen Walks - History Playlist https://unthinking.photography/imgexhaust/screen-walks-history-playlist Tue, 04 Oct 2022 00:00:00 +0100
A·kin https://unthinking.photography/commissions/a-kin Mon, 03 Oct 2022 00:00:00 +0100
How to read an AI image https://unthinking.photography/imgexhaust/how-to-read-an-ai-image Sun, 02 Oct 2022 00:00:00 +0100
Screen Walks - Virtual Worlds and Games https://unthinking.photography/imgexhaust/screen-walks-virtual-worlds-and-games Fri, 30 Sep 2022 00:00:00 +0100
Private medical record photos in popular AI training data set https://unthinking.photography/imgexhaust/private-medical-record-photos-in-popular-ai-training-data-set Wed, 21 Sep 2022 00:00:00 +0100
Not spam: a conversation with Shelby Shaw https://unthinking.photography/articles/not-spam-a-conversation-with-shelby-shaw Tue, 14 Jun 2022 00:00:00 +0100
Screen Walks - Curating Playlist https://unthinking.photography/imgexhaust/screen-walks-curating-playlist Wed, 09 Mar 2022 00:00:00 +0000
The state of the media on the web in 2021 https://unthinking.photography/imgexhaust/stats-about-images-on-the-web-in-2021 Wed, 15 Dec 2021 00:00:00 +0000
Messaging groups and the digital black-market in Cuba https://unthinking.photography/articles/messaging-groups-and-the-digital-black-market-in-cuba Sat, 11 Dec 2021 00:00:00 +0000
Shadow Growth https://unthinking.photography/commissions/shadow-growth Mon, 08 Nov 2021 00:00:00 +0000
When I image the earth, I imagine another https://unthinking.photography/commissions/when-i-image-the-earth-i-imagine-another Tue, 02 Nov 2021 00:00:00 +0000
Careful Networks https://unthinking.photography/commissions/careful-network Fri, 29 Oct 2021 00:00:00 +0100
Interview with Joana Moll https://unthinking.photography/articles/interview-with-joana-moll Fri, 08 Oct 2021 00:00:00 +0100
Basic Necessities https://unthinking.photography/commissions/basic-necessities Fri, 01 Oct 2021 00:00:00 +0100
Screen Walks - Capitalism Playlist https://unthinking.photography/imgexhaust/screen-walks-capitalism-playlist Wed, 04 Aug 2021 00:00:00 +0100
Flash Fictions on Alternative Networks https://unthinking.photography/commissions/flash-fictions-on-alternative-networks Sun, 01 Aug 2021 00:00:00 +0100
4004 https://unthinking.photography/commissions/4004 Tue, 27 Jul 2021 00:00:00 +0100
Photo App Turns Users Into Unwitting Spies for US Military https://unthinking.photography/imgexhaust/photo-app-turns-users-into-unwitting-spies-for-us-military Fri, 02 Jul 2021 00:00:00 +0100
What does the algorithm see? https://unthinking.photography/imgexhaust/what-does-the-algorithm-see Fri, 25 Jun 2021 00:00:00 +0100

We live in a world full of images made by machines for machines, from facial recognition technologies to automatic license plate readers and AI image categorisation. What’s more, these new ‘ways of seeing’ are coupled to ways of knowing and foment action in the real world. In this panel, which brings together artistic practice and research, we ask: how is machine vision influencing contemporary visual cultures? What kinds of social differences are produced or reproduced by these imaging systems? How might we begin to understand the technological substrate of standards, codecs, formats, training data sets and algorithms that make up the new seeing machines? And how might artistic practice provide a space for seeing differently?

Discussion between Rosa Menkman, Joanna Zylinska and Dr Rachel O’Dwyer

Watch at https://www.youtube.com/watch?v=kVHQu41cTlU

]]>
Screen Walks - Non-Human Playlist https://unthinking.photography/imgexhaust/screen-walks-non-human-playlist Sat, 19 Jun 2021 00:00:00 +0100
Alan Warburton - RGBFAQ https://unthinking.photography/imgexhaust/alan-warburton-fgbfaq Fri, 18 Jun 2021 00:00:00 +0100

Alan Warburton - RGBFAQ

2020

https://alanwarburton.co.uk/rgbfaq

Synthetic data is increasingly sought after as a ‘clean’ alternative to real-world data sets, which are often biased, unethically sourced or expensive to create. And while CGI data seems to avoid many of these pitfalls, my argument aims from the outset to consider whether the virtual world is as clean and steady as we think. I try to catalogue the ‘hacks’ used to construct the foundations of simulated worlds and suggest that the solutions of early computer graphics create a technical debt that might be less than ideal material on which to build the foundations of yet another generation of technology.

]]>
Constraint Systems https://unthinking.photography/imgexhaust/constraint-systems Fri, 18 Jun 2021 00:00:00 +0100
Screen Walks - Marketplaces Playlist https://unthinking.photography/imgexhaust/screen-walks-marketplaces Fri, 18 Jun 2021 00:00:00 +0100
These creepy fake humans herald a new age in AI https://unthinking.photography/imgexhaust/these-creepy-fake-humans-herald-a-new-age-in-ai Fri, 18 Jun 2021 00:00:00 +0100

“To generate its synthetic humans, Datagen first scans actual humans. It partners with vendors who pay people to step inside giant full-body scanners that capture every detail from their irises to their skin texture to the curvature of their fingers. The startup then takes the raw data and pumps it through a series of algorithms, which develop 3D representations of a person’s body, face, eyes, and hands.”

]]>
Zoom In: An Introduction to Circuits https://unthinking.photography/imgexhaust/zoom-in-an-introduction-to-circuits Fri, 18 Jun 2021 00:00:00 +0100
Shedding More Light on How Instagram Works https://unthinking.photography/imgexhaust/shedding-more-light-on-how-instagram-works Thu, 10 Jun 2021 00:00:00 +0100
Screen Walks - Circulation Playlist https://unthinking.photography/imgexhaust/screen-walks-playlist Wed, 09 Jun 2021 00:00:00 +0100
Synthetic Messenger https://unthinking.photography/imgexhaust/synthetic-messenger Tue, 08 Jun 2021 00:00:00 +0100

Synthetic Messenger

Tega Brain and Sam Lavigne.

A botnet that artificially inflates the value of climate news. Every day it searches the internet for news articles covering climate change. Then 100 bots visit each article and click on every ad they can find.

In an algorithmic media landscape the value of news is determined by engagement statistics. Media outlets rely on advertising revenue earned through page visits and ad clicks. These engagement signals produce patterns of value that influence what stories and topics get future coverage. … At a time when our action or inaction has distinct atmospheric effects, the news we see and the narratives that shape our beliefs also directly shape the climate. What if media itself were a form of climate engineering, a space where narrative becomes ecology?

]]>
RadiTube — A Search Engine for Radical Content on YouTube https://unthinking.photography/imgexhaust/raditube-a-search-engine-for-radical-content-on-youtube Mon, 07 Jun 2021 00:00:00 +0100
source: https://www.reddit.com/r/interestingasfuck/duplicates/nh55at/german_ems_uses_qr_technology_to_discourage/ / https://www.e... https://unthinking.photography/imgexhaust/source-https-www-reddit-com-r-interestingasfuck-duplicates-nh55at-german-ems-uses-qr-technology-to-discourage-https-www-e Mon, 07 Jun 2021 00:00:00 +0100

source: https://www.reddit.com/r/interestingasfuck/duplicates/nh55at/german_ems_uses_qr_technology_to_discourage/ / https://www.ems1.com/scene-safety/articles/german-ems-uses-qr-technology-to-discourage-illegal-photography-at-emergency-scenes-nSnu8S5qUcqZYIf4/

]]>
Mapped: A Detailed Map of the Online World in Incredible Detail https://unthinking.photography/imgexhaust/mapped-a-detailed-map-of-the-online-world-in-incredible-detail Sun, 06 Jun 2021 00:00:00 +0100
Screen Walks - Aesthetics Playlist https://unthinking.photography/imgexhaust/screen-walks-aesthetics-playlist Sun, 06 Jun 2021 00:00:00 +0100
Shane Huntley on Twitter https://unthinking.photography/imgexhaust/shane-huntley-on-twitter Sun, 06 Jun 2021 00:00:00 +0100
Most everyone in the critical-studies and eco-critical pantheon — from Socrates to Marx to Arendt to Derrida; from Carolyn... https://unthinking.photography/imgexhaust/most-everyone-in-the-critical-studies-and-eco-critical-pantheon-from-socrates-to-marx-to-arendt-to-derrida-from-carolyn Fri, 28 May 2021 00:00:00 +0100

Most everyone in the critical-studies and eco-critical pantheon — from Socrates to Marx to Arendt to Derrida; from Carolyn Merchant to Rachel Carson, Bakhtin to Bookchin — worried about this kind of calculative abstraction, taking us away from the material, real effects of what it is we are quantifying, counting, calculating about.

Can Hope be Calculated? Multiplying and Dividing Carbon, before and after Corona by Caroline Sinders & Jamie Allen
https://a-nourishing-network.radical-openness.org/can-hope-be-calculated-multiplying-and-dividing-carbon-before-and-after-corona.html
]]>
Evolved Virtual Creatures https://unthinking.photography/imgexhaust/evolved-virtual-creatures Thu, 27 May 2021 00:00:00 +0100

https://karlsims.com/evolved-virtual-creatures.html

Evolved Virtual Creatures

Karl Sims,  1994

This video shows results from a research project involving simulated Darwinian evolutions of virtual block creatures. A population of several hundred creatures is created within a supercomputer, and each creature is tested for its ability to perform a given task, such as the ability to swim in a simulated water environment. Those that are most successful survive, and their virtual genes, containing coded instructions for their growth, are copied, combined, and mutated to make offspring for a new population. The new creatures are again tested, and some may be improvements on their parents. As this cycle of variation and selection continues, creatures with more and more successful behaviors can emerge. The creatures shown are results from many independent simulations in which they were selected for swimming, walking, jumping, following, and competing for control of a green cube.

]]>
google-research/google-research https://unthinking.photography/imgexhaust/google-research-google-research Thu, 27 May 2021 00:00:00 +0100
Documenting During Internet Shutdowns https://unthinking.photography/imgexhaust/documenting-during-internet-shutdowns Wed, 26 May 2021 00:00:00 +0100
What Does the Algorithm See 2? https://unthinking.photography/imgexhaust/what-does-the-algorithm-see-2 Tue, 25 May 2021 00:00:00 +0100
Within The Terms And Conditions https://unthinking.photography/commissions/within-the-terms-and-conditions Wed, 12 May 2021 00:00:00 +0100
Screen Walks - Machine Learning Playlist https://unthinking.photography/imgexhaust/screen-walks-machine-learning-playlist Fri, 07 May 2021 00:00:00 +0100
Screen Walks - Performance Playlist https://unthinking.photography/imgexhaust/screen-walks-performance-playlist Thu, 15 Apr 2021 00:00:00 +0100
Screen Walks - Social Media Playlist https://unthinking.photography/imgexhaust/screen-walks-social-media-playlist Wed, 14 Apr 2021 00:00:00 +0100
Screen Walks - Web Maps Playlist https://unthinking.photography/imgexhaust/screen-walks-web-maps-playlist Thu, 01 Apr 2021 00:00:00 +0100
Screen Walks - Colonialism Playlist https://unthinking.photography/imgexhaust/screen-walks-colonialism-playlist Sat, 27 Mar 2021 00:00:00 +0000
Screen Walks - Research Playlist https://unthinking.photography/imgexhaust/screen-walks-research-playlist Thu, 11 Mar 2021 00:00:00 +0000
Letterlocking https://unthinking.photography/imgexhaust/letterlocking Tue, 09 Mar 2021 00:00:00 +0000

Letterlocking

Unlocking history through automated virtual unfolding of sealed documents imaged by X-ray microtomography.

Research from Jana Dambrogio, Amanda Ghassaei, Daniel Starza Smith, Holly Jackson, Martin L. Demaine, Graham Davis, David Mills, Rebekah Ahrendt, Nadine Akkerman, David van der Linden & Erik D. Demaine

We present a fully automatic computational approach for reconstructing and virtually unfolding volumetric scans of a locked letter with complex internal folding, producing legible images of the letter’s contents and crease pattern while preserving letterlocking evidence. 

https://www.nature.com/articles/s41467-021-21326-w

]]>
MyHeritage Deep Nostalgia™, deep learning technology to animate the faces in still family photos - MyHeritage https://unthinking.photography/imgexhaust/myheritage-deep-nostalgia-deep-learning-technology-to-animate-the-faces-in-still-family-photos-myheritage Tue, 09 Mar 2021 00:00:00 +0000
We read the paper that forced Timnit Gebru out of Google. Here’s what it says. https://unthinking.photography/imgexhaust/we-read-the-paper-that-forced-timnit-gebru-out-of-google-heres-what-it-says Mon, 15 Feb 2021 00:00:00 +0000
Machine Vision Knowledge Base | Machine Vision https://unthinking.photography/imgexhaust/machine-vision-knowledge-base-machine-vision Sun, 14 Feb 2021 00:00:00 +0000
Glossary https://unthinking.photography/imgexhaust/glossary Thu, 11 Feb 2021 00:00:00 +0000

From WORK HARD! PLAY HARD!, a collective self-organised platform dealing with the issues of knowledge production, cooperation, work, leisure, technology and acceleration through various performative, participatory and discursive formats.

Descriptions include Extractive Capitalism, Psychodata and Intimate Interfaces

]]>
POST GROWTH TOOLKIT https://unthinking.photography/imgexhaust/post-growth-toolkit Thu, 11 Feb 2021 00:00:00 +0000
Screen Walks - Materiality Playlist https://unthinking.photography/imgexhaust/screen-walks-materiality-playlist Sun, 31 Jan 2021 00:00:00 +0000
Maintaining Composure: An Interview with Tamiko Thiel https://unthinking.photography/articles/maintaining-composure-an-interview-with-tamiko-thiel Mon, 18 Jan 2021 00:00:00 +0000
Go Fake Yourself! https://unthinking.photography/commissions/go-fake-yourself Tue, 12 Jan 2021 00:00:00 +0000
Adversarial.io is an easy-to-use webapp for altering image material, in order to make it machine-unreadable. https://unthinking.photography/imgexhaust/adversarial-io-is-an-easy-to-use-webapp-for-altering-image-material-in-order-to-make-it-machine-unreadable Mon, 04 Jan 2021 00:00:00 +0000
]]>
Lines of Sight https://unthinking.photography/imgexhaust/lines-of-sight Mon, 04 Jan 2021 00:00:00 +0000
Working with Faces https://unthinking.photography/imgexhaust/working-with-faces Mon, 04 Jan 2021 00:00:00 +0000
Interview with Nestor Siré [Part I] https://unthinking.photography/articles/interview-with-nestor-sire Wed, 16 Dec 2020 00:00:00 +0000
Interview with Nestor Siré [Part II] https://unthinking.photography/articles/interview-with-nestor-sire-part-ii Wed, 16 Dec 2020 00:00:00 +0000
Screen Walks - Identity Playlist https://unthinking.photography/imgexhaust/screen-walks-identity-playlist Tue, 03 Nov 2020 00:00:00 +0000
SedaG on Twitter https://unthinking.photography/imgexhaust/sedag-on-twitter Wed, 07 Oct 2020 00:00:00 +0100
Capture - Profiling Faces of French Police Officers https://unthinking.photography/imgexhaust/capture-profiling-faces-of-french-police-officers Mon, 05 Oct 2020 00:00:00 +0100
source: interview between Kay Watson and Rebecca Allen (Serpentine... https://unthinking.photography/imgexhaust/source-interview-between-kay-watson-and-rebecca-allen-serpentine Tue, 22 Sep 2020 00:00:00 +0100

source: interview between Kay Watson and Rebecca Allen (Serpentine R&D) https://www.serpentinegalleries.org/art-and-ideas/rebecca-allen/

]]>
[Read the thread and replies] https://arxiv.org/pdf/1801.05787.pdf https://unthinking.photography/imgexhaust/read-the-thread-and-replies-https-arxiv-org-pdf-1801-05787-pdf Sun, 20 Sep 2020 00:00:00 +0100

https://twitter.com/bascule/status/1307440596668182528

[Read the thread and replies]

https://arxiv.org/pdf/1801.05787.pdf

]]>
Screentime https://unthinking.photography/imgexhaust/screentime Sat, 12 Sep 2020 00:00:00 +0100
Javier Lloret Pardo - Annotators View https://unthinking.photography/imgexhaust/javier-lloret-pardo-annotators-view Fri, 11 Sep 2020 00:00:00 +0100


Javier Lloret Pardo - Annotators View

Image Annotators constitute the hidden labour of AI vision. The current ubiquitous techniques of image classification, segmentation and scene description wouldn’t be possible without the manual labour performed by image annotators. They are the ones teaching AI-based computer vision to “see”. Recent scene annotation techniques ask annotators to point at what they are looking at while describing the scene in a given image.

“Annotators’ View” displays the way image annotators see. It progressively reveals only the areas of the image that annotators look and point at. The site creates an endless loop by randomly selecting images that have been annotated with this technique from different datasets. These images overwrite each other, creating an ever-changing visual collage.
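The reveal mechanic described above can be sketched in a few lines. This is only an illustration of the idea, not the code behind annotatorsview.online; the list-of-lists image, the (x, y) point format and the radius parameter are all assumptions.

```python
# Rough sketch of a progressive-reveal mask: black out everything except
# pixels near the points an annotator looked at and pointed to.
# (Illustrative only; not the artist's implementation.)

def reveal(image, points, radius=2):
    """image: 2-D list of pixel values; points: iterable of (x, y) tuples.
    Returns a copy in which only pixels within `radius` of a point survive."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]   # start fully masked (black)
    for y in range(h):
        for x in range(w):
            # keep the pixel if any annotation point is close enough
            if any((x - px) ** 2 + (y - py) ** 2 <= radius ** 2
                   for px, py in points):
                out[y][x] = image[y][x]
    return out

# A 2x2 image with one annotated point at the top-left corner:
print(reveal([[1, 2], [3, 4]], [(0, 0)], radius=0))  # [[1, 0], [0, 0]]
```

Layering successive calls over the same canvas, each with a new image and its annotation trace, would give the overwriting-collage effect the piece describes.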

https://www.annotatorsview.online

]]>
Localized Narratives https://unthinking.photography/imgexhaust/localized-narratives Fri, 11 Sep 2020 00:00:00 +0100
source:... https://unthinking.photography/imgexhaust/source Wed, 09 Sep 2020 00:00:00 +0100

source: https://www.designboom.com/art/josh-begley-every-new-york-times-front-page-since-1852-video-03-14-2017/ / https://joshbegley.com

]]>
HUMAN COMPUTERS https://unthinking.photography/imgexhaust/human-computers Sun, 06 Sep 2020 00:00:00 +0100

HUMAN COMPUTERS is a project that investigates the relationships between computing and labor organisation.

The project is a media-archaeology investigation that runs from the simulacrum of a computing machine, von Kempelen's Mechanical Turk, to the actual Mechanical Turk service operated by Amazon, which provides workers who train AI and machine-learning software.

On a deeper layer, engaging the very origin of computing with the computation factory model conceived by Gaspard de Prony in 1793, the project tries to understand the bonds that tie together economics, labor division and computing.

Human Computers is a collaborative work by RYBN.ORG and Marie Lechner.
Supported by PACT Zollverein and PAMAL (ESA Avignon). 

Source: http://rybn.org/human_computers/

]]>
Fluid behaving like solid - slow-motion footage https://unthinking.photography/imgexhaust/fluid-behaving-like-solid-slow-motion-footage Sat, 05 Sep 2020 00:00:00 +0100
Planet Earth From Above https://unthinking.photography/imgexhaust/planet-earth-from-above Sat, 05 Sep 2020 00:00:00 +0100

lewisandquark:

Melbourne, Australia: home of kangaroos, botanical gardens, and a surreal monolith, jutting impossibly tall and narrow above its unassuming neighbors.


A plane flies toward a 212-story building rising above the otherwise flat city of Melbourne. It would be unremarkable as a real photo if not for the uncanny monolith.

A small two-person plane on the roof of a 212-story building surrounded by a 1-story suburb. View looks steeply down toward the ground. The top of the building seems to be only about an eighth of a city block in size

[images from a video by reddit user fulltimespy, in a successful completion of the Monolith Challenge]

This is the virtual Melbourne of Microsoft Flight Simulator 2020, where players are flocking to see the weird building and, naturally, land on its roof, before the game is patched and the monolith disappears.

How did this happen? Microsoft Flight Simulator 2020 uses AI to fill in building details from a combination of satellite images and crowdsourced data from Open Street Maps. And at one point, someone who was entering building height data for Melbourne made a typo, accidentally changing a building height of 2 stories to 212. Monoliths are gradually being discovered in other places as well.
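The arithmetic of the typo is easy to reproduce. The sketch below is a guess at the general shape of such a pipeline, not Microsoft's actual code: the tag names follow OpenStreetMap conventions (`building:levels`, `height`), and the three-metres-per-storey figure is an assumed default.

```python
# How a single mistyped OpenStreetMap value can produce a monolith.
# Hypothetical extrusion logic; tag names follow OSM conventions,
# and the metres-per-storey constant is an assumption.

STOREY_HEIGHT_M = 3.0  # assumed average storey height

def extruded_height(tags):
    """Estimate a building's extrusion height (in metres) from OSM-style tags."""
    if "height" in tags:                      # an explicit height tag wins
        return float(tags["height"])
    levels = int(tags.get("building:levels", 1))
    return levels * STOREY_HEIGHT_M

correct = {"building": "yes", "building:levels": "2"}
typo    = {"building": "yes", "building:levels": "212"}   # 2 mistyped as 212

print(extruded_height(correct))  # 6.0
print(extruded_height(typo))     # 636.0, a 212-storey monolith
```

One bad digit in the source data and the renderer, doing exactly what it was told, extrudes a 636-metre tower over suburban Melbourne.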

Even barring typos, the task of reconstructing every building in the world from height data and satellite photos is really tough. A roof’s details might give clues about whether a structure is a historic villa or an office block, but it’s easy to make mistakes if, for example, you don’t know what the Washington Monument is.


A plane flying over DC past the Washington Monument, which, instead of a pointy white obelisk, is a tall skinny office building

[screenshot posted by Reddit user NightReaper3210]

Because a nondescript office building is a reasonable default guess given a square building pad and a many-story height, the AI will tend to populate the planet with them unless specifically told otherwise. The Statue of Liberty, the Taj Mahal, and the Eiffel Tower are all lavishly hand-modeled in 3D. But The Motherland Calls statue of Volgograd is a condo high-rise, Buckingham Palace is an apartment complex, the Leaning Tower of Pisa is a vertical concrete silo, and the Pyramid of the Sun is a nondescript warehouse with a hilarious tiny dome on its roof.


Top: The Pyramid of the Sun, an ancient stepped pyramid with a square base.

Bottom: The Flight Simulator version, now a square warehouse with a tiny weird dome on part of its roof.

[screenshot, posted on reddit by l4adventure]

The AI is also making its best guesses when it comes to traffic patterns. It knew that this Boston street intersected a building somehow but didn’t know that the road passed through the building via tunnel. So it had the traffic drive up the side of the building.



Other terrain glitches force the traffic to do even weirder things. If the road is suddenly tilted vertically along the wall of a newly created canyon in northwest Iowa, the traffic will still drive on the road, just… sideways.



Water levels in particular seem prone to being incorrect, sometimes drastically so. The Pingualuit Impact Crater of northern Canada was apparently inverted by one of these glitches.


An incredibly steep-sided mesa is topped with a mirrorlike lake stretching out to its edge. The mesa walls are probably thousands of feet tall.

[image by reddit user NovaSilisko]

Bergen, Norway, has been transformed by this bug into canyonlike terrain, its buildings forced to adapt to the suddenly steep ground, their roofs rising like mushrooms for dozens of stories. It’s otherworldly, unrecognizable.


Screenshot of Bergen, Norway - the city is now dotted with impossibly steep narrow hills, and the houses clinging to them end up stretching down from their roofs for maybe 10 or 50 stories to reach the suddenly-distant ground below. The city is spooky, unearthly - it’s so darn cool.

[screenshot by Mikael Privatby]

Greenland, on the other hand, is terrifying. The available terrain and satellite data is less precise, so pixels are sometimes visible as square-edged neighborhood-sized patches of gravel. The far north is marked by 20,000 foot ice walls, improbable ice spikes, and strange shimmering rifts. The geographic North Pole itself is unreachable; players report that any attempt to descend below 2,000 feet results in the player being rocketed skyward by a strange repulsion force.


A tiny plane is dwarfed beside a curving wall of sheer-sided ice, ringed by ice spikes, and topped with a flat frozen lake

[20,000 foot ice wall image: reddit user unrelentingdespair]


Flight Simulator screenshot showing a plane flying at around 15,500 feet, over a strangely scalloped icy landscape. Rising out of the ice is an impossibly pointy mountain that's at least 20,000 feet tall.

[screenshot near the north pole: reddit user Feydakin_G]

Some Microsoft Flight Simulator 2020 players are thrilled with the unusual terrain, while others are disappointed when the photorealism is broken, or when their city’s distinctive architecture and most beloved landmarks are replaced by nondescript concrete jungle. The AI itself isn’t going to be able to reconstruct the world’s weirdness from satellite photos, so people are already crowdsourcing hand-modeled landmarks. You can install an add-on to convert Stonehenge, for example, from a miniature flattened Spinal Tap version to a full-sized 3D model.

As the developers tweak their algorithms and fix other things by hand, slowly the weirdness will be ironed out, the rivers and lakes set back in their beds, the statues restored to their detailed glory. Many will be disappointed when it happens - I’ll particularly miss the Melbourne Monolith. It would be nice to have a weirdness slider that goes from normal to Ragnarok, amplifying terrain chaos, perhaps adding the occasional floating mountain range or lava lake.

Bonus content: I prompted GPT-3 to write an Atlas Obscura entry for the Melbourne Monolith. It added entries for a few other Melbourne landmarks, like the Artificial Gardens of Loria and The Very Pickled Centurion (did you know that the Lost Bar, like the Australian rules football lounge, is 10,000 light years away from the city it’s located in?) Get your bonus content here!


A panoramic view of a wide mountain landscape in Alaska’s Denali National Park. A river winds through the valley, on its own steep-sided platform elevated thousands of feet high.

[aqueducts of Denali, screenshot by daveonthenet]

My book on AI, You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why it’s Making the World a Weirder Place, is available wherever books are sold: Amazon - Barnes & Noble - Indiebound - Tattered Cover - Powell’s - Boulder Bookstore

]]>
What virtual-reality animal experiments are revealing about the brain. ... https://unthinking.photography/imgexhaust/what-virtual-reality-animal-experiments-are-revealing-about-the-brain Sat, 05 Sep 2020 00:00:00 +0100 What virtual-reality animal experiments are revealing about the brain.

Source: https://www.nature.com/articles/d41586-019-00791-w

]]>
Image “Cloaking” for Personal Privacy https://unthinking.photography/imgexhaust/image-cloaking-for-personal-privacy Wed, 12 Aug 2020 00:00:00 +0100 Image “Cloaking” for Personal Privacy

The SAND Lab at University of Chicago has developed Fawkes, an algorithm and software tool (running locally on your computer) that gives individuals the ability to limit how their own images can be used to track them. At a high level, Fawkes takes your personal images and makes tiny, pixel-level changes that are invisible to the human eye, in a process we call image cloaking. You can then use these “cloaked” photos as you normally would, sharing them on social media, sending them to friends, printing them or displaying them on digital devices, the same way you would any other photo. The difference, however, is that if and when someone tries to use these photos to build a facial recognition model, “cloaked” images will teach the model a highly distorted version of what makes you look like you. The cloak effect is not easily detectable by humans or machines and will not cause errors in model training. However, when someone tries to identify you by presenting an unaltered, “uncloaked” image of you (e.g. a photo taken in public) to the model, the model will fail to recognize you.

Fawkes has been tested extensively and proven effective in a variety of environments and is 100% effective against state-of-the-art facial recognition models (Microsoft Azure Face API, Amazon Rekognition, and Face++). We are in the process of adding more material here to explain how and why Fawkes works. For now, please see the link below to our technical paper, which will be presented at the upcoming USENIX Security Symposium, to be held on August 12 to 14.

The Fawkes project is led by two PhD students at SAND Lab, Emily Wenger and Shawn Shan, with important contributions from Jiayun Zhang (SAND Lab visitor and current PhD student at UC San Diego) and Huiying Li, also a SAND Lab PhD student. The faculty advisors are SAND Lab co-directors and Neubauer Professors Ben Zhao and Heather Zheng.
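As a minimal sketch of the “tiny, pixel-level changes” idea only (not the real Fawkes algorithm, which optimizes its perturbation against face-recognition feature extractors; here the perturbation is just bounded random noise):

```python
import numpy as np

def cloak(image, epsilon=8):
    """Toy illustration: add a small perturbation, bounded by +/- epsilon,
    to an 8-bit RGB image. The change is at the pixel level and hard to
    see, which is the property Fawkes relies on; the actual tool chooses
    its perturbation by optimization, not at random."""
    rng = np.random.default_rng(0)
    noise = rng.integers(-epsilon, epsilon + 1, size=image.shape)
    # Work in int to avoid uint8 wraparound, then clip back to 0..255.
    cloaked = np.clip(image.astype(int) + noise, 0, 255).astype(np.uint8)
    return cloaked

img = np.full((4, 4, 3), 128, dtype=np.uint8)  # a flat grey test image
out = cloak(img)
```

Because every pixel moves by at most epsilon, the cloaked photo is visually indistinguishable from the original, while a model trained on many such photos learns subtly wrong features.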

Source: https://gizmodo.com/this-algorithm-might-make-facial-recognition-obsolete-1844591686

]]>
The Iconic Image on Social Media https://unthinking.photography/imgexhaust/the-iconic-image-on-social-media Wed, 12 Aug 2020 00:00:00 +0100 Image-GPT https://unthinking.photography/imgexhaust/image-gpt Wed, 22 Jul 2020 00:00:00 +0100 On Lacework: watching an entire machine-learning dataset https://unthinking.photography/articles/on-lacework Mon, 20 Jul 2020 00:00:00 +0100 Beetle-mounted camera streams insect adventures https://unthinking.photography/imgexhaust/beetle-mounted-camera-streams-insect-adventures Mon, 20 Jul 2020 00:00:00 +0100 Creative AI Lab https://unthinking.photography/imgexhaust/creative-ai-lab Mon, 20 Jul 2020 00:00:00 +0100 MaxBittker/shaderbooth https://unthinking.photography/imgexhaust/maxbittker-shaderbooth Thu, 09 Jul 2020 00:00:00 +0100 Artificial imagination: Deepfakes from latent space https://unthinking.photography/imgexhaust/artificial-imagination-deepfakes-from-latent-space Wed, 08 Jul 2020 00:00:00 +0100 80 Million Tiny Images https://unthinking.photography/imgexhaust/80-million-tiny-images Tue, 07 Jul 2020 00:00:00 +0100 Pulse https://unthinking.photography/imgexhaust/pulse Tue, 07 Jul 2020 00:00:00 +0100 PULSE: Self-Supervised Photo Upsampling via Latent Space Exploration of Generative Models

Research paper: https://arxiv.org/pdf/2003.03808.pdf

Tester: https://colab.research.google.com/github/tg-bomze/Face-Depixelizer/blob/master/Face_Depixelizer_Eng.ipynb

https://github.com/adamian98/pulse#what-does-it-do

Responses:

https://twitter.com/VidushiMarda/status/1274744956142280704

https://twitter.com/Chicken3gg/status/1274314622447820801

https://twitter.com/quasimondo/status/1274636495941500928

https://twitter.com/adamhrv/status/1275438660716879872

https://twitter.com/hellocatfood/status/1275734895202050049

]]>
We propose Localized Narratives, an efficient way to collect image captions with dense visual grounding. We ask annotators to... https://unthinking.photography/imgexhaust/we-propose-localized-narratives-an-efficient-way-to-collect-image-captions-with-dense-visual-grounding-we-ask-annotators-to Tue, 07 Jul 2020 00:00:00 +0100

We propose Localized Narratives, an efficient way to collect image captions with dense visual grounding. We ask annotators to describe an image with their voice while simultaneously hovering their mouse over the region they are describing. Since the voice and the mouse pointer are synchronized, we can localize every single word in the description. This dense visual grounding takes the form of a mouse trace segment per word and is unique to our data. We annotate 628k images with Localized Narratives: the whole COCO dataset and 504k images of the Open Images dataset, which can be downloaded below. We provide an extensive analysis of these annotations and demonstrate their utility on two applications which benefit from our mouse trace: controlled image captioning and image generation.
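The synchronization described above, in which each spoken word gets the stretch of mouse trace recorded while it was uttered, can be sketched in a few lines (the function and data layout are illustrative, not the project’s actual file format):

```python
def trace_segment_per_word(words, trace):
    """words: list of (word, t_start, t_end) from the voice recording.
    trace: list of (t, x, y) mouse samples.
    Returns a dict mapping each word to the sub-trace recorded
    while that word was being spoken."""
    return {
        w: [(x, y) for (t, x, y) in trace if t0 <= t < t1]
        for (w, t0, t1) in words
    }

# "bird" is spoken during 0.0-0.5s, "tree" during 0.5-1.0s
words = [("bird", 0.0, 0.5), ("tree", 0.5, 1.0)]
trace = [(0.1, 10, 20), (0.4, 12, 22), (0.7, 80, 90)]
seg = trace_segment_per_word(words, trace)
```

Each word’s trace segment grounds it to an image region, which is what makes the captions “densely visually grounded.”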

Caption: “In this image I can see a painting and above it I can see few numbers are written. I can see the painting is of water, few birds, few people, few trees and few buildings.”

Metadata: Image source: C & O Canal Mural - 3000 M Street NW Georgetown Washington (DC) August 2014. Author: Ron Cogswell. Image license.
Dataset: Open Images. ID: 28ad453294ca98ce. Recording file.

Research: https://arxiv.org/pdf/1912.03098.pdf by Jordi Pont-Tuset, Jasper Uijlings, Soravit Changpinyo, Radu Soricut, and Vittorio Ferrari
arXiv:1912.03098, 2019

Dataset: https://google.github.io/localized-narratives/

]]>
Automated Photography https://unthinking.photography/imgexhaust/automated-photography Mon, 06 Jul 2020 00:00:00 +0100 There is an explicit kinship between plantation slavery, colonial predation and contemporary forms of resource extraction and... https://unthinking.photography/imgexhaust/there-is-an-explicit-kinship-between-plantation-slavery-colonial-predation-and-contemporary-forms-of-resource-extraction-and Mon, 06 Jul 2020 00:00:00 +0100

There is an explicit kinship between plantation slavery, colonial predation and contemporary forms of resource extraction and appropriation. In each of these instances, there is a constitutive denial of the fact that we, the humans, coevolve with the biosphere, depend on it, are defined with and through it and owe each other a debt of responsibility and care.

An important difference is the technological escalation that has led to the emergence of computational capitalism in our times. We are no longer in the era of the machine but in the age of the algorithm. Technological escalation, in turn, is threatening to turn us all into artefacts – what I have called elsewhere “the becoming-black-of-the-world” – and to make redundant a huge chunk of the muscular power capitalism relied upon for a long time. It follows that today, although its main target remains the human body and earthly matter, domination and exploitation are becoming increasingly abstract and reticular. As a repository of our desires and emotions, dreams, fears and fantasies, our mind and psychic life have become the main raw material which digital capitalism aims at capturing and commodifying.

Achille Mbembe, 2019

Source: https://www.newframe.com/thoughts-on-the-planetary-an-interview-with-achille-mbembe/

]]>
On MIT’s Moments in Time (and Being Dead-Alive) https://unthinking.photography/articles/on-mits-moments-in-time-and-being-dead-alive Tue, 30 Jun 2020 00:00:00 +0100 Lacework https://unthinking.photography/commissions/lacework Mon, 29 Jun 2020 00:00:00 +0100 Screen Walks - Mixed Reality Playlist https://unthinking.photography/imgexhaust/screen-walks-mixed-reality-playlist Mon, 01 Jun 2020 00:00:00 +0100 evaandfrancomattes  Very excited to announce that we just released to public domain the hires image of our sculpture ‘Ceiling... https://unthinking.photography/imgexhaust/evaandfrancomattes-very-excited-to-announce-that-we-just-released-to-public-domain-the-hires-image-of-our-sculpture-ceiling Tue, 26 May 2020 00:00:00 +0100

evaandfrancomattes 
Very excited to announce that we just released to public domain the hires image of our sculpture ‘Ceiling Cat’, through @wikipedia

We made an agreement with @sfmoma - who acquired the work - to give up ownership of the photo, so that anybody can copy it, or use it for whatever purpose, free of charge. You’re all welcome to download it and use it!

In the last few years we’ve been fascinated by memes, these weird mutating creatures who populate the Internet. Memes are very popular images that circulate virally and generate endless versions.

Memes defy all art parameters: they’re created collectively, they’re anonymous and free. Ceiling Cat is a sculpture based on an “old” internet meme. It’s a taxidermy cat, peeking through a hole in the ceiling, always watching us, watching it.

It’s cute and scary at the same time, some see it as a symbol of the internet itself: a global surveillance system in an appealing fuzzy wrap.

#ceilingcat #SFMOMA #wikimedia #creativecommons #lolcat 🐈

Source: https://www.instagram.com/p/CAptFEuFIr1/

]]>
Neural Network Cultures  https://unthinking.photography/imgexhaust/neural-network-cultures Sun, 17 May 2020 00:00:00 +0100 Transmediale 2020

End to End Exchange #5 
Panel discussion

Neural Network Cultures 

with Tega Brain, Stephanie Dick, Katharine Jarmul, Fabian Offert and Matteo Pasquinelli


]]>
The Oil of the 21st Century https://unthinking.photography/imgexhaust/the-oil-of-the-21st-century Sun, 17 May 2020 00:00:00 +0100

The 0xdb, developed as part of the “Oil of the 21st Century” project, is a proposal for a new type of cultural database, built on top of file-sharing networks — and a practical intervention in the ongoing conflict between the protection of intellectual property and the exercise of fair use rights.

The Oil of the 21st Century

Perspectives on Intellectual Property

“Intellectual Property is the oil of the 21st century” - this quote by Mark Getty, chairman of Getty Images, one of the world’s largest Intellectual Proprietors, offers a unique perspective on the current conflicts around copyrights, patents and trademarks. Not only does it open up the complete panorama of conceptual confusion that surrounds this relatively new and rather hallucinatory form of property - it must also be understood as a direct declaration of war.

The “War Against Piracy” - a preventive, permanent and increasingly panic-driven battle that defies the traditional logic of warfare - is only one of the many strange and contradictory crusades that currently take place at the new frontier of Intellectual Property. Under the banner of the “Information Society”, a cartel of corporate knowledge distributors struggle to maintain their exclusive right to the exploitation and commodification of the informational resources of the world. With their campaign for “Digital Rights Management”, the copyright industries attempt to simultaneously outlaw the Universal Computer, revoke the Internet and suspend the fundamental laws of information. Under the pretext of the “Creative Commons”, an emerging middle class of Intellectual Proprietors fights an uphill battle against the new and increasingly popular forms of networked production that threaten the regimes of individual authorship and legal control. And as it envisions itself drilling for “the oil of the 21st century”, the venture capital that fuels the quest for properties yet undiscovered has no choice but to extend the battlefield even further, far beyond the realm of the immaterial, deep into the world of machines, the human body, and the biosphere.

But while Intellectual Property struggles to conquer our hearts and minds, ideas still improve, and technology participates in the improvement. On all fronts, the enormous effort towards expropriation and privatization of public property is met with a strange kind of almost automatic resistance. If piracy - the spontaneously organized, massively distributed and not necessarily noble reappropriation and redistribution of the Commons - seems necessary today, then it is because technological progress implies it.

Technological progress - from the Printing Press to the BitTorrent protocol - is what essentially drives cultural development and social change, what makes it possible to share ideas, embrace expressions, improve inventions and correct the works of the past. Human history is the history of copying, and the entirely defensive and desperate attempt to stall its advancement by the means of Intellectual Property - the proposition to resurrect the dead as rights holders and turn the living into their licensees - only indicates how profoundly recent advancements in copying technology, the adaptability and scalability they have attained, the ideas and habits they are creating, are about to change the order of things. What lies at the core of the conflict is the emergence of new modes of subjectivation that escape the globally dominant mode of production. The spectre that is haunting Intellectual Proprietors world-wide is no longer just the much-lamented “death of the author”, but the becoming-producer and becoming-distributor of the capitalist consumer.

The world has irrevocably entered the age of digital reproduction, and it is time to revisit the questions that Walter Benjamin raised in the light of photography and film: how to reaffirm the positive potential and promise that lies in today’s means of reproduction, how to refuse the artificial scarcity that is being created as an attempt to contain the uncontrolled circulation of cultural commodities, how to resist the rhetoric of warfare that only articulates the discrepancy between the wealth of technical possibilities and the poverty of their use, and how to renew the people’s legitimate claim to copy, to be copied, and to change property relations.

In order to deconstruct - and to develop radically different perspectives on - the “oil of the 21st century”, there is an urgent need for approaches that provide fewer answers and more questions, produce less opinion and more curiosity. The coils of the serpent are even more complex than the burrows of the molehill, and the task is to trace, with the same bewilderment that befell Franz Kafka at the advent of the modern juridical bureaucracies, the monstrous, absurd and often outright hilarious legal procedures and protocols of the Intellectual Property Era.
]]>
Corona literacy, or Inoculating the pandemic https://unthinking.photography/imgexhaust/corona-literacy-or-inoculating-the-pandemic Sat, 16 May 2020 00:00:00 +0100 A short text on visual literacy, and media creation of public health videos for Covid-19 

Knowledge of the creative environments and pedagogical concerns behind this audio-visual material on the crisis would be an interesting entry point for discussing their respective virtues and failings. Generally speaking, such videos are an attempt to simplify the scientific and policy discourse to the degree by which the non-expert and a child audience are able to make sense of it and act accordingly. These videos therefore are means of public perlocutionary speech, they interpellate the subject in its pandemized condition, and have to be read and evaluated thusly, i.e. as ideological media or mediatized ideology.

What interests me most in the field of text/image online tutorials and instructional videos, however, are the problems inherent in the reduction and modulation of the complexity of epidemiological fact. Moreover, the importance granted to drawing and illustration for this purpose, and the background and experience from which such instructional visuals on the coronavirus crisis are produced, inform this translation and reduction, which aims at didactic efficiency. The choices of style, technique, look etc. assume considerable relevance, particularly as they often tend to be neglected and ignored, simply taken for granted and thus beyond perceptive attention or out of the crosshairs of any critical perspective.

The current production of learning material targeting a young audience that is largely home-based and home-schooled these weeks did not come into existence without knowledge of preceding visual materials created for teaching and guiding through epidemics. Their efficiency in inoculating the pandemic by pedagogical means has depended by necessity on literacies developed prior to the current crisis.

Source: https://www.harun-farocki-institut.org/en/2020/04/16/corona-literacy-or-inoculating-the-pandemic/

]]>
I recreated my local pub in VR https://unthinking.photography/imgexhaust/i-recreated-my-local-pub-in-vr Sat, 16 May 2020 00:00:00 +0100 Perhaps most notoriously, a few years ago, AI researchers Xiaolin Wu and Xi Zhang claimed to have trained an algorithm to... https://unthinking.photography/imgexhaust/perhaps-most-notoriously-a-few-years-ago-ai-researchers-xiaolin-wu-and-xi-zhang-claimed-to-have-trained-an-algorithm-to Sat, 16 May 2020 00:00:00 +0100 Perhaps most notoriously, a few years ago, AI researchers Xiaolin Wu and Xi Zhang claimed to have trained an algorithm to identify criminals based on the shape of their faces, with an accuracy of 89.5 per cent. They didn’t go so far as to endorse some of the ideas about physiognomy and character that circulated in the 19th century, notably from the work of the Italian criminologist Cesare Lombroso: that criminals are underevolved, subhuman beasts, recognisable from their sloping foreheads and hawk-like noses. However, the recent study’s seemingly high-tech attempt to pick out facial features associated with criminality borrows directly from the ‘photographic composite method’ developed by the Victorian jack-of-all-trades Francis Galton – which involved overlaying the faces of multiple people in a certain category to find the features indicative of qualities like health, disease, beauty and criminality.

Catherine Stinson, ‘Algorithms associating appearance and criminality have a dark past’

Source: https://aeon.co/ideas/algorithms-associating-appearance-and-criminality-have-a-dark-past
]]>
Using AI to search lungs for signs of Covid-19 https://unthinking.photography/imgexhaust/using-ai-tosearch-lungs-for-signs-of-covid-19 Sat, 16 May 2020 00:00:00 +0100

However, thanks to the pandemic, a few British hospitals are now rolling out AI tools to help medical staff interpret chest X-rays more quickly. For instance, staff at the Royal Bolton Hospital are using AI that has been trained on more than 2.5 million chest X-rays, including around 500 confirmed Covid-19 cases.

It has been running automatically on every chest X-ray the hospital has carried out for about a week, says Rizwan Malik, a radiology consultant at the hospital. This means more than 100 patients will have had X-rays analysed by the system to date, he estimates. In this case, the algorithm is designed to look for possible signs of Covid-19, such as patterns of opacity in the lungs.

“It basically gives clinicians another tool to help them make decisions - for example, which patients they’ll admit, which they’ll send home,” says Dr Malik, who notes that patient data is processed entirely within the hospital’s own network. The software itself was developed by Mumbai-based Qure.ai.

File under AI Promises (See also https://imgexhaust.tumblr.com/post/617475804392210432/googles-medical-ai-was-super-accurate-in-a-lab)

Source: https://www.bbc.co.uk/news/business-52483082

]]>
WeChat surveils foreign accounts to help censor what Chinese users see: study https://unthinking.photography/imgexhaust/wechat-surveils-foreign-accounts-to-help-censor-what-chinese-users-see-study Sat, 16 May 2020 00:00:00 +0100 Recovering Lost Narratives in Epic Kitchens https://unthinking.photography/articles/recovering-lost-narratives-in-epic-kitchens Tue, 12 May 2020 00:00:00 +0100 Epic Hand Washing in the Time of Lost Narratives https://unthinking.photography/commissions/epic-hand-washing-in-the-time-of-lost-narratives Tue, 12 May 2020 00:00:00 +0100 Amazon turns to Chinese firm on U.S. blacklist to meet thermal camera needs https://unthinking.photography/imgexhaust/amazon-turns-to-chinese-firm-on-u-s-blacklist-to-meet-thermal-camera-needs Thu, 07 May 2020 00:00:00 +0100 Amazon turns to Chinese firm on U.S. blacklist to meet thermal camera needs

Amazon.com Inc has bought cameras to take temperatures of workers during the coronavirus pandemic from a firm the United States blacklisted over allegations it helped China detain and monitor Uighurs and other Muslim minorities, three people familiar with the matter told Reuters.

Source: https://www.reuters.com/article/us-health-coronavirus-amazon-com-cameras-idUSKBN22B1AL. Image: A Dahua thermal camera takes a man’s temperature during a demonstration of the technology in San Francisco, California, U.S. April 24, 2020. Lewis Surveillance/Handout via REUTERS.

]]>
Google’s medical AI was super accurate in a lab. Real life was a different story. https://unthinking.photography/imgexhaust/googles-medical-ai-was-super-accurate-in-a-lab-real-life-was-a-different-story Thu, 07 May 2020 00:00:00 +0100 Google’s medical AI was super accurate in a lab. Real life was a different story.

The AI developed by Google Health can identify signs of diabetic retinopathy from an eye scan with more than 90% accuracy—which the team calls “human specialist level”—and, in principle, give a result in less than 10 minutes. The system analyzes images for telltale indicators of the condition, such as blocked or leaking blood vessels. …

When it worked well, the AI did speed things up. But it sometimes failed to give a result at all. Like most image recognition systems, the deep-learning model had been trained on high-quality scans; to ensure accuracy, it was designed to reject images that fell below a certain threshold of quality. With nurses scanning dozens of patients an hour and often taking the photos in poor lighting conditions, more than a fifth of the images were rejected.
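The rejection behaviour described above amounts to a quality gate in front of the model. A sketch of the idea (the function, scores and threshold are illustrative assumptions, not Google Health’s actual pipeline):

```python
def reject_rate(qualities, threshold=0.6):
    """Fraction of scans a quality-gated model refuses to analyse
    because they fall below the image-quality bar it was trained
    to expect. Rejected scans must be retaken, slowing the clinic."""
    rejected = [q for q in qualities if q < threshold]
    return len(rejected) / len(qualities)

# Poor lighting in the clinic pushes many scans under the bar:
rate = reject_rate([0.9, 0.8, 0.55, 0.7, 0.4])
```

The design trade-off is exactly the one the article reports: a high threshold protects accuracy in the lab but, under real clinical conditions, rejects a large share of usable-in-practice images.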

source: https://www.technologyreview.com/2020/04/27/1000658/google-medical-ai-accurate-lab-real-life-clinic-covid-diabetes-retina-disease

]]>
STEPHANIE SINCLAIR, Plaintiff,  against ZIFF DAVIS, LLC, and MASHABLE, INC. https://unthinking.photography/imgexhaust/stephanie-sinclair-plaintiff-against-ziff-davis-llc-and-mashable-inc Thu, 07 May 2020 00:00:00 +0100 How neural nets are really just looking at textures https://unthinking.photography/imgexhaust/how-neural-nets-are-really-just-looking-at-textures Wed, 06 May 2020 00:00:00 +0100 A paper submitted to this year’s International Conference on Learning Representations (ICLR) may explain why. Researchers from the University of Tübingen in Germany found that CNNs trained on ImageNet identify objects by their texture rather than shape.

Source: 
https://openreview.net/forum?id=Bygh9j09KX
https://www.theregister.co.uk/2019/02/13/ai_image_texture/ / https://twitter.com/mikarv/status/1095770134260731904

]]>
Screen Walks - Memes Playlist https://unthinking.photography/imgexhaust/screen-walks-memes-playlist Mon, 04 May 2020 00:00:00 +0100 Tunnel Vision https://unthinking.photography/articles/tunnel-vision Tue, 28 Apr 2020 00:00:00 +0100 Declassifier https://unthinking.photography/commissions/declassifier Tue, 28 Apr 2020 00:00:00 +0100 The idea for Workers' Forum developed from Takala’s experience as a micro-tasker in the United States, working for a service... https://unthinking.photography/imgexhaust/the-idea-for-workers-forum-developed-from-takalas-experience-as-a-micro-tasker-in-the-united-states-working-for-a-service Fri, 17 Apr 2020 00:00:00 +0100

The idea for Workers’ Forum developed from Takala’s experience as a micro-tasker in the United States, working for a service where users pay to have a pretend girlfriend or boyfriend texting them. Through a crowdsourcing platform, the artist responded to the task ‘Write a text message that is positive, engaging and convincingly written in the voice of someone texting a significant other.’ Takala was fascinated by the potential within the fictional space created in the text message exchange, but like many of the other workers, she was frustrated by the inconsistencies and lack of quality of the service. Many felt compelled to invest extra time and effort into the emotional labour aspect of the role, despite being underpaid and working within a system that is designed to minimise human connection and make caring as difficult as possible. Workers’ Forum is based on conversations that took place on a discussion forum between the workers, trying to figure out together how to be an invisible partner.

Supported by Helsinki Contemporary

]]>
The face of your voice 3D, from the verbal to the physiognomic https://unthinking.photography/imgexhaust/the-face-of-your-voice-3d-from-the-verbal-to-the-physiognomic Thu, 16 Apr 2020 00:00:00 +0100 The face of your voice 3D, from the verbal to the physiognomic

Contemporary life seems to be an endless game of data quantification, moving across different cultural domains. The former is an articulation of the digitalisation process, which feeds the current obsession with ‘machine learning’ strategies. The latter can spin these calculations in unexpected directions. ”The face of your voice 3D” by Frederik de Wilde in collaboration with Tae-Hyun Oh (MIT), uses data from a short audio recording of a person speaking to reconstruct their facial image. Trained through millions of Internet/YouTube videos, the neural network learns the correlation between a face and a voice, with age, gender and ethnicity. The reconstructed faces are then the conceptual mutations of the acquired data, showing the critical ‘right’ and ‘wrong’ directions that the machine autonomously takes.

More info:

http://neural.it/2020/02/the-face-of-your-voice-3d-from-the-verbal-to-the-physiognomic/

https://frederik-de-wilde.com

]]>
The Earth Archive https://unthinking.photography/imgexhaust/the-earth-archive Wed, 15 Apr 2020 00:00:00 +0100 The Earth Archive

The Earth Archive is both a program of scanning focused on endangered landscapes and an open-source collection of LiDAR scans accessible to scientists around the world.

Our Mission: 

1. Create a baseline record of the earth as it is today to more effectively mitigate the climate crisis.
2. Build a virtual, open-source planet accessible to all scientists so we can better understand our world.
3. Preserve a record of the Earth for our grandchildren’s grandchildren so they can study & recreate our lost heritage.

This comes with certain obstacles, not the least the price tag: a scan of the Amazon rainforest would take six years and cost $15 million.

Read more:

http://www.openculture.com/2020/03/the-earth-archive-will-3d-scan-the-entire-world.html

https://www.theeartharchive.com

]]>
the critical dictionary of southeast asia (cdosea), begins with a question: what constitutes the unity of southeast asia — a... https://unthinking.photography/imgexhaust/the-critical-dictionary-of-southeast-asia-cdosea-begins-with-a-question-what-constitutes-the-unity-of-southeast-asia-a Tue, 14 Apr 2020 00:00:00 +0100 the critical dictionary of southeast asia (cdosea) begins with a question: what constitutes the unity of southeast asia — a region never unified by language, religion or political power? cdosea proceeds by proposing 26 terms — one for each letter of the english / latin alphabet. each term is a concept, a motif, or a biography, and together they are threads weaving together a torn and tattered tapestry of southeast asia.

https://aaa.cdosea.org/#video/r

]]>
Calm Technology https://unthinking.photography/imgexhaust/calm-technology Sat, 11 Apr 2020 00:00:00 +0100 Calm Technology

IMPAKT Festival 2019 Panel discussion: Calm Technology Speakers: Olia Lialina, David Benqué & Cristina Cochior. Moderator: Annet Dekker

In the 90s the concept of calm technology was developed, ‘that which informs but doesn’t demand our focus or attention’. At that time the technical possibilities were limited. Today the idea of technology as a quiet servant has turned into an industry. Who is being served?

https://impakt.nl/events/panel-discussion/calm-technology/

]]>
Datasets are large collections of digital information that are used to train AI. They might contain anything from weather data,... https://unthinking.photography/imgexhaust/datasets-are-large-collections-of-digital-information-that-are-used-to-train-ai-they-might-contain-anything-from-weather-data Sat, 11 Apr 2020 00:00:00 +0100

Datasets are large collections of digital information that are used to train AI. They might contain anything from weather data, such as air pressure and temperature, to photos, music, or indeed anything else that helps an AI system with the task it has been assigned.
Datasets are like textbooks for computers.

AI design teams have to carefully consider the data they choose to train their AI with, and may build in parameters that help the system make sense of the information it’s given.

Due to their scale and complexity, these collections can be very challenging to build and refine — whether they consist of a few hundred audio samples or extensive maps covering the whole of the known solar system.

For this reason, AI design teams often share datasets for the benefit of the wider scientific community, making it easier to collaborate and build on each other’s research.

Definition of Dataset from the A to Z of AI by Google and Oxford Internet Institute: https://atozofai.withgoogle.com/intl/en-US/datasets/
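The “textbooks for computers” analogy can be made concrete. The sketch below (plain Python, entirely made-up weather numbers) builds a tiny labelled dataset, holds part of it back for testing, and trains the simplest possible learner on the rest:

```python
import random

# A toy supervised dataset: each example pairs features with a label.
# The "features" are (temperature, air_pressure) readings and the label
# says whether it rained -- all numbers here are invented for illustration.
random.seed(0)
dataset = [((20.0 + random.gauss(0, 2), 1010 + random.gauss(0, 3)), "dry") for _ in range(50)]
dataset += [((12.0 + random.gauss(0, 2), 995 + random.gauss(0, 3)), "rain") for _ in range(50)]
random.shuffle(dataset)

# Hold some examples back: the system must be judged on data it never saw.
train, test = dataset[:80], dataset[80:]

def predict(x, train):
    """1-nearest-neighbour: copy the label of the closest training example."""
    nearest = min(train, key=lambda ex: (ex[0][0] - x[0]) ** 2 + (ex[0][1] - x[1]) ** 2)
    return nearest[1]

accuracy = sum(predict(x, train) == y for x, y in test) / len(test)
print(f"test accuracy: {accuracy:.2f}")
```

Real datasets differ mainly in scale and care of curation, not in kind: they are still collections of examples, labels, and a held-out split for honest evaluation.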

]]>
Looking to champion the graduates of HackYourFuture, a not-for-profit coding school for refugees, in a way that resonated with... https://unthinking.photography/imgexhaust/looking-to-champion-the-graduates-of-hackyourfuture-a-not-for-profit-coding-school-for-refugees-in-a-way-that-resonated-with Sat, 11 Apr 2020 00:00:00 +0100

Looking to champion the graduates of HackYourFuture, a not-for-profit coding school for refugees, in a way that resonated with its participants, agency 72andSunny Amsterdam has hidden portraits of them in the back-ends of the homepages of major companies such as eBay and Accenture. The campaign, called Behind the Source, features seven portraits built in code that are available for anyone to see if they know where to look – by clicking “view page source”.

It’s not as sneaky as it might sound, however, as the companies are complicit – in fact, the graduates now work as web developers at the companies taking part. The campaign is celebrating their success, in a self-proclaimed “geeky” way that hopes to reframe discourse around refugees and the contribution they bring to the countries they move to, while highlighting how learning to code can help to change people’s lives.

Further information: https://www.itsnicethat.com/news/72andsunny-behind-the-source-hack-your-future-digital-120220 / https://www.hackyourfuture.net
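The trick relies on a basic property of HTML: comments are kept verbatim in the page source but never rendered. A minimal sketch (hypothetical markup, not the actual campaign code):

```python
import re

# A page whose rendered output shows only the headline; the ASCII
# portrait lives in a comment, visible only via "view page source".
html = """<html><body>
<h1>Welcome</h1>
<!--
  Behind the Source (sketch)
   .-""-.
  / .--. \\
  \\ \\__/ /
   '----'
  built by a HackYourFuture graduate
-->
</body></html>"""

# Browsers drop comments before rendering; a regex finds them again.
comments = re.findall(r"<!--(.*?)-->", html, flags=re.DOTALL)
rendered = re.sub(r"<!--.*?-->", "", html, flags=re.DOTALL)

print(comments[0])  # what "view page source" reveals
```

Opening the page normally shows only the headline; “view page source” reveals the comment, portrait and all.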

]]>
The trouble comes when we rely, as we increasingly do, on digital representation for most – or even all – of our knowledge about... https://unthinking.photography/imgexhaust/the-trouble-comes-when-we-rely-as-we-increasingly-do-on-digital-representation-for-most-or-even-all-of-our-knowledge-about Sat, 11 Apr 2020 00:00:00 +0100 The trouble comes when we rely, as we increasingly do, on digital representation for most – or even all – of our knowledge about art. This deepens the ‘googlification’ of contemporary life, the mediation of everything from our whereabouts to our spelling by corporations. Though a product like Google Arts & Culture makes documentation of thousands of artworks available to anyone who uses Google, seemingly ‘democratising’ our access to art, it is not neutral; it selects for us and entrenches an overwhelmingly European narrative of art history, as well as a contemporary culture that has no incentive to question this. In this respect, Google is another gatekeeper of culture, shaping discourse just as museums, curators, collectors, and patrons have for centuries. The difference between Google and a museum, however, is that Google is at your fingertips. Its influence is far wider than that of any museum, no matter how popular.

Scripted Engagement, Anthea Buys, 31.03.20

https://kunstkritikk.com/scripted-engagement/

]]>
“After Dark” brought people together from all over the world to experience the forbidden pleasure of sneaking around Tate... https://unthinking.photography/imgexhaust/after-dark-brought-people-together-from-all-over-the-world-to-experience-the-forbidden-pleasure-of-sneaking-around-tate Fri, 10 Apr 2020 00:00:00 +0100

“After Dark” brought people together from all over the world to experience the forbidden pleasure of sneaking around Tate Britain, one of the world’s best-known art galleries, late at night. The live broadcast event, held over five consecutive nights, enabled over 100,000 people to control and “see” through the eyes of one of four custom-built, video-streaming robots from a web browser. While the robots allowed visitors to “see” and explore, four art experts provided live commentary over the video streams, creating another level of engagement and experience that would be impossible during daylight hours.

https://theworkers.net/after-dark/

https://www.tate.org.uk/whats-on/tate-britain/special-event/ik-prize-2014-after-dark

]]>
Clearview https://unthinking.photography/imgexhaust/clearview Fri, 10 Apr 2020 00:00:00 +0100 Clearview AI devised a groundbreaking facial recognition app. You take a picture of a person, upload it and get to see public photos of that person, along with links to where those photos appeared. The system — whose backbone is a database of more than three billion images that Clearview claims to have scraped from Facebook, YouTube, Venmo and millions of other websites — goes far beyond anything ever constructed by the United States government or Silicon Valley giants.
https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html (18 Jan)

The Metropolitan Police Service announced on Friday, 24 January, that it will begin the operational use of Live Facial Recognition (LFR) technology. The use of live facial recognition technology will be intelligence-led and deployed to specific locations in London. This will help tackle serious crime, including serious violence, gun and knife crime, and child sexual exploitation, and help protect the vulnerable.
http://news.met.police.uk/news/met-begins-operational-use-of-live-facial-recognition-lfr-technology-392451 (24 Jan)

Clearview has been touting a “rapid international expansion” to prospective clients using a map that highlights how it either has expanded, or plans to expand, to at least 22 more countries, some of which have committed human rights abuses. 
https://yro.slashdot.org/story/20/02/08/0225232/clearview-ai-wants-to-sell-its-facial-recognition-to-authoritarian-regimes (8 Feb)

What’s more, Clearview’s system suffers the same shortcomings as other facial recognition systems: it is not as good at interpreting black and brown faces as it is white ones. The company claims that its search is accurate across “all demographic groups,” but the ACLU vehemently disagrees.
https://www.engadget.com/2020-02-12-clearview-ai-police-surveillance-explained.html (22 Feb)


This Is the Ad Clearview AI Used to Sell Your Face to Police
https://onezero.medium.com/this-is-the-ad-clearview-ai-used-to-sell-your-face-to-police-8997c2a6f0a8 (11 Mar)

Clearview met with individuals from many of Silicon Valley’s most notable firms, among them Kleiner Perkins and Greylock Partners. A Greylock Partners spokesperson said a firm staffer met Ton-That at “a defense industry dinner” in late 2018 and was given a demo account that was sparsely used. Greylock is not an investor in the company, they said. A Kleiner Perkins spokesperson did not respond to an email request for comment.
https://www.buzzfeednews.com/article/ryanmac/clearview-ai-trump-investors-friend-facial-recognition (11 Mar)

Cam-Hoan Ton-That, as well as several people who have done work for the company, has deep, longstanding ties to far-right extremists. Some members of this alt-right cabal went on to work for Ton-That.
https://www.huffingtonpost.co.uk/entry/clearview-ai-facial-recognition-alt-right_n_5e7d028bc5b6cb08a92a5c48 (7 Apr)

]]>
Report from February 2020: “Only 11% of the organisations surveyed were using social media to attract younger audiences”. I... https://unthinking.photography/imgexhaust/report-from-february-2020-only-11-of-the-organisations-surveyed-were-using-social-media-to-attract-younger-audiences-i Fri, 10 Apr 2020 00:00:00 +0100 Report from February 2020: “Only 11% of the organisations surveyed were using social media to attract younger audiences”. I wonder how #CovidCulture affects this research?

https://www.artsprofessional.co.uk/news/digital-technology-isnt-improving-audience-outreach

]]>
Smithsonian Open Access https://unthinking.photography/imgexhaust/smithsonian-open-access Fri, 10 Apr 2020 00:00:00 +0100 Smithsonian Open Access

Download, share, and reuse millions of the Smithsonian’s images—right now, without asking. With new platforms and tools, you have easier access to nearly 3 million 2D and 3D digital items from our collections—with many more to come. This includes images and data from across the Smithsonian’s 19 museums, nine research centers, libraries, archives, and the National Zoo.

The Smithsonian Open Access Initiative supports and responds to the Institution’s purpose of increasing and diffusing knowledge through the following core values:

  • Stewardship and Trust. The Smithsonian Open Access Initiative is committed to ensuring the accuracy and accessibility of its collections and data as stewards of the nation’s collections, and it is committed to engaging with individuals, groups, and communities in the responsible pursuit of an increased diffusion of knowledge.
  • Diversity and Inclusion. The Smithsonian Open Access Initiative recognizes, embraces, and supports, as a core tenet and strength, the diversity of its research, collections, communities, and audiences.
  • Dignity and Respect. The Smithsonian Open Access Initiative recognizes the dignity and respect of those who created or are represented in the Smithsonian’s collections and content.

The Wikimedia Foundation, which oversees Wikipedia, applauded the Smithsonian’s decision, citing in particular hopes that having so much hi-res art and mineable research data available online will help better balance representation.

More info:

https://www.si.edu/openaccess
https://www.engadget.com/2020-02-25-smithsonian-open-access-collection-images.html
https://www.wired.com/story/smithsonian-puts-2-8-million-images-public-domain/

]]>
Download Folding@home https://unthinking.photography/imgexhaust/download-folding-home Thu, 09 Apr 2020 00:00:00 +0100

Folding@home (FAH or F@h) is a distributed computing project for simulating protein dynamics, including the process of protein folding and the movements of proteins implicated in a variety of diseases. It brings together citizen scientists who volunteer to run simulations of protein dynamics on their personal computers. Insights from this data are helping scientists to better understand biology, and providing new opportunities for developing therapeutics.

Download Folding@home

Partial opening of the mouth of the COVID-19 Demogorgon (aka spike) captured by our simulations. The three colors are the three proteins that come together to form the spike. Each is made of a linear chain of chemicals called amino acids. The ribbons trace out each chain. The transparent surface is the surface of the COVID-19 Demogorgon. The three proteins that make up the Demogorgon must spread apart to reveal the ACE2 binding site, which initiates infection by attaching to a protein called ACE2 on the surface of human cells. This movie captures part of that opening motion.

]]>
PALM is an ongoing project, initiated during the founding year of U5 more than ten years ago. When the members of U5 started... https://unthinking.photography/imgexhaust/palm-is-an-ongoing-project-initiated-during-the-founding-year-of-u5-more-than-ten-years-ago-when-the-members-of-u5-started Thu, 09 Apr 2020 00:00:00 +0100

PALM is an ongoing project, initiated during the founding year of U5 more than ten years ago. When the members of U5 started their collaborative work, a camera was installed in their basement studio. The camera surveying the studio space sent a live stream to a webpage, which gave each member the opportunity to observe and respond to each other’s way of working. The camera accompanied the collective from their first workspace at the Zurich University of the Arts to all their following studios. The project has developed very slowly over the years. Nevertheless, an incredible amount of data has already accumulated during this time.

Increasingly, we perceive our environment through the detour of digital media. PALM automates this detour without generating clearly usable data. PALM deals with questions of contemporary photography, everyday documentation, control and chance, surveillance and self-portrayal. PALM images are haikus. PALM is the catharsis of social media. PALM is a scientific instrument. 

PALM processes images on the order of several hundred thousand per day. If enough cameras are in operation, a kind of fragmentary, poetic puzzle emerges. Such a data collection could also be used scientifically with suitable means. Through image processing, information from individual images could be brought into context. For example, it could be possible to reconstruct places for anthropological or forensic purposes, to create maps or trace a process.

All images streamed are archived. This non-public archive contains more than 100 million images.

Read more:

https://u5.92u.ch/index.php?title=PALM

http://palm.92u.ch/palm/index.php/Special:U5Palm

]]>
Unevenly Distributed https://unthinking.photography/articles/unevenly-distributed Fri, 06 Mar 2020 00:00:00 +0000 Given that a person’s gender cannot be inferred by appearance, we have decided to remove these labels in order to align with the... https://unthinking.photography/imgexhaust/given-that-a-persons-gender-cannot-be-inferred-by-appearance-we-have-decided-to-remove-these-labels-in-order-to-align-with-the Wed, 04 Mar 2020 00:00:00 +0000 Given that a person’s gender cannot be inferred by appearance, we have decided to remove these labels in order to align with the Artificial Intelligence Principles at Google, specifically Principle #2: Avoid creating or reinforcing unfair bias.

https://www.businessinsider.com/google-cloud-vision-api-wont-tag-images-by-gender-2020-2?r=US&IR=T

]]>
A popular self-driving car dataset is missing labels for hundreds of pedestrians https://unthinking.photography/imgexhaust/a-popular-self-driving-car-dataset-is-missing-labels-for-hundreds-of-pedestrians Wed, 12 Feb 2020 00:00:00 +0000 A popular self-driving car dataset is missing labels for hundreds of pedestrians

https://blog.roboflow.ai/self-driving-car-dataset-missing-pedestrians/

]]>
Simon Weckert - Google Maps Hacks Performance & Installation, 2020 " 99 second hand smartphones are transported in a handcart to... https://unthinking.photography/imgexhaust/simon-weckert-google-maps-hacks-performance-installation-2020-99-second-hand-smartphones-are-transported-in-a-handcart-to Wed, 05 Feb 2020 00:00:00 +0000

Simon Weckert - Google Maps Hacks

Performance & Installation, 2020

“99 second-hand smartphones are transported in a handcart to generate a virtual traffic jam in Google Maps. Through this activity, it is possible to turn a green street red, which has an impact in the physical world by navigating cars onto another route to avoid being stuck in traffic.”

#googlemapshacks

The advent of Google’s Geo Tools began in 2005 with Maps and Earth, followed by Street View in 2007. They have since become enormously more technologically advanced. Google’s virtual maps have little in common with classical analogue maps. The most significant difference is that Google’s maps are interactive – scrollable, searchable and zoomable. Google’s map service has fundamentally changed our understanding of what a map is, how we interact with maps, their technological limitations, and how they look aesthetically.

In this fashion, Google Maps makes virtual changes to the real city. Applications such as ›Airbnb‹ and ›Carsharing‹ have an immense impact on cities: on their housing market and mobility culture, for instance. There is also a major impact on how we find a romantic partner, thanks to dating platforms such as ›Tinder‹, and on our self-quantifying behaviour, thanks to the ›nike‹ jogging app. The same goes for map-based food delivery apps like ›deliveroo‹ or ›foodora‹. All of these apps function via interfaces with Google Maps and create new forms of digital capitalism and commodification. Without these maps, car sharing systems, new taxi apps, bike rental systems and online transport agency services such as ›Uber‹ would be unthinkable. An additional mapping market is provided by self-driving cars; again, Google has already established a position for itself.

With its Geo Tools, Google has created a platform that allows users and businesses to interact with maps in a novel way. This means that questions relating to power in the discourse of cartography have to be reformulated. But what is the relationship between the art of enabling and techniques of supervision, control and regulation in Google’s maps? Do these maps function as dispositive nets that determine the behaviour, opinions and images of living beings, exercising power and controlling knowledge? Maps, which themselves are the product of a combination of states of knowledge and states of power, have an inscribed power dispositive. Google’s simulation-based map and world models determine the actuality and perception of physical spaces and the development of action models.

text by Moritz Ahlert - The Power of Virtual Maps

Source: http://www.simonweckert.com/googlemapshacks.html

]]>
The Next Biennial Should be Curated by a Machine https://unthinking.photography/imgexhaust/the-next-biennial-should-be-curated-by-a-machine Thu, 30 Jan 2020 00:00:00 +0000 Early Modern Computer Vision - Leonardo Impett https://unthinking.photography/imgexhaust/early-modern-computer-vision-leonardo-impett Wed, 29 Jan 2020 00:00:00 +0000 Early Modern Computer Vision - Leonardo Impett

https://docs.google.com/document/d/1LKs82uKkSgQ-4wGUQ4Dwzxgnerx2e6zbHf4iGIHuJmI/edit#heading=h.60chgdizcy6h

]]>
Egor Tsvetkov - Your Face Is Big Data https://unthinking.photography/imgexhaust/egor-tsvetkov-your-face-is-big-data Wed, 29 Jan 2020 00:00:00 +0000 Egor Tsvetkov - Your Face Is Big Data

The next time you ride the subway in St. Petersburg, watch out for 21-year-old photographer Egor Tsvetkov. He recently unveiled a new project called “YOUR FACE IS BIG DATA,” which he created by semi-secretly photographing passengers seated across from him on the city’s metro, without asking their permission.

Using these pictures, Tsvetkov turned to an online service called FindFace, which allows you to upload random photographs of a person’s face. Using facial recognition technology, FindFace then searches the image database of Vkontakte, Russia’s most popular social network. If it finds a match, it does its best to identify the person in the picture. Any photo posted and tagged on Vkontakte can be found this way, provided that the user has opted to make his or her postings public. (Most users do.)

FindFace was launched this February by the Moscow-based company N-Tech.lab, which says the service was designed to facilitate online dating, allowing users to photograph anyone on the street, locate their social media profile, and learn more about them, before trying to get acquainted directly.

Tsvetkov says FindFace was able to find social media accounts for 70 percent of the young people he photographed, though it worked only half as often with pictures of older individuals.

source
artist’s website

]]>
A team of Janelia researchers reports hitting a critical milestone: they’ve traced the path of every neuron in a portion of the... https://unthinking.photography/imgexhaust/a-team-of-janelia-researchers-reports-hitting-a-critical-milestone-theyve-traced-the-path-of-every-neuron-in-a-portion-of-the Tue, 28 Jan 2020 00:00:00 +0000 A team of Janelia researchers reports hitting a critical milestone: they’ve traced the path of every neuron in a portion of the female fruit fly brain they’ve dubbed the “hemibrain.” The map encompasses 25,000 neurons – roughly a third of the fly brain, by volume – but its impact is outsized. It includes regions of keen interest to scientists – those that control functions like learning, memory, smell, and navigation. With more than 20 million neural connections pinpointed so far, it’s the biggest and most detailed map of the fly brain ever completed.

Link to the viewer
Link to the research
Source

]]>
annotation-crowd sourcing-surveillance-artists projects https://unthinking.photography/imgexhaust/annotation-crowd-sourcing-surveillance-artists-projects Tue, 28 Jan 2020 00:00:00 +0000 Further watching: Watched and Being Watched on Social Media https://www.youtube.com/watch?v=kWPOyuoF7oA https://unthinking.photography/imgexhaust/further-watching-watched-and-being-watched-on-social-mediahttps-www-youtube-com-watch-v-kwpoyuof7oa Tue, 28 Jan 2020 00:00:00 +0000 Further watching: Watched and Being Watched on Social Media https://www.youtube.com/watch?v=kWPOyuoF7oA

]]>
Reconstructing 3D human shape and pose from a monocular image Reconstructing 3D human shape and pose from a monocular image is... https://unthinking.photography/imgexhaust/reconstructing-3d-human-shape-and-pose-from-a-monocular-image-reconstructing-3d-human-shape-and-pose-from-a-monocular-image-is Tue, 28 Jan 2020 00:00:00 +0000 Reconstructing 3D human shape and pose from a monocular image

Reconstructing 3D human shape and pose from a monocular image is challenging despite the promising results achieved by the most recent learning-based methods. The commonly observed misalignment comes from the fact that the mapping from images to the model space is highly non-linear and the rotation-based pose representation of the body model is prone to result in the drift of joint positions. In this work, we investigate learning 3D human shape and pose from dense correspondences of body parts and propose a Decompose-and-aggregate Network (DaNet) to address these issues. DaNet adopts the dense correspondence maps, which densely build a bridge between 2D pixels and 3D vertexes, as intermediate representations to facilitate the learning of 2D-to-3D mapping. The prediction modules of DaNet are decomposed into one global stream and multiple local streams to enable global and fine-grained perceptions for the shape and pose predictions, respectively. Messages from local streams are further aggregated to enhance the robust prediction of the rotation-based poses, where a position-aided rotation feature refinement strategy is proposed to exploit spatial relationships between body joints. Moreover, a Part-based Dropout (PartDrop) strategy is introduced to drop out dense information from intermediate representations during training, encouraging the network to focus on more complementary body parts as well as adjacent position features. The effectiveness of our method is validated on both indoor and real-world datasets including the Human3.6M, UP3D, and DensePose-COCO datasets. Experimental results show that the proposed method significantly improves the reconstruction performance in comparison with previous state-of-the-art methods. Our code will be made publicly available at https://hongwenzhang.github.io/dense2mesh

Source: https://mobile.twitter.com/golan/status/1212258827321647106
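The Part-based Dropout (PartDrop) idea in the abstract can be sketched independently of the authors’ code: rather than zeroing random individual units, whole body-part regions of an intermediate map are erased so the network must lean on the remaining parts. A rough NumPy sketch (shapes and masks are hypothetical):

```python
import numpy as np

def part_drop(features, part_masks, drop_rate=0.3, rng=None):
    """Sketch of Part-based Dropout: zero out entire part regions.

    features:   (C, H, W) intermediate feature map
    part_masks: (P, H, W) boolean masks, one per body part (hypothetical)
    Each part is erased independently with probability drop_rate, forcing
    a downstream network to rely on the remaining, complementary parts.
    """
    if rng is None:
        rng = np.random.default_rng()
    out = features.copy()
    for mask in part_masks:
        if rng.random() < drop_rate:
            out[:, mask] = 0.0  # wipe this whole part across all channels
    return out

# Tiny demo: a 2-channel 4x4 map with two rectangular "parts".
feats = np.ones((2, 4, 4))
parts = np.zeros((2, 4, 4), dtype=bool)
parts[0, :2, :] = True   # e.g. "upper body"
parts[1, 2:, :] = True   # e.g. "lower body"

dropped = part_drop(feats, parts, drop_rate=1.0)  # force both parts off
print(dropped.sum())  # 0.0: every unit belonged to a dropped part
```

Dropping structured regions instead of scattered pixels is the design point: scattered dropout leaves enough of each part for the network to cheat with, while erasing a whole part does not.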

]]>
Christopher Meerdo - Bundle Umbra https://unthinking.photography/imgexhaust/christopher-meerdo-bundle-umbra Mon, 27 Jan 2020 00:00:00 +0000 Christopher Meerdo - Bundle Umbra

Document, 2020

Drawing from hacker and whistleblower file caches sourced on the dark web, the exhibition considers what is seen and what remains invisible within international information systems.

Routed through 3D displacement mapping, segments of the archives take the form of thermoformed and perforated plastic collage works. Each functions as a light-emitting panel through the artist’s use of custom electronics and electroluminescent paint.

The work draws from secondary and idiosyncratic symbols, handwritten notes, tables, and graphics located within the archives. The source material emerges from various activities over the past 30 years. This includes documents from Chinese secret prisons, Fraternal Order of Police confidential contracts, papers seized during the US embassy takeover in Tehran in the late 1970s, and recently revealed banking disclosures from the Cayman Islands, among others.

https://www.meerd.ooo/

]]>
There is one other aspect to Prineville that made it an ideal location for Facebook’s facility. In 1911, when railroads were... https://unthinking.photography/imgexhaust/there-is-one-other-aspect-to-prineville-that-made-it-an-ideal-location-for-facebooks-facility-in-1911-when-railroads-were Mon, 27 Jan 2020 00:00:00 +0000 There is one other aspect to Prineville that made it an ideal location for Facebook’s facility. In 1911, when railroads were connecting the rural towns of central Oregon, Prineville seemed slated to be forgotten. Headed south from The Dalles, the main rail line bypassed the municipality (which was as much a death sentence for a town in 1911 as a new interstate route cutting around an old business district is in 2018). In a 1917 election however, Prineville residents voted 355 to 1 to construct a connection to the main rail line 19 miles away. Run by the city, this railroad has served mostly as a commercial link for the lumber industry. More importantly, however, the publicly-owned rail line means that the City of Prineville retained ownership of the land under the rail, a non-interrupted connection to the major industrial lines of the railroad (and later highway) that the Prineville line connects to.
Although the actual paths of fiber-optic cables are considered state and company secrets, it is not unlikely that most or all of the Facebook facility’s data runs along this route. In The Prehistory of the Cloud, Tung-Hui Hu describes the origin of private data service with telecommunications giant Sprint (Southern Pacific Railroad Internal Network), which sold excess fiber-optic bandwidth along train lines to consumers beginning in 1978. He goes on to state that “virtually all traffic on the U.S. Internet runs across the same routes established in the 19th century.”

It was raining in the data center - Everest Pipkin

https://medium.com/s/story/it-was-raining-in-the-data-center-9e1525c37cc3#1c14

]]>
Until recently, Hoan Ton-That’s greatest hits included an obscure iPhone game and an app that let people put Donald Trump’s... https://unthinking.photography/imgexhaust/until-recently-hoan-ton-thats-greatest-hits-included-an-obscure-iphone-game-and-an-app-that-let-people-put-donald-trumps Mon, 27 Jan 2020 00:00:00 +0000

Until recently, Hoan Ton-That’s greatest hits included an obscure iPhone game and an app that let people put Donald Trump’s distinctive yellow hair on their own photos.

Then Mr. Ton-That — an Australian techie and onetime model — did something momentous: He invented a tool that could end your ability to walk down the street anonymously, and provided it to hundreds of law enforcement agencies, ranging from local cops in Florida to the F.B.I. and the Department of Homeland Security.

His tiny company, Clearview AI, devised a groundbreaking facial recognition app. You take a picture of a person, upload it and get to see public photos of that person, along with links to where those photos appeared. The system — whose backbone is a database of more than three billion images that Clearview claims to have scraped from Facebook, YouTube, Venmo and millions of other websites — goes far beyond anything ever constructed by the United States government or Silicon Valley giants. Full story at NYTimes.

Also related to the Russian facial recognition site SearchFace/FindClone:
https://www.bellingcat.com/resources/how-tos/2019/02/19/using-the-new-russian-facial-recognition-site-searchface-ru/

]]>
This object-recognition dataset stumped the world’s best computer vision models https://unthinking.photography/imgexhaust/this-object-recognition-dataset-stumped-the-worlds-best-computer-vision-models Fri, 13 Dec 2019 00:00:00 +0000 To be preoccupied with the aesthetic properties of digital imagery, as are many theorists and critics, is to evade the... https://unthinking.photography/imgexhaust/to-be-preoccupied-with-the-aesthetic-properties-of-digital-imagery-as-are-many-theorists-and-critics-is-to-evade-the Mon, 09 Dec 2019 00:00:00 +0000 To be preoccupied with the aesthetic properties of digital imagery, as are many theorists and critics, is to evade the subordination of the image to a broad field of non-visual operations and requirements.

Jonathan Crary - 24/7: Late Capitalism and the Ends of Sleep. Verso, 2014
]]>
From Spectacle to Extraction. And All Over Again. https://unthinking.photography/articles/from-spectacle-to-extraction-and-all-over-again Fri, 29 Nov 2019 00:00:00 +0000 I’m looking at you, looking at me https://unthinking.photography/articles/im-looking-at-you-looking-at-me Fri, 22 Nov 2019 00:00:00 +0000 Where Did ImageNet Come From? https://unthinking.photography/articles/where-did-imagenet-come-from Thu, 21 Nov 2019 00:00:00 +0000 An introductory presentation about Data / Set / Match, a year-long programme seeking new ways to present, visualise and... https://unthinking.photography/imgexhaust/an-introductory-presentation-about-data-set-match-a-year-long-programme-seeking-new-ways-to-present-visualise-and Thu, 21 Nov 2019 00:00:00 +0000 An introductory presentation about Data / Set / Match, a year-long programme seeking new ways to present, visualise and interrogate contemporary image datasets. Departing from traditional 19th and 20th century taxonomies used to organise and store images, we will be looking at the effect that digital technology has had on these systems, and how new categorisations increasingly influence the way humans and machines see and understand the world today.

Computer scientists are highly influential, yet often unacknowledged, creators and collectors of photographic images in the 21st century. Technologists working in the fields of machine vision and Artificial Intelligence rely on the production and annotation of massive collections of digital images to train machines to ‘see’ and understand the world. These image datasets are typically generated by scraping images off the web (e.g. Labeled Faces in the Wild), or shot by computer scientists themselves (e.g. AT&T Faces).

One of the most significant datasets today is ImageNet (2009), a visual database of over 14 million images created by a team of computer scientists led by Dr Fei-Fei Li at Stanford University. A hugely expensive and ambitious undertaking, ImageNet presents an encyclopedic image of the world, where every concept (e.g. ‘cat’, ‘bank’) has been mapped against descriptive images painstakingly annotated and collected off the web. ImageNet was the result of Dr Li’s hypothesis that computer vision would ultimately rely on the quality and scale of the training data - as opposed to the optimisation of algorithms. Its creation informed the explosion of work in the field of Artificial Intelligence and machine learning utilising neural networks.

Over the course of 12 months, Data/Set/Match aims to draw attention to these datasets and explore their creation, influence and uses. At the same time, the programme will connect the image dataset to historical photographic discourses of the archive, truth and power. 

https://tpg.org.uk/dsm

]]>
No, You Don’t Really Look Like That https://unthinking.photography/imgexhaust/no-you-dont-really-look-like-that Thu, 21 Nov 2019 00:00:00 +0000 Unseen Portraits https://unthinking.photography/imgexhaust/unseen-portraits Thu, 21 Nov 2019 00:00:00 +0000 EXCLUSIVE: This Is How the U.S. Military’s Massive Facial Recognition System Works https://unthinking.photography/imgexhaust/exclusive-this-is-how-the-u-s-militarys-massive-facial-recognition-system-works Wed, 20 Nov 2019 00:00:00 +0000 Shanzhai Lyric (@shanzhai_lyric on IG) https://unthinking.photography/imgexhaust/shanzhai-lyric-shanzhai-lyric-on-ig Wed, 20 Nov 2019 00:00:00 +0000 Surveillance Company Hikvision Markets Uyghur Ethnicity Analytics https://unthinking.photography/imgexhaust/surveillance-company-hikvision-markets-uyghur-ethnicity-analytics Wed, 20 Nov 2019 00:00:00 +0000 Hikvision has marketed, on its China website, an AI camera that automatically identifies Uyghurs, only covering it up days ago after IPVM questioned them about it.


This AI technology allows the PRC to automatically track Uyghur people, one of the world’s most persecuted minorities.

The camera is the DS-2CD7A2XYZ-JM/RX, an AI camera sold in China:

Read full story: https://ipvm.com/reports/hikvision-uyghur

]]>
An Introduction to Image Datasets https://unthinking.photography/articles/an-introduction-to-image-datasets Fri, 15 Nov 2019 00:00:00 +0000 How Google and Instagram think about the future of photos https://unthinking.photography/imgexhaust/how-google-and-instagram-think-about-the-future-of-photos Sat, 02 Nov 2019 00:00:00 +0000 Dare to share your poop for science—and help change the future of gut health. We’re building the world’s first and largest poop... https://unthinking.photography/imgexhaust/dare-to-share-your-poop-for-scienceand-help-change-the-future-of-gut-health-were-building-the-worlds-first-and-largest-poop Fri, 01 Nov 2019 00:00:00 +0000 Dare to share your poop for science—and help change the future of gut health. We’re building the world’s first and largest poop image database, so we can train an AI to change the future of gut health.

https://seed.com/poop/

]]>
This X Does Not Exist https://unthinking.photography/imgexhaust/this-x-does-not-exist Fri, 01 Nov 2019 00:00:00 +0000 Using generative adversarial networks (GANs), we can learn how to create realistic-looking fake versions of almost anything, as shown by this collection of sites.

]]>
Thread: https://mobile.twitter.com/AllanXia/status/1168092238770921472 In case you haven’t heard, #ZAO is a Chinese app which... https://unthinking.photography/imgexhaust/thread-https-mobile-twitter-com-allanxia-status-1168092238770921472-in-case-you-havent-heard-zao-is-a-chinese-app-which Fri, 04 Oct 2019 00:00:00 +0100 Thread: https://mobile.twitter.com/AllanXia/status/1168092238770921472

In case you haven’t heard, #ZAO is a Chinese app which completely blew up since Friday. Best application of ‘Deepfake’-style AI facial replacement I’ve ever seen. Here’s an example of me as DiCaprio (generated in under 8 secs from that one photo in the thumbnail) … continues … 

twitter / allan xia | www

]]>
Lil Miquela interviews Ines Alpha about the future of beauty https://unthinking.photography/imgexhaust/lil-miquela-interviews-ines-alpha-about-the-future-of-beauty Thu, 03 Oct 2019 00:00:00 +0100 Lil Miquela interviews Ines Alpha about the future of beauty

Link: https://www.dazeddigital.com/beauty/head/article/43924/1/lil-miquela-ines-alpha-leading-3d-make-up-revolution / @dazeddigital​ / April 2019

]]>
RWM - SON[I]A https://unthinking.photography/imgexhaust/rwm-son-i-a Wed, 02 Oct 2019 00:00:00 +0100 The Power of Face Filters as Augmented Reality Art for the Masses https://unthinking.photography/imgexhaust/the-power-of-face-filters-as-augmented-reality-art-for-the-masses Wed, 02 Oct 2019 00:00:00 +0100 https://mobile.twitter.com/raeBress/status/1169625733750124544 / twitter/showyodiq  https://unthinking.photography/imgexhaust/https-mobile-twitter-com-raebress-status-1169625733750124544-twitter-showyodiq Tue, 01 Oct 2019 00:00:00 +0100 https://mobile.twitter.com/raeBress/status/1169625733750124544 / twitter/showyodiq 

]]>
Generated Photos https://unthinking.photography/imgexhaust/generated-photos Mon, 30 Sep 2019 00:00:00 +0100 Generated Photos

100,000 Faces Generated by AI
Free to Download

These people aren’t real!

We are building the next generation of media through the power of AI: an original machine-learning dataset using StyleGAN (an amazing resource by NVIDIA) to construct a realistic set of 100,000 faces. Copyrights, distribution rights, and infringement claims will soon be things of the past. To give you a glimpse of what we have been working on, we created a free resource of 100k high-quality faces. Every image was generated by our internal AI systems as they continually improve. Use them in your presentations, projects, mockups or wherever you like; all we ask is a link back to us!

Generated Faces are Perfect for:

User Interface Design for Web and Mobile Applications
Educational Projects
Handouts and Worksheets
Emails and Newsletters
Landing Pages
Presentations
User Avatars

The dataset has been built by taking 29,000+ photos of 69 different models over the last 2 years in their studio. The photos were taken in a controlled environment (similar lighting and post-processing) to make sure that each face had consistent high output quality. After shooting, the photographs underwent labor-intensive tasks such as tagging and categorizing.

more coverage: https://www.vice.com/en_us/article/mbm3kb/generated-photos-thinks-it-can-solve-diversity-with-100000-fake-ai-faces?fbclid=IwAR1eUYCPu8hQ7A_sAgz_EOFFw4kAMFjguiRxtHPYHWeUpgcHw2iMoSAs9AU

FAQ: https://medium.com/generated-photos/frequently-asked-questions-cc919004de0d

]]>
Three New Cameras https://unthinking.photography/imgexhaust/three-new-cameras Mon, 30 Sep 2019 00:00:00 +0100 Three New Cameras

Camera Restricta - Philipp Schmitt, 2015

Camera Restricta is a speculative design of a new kind of camera. It locates itself via GPS and searches online for photos that have been geotagged nearby. If the camera decides that too many photos have been taken at your location, it retracts the shutter and blocks the viewfinder. You can’t take any more pictures here.

https://philippschmitt.com/work/camera-restricta
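The camera's rule can be sketched in a few lines. This is a hypothetical illustration of the decision logic only; the lookup function, search radius and threshold are our assumptions, not Schmitt's actual parameters:

```python
# Hypothetical sketch of Camera Restricta's decision rule: retract the
# shutter when too many geotagged photos already exist near the camera's
# GPS position. count_geotagged_photos_near and THRESHOLD are
# illustrative assumptions.

THRESHOLD = 500  # nearby photos allowed before the shutter retracts

def shutter_allowed(lat, lon, count_geotagged_photos_near):
    """Return True if the camera may still take a photo at this location."""
    nearby = count_geotagged_photos_near(lat, lon, radius_m=35)
    return nearby < THRESHOLD
```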

Descriptive Camera - Matt Richardson, 2012

Instead of producing an image, this prototype uses crowdsourcing to output a text description of the scene. After the shutter button is pressed, the photo is sent to Mechanical Turk for processing and the camera waits for the results. A yellow LED indicates that the results are still “developing”, in a nod to film-based photo technology. With a HIT price of $1.25, results are typically returned within six minutes, sometimes as fast as three. The thermal printer outputs the resulting text in the style of a Polaroid print.

http://mattrichardson.com/Descriptive-Camera/

Draw This - Dan Macnish, 2018

Draw This is a Polaroid camera that draws cartoons. You point and shoot, and out pops a cartoon: the camera’s best interpretation of what it saw. The camera is a mash-up of a neural network for object recognition, the Google Quick, Draw! dataset, a thermal printer, and a Raspberry Pi.

https://danmacnish.com/2018/07/01/draw-this/

]]>
What do you see, YOLO9000? by Taller Estampa | Soy Cámara YOLO9000 is a trained object recognition neural network with a... https://unthinking.photography/imgexhaust/what-do-you-see-yolo9000-by-taller-estampa-soy-camara-yolo9000-is-a-trained-object-recognition-neuronal-network-with-a Fri, 27 Sep 2019 00:00:00 +0100 What do you see, YOLO9000? by Taller Estampa | Soy Cámara

YOLO9000 is an object-recognition neural network trained on a dataset of 9,418 words and millions of images. It is one of the many artificial vision tools in development, designed for automatic image annotation. ¿Qué es lo que ves, YOLO9000? is a heterodox audiovisual investigation of its mechanisms, its possibilities and its world. Based on the project “The bad student. Critical pedagogy for artificial intelligences”, 2017-2018.

]]>
Simultaneous live-streaming on 40 accounts selling women’s clothes. #Chinaecommerce (source: twitter/mbrennanchina) https://unthinking.photography/imgexhaust/simutaneous-live-streaming-on-40-accounts-selling-womens-clothes-chinaecommerce-source-twitter-mbrennanchina Tue, 24 Sep 2019 00:00:00 +0100 Simultaneous live-streaming on 40 accounts selling women’s clothes. #Chinaecommerce (source: twitter/mbrennanchina)

]]>
DeepPrivacy https://unthinking.photography/imgexhaust/deepprivacy Tue, 17 Sep 2019 00:00:00 +0100 DeepPrivacy

A fully automatic anonymization technique for images.

This repository contains the source code for the paper “DeepPrivacy: A Generative Adversarial Network for Face Anonymization”, published at ISVC 2019.

The DeepPrivacy GAN never sees any privacy sensitive information, ensuring a fully anonymized image. It utilizes bounding box annotation to identify the privacy-sensitive area, and sparse pose information to guide the network in difficult scenarios. 

DeepPrivacy detects faces with state-of-the-art detection methods: DSFD is used to detect faces in the image, and Mask R-CNN is used to generate sparse pose information for each face.

https://github.com/hukkelas/DeepPrivacy?fbclid=IwAR1amvxxMI9M02ETByw2ZUsphEYEaI2ukNMf4fueRzwFtECJoTqSNbvQ5oI

]]>
Strike (with) a Pose: Neural networks are easily fooled by strange poses of familiar objects https://unthinking.photography/imgexhaust/strike-with-a-pose-neural-networks-are-easily-fooled-by-strange-poses-of-familiar-objects Wed, 11 Sep 2019 00:00:00 +0100 Strike (with) a Pose: Neural networks are easily fooled by strange poses of familiar objects

Despite excellent performance on stationary test sets, deep neural networks (DNNs) can fail to generalize to out-of-distribution (OoD) inputs, including natural, non-adversarial ones, which are common in real-world settings. In this paper, we present a framework for discovering DNN failures that harnesses 3D renderers and 3D models. That is, we estimate the parameters of a 3D renderer that cause a target DNN to misbehave in response to the rendered image. Using our framework and a self-assembled dataset of 3D objects, we investigate the vulnerability of DNNs to OoD poses of well-known objects in ImageNet. For objects that are readily recognized by DNNs in their canonical poses, DNNs incorrectly classify 97% of their poses. In addition, DNNs are highly sensitive to slight pose perturbations (e.g. 8 degrees of rotation). Importantly, adversarial poses transfer across models and datasets. We find that 99.9% and 99.4% of the poses misclassified by Inception-v3 also transfer to the AlexNet and ResNet-50 image classifiers trained on the same ImageNet dataset, respectively, and 75.5% transfer to the YOLO-v3 object detector trained on MS COCO.

Michael Alcorn, Qi Li, Zhitao Gong, Chengfei Wang, Long Mai, Wei-shinn Ku, Anh Nguyen, 2019

http://anhnguyen.me/project/strike-with-a-pose/

]]>
(Can’t) Picture This 2: An Analysis of WeChat’s Realtime Image Filtering in Chats By Jeffrey Knockel and Ruohan Xiong We found... https://unthinking.photography/imgexhaust/cant-picture-this-2-an-analysis-of-wechats-realtime-image-filtering-in-chats-by-jeffrey-knockel-and-ruohan-xiong-we-found Thu, 29 Aug 2019 00:00:00 +0100 (Can’t) Picture This 2:
An Analysis of WeChat’s Realtime Image Filtering in Chats

By Jeffrey Knockel and Ruohan Xiong

We found that Tencent implements realtime, automatic censorship of chat images on WeChat based on text contained in images and on an image’s visual similarity to those on a blacklist. Tencent facilitates realtime filtering by maintaining a hash index populated by MD5 hashes of images sent by users of the chat platform. If the MD5 hash of an image sent over the chat platform is not in the hash index, then the image is not filtered. Instead, it is queued for automatic analysis. If it is found to be sensitive, then its MD5 hash is added to the hash index, and it will be filtered the next time a user attempts to send an image with the same hash.
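The mechanism the report describes can be sketched in a few lines. This is a minimal illustration of the hash-index logic only; names like `hash_index` and `analyse_later` are ours, not Tencent's:

```python
import hashlib

# Sketch of the realtime filtering described in the report: images whose
# MD5 is already in the index are blocked immediately; unknown images
# pass through but are queued for offline analysis, which may add their
# hash to the index for next time.

hash_index = set()  # MD5 hashes of images already judged sensitive

def md5_of(image_bytes: bytes) -> str:
    return hashlib.md5(image_bytes).hexdigest()

def handle_outgoing_image(image_bytes: bytes, analyse_later) -> bool:
    """Return True if the image is delivered, False if filtered."""
    digest = md5_of(image_bytes)
    if digest in hash_index:
        return False  # already blacklisted: filtered in real time
    # Not in the index: deliver now, but queue for automatic analysis.
    analyse_later(image_bytes, digest)
    return True

def offline_analysis(image_bytes: bytes, digest: str, is_sensitive) -> None:
    # If analysis flags the image, future sends of the same bytes
    # (same MD5) will be filtered.
    if is_sensitive(image_bytes):
        hash_index.add(digest)
```

Note how this design reproduces the report's key finding: an exact copy of a flagged image is censored on the second send, while any image with a different hash sails through at least once.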

This finding indicates that censorship measurement—like the kind conducted in this report—not only evaluates censorship but can also influence and modify the behaviour of a realtime, automatic censorship system by introducing novel items that can be flagged as sensitive and subsequently censored. This helps us understand previous measurements and has implications for future censorship measurement research.

Read the full report: https://citizenlab.ca/2019/07/cant-picture-this-2-an-analysis-of-wechats-realtime-image-filtering-in-chats/

]]>
An article from the NYT Privacy Project on The Racist History of Facial Recognition. Starting with early scientific facial... https://unthinking.photography/imgexhaust/an-article-from-the-nyt-privacy-project-on-thethe-racist-history-of-facial-recognition-starting-with-early-scientific-facial Wed, 28 Aug 2019 00:00:00 +0100 An article from the NYT Privacy Project on The Racist History of Facial Recognition. It starts with early scientific facial analysis in the 19th century, which tried to locate the essence of the criminal face through “pictorial statistics”, and moves to 21st-century forms of classification and surveillance that seem to echo the same problematic and racist ideologies.

“We’ve been here before. Much like the 19th-century technologies of photography and composite portraits lent “objectivity” to pseudoscientific physiognomy, today, computers and artificial intelligence supposedly distance facial analysis from human judgment and prejudice. In reality, algorithms that rely on a flawed understanding of expressions and emotions can just make prejudice more difficult to spot.”

By Sahil Chinoy

https://www.nytimes.com/2019/07/10/opinion/facial-recognition-race.html

]]>
#facebook is embedding tracking data inside photos you download. https://unthinking.photography/imgexhaust/facebook-is-embedding-tracking-data-inside-photos-you-download Wed, 28 Aug 2019 00:00:00 +0100 The success of ImageNet highlighted that in the era of deep learning, data was at least as important as algorithms. Not only did... https://unthinking.photography/imgexhaust/the-success-of-imagenet-highlighted-that-in-the-era-of-deep-learning-data-was-at-least-as-important-as-algorithms-not-only-did Tue, 27 Aug 2019 00:00:00 +0100

The success of ImageNet highlighted that in the era of deep learning, data was at least as important as algorithms. Not only did the ImageNet dataset enable that very important 2012 demonstration of the power of deep learning, but it also allowed a breakthrough of similar importance in transfer learning: researchers soon realized that the weights learned in state of the art models for ImageNet could be used to initialize models for completely other datasets and improve performance significantly.

Pretrained ImageNet models have been used to achieve state-of-the-art results in tasks such as object detection, semantic segmentation, human pose estimation and video recognition. At the same time, they have enabled the application of Computer Vision to domains where the number of training examples is small and annotation is expensive. Transfer learning via pretraining on ImageNet is in fact so effective in CV that not using it is now considered foolhardy (Mahajan et al., 2018).
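The recipe described above (keep the pretrained weights fixed as a feature extractor, train only a small new head on the target task) can be sketched without any deep-learning library. The "backbone" below is a stand-in function, not a real ImageNet model; in practice one would load pretrained weights from a framework's model zoo:

```python
# Minimal, library-free sketch of the transfer-learning pattern: a
# frozen "pretrained backbone" supplies features; only a new linear
# head is trained on the target data (perceptron-style updates).

def pretrained_backbone(x):
    # Stand-in for frozen pretrained features: a fixed nonlinear embedding.
    return [x[0] + x[1], x[0] * x[1]]

def train_head(data, lr=0.1, epochs=200):
    """Fit weights w and bias b for a linear head on frozen features."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            f = pretrained_backbone(x)  # backbone is never updated
            pred = 1 if w[0] * f[0] + w[1] * f[1] + b > 0 else 0
            err = y - pred
            w = [w[i] + lr * err * f[i] for i in range(2)]
            b += lr * err
    return w, b
```

The point of the pattern is that the expensive part (the backbone) is reused as-is, so only a handful of head parameters need target-domain data.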

http://ruder.io/nlp-imagenet/ 
NLP’s ImageNet moment has arrived, Sebastian Ruder 
]]>
To mitigate the race bias in these datasets, we construct a novel face image dataset, containing 108,501 images, with an... https://unthinking.photography/imgexhaust/to-mitigate-the-race-bias-in-these-datasets-we-construct-a-novel-face-image-dataset-containing-108-501-images-with-an Thu, 15 Aug 2019 00:00:00 +0100 To mitigate the race bias in these datasets, we construct a novel face image dataset, containing 108,501 images, with an emphasis on balanced race composition. We define 7 race groups: White, Black, Indian, East Asian, Southeast Asian, Middle East, and Latino. Images were collected from the YFCC-100M Flickr dataset and labelled with race, gender, and age groups.

https://arxiv.org/abs/1908.04913 
Kimmo Kärkkäinen, Jungseock Joo

]]>
We introduce the first visual privacy dataset originating from people who are blind in order to better understand their privacy... https://unthinking.photography/imgexhaust/we-introduce-the-first-visual-privacy-dataset-originating-from-people-who-are-blind-in-order-to-better-understand-their-privacy Fri, 26 Jul 2019 00:00:00 +0100

We introduce the first visual privacy dataset originating from people who are blind in order to better understand their privacy disclosures and to encourage the development of algorithms that can assist in preventing their unintended disclosures.

VizWiz-Priv v1.0 dataset includes:
· non-private images
· private images with private content replaced by the ImageNet home-mean
· private images with private content replaced by fake content
· annotations

https://vizwiz.org/tasks-and-datasets/vizwiz-priv/

]]>
Visual Dialog requires an AI agent to hold a meaningful dialogue with humans in natural, conversational language about visual... https://unthinking.photography/imgexhaust/visual-dialog-requires-an-ai-agent-to-hold-a-meaningful-dialogue-with-humans-in-natural-conversational-language-about-visual Fri, 19 Jul 2019 00:00:00 +0100

Visual Dialog requires an AI agent to hold a meaningful dialogue with humans in natural, conversational language about visual content. Specifically, given an image, a dialogue history, and a follow-up question about the image, the task is to answer the question.

Dataset stats:
120k images from COCO
1 dialog / image
10 rounds of question-answers / dialogue
Total 1.2M dialogue question-answers

http://demo.visualdialog.org Abhishek Das, Satwik Kottur, Deshraj Yadav, Prithvijit Chattopadhyay, Viraj Prabhu, Arjun Chandrasekaran, Nirbhay Modhe, Khushi Gupta, Avi Singh, José M. F. Moura, Stefan Lee, Devi Parikh, Dhruv Batra

]]>
We introduce natural adversarial examples – real-world, unmodified, and naturally occurring examples that cause classifier... https://unthinking.photography/imgexhaust/we-introduce-natural-adversarial-examples-real-world-unmodified-and-naturally-occurring-examples-that-cause-classifier Fri, 19 Jul 2019 00:00:00 +0100 We introduce natural adversarial examples – real-world, unmodified, and naturally occurring examples that cause classifier accuracy to significantly degrade. We curate 7,500 natural adversarial examples and release them in an ImageNet classifier test set that we call IMAGENET-A. This dataset serves as a new way to measure classifier robustness.

https://arxiv.org/abs/1907.07174

Dan Hendrycks, Kevin Zhao, Steven Basart, Jacob Steinhardt, Dawn Song

]]>
As researchers, we have always wondered: if we scale up the amount of training data 10x, will the accuracy double? https://unthinking.photography/imgexhaust/as-researchers-we-have-always-wondered-if-we-scale-up-the-amount-of-training-data-10x-will-the-accuracy-double Wed, 17 Jul 2019 00:00:00 +0100 As researchers, we have always wondered: if we scale up the amount of training data 10x, will the accuracy double?

https://ai.googleblog.com/2017/07/revisiting-unreasonable-effectiveness.html

“The elephant in the room is where can we obtain a dataset that is 300x larger than ImageNet? At Google, we have been continuously working on building such datasets automatically to improve computer vision algorithms. Specifically, we have built an internal dataset of 300M images that are labeled with 18291 categories, which we call JFT-300M. The images are labeled using an algorithm that uses complex mixture of raw web signals, connections between web-pages and user feedback. This results in over one billion labels for the 300M images (a single image can have multiple labels). Of the billion image labels, approximately 375M are selected via an algorithm that aims to maximize label precision of selected images. However, there is still considerable noise in the labels: approximately 20% of the labels for selected images are noisy. Since there is no exhaustive annotation, we have no way to estimate the recall of the labels.”

“Building a dataset of 300M images should not be a final goal - as a community, we should explore if models continue to improve in a meaningful way in the regime of even larger (1 billion+ image) datasets.”

]]>
High Quality Face Recognition with Deep Metric Learning https://unthinking.photography/imgexhaust/high-quality-face-recognition-with-deep-metric-learning Tue, 16 Jul 2019 00:00:00 +0100 High Quality Face Recognition with Deep Metric Learning

The new example comes with pictures of bald Hollywood action heroes and uses the provided deep metric model to identify how many different people there are and which faces belong to each person. The input images are shown below along with the four automatically identified face clusters.

Just like the other example dlib models, the pre-trained model used by this example program is in the public domain. So you can use it for anything you want. Also, the model has an accuracy of 99.38% on the standard Labeled Faces in the Wild benchmark. This is comparable to other state-of-the-art models and means that, given two face images, it correctly predicts if the images are of the same person 99.38% of the time.
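The clustering step rests on the metric property described here: embeddings of the same person fall within a distance threshold (0.6 in dlib's example). The following is a toy sketch of threshold clustering on 2-D stand-in "embeddings"; dlib itself works with 128-D face descriptors and uses Chinese Whispers graph clustering, not this greedy pass:

```python
import math

# Toy illustration of metric-based face clustering: each embedding
# joins the first cluster whose representative lies within `threshold`,
# otherwise it starts a new cluster. Embeddings here are 2-D stand-ins.

def cluster_embeddings(embeddings, threshold=0.6):
    """Return clusters as lists of indices into `embeddings`."""
    clusters = []  # list of (representative, member indices)
    for idx, e in enumerate(embeddings):
        for rep, members in clusters:
            if math.dist(rep, e) < threshold:  # "same person" test
                members.append(idx)
                break
        else:
            clusters.append((e, [idx]))
    return [members for _, members in clusters]
```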

Davis King, 2017
http://blog.dlib.net/2017/02/high-quality-face-recognition-with-deep.html

]]>
While humans pay attention to the shapes of pictured objects, deep learning computer vision algorithms routinely latch on to the objects’ textures instead.

Image: Robert Geirhos

https://www.quantamagazine.org/where-we-see-shapes-ai-sees-textures-20190701/

]]>
Buttons, Sascha Pohflepp https://unthinking.photography/imgexhaust/buttons-sascha-pohflepp Tue, 09 Jul 2019 00:00:00 +0100 Buttons, Sascha Pohflepp

2006/2010

Between Blinks & Buttons is a twofold thesis project about the camera as a networked object. Through making their photos public on the Internet, individuals create traces of themselves. In addition to their value as a memory, each image contains a multitude of information about the context of its creation.

Through this meta-information, every image is linked to the precise moment in time when it was taken, making it possible to see what happened simultaneously in the world at that instant. This work tries to focus the user’s imagination on that other, to create narratives that run between one’s own memory and a stranger’s moment which happened to coincide in time.

http://www.pohflepp.net/Work/Buttons

]]>
Embodying Others https://unthinking.photography/articles/embodying-others Thu, 20 Jun 2019 00:00:00 +0100 Into the Universe of Rendered Architectural Images https://unthinking.photography/articles/into-the-universe-of-rendered-architectural-images Thu, 20 Jun 2019 00:00:00 +0100 The reason biases against women or people of colour appear in technology are complex. They’re often attributed to data sets... https://unthinking.photography/imgexhaust/the-reason-biases-against-women-or-people-of-colour-appear-in-technology-are-complex-theyre-often-attributed-to-data-sets Thu, 20 Jun 2019 00:00:00 +0100

The reasons biases against women or people of colour appear in technology are complex. They’re often attributed to data sets being incomplete and the fact that the technology is often made by people who aren’t from diverse backgrounds. That’s one argument at least – and in a sense, it’s correct. Increasing the diversity of people working in the tech industry is important. Many companies are also collecting more data to make it more representative of the people who use digital technology, in the vain hope of eliminating racist soap dispensers or recruitment bots that exclude women.

The problem is that these are social, not digital, problems. Attempting to solve those problems through more data and better algorithms only serves to hide the underlying causes of inequality. Collecting more data doesn’t actually make people better represented, instead it serves to increase how much they are being surveilled by poorly regulated tech companies. The companies become instruments of classification, categorising people into different groups by gender, ethnicity and economic class, until their database looks balanced and complete.

Doug Specht

Senior Lecturer in Media and Communications, University of Westminster 

https://theconversation.com/tech-companies-collect-our-data-every-day-but-even-the-biggest-datasets-cant-solve-social-issues-118133

]]>
The Entasis of Elon Musk https://unthinking.photography/articles/the-enstasis-of-elon-musk Wed, 19 Jun 2019 00:00:00 +0100 Mark Zuckerberg reveals the truth about Facebook and who really owns the future... https://unthinking.photography/imgexhaust/mark-zuckerberg-reveals-the-truth-about-facebook-and-who-really-owns-the-future Wed, 19 Jun 2019 00:00:00 +0100 Rendering the Desert of The Real https://unthinking.photography/articles/rendering-the-desert-of-the-real Tue, 18 Jun 2019 00:00:00 +0100 ‘Hooded Prisoner’ in 3D – a discussion between Julian Stallabrass and Alan Warburton https://unthinking.photography/articles/hooded-prisoner-in-3d-julian-stallabrass-alan-warburton Fri, 07 Jun 2019 00:00:00 +0100 This labelling job has made me very observant. I have found pictures that made me think “if I had taken such a picture, then I... https://unthinking.photography/imgexhaust/this-labelling-job-has-made-me-very-observant-i-have-found-pictures-that-made-me-think-if-i-had-taken-such-a-picture-then-i Wed, 05 Jun 2019 00:00:00 +0100

This labelling job has made me very observant. I have found pictures that made me think “if I had taken such a picture, then I would know what is everything.” For instance, in a picture of a landscape, sometimes I do not know if I am seeing small trees or large bushes. The road to my home (I live far away from the city) passes by a landscape with lots of trees, wineries and even a small river. Now I look at this landscape in a different light because I want to recognise every tree, every bush and I try to think how each of those elements would look inside a photograph. Now I do not just look at things, I am also interested in knowing their names because in this job it is not good enough to give a description of what they are useful for.

https://arxiv.org/pdf/1210.3448.pdf

Notes from an annotator by Adela Barriuso
From Notes on image annotation
Adela Barriuso, Antonio Torralba, 2012
Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts Institute of Technology

]]>
Image Tracer https://unthinking.photography/imgexhaust/image-tracer Tue, 04 Jun 2019 00:00:00 +0100 Image Tracer

The Image Tracer is a collaborative project between Tsila Hassine and De Geuzen. It evolved out of our interests in media images and the way their significance and presence fluctuate in the ecology of the world wide web. Currently, in its beta phase, the Tracer is a research tool that archives Google image searches for the purposes of tracking their url, appearance, disappearance and rank.

Operating from a local hard disk, when an image search is performed, a python script structures the query results and saves them in a file, which is then uploaded into a database and converted into html or viewable pages. These html pages function like a snapshot in a given moment. When you re-perform the operation, another snapshot is layered over the previous capture, creating an archeology of data.
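The layered-snapshot idea can be sketched as an append-only archive: each run adds a timestamped record over the previous ones. The function names and JSON-lines format here are illustrative assumptions, not the Tracer's own implementation:

```python
import json
import time

# Sketch of snapshot layering: every search run appends a timestamped
# record of the ranked result URLs, so re-running the query builds up
# an "archaeology" of how the results change over time.

def record_snapshot(query: str, ranked_urls, archive_path: str) -> None:
    """Append one snapshot of a query's ranked results to the archive."""
    snapshot = {"query": query, "taken_at": time.time(), "results": list(ranked_urls)}
    with open(archive_path, "a") as f:  # append = layer over prior runs
        f.write(json.dumps(snapshot) + "\n")

def load_snapshots(archive_path: str):
    """Return all snapshots, oldest first."""
    with open(archive_path) as f:
        return [json.loads(line) for line in f]
```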

http://www.geuzen.org/tracer_v1.7/Blair/index.html

]]>
We present a method and application for animating a human subject from a single photo. E.g., the character can walk out, run,... https://unthinking.photography/imgexhaust/we-present-a-method-and-application-for-animating-a-human-subject-from-a-single-photo-e-g-the-character-can-walk-out-run Thu, 30 May 2019 00:00:00 +0100

We present a method and application for animating a human subject from a single photo. E.g., the character can walk out, run, sit, or jump in 3D. The key contributions of this paper are: 1) an application of viewing and animating humans in single photos in 3D, 2) a novel 2D warping method to deform a posable template body model to fit the person’s complex silhouette to create an animatable mesh, and 3) a method for handling partial self occlusions. We compare to state-of-the-art related methods and evaluate results with human studies. Further, we present an interactive interface that allows re-posing the person in 3D, and an augmented reality setup where the animated 3D person can emerge from the photo into the real world. We demonstrate the method on photos, posters, and art. 

https://grail.cs.washington.edu/projects/wakeup/

“General animation from video has led to many creative effects over the years. The seminal “Video Textures” work shows how to create a video of infinite length starting from a single video. Human-specific video textures were produced from motion capture videos via motion graphs. […] explore multi-view captures for human motion animation, and demonstrate that clothing can be deformed in user videos guided by body skeleton and videos of models wearing the same clothing. Cinemagraphs or Cliplets create a still with small motion in some part of the still, by segmenting part of a given video in time and space. Relevant also are animations created from big data sets of images, e.g., personal photo collections of a person where the animation shows a transformation of a face through years, or Internet photos to animate transformation of a location in the world through years, e.g., how flowers grow on Lombard Street in San Francisco, or the change of glaciers over a decade.”

https://arxiv.org/abs/1812.02246 (p2)

Chung-Yi Weng, Brian Curless, Ira Kemelmacher-Shlizerman

]]>
Fortunately we are smart people and have found a way out of this predicament. Instead of relying on algorithms, which we can be... https://unthinking.photography/imgexhaust/fortunately-we-are-smart-people-and-have-found-a-way-out-of-this-predicament-instead-of-relying-on-algorithms-which-we-can-be Wed, 29 May 2019 00:00:00 +0100 Fortunately we are smart people and have found a way out of this predicament. Instead of relying on algorithms, which we can be accused of manipulating for our benefit, we have turned to machine learning, an ingenious way of disclaiming responsibility for anything. Machine learning is like money laundering for bias. It’s a clean, mathematical apparatus that gives the status quo the aura of logical inevitability. The numbers don’t lie.

https://idlewords.com/talks/sase_panel.htm


https://ganbreeder.app/ ?

]]>
Zachary Norman’s Endangered Data uses the cryptographic method known as steganography to store data from the Carbon Dioxide Information Analysis Center (CDIAC) within the pixels of visual images. The images can be shared, thus surreptitiously sharing encrypted data across communication systems that might be surveilled. The addition of data is itself visualized in a change in an image’s colors, which can be controlled by the user. https://unthinking.photography/imgexhaust/zachary-normans-endangered-data-uses-the-cryptographic-method-known-as-steganography-to-store-data-from-the-carbon-dioxide-infor Tue, 28 May 2019 00:00:00 +0100

Zachary Norman’s Endangered Data uses the cryptographic method known as steganography to store data from the Carbon Dioxide Information Analysis Center (CDIAC) within the pixels of visual images. The images can be shared, thus surreptitiously sharing encrypted data across communication systems that might be surveilled. The addition of data is itself visualized in a change in an image’s colors, which can be controlled by the user.

In this project, the color changes signify a dis-colorization of environments. Soft white clouds in blue skies transform into dayglo-colored explosions. Mountains covered in greenery suddenly seem to sprout toxic molds. Waves that once reflected only light now glisten with neon-colored films that convey toxicity. CDIAC datasets document carbon cycles in the atmosphere, oceans, and land. They enumerate trace gases and aerosols, CO2 emissions, and vegetation responses to CO2 and climate. They represent the science that global-warming denial attempts to discredit, delegitimize, and disappear.
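As a generic illustration of the technique (not Norman's actual encoding scheme, which is not documented here), a least-significant-bit approach hides a byte string in pixel values while changing each value by at most 1:

```python
# Generic LSB steganography sketch: write payload bits into the
# least-significant bit of each pixel value (0-255), then read them
# back out. This illustrates the idea of storing data inside an
# image's pixels; it is not Norman's specific method.

def embed(pixels, payload: bytes):
    """Return a copy of `pixels` with `payload` hidden in the LSBs."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    assert len(bits) <= len(pixels), "image too small for payload"
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # changes each value by at most 1
    return out

def extract(pixels, n_bytes: int) -> bytes:
    """Read n_bytes back out of the pixel LSBs."""
    data = bytearray()
    for b in range(n_bytes):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        data.append(byte)
    return bytes(data)
```

Because each pixel value shifts by at most one level, the hidden data is visually near-imperceptible; Norman's project deliberately makes the data's presence visible as color change instead.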

Image: 14.23°S, 170.56°W
Cape Matatula, Samoa
Single Channel Video

Measurement data from Cape Matatula, Samoa (14.23°S, 170.56°W) measurement station encrypted into image of location. Increase in pixels used to store data proportionate to increase of Methane (CH4) in atmosphere between 1996 and 2016. Data can be viewed here.

http://www.zacharydeannorman.com/#ED
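The entry describes what Endangered Data does but not how data comes to live inside pixels. A minimal sketch of least-significant-bit steganography, the simplest technique in this family (purely illustrative, not Norman's actual encoder), might look like:

```python
def embed_bits(pixels, data):
    """Hide the bits of `data` (bytes) in the least significant
    bit of each pixel value. Changing the LSB shifts each colour
    only slightly -- more hidden data means more visible drift,
    which is the colour change Norman's project makes deliberate."""
    bits = [(byte >> i) & 1 for byte in data for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for payload")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # clear the LSB, then set it
    return out

def extract_bits(pixels, n_bytes):
    """Recover `n_bytes` of hidden data from the pixel LSBs."""
    data = bytearray()
    for b in range(n_bytes):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        data.append(byte)
    return bytes(data)
```

Each pixel changes by at most one intensity step, so the payload travels invisibly unless, as in Endangered Data, the drift is amplified on purpose.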

]]>
a_brief_history_of_the_digital_image_volume1 https://unthinking.photography/imgexhaust/a-brief-history-of-the-digital-image-volume1 Fri, 24 May 2019 00:00:00 +0100 We knew we needed to collect a data set that has far more images than we have ever had before, perhaps thousands of times more,... https://unthinking.photography/imgexhaust/we-knew-we-needed-to-collect-a-data-set-that-has-far-more-images-than-we-have-ever-had-before-perhaps-thousands-of-times-more Fri, 24 May 2019 00:00:00 +0100 We knew we needed to collect a data set that has far more images than we have ever had before, perhaps thousands of times more, and together with Professor Kai Li at Princeton University, we launched the ImageNet project in 2007. Luckily, we didn’t have to mount a camera on our head and wait for many years. We went to the Internet, the biggest treasure trove of pictures that humans have ever created. We downloaded nearly a billion images and used crowdsourcing technology like the Amazon Mechanical Turk platform to help us to label these images. At its peak, ImageNet was one of the biggest employers of the Amazon Mechanical Turk workers: together, almost 50,000 workers from 167 countries around the world helped us to clean, sort and label nearly a billion candidate images. That was how much effort it took to capture even a fraction of the imagery a child’s mind takes in in the early developmental years.Li Fei Fei, How we’re teaching computers to understand pictures]]> It’s easy to take for granted that you can send a picture to a friend without worrying about what device, browser, or operating... https://unthinking.photography/imgexhaust/its-easy-to-take-for-granted-that-you-can-send-a-picture-to-a-friend-without-worrying-about-what-device-browser-or-operating Thu, 23 May 2019 00:00:00 +0100

It’s easy to take for granted that you can send a picture to a friend without worrying about what device, browser, or operating system they’re using, but things weren’t always this way. By the early 1980s, computers could store and display digital images, but there were many competing ideas about how best to do that. You couldn’t just send an image from one computer to another and expect it to work.

To solve this problem, the Joint Photographic Experts Group (JPEG), a committee of experts from all over the world, was established in 1986 as a joint effort by the ISO (International Organization for Standardization) and the IEC (International Electrotechnical Commission)—two international standards organizations headquartered in Geneva, Switzerland.

JPEG, the group of people, created JPEG, a standard for digital image compression, in 1992. Anyone who’s ever used the internet has probably seen a JPEG-encoded image. It is by far the most ubiquitous way of encoding, sending and storing images. From web pages to email to social media, JPEG is used billions of times a day—almost every time we view or send images online. Without JPEG, the web would be a little less colorful, a lot slower, and probably have far fewer cat pictures!

Unraveling the JPEG by Omar Shehata for Parametric Press
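Shehata's article recounts the history more than the mechanics. At the core of the codec sits an 8×8 discrete cosine transform, which concentrates a natural image's energy into a few low-frequency coefficients so the rest can be quantised away. A naive implementation for illustration (real encoders use fast factored versions):

```python
import math

def dct2_8x8(block):
    """2-D DCT-II on an 8x8 block -- the transform at the heart of
    JPEG. For natural images most of the energy collapses into a few
    low-frequency coefficients, which is what makes the format's
    aggressive quantisation (and small file sizes) possible."""
    N = 8
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = sum(
                block[x][y]
                * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                * math.cos((2 * y + 1) * v * math.pi / (2 * N))
                for x in range(N)
                for y in range(N)
            )
            cu = math.sqrt(1 / N) if u == 0 else math.sqrt(2 / N)
            cv = math.sqrt(1 / N) if v == 0 else math.sqrt(2 / N)
            out[u][v] = cu * cv * s
    return out
```

A perfectly flat block compresses to a single non-zero (DC) coefficient; a busy block spreads energy across many, which is why noisy images make bigger JPEGs.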

]]>
Victor Wang (王宗孚) on Instagram: “Artist Trevor Paglen’s installation ‘From Apple to Kleptomaniac’ (2019) and Haroon Mirza’s Installation ‘Beyond the Wave Epoch’, (2019) .…” https://unthinking.photography/imgexhaust/victor-wang-on-instagram-artist-trevor-paglens-installation-from-apple-to-kleptomaniac-2019-and-haroon-mirzas-installation-beyon Thu, 23 May 2019 00:00:00 +0100 Fun with Snapchat's Gender Swapping Filter https://unthinking.photography/imgexhaust/fun-with-snapchat-s-gender-swapping-filter Wed, 22 May 2019 00:00:00 +0100 Name That Dataset!!! https://unthinking.photography/imgexhaust/name-that-dataset Tue, 14 May 2019 00:00:00 +0100 Let’s play 

Name That Dataset!!!

https://people.csail.mit.edu/torralba/research/bias/

]]>
i will tell you everything https://unthinking.photography/imgexhaust/i-will-tell-you-everything Mon, 29 Apr 2019 00:00:00 +0100 i will tell you everything

Manetta Berends, 2015

training set = “contemporary encyclopaedia”

In the process of making an encyclopaedia, categories are decided on wherein various objects are placed. It is a search for a universal system to describe the world.

Training sets are used to train data-mining algorithms for pattern recognition. These training sets are the contemporary version of the traditional encyclopaedia. From the famous “Encyclopédie, ou dictionnaire raisonné des sciences, des arts et des métiers” initiated by Diderot and D'Alembert in 1751 onwards, the encyclopaedia was a body of information for sharing knowledge from human to human. But contemporary encyclopaedias are constructed rather to share the structures and information of humans with machines. As the researchers of the SUN dataset phrase it in their concluding paper:

we need datasets that encompass the richness and varieties of environmental scenes and knowledge about how scene categories are organised and distinguished from each other.

In order to automate recognition processes, researchers are prompted once again to reconsider their categorisation structures and to question the classification of objects into these categories. The training sets give a glimpse of the process of constructing such a simplified model of reality, and reveal the difficulties that appear along the way.

Steps in constructing such an encyclopaedia:

1. The material of the SUN training set is collected by running queries in search engines. Result: the training set is built from typical low-quality digital images of the kind that commonly appear on the web.

2. The SUN group decides which categories to merge, drop, or keep, judging their visual and semantic strength.

3. The SUN group asks ‘mechanical turks’ to annotate the images by vectorising the objects that appear in a scene. Small objects disappear; frequent objects become the common objects that will be recognised.

Also, the most often annotated scene is ‘living room’, followed by ‘bedroom’ and ‘kitchen’. The probability that ‘living room’ will be the outcome of the recognition algorithm is therefore much higher than for scenes that are annotated less often. These frequencies hence act as categories in themselves. Although not directly decided upon by the research group, this hierarchy arises from the selection of scenes that the annotators worked on.

4. Start data-mining: an unknown set of images is given to the algorithm. The algorithm returns, for each image, a percentage indicating how likely it is to show a certain scene.

5. The results of the algorithm are assessed for accuracy. If needed, the training set is adjusted in order to reach better, more accurate results. Categories are merged, dropped or invented.
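The annotation-frequency bias described above, where heavily annotated scenes come to dominate the algorithm's output, can be made concrete with a toy calculation (the counts below are invented for illustration):

```python
from collections import Counter

# Hypothetical annotation counts echoing the text's observation that
# 'living room' is the most-annotated scene (numbers are invented).
annotations = (["living room"] * 500 + ["bedroom"] * 300
               + ["kitchen"] * 250 + ["glacier"] * 10)

counts = Counter(annotations)
total = sum(counts.values())
priors = {scene: n / total for scene, n in counts.items()}

# A classifier trained on this set inherits these priors: before it
# has even looked at the pixels, 'living room' is the likeliest answer.
for scene, p in sorted(priors.items(), key=lambda kv: -kv[1]):
    print(f"{scene:12s} {p:.1%}")
```

The hierarchy is nowhere stated in the dataset's category list; it emerges silently from who annotated what, which is exactly the point Berends makes.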

]]>
Colorful Image Colorization https://unthinking.photography/imgexhaust/colorful-image-colorization Fri, 26 Apr 2019 00:00:00 +0100 Colorful Image Colorization

Richard Zhang, Phillip Isola, Alexei A. Efros, 2016

Given a grayscale photograph as input, this paper attacks the problem of hallucinating a plausible colour version of the photograph. This problem is clearly underconstrained, so previous approaches have either relied on significant user interaction or resulted in desaturated colorisations. We propose a fully automatic approach that produces vibrant and realistic colorisations. We embrace the underlying uncertainty of the problem by posing it as a classification task and use class-rebalancing at training time to increase the diversity of colours in the result. The system is implemented as a feed-forward pass in a CNN at test time and is trained on over a million colour images.

https://arxiv.org/pdf/1603.08511.pdf
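The abstract's class-rebalancing idea can be sketched in a few lines. This is a simplified stand-in, not the paper's exact scheme, but it follows the same shape: smooth the empirical colour-bin distribution toward uniform, invert it, and normalise so rare (saturated) colours are up-weighted:

```python
def rebalance_weights(class_probs, lam=0.5):
    """Class-rebalancing in the spirit of Zhang et al.: rare colour
    bins get up-weighted so training isn't pulled toward the dull,
    desaturated average. Simplified sketch -- the paper smooths the
    empirical distribution, mixes in a uniform prior, inverts, and
    normalises so the expected weight is 1, as mirrored here."""
    n = len(class_probs)
    mixed = [(1 - lam) * p + lam / n for p in class_probs]
    w = [1.0 / m for m in mixed]
    mean_w = sum(wi * pi for wi, pi in zip(w, class_probs))
    return [wi / mean_w for wi in w]  # normalise so E[w] = 1
```

With a uniform colour distribution every weight is 1; with a skewed one, the rare bins get proportionally larger losses, which is what keeps the outputs "vibrant" rather than grey-brown.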

]]>
Do neural nets dream of electric sheep? https://unthinking.photography/imgexhaust/do-neural-nets-dream-of-electric-sheep Thu, 25 Apr 2019 00:00:00 +0100 ]]> FaceUp - @hervisions_ https://unthinking.photography/imgexhaust/faceup-hervisions Wed, 24 Apr 2019 00:00:00 +0100 Light Pattern (Daniel Temkin, 2014) is a programming language where one communicates with the computer through photographs... https://unthinking.photography/imgexhaust/light-pattern-daniel-temkin-2014-is-a-programming-language-where-one-communicates-with-the-computer-through-photographs Wed, 24 Apr 2019 00:00:00 +0100

Light Pattern (Daniel Temkin, 2014) is a programming language where one communicates with the computer through photographs instead of text. Light Pattern, like other programming languages, is a list of rules, a grammar to communicate with a compiler. In Light Pattern, this communication happens through source code made up of photographs. Instead of using words to communicate to the machine, it uses changes in color and exposure from one image to the next.

http://lightpattern.info
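A toy decoder in the spirit of Light Pattern's premise, reading symbols from the change between consecutive photographs rather than from any photograph alone. The opcode mapping here is invented for illustration; the real language keys on colour-channel and exposure changes with its own grammar:

```python
def brightness(img):
    """Mean pixel value of a tiny greyscale 'photograph'."""
    return sum(img) / len(img)

def decode(photos):
    """Turn a sequence of photographs into a sequence of symbols.
    Each *change* between consecutive photos -- not any photo by
    itself -- carries meaning (hypothetical mapping, not Temkin's)."""
    ops = []
    for prev, cur in zip(photos, photos[1:]):
        delta = brightness(cur) - brightness(prev)
        ops.append("INC" if delta > 0 else "DEC" if delta < 0 else "NOP")
    return ops
```

The design point this makes visible: the source code is the differences, so re-shooting the same scene brighter or darker rewrites the program.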

]]>
R/Place, reddit users, 2017 https://draemm.li/various/place-atlas/ https://www.reddit.com/r/place ... https://unthinking.photography/imgexhaust/r-place-reddit-users-2017-https-draemm-li-various-place-atlas-https-www-reddit-com-r-place Tue, 23 Apr 2019 00:00:00 +0100 R/Place, reddit users, 2017

https://draemm.li/various/place-atlas/

https://www.reddit.com/r/place

https://www.reddit.com/r/place/comments/66bwda/rplace_has_been_archived/

Similar to the Million Dollar Homepage, Alex Tew 2005

Gif by Fred Benenson

]]>
VFRAMEAdam Harvey https://unthinking.photography/imgexhaust/vframeadam-harvey Tue, 16 Apr 2019 00:00:00 +0100 VFRAME
Adam Harvey

A collection of open-source computer vision software tools designed specifically for human rights investigations that rely on large datasets of visual media.

Specifically, VFRAME is developing computer vision tools to process million-scale video collections, scene summarization algorithms to reduce processing times, object detection models to detect illegal munitions, synthetic datasets to train object detection models, a visual search engine to locate similar images of interest, and a custom annotation system to create training data. Read more

https://vframe.io/

]]>
Layers of Abstraction: A Pixel at the Heart of Identity https://unthinking.photography/imgexhaust/layers-of-abstraction-a-pixel-at-the-heart-of-identity Mon, 15 Apr 2019 00:00:00 +0100 Layers of Abstraction: A Pixel at the Heart of Identity

Shinji Toya and Murad Khan, 2019

This project centres around a critical examination of the limits of categorisation in machine learning systems. Specifically, we’re interested in the institution and function of racial categories in image-based facial recognition algorithms; the project questions how social structures and technical systems intertwine to intensify existing conditions of bias and inequity, and how this bias operates as a substrate of the algorithmic image.

Layers of Abstraction: A Pixel at the Heart of Identity is a visual indication of research-in-progress, presenting the mixed technological method we used to uncover a single pixel that represents the limit between ‘Asian’ and ‘White’ racial categories in a commercial facial recognition system. By passing the image through more than one image-based algorithm, we begin to distil the locus of difference as read by the machine-eye.

The pixel acts as a threshold through which the system reconfigures itself and, in doing so, gives rise to identities hidden amidst arrays of colour channel data. The minute, discrete, and tangible nature of the pixel as a racialized limit point between categories points to a social problem at the heart of abstraction in machine learning systems.

In the exhibition at SPACE, the method is outlined through a tutorial video coupled with an animated example of the output of our research.

https://shinjitoya.com/space-art-tech/

]]>
Aarati Akkapeddi https://unthinking.photography/imgexhaust/aarati-akkapeddi Fri, 12 Apr 2019 00:00:00 +0100 Aarati Akkapeddi

Encoding

An experiment in the aesthetic deconstruction of identity through the visual interpretation/abstraction of biometric data pulled from family photographs: an exploration of data visualization and archiving as both a reductive and a productive process, and as a process that has complex relationships with notions of collective and individual identities.

http://aarati.me/project.html?project=project-encoding

]]>
A Complete Taxonomy of Internet Chum https://unthinking.photography/imgexhaust/a-complete-taxonomy-of-internet-chum Wed, 10 Apr 2019 00:00:00 +0100 A Complete Taxonomy of Internet Chum

by John Mahoney

This is a chumbox. It is a variation on the banner ad which takes the form of a grid of advertisements that sits at the bottom of a web page underneath the main content. It can be found on the sites of many leading publishers, including nymag.com, dailymail.co.uk, usatoday.com, and theawl.com (where it was “an experiment that has since ended.”)

The chumboxes were placed there by one of several chumvendors — Taboola, Outbrain, RevContent, Adblade, and my favorite, Content.ad — who design them to seamlessly slip into a particular design convention established early within the publishing web, a grid of links to appealing, perhaps-related content at the bottom of the content you intentionally came to consume. In return, publishers who deploy chumboxes receive money, traffic, or both. Typically, these publishers collect a percentage of the rates that the chumvendors charge advertisers to be placed inside the grids. These gains can be pocketed, or re-invested into purchasing the publisher’s own placements in similar grids on thousands of other sites amongst the chummy sea, reaping bulk traffic straight from the reeking depths of chumville.

Like everything else on the internet, traffic flowing through chumboxes must be tracked in order for everyone to be paid. Each box in the grid’s performance can be tracked both individually and in context of its neighbors. This allows them to be highly optimized; some chum is clearly better than others. As a byproduct of this optimization, an aesthetic has arisen. An effective chumbox clearly plays on reflex and the subconscious. The chumbox aesthetic broadcasts our most basic, libidinal, electrical desires back at us. And gets us to click.

Clicking on a chumlink — even one on the site of a relatively high-class chummer, like nymag.com — is a guaranteed way to find more, weirder, grosser chum. The boxes are daisy-chained together in an increasingly cynical, gross funnel; quickly, the open ocean becomes a sewer of chum.

https://www.theawl.com/2015/06/a-complete-taxonomy-of-internet-chum/

]]>
http://thesecatsdonotexist.com/ https://unthinking.photography/imgexhaust/http-thesecatsdonotexist-com Wed, 13 Mar 2019 00:00:00 +0000 http://thesecatsdonotexist.com/

]]>
ImageNet Roulette (Trevor Paglen, 2019) uses a neural network trained on the “people” categories from the ImageNet dataset to... https://unthinking.photography/imgexhaust/imagenet-roulette-trevor-paglen-2019-uses-a-neural-network-trained-on-the-people-categories-from-the-imagenet-dataset-to Wed, 13 Mar 2019 00:00:00 +0000

ImageNet Roulette (Trevor Paglen, 2019) uses a neural network trained on the “people” categories from the ImageNet dataset to classify pictures of people. It’s meant to be a peek into how artificial intelligence systems classify people, and a warning about how quickly AI becomes horrible when the assumptions built into it aren’t continually and exhaustively questioned.

NOTES AND WARNING:
ImageNet Roulette regularly classifies people in racist, misogynistic, cruel, and otherwise horrible ways. This is because the underlying training data contains those categories (and pictures of people that have been labelled with those categories). We did not make the underlying training data which is responsible for these classifications. It comes from a popular data set called ImageNet, which was created at Stanford University and which is a standard benchmark used in image classification and object detection.
       
ImageNet Roulette is meant in part to demonstrate how bad politics propagate through technical systems, often without the creators of those systems even being aware of it.

]]>
What can algorithms know? https://unthinking.photography/imgexhaust/what-can-algorithms-know Tue, 12 Mar 2019 00:00:00 +0000 What can algorithms know?

The present power of algorithms is fueled by another entity: that of data. Generally referred to as big data, large data sets, whose technical history has been well described by Kevin Driscoll (2012), contest the ruling of algorithms. Experts agree that without data to process, the algorithm remains inert (Berry 2011, 33; Cheney-Lippold 2011; Manovich 2013). The effectiveness of algorithms is strongly related to the data sets they compute, and computer scientists (Domingos 2012) as well as businessmen (Croll and Yoskovitz 2013) ponder if more data beat better algorithms, or if it is the other way round. Thus, it is of no surprise that within the humanities, algorithms are not only reflected upon but also used to analyze cultural data.

To analyze culture in a new way, large data sets are compiled and digitalized material is visualized. The cultural analyst Lev Manovich (2013) worked on visualization tools to dissect videos at the level of specific frames, an approach that has been carried further by the artist and hacker Robert M Ochshorn, who uses algorithms to persuasively read out films as if they were text. The literary scholar Franco Moretti (2005) uses quantitative methods to generate a graph of the fast rise and dramatic fall of British novelistic genres (44 genres over 160 years), or maps the radically changing geography in village narratives. Those new ways of generating cultural knowledge by exploring quantitative methods using data sets and algorithms are referred to as “Digital Humanities” (cf. Berry 2012; Lunenfeld et al. 2012). Here, algorithms become a way of “knowing knowledge” (Berry 2012, 6), which has triggered new debates. Reducing algorithms to a sheer instrumental tool has been strongly criticized, for example in Alan Liu’s (2012) call to rethink the idea of instrumentality.

- Mercedes Bunz: Define Algorithm

]]>
The shift to influencer content traces a similar pattern, which Brud’s deployment of avatars consolidates and threatens to... https://unthinking.photography/imgexhaust/the-shift-to-influencer-content-traces-a-similar-pattern-which-bruds-deployment-of-avatars-consolidates-and-threatens-to Mon, 11 Mar 2019 00:00:00 +0000

The shift to influencer content traces a similar pattern, which Brud’s deployment of avatars consolidates and threatens to extend, closing off the alternatives beyond dichotomies that avatars might otherwise enable. For example, Katherine Angel Cross, in “The New Laboratory of Dreams: Role-Playing Games as Resistance”, describes using a World of Warcraft avatar to explore her identity as a transgender woman in a way that did not require body modification. Brud’s Miquela, by contrast, uses avataristic form, references to progressive politics, and an ethnically ambiguous appearance as veneers for a marketing approach that seeks to colonize more of the space of communication.

For all of Miquela’s talk about human-robot cooperation and the post-identity politics that seems to evoke, her conventionally attractive appearance and heteronormative behavior reinforce dualistic conceptions of male and female. Bermuda’s presence similarly reinscribes politics as a simplistic right-left binary. Such prescriptive models, whether it’s Facebook’s requisite identity boxes or the fantasized bodies that Miku and Miquela represent, inhibit any of us from moving beyond the categorical. Their avatars’ digital selves, connoting hybridity and fluidity, reinforce rigidity.

It Girls: Marketing entities like Miquela undermine the radical potential of avatars - Kerry Doran

]]>
Mushy from Everest Pipkin is a free asset pack of 824 neural network-generated isometric tiles residing in the creative commons. The sets have been trained on plants, building materials, flooring, ground, water, rocks, objects, and blocks.

]]>
A Net Artist Takes Over the Google Image Search of “Frieze Los Angeles” https://unthinking.photography/imgexhaust/a-net-artist-takes-over-the-google-image-search-of-frieze-los-angeles Tue, 05 Mar 2019 00:00:00 +0000 This Person Does Not Exist https://unthinking.photography/imgexhaust/this-person-does-not-exist Sun, 03 Mar 2019 00:00:00 +0000 A cold draught in a hot medium https://unthinking.photography/articles/a-cold-draught-in-a-hot-medium Thu, 21 Feb 2019 00:00:00 +0000 The Windshield and the Screen https://unthinking.photography/articles/the-windshield-and-the-screen Wed, 20 Feb 2019 00:00:00 +0000 Nonhuman Photography: An Interview with Joanna Zylinska [PART I] https://unthinking.photography/articles/interview-with-joanna-zylinska Mon, 11 Feb 2019 00:00:00 +0000 Nonhuman Photography: An Interview with Joanna Zylinska [PART II] https://unthinking.photography/articles/interview-with-joanna-zylinska-part-2 Mon, 11 Feb 2019 00:00:00 +0000 GoPro, circa the 1960s https://unthinking.photography/imgexhaust/gopro-circa-the-1960s Wed, 06 Feb 2019 00:00:00 +0000 Generative Representation https://unthinking.photography/articles/generative-representation Mon, 17 Dec 2018 00:00:00 +0000 Robot readable world https://unthinking.photography/imgexhaust/robot-readable-world Fri, 09 Nov 2018 00:00:00 +0000 Robot readable world

Timo Arnall, 2012

How do robots see the world? How do they gather meaning from our streets, cities, media and from us?

This is an experiment in found machine-vision footage, exploring the aesthetics of the robot eye.

]]>
Training Poses https://unthinking.photography/imgexhaust/training-poses Thu, 08 Nov 2018 00:00:00 +0000 "AI, Ain't I A Woman " - a spoken word piece that highlights the ways in which artificial intelligence can misinterpret the... https://unthinking.photography/imgexhaust/ai-ain-t-i-a-woman-a-spoken-word-piece-that-highlights-the-ways-in-which-artificial-intelligence-can-misinterpret-the Wed, 07 Nov 2018 00:00:00 +0000 “AI, Ain’t I A Woman ” - a spoken word piece that highlights the ways in which artificial intelligence can misinterpret the images of iconic black women: Oprah, Serena Williams, Michelle Obama,  Sojourner Truth, Ida B. Wells, and  Shirley Chisholm

https://www.notflawless.ai/

]]>
Biomedical Astronomy https://unthinking.photography/articles/biomedical-astronomy Wed, 31 Oct 2018 00:00:00 +0000 First as Snapshot, then as Decentralised Digital Asset https://unthinking.photography/articles/first-as-snapshot-then-as-decentralised-digital-asset Wed, 31 Oct 2018 00:00:00 +0000 Merve Alanyali - Tracking Protests using geotagged Flickr photographs https://unthinking.photography/imgexhaust/merve-alanyali-tracking-protests-using-geotagged-flickr-photographs Tue, 07 Aug 2018 00:00:00 +0100 Merve Alanyali - Tracking Protests using geotagged Flickr photographs

Watch: https://www.youtube.com/watch?time_continue=2&v=wCle_ARznj4

We analyse 25 million photos uploaded to Flickr in 2013 across 244 countries and regions, and determine for each week in each country and region what proportion of the photographs are tagged with the word “protest” in 34 different languages. We find that higher proportions of “protest”-tagged photographs in a given country and region in a given week correspond to greater numbers of reports of protests in that country and region and week in the newspaper The Guardian. Our findings underline the potential value of photographs uploaded to the Internet as a source of global, cheap and rapidly available measurements of human behaviour in the real world.

https://www.mervealanyali.com/projects
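The aggregation the study describes, a weekly proportion of "protest"-tagged photographs per country, can be sketched as below. A simplification: Alanyali et al. matched the tag in 34 languages, while this toy checks only the English word, and the field names are invented:

```python
from collections import defaultdict

def protest_share(photos):
    """Weekly share of 'protest'-tagged photos per country -- a toy
    version of the aggregation in Alanyali et al. Each photo is a
    dict with hypothetical keys 'country', 'week', and 'tags'."""
    totals = defaultdict(int)
    hits = defaultdict(int)
    for p in photos:
        key = (p["country"], p["week"])
        totals[key] += 1
        if "protest" in p["tags"]:
            hits[key] += 1
    return {k: hits[k] / totals[k] for k in totals}
```

Using a proportion rather than a raw count matters: it controls for how much a country photographs in general, so a spike signals protest activity rather than mere Flickr popularity.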

]]>
No Home Like Place https://unthinking.photography/imgexhaust/no-home-like-place Wed, 20 Jun 2018 00:00:00 +0100 practice for in-game photography / virtual photography https://unthinking.photography/imgexhaust/practice-for-in-game-photography-virtual-photography Mon, 18 Jun 2018 00:00:00 +0100 Warwickshire Police on Twitter https://unthinking.photography/imgexhaust/warwickshire-police-on-twitter Wed, 04 Apr 2018 00:00:00 +0100 Abnormality Detection in Images https://unthinking.photography/imgexhaust/abnormality-detection-in-images Wed, 07 Feb 2018 00:00:00 +0000 Abnormality Detection in Images

When describing images, humans tend not to talk about the obvious, but rather mention what they find interesting. We argue that abnormalities and deviations from typicalities are among the most important components that form what is worth mentioning. In this project we introduce the abnormality detection as a recognition problem and show how to model typicalities and, consequently, meaningful deviations from prototypical properties of categories. Our model can recognize abnormalities and report the main reasons of any recognized abnormality. We also show that abnormality predictions can help image categorization. We introduce the Abnormality Object Dataset and show interesting results on how to reason about abnormalities.
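The abstract promises two outputs, an abnormality judgment plus its main reason. Both can be sketched as distance from a category prototype; this is a bare-bones stand-in for the paper's model, with attribute names and 0..1 values invented for illustration:

```python
def abnormality(attrs, prototype):
    """Score how far an object's attributes deviate from its category
    prototype -- a toy version of 'meaningful deviations from
    prototypical properties'. Attributes are hypothetical 0..1 values;
    a larger score means more abnormal, hence more worth mentioning."""
    devs = {a: abs(attrs[a] - prototype[a]) for a in prototype}
    top = max(devs, key=devs.get)
    score = sum(devs.values()) / len(devs)
    return score, top  # overall abnormality, plus its main reason
```

A car prototype with `wheels` near 1.0 flags a wheel-less car as abnormal "because of the wheels", mirroring the paper's aim of reporting the main reason, not just the score.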

]]>
Cat Photographers (or the desire to see through animal eyes) https://unthinking.photography/articles/cat-photographers-or-the-desire-to-see-through-animal-eyes Tue, 30 Jan 2018 00:00:00 +0000 Somebody else's cat: A study in the protohistory of the internet cat meme https://unthinking.photography/articles/somebody-else-s-cat-a-study-in-the-protohistory-of-the-internet-cat-meme Tue, 30 Jan 2018 00:00:00 +0000 The War and Peace of LOLcats https://unthinking.photography/articles/the-war-and-peace-of-lolcats Tue, 30 Jan 2018 00:00:00 +0000 Interview with Wendy McMurdo https://unthinking.photography/articles/interview-with-wendy-mcmurdo Wed, 24 Jan 2018 00:00:00 +0000 A year in computer vision https://unthinking.photography/imgexhaust/a-year-in-computer-vision Fri, 01 Dec 2017 00:00:00 +0000

http://www.themtank.org/a-year-in-computer-vision

]]>
By Akihiko Taniguchi  Taniguchi created a web application for iPhone that allows you to take pictures without using the phone’s... https://unthinking.photography/imgexhaust/byakihiko-taniguchi-taniguchi-created-a-web-application-for-iphone-that-allows-you-to-take-pictures-without-using-the-phones Fri, 24 Nov 2017 00:00:00 +0000 By Akihiko Taniguchi 

Taniguchi created a web application for iPhone that allows you to take pictures without using the phone’s built-in camera. Instead, the photo is generated from the user’s current position, via the iPhone’s built-in GPS, and the surrounding Google Street View imagery.

]]>
CCamera — take photos that already exist https://unthinking.photography/imgexhaust/ccamera-take-photos-that-already-exist Fri, 24 Nov 2017 00:00:00 +0000 Instagram post by KATSU.OFFICIAL • Oct 17, 2017 at 4:24pm UTC https://unthinking.photography/imgexhaust/instagram-post-by-katsu-official-oct-17-2017-at-4-24pm-utc Wed, 18 Oct 2017 00:00:00 +0100 Interview with Joey Holder https://unthinking.photography/articles/interview-with-joey-holder Tue, 17 Oct 2017 00:00:00 +0100 An interview with Morehshin Allahyari https://unthinking.photography/articles/interview-with-morehshin-allahyari Tue, 19 Sep 2017 00:00:00 +0100 Mike Tyka, Dreams of Imaginary People, 2017 https://unthinking.photography/imgexhaust/mike-tyka-dreams-of-imaginary-people-2017 Fri, 08 Sep 2017 00:00:00 +0100 Mike Tyka, Dreams of Imaginary People, 2017

(constantly morphing hypnotic GAN portraits)

http://www.miketyka.com/projects/dreams/

]]>
The Overexposed Model https://unthinking.photography/imgexhaust/the-overexposed-model Thu, 07 Sep 2017 00:00:00 +0100 The politics of image search - A conversation with Sebastian Schmieg [Part II] https://unthinking.photography/articles/the-politics-of-image-search-a-conversation-with-sebastian-schmieg-part-ii Thu, 24 Aug 2017 00:00:00 +0100 ChainGAN portrait series by Mario Klingemann https://unthinking.photography/imgexhaust/chaingan-portrait-series-by-mario-klingemann Wed, 09 Aug 2017 00:00:00 +0100 mario-klingemann:

ChainGAN portrait series by Mario Klingemann

]]>
Jonathan Rotsztain, Rattled, 2015 Screengrabs of automated ads. from Printed Web 3... https://unthinking.photography/imgexhaust/jonathan-rotsztain-rattled-2015-screengrabs-of-automated-ads-from-printed-web-3 Thu, 03 Aug 2017 00:00:00 +0100 Jonathan Rotsztain, Rattled, 2015

Screengrabs of automated ads.

from Printed Web 3 Archive: https://archive.org/details/Printed_Web_3 / http://libraryoftheprintedweb.tumblr.com

]]>
Quotations around automation, image making and labour. Collated by Adam Brown and Nicolas Malevé for Rethinking the workshop:... https://unthinking.photography/imgexhaust/quotations-around-automation-image-making-and-labour-collated-by-adam-brown-and-nicolas-maleve-for-rethinking-the-workshop Thu, 03 Aug 2017 00:00:00 +0100 Quotations around automation, image making and labour. Collated by Adam Brown and Nicolas Malevé for Rethinking the workshop: Workers Education in the Age of Intelligent Machines at The Photographers’ Gallery.

More reading and resources for the workshop are at https://mechacritique.wordpress.com/2017/07/07/resources-for-the-workshop/

]]>
We are under the illusion that seeing is effortless, but frequently the visual system is lazy and makes us believe that we... https://unthinking.photography/imgexhaust/we-are-under-the-illusion-that-seeing-is-effortless-but-fre-quently-the-visual-system-is-lazy-and-makes-us-believe-that-we Sun, 23 Jul 2017 00:00:00 +0100 We are under the illusion that seeing is effortless, but frequently the visual system is lazy and makes us believe that we understand something when in fact we don’t. Labeling a picture forces us to become aware of the difficulties underlying scene understanding. Suddenly, the act of seeing is not effortless anymore. We have to make an effort in order to understand parts of the picture that we neglected at first glance.

Notes on image annotation, Adela Barriuso, Antonio Torralba, 2012

https://arxiv.org/pdf/1210.3448.pdf

]]>
SITUATION #76: LaTurbo Avedon, Spring Diary https://unthinking.photography/imgexhaust/situation-76-laturbo-avedon-spring-diary Thu, 20 Jul 2017 00:00:00 +0100 Computed Curation https://unthinking.photography/imgexhaust/computed-curation Wed, 19 Jul 2017 00:00:00 +0100 prostheticknowledge:

Computed Curation

Project by Philipp Schmitt creates a book of photography curated and annotated using Machine Learning:

Computed Curation is a photobook created by a computer. Taking the human editor out of the loop, it uses machine learning and computer vision tools to curate a series of photos from an archive of pictures.

Considering both image content and composition — but through the sober eyes of neural networks, vectors and pixels — the algorithms uncover unexpected connections and interpretations that a human editor might have missed.

Machine learning based image recognition tools are already adept at recognizing training images (umbrella, dog on a beach, car), but quickly expose their flaws and biases when challenged with more complex input. In Computed Curation, these flaws surface in often bizarre and sometimes poetic captions, tags and connections. Moreover, by urging the viewer to constantly speculate on the logic behind its arrangement, the book teaches how to see the world through the eyes of an algorithm. 

More Here

]]>
“The world’s first photo exhibition shot with a car.” Barbara Davidson in collaboration with Volvo https://unthinking.photography/imgexhaust/the-worlds-first-photo-exhibition-shot-with-a-car-barbara-davidson-in-collaboration-with-volvo Wed, 19 Jul 2017 00:00:00 +0100 “The world’s first photo exhibition shot with a car.” Barbara Davidson in collaboration with Volvo

]]>
Kaitlin Schaer, Drone Photogrammetry, 2017 https://unthinking.photography/imgexhaust/kaitlin-schaer-drone-photogrammetry-2017 Tue, 04 Jul 2017 00:00:00 +0100 Kaitlin Schaer, Drone Photogrammetry, 2017

Freestyle drone flying and racing is a growing sport and hobby. Combining aspects of more traditional RC aircraft hobbies, videography, DIY electronics and even Star Wars Pod Racing (according to some), drone pilots use first person view controls to create creative and acrobatic explorations of architecture. My brother, Johnny FPV, is increasingly successful in this new sport.

http://kschaer.net/#/drone/

]]>
YouTube Creator Blog: Hot and Cold: Heatmaps in VR https://unthinking.photography/imgexhaust/youtube-creator-blog-hot-and-cold-heatmaps-in-vr Mon, 03 Jul 2017 00:00:00 +0100 VR and 360 video is watching you watch it 🌀

]]>
You Can Encrypt Your Face – The New Inquiry https://unthinking.photography/imgexhaust/you-can-encrypt-your-face-the-new-inquiry Sat, 24 Jun 2017 00:00:00 +0100 We’ve devised an algorithm that has culled the faces from 130 executives at leading biometric corporations around the world and transformed them into masks for you to print out and wear. Since they’ve chosen to profit by face-snatching the rest of us, we figured that we would resist by doing the same in reverse. The difference is that by not matching their names to their faces, we’ve chosen to grant these executives the very thing their industry denies to us: anonymity.

]]>
"Intelligence is not enough" https://unthinking.photography/imgexhaust/intelligence-is-not-enough Wed, 31 May 2017 00:00:00 +0100 She Who Sees the Unknown, Ya’jooj Majooj https://unthinking.photography/imgexhaust/she-who-sees-the-unknown-yajooj-majooj Mon, 22 May 2017 00:00:00 +0100 prostheticknowledge:

She Who Sees the Unknown, Ya’jooj Majooj

Artist Morehshin Allahyari, who works with 3D scanning, printing and Iranian heritage, discusses a work born from the recent US travel ban and a myth involving agents of chaos and wall-building:

The work is currently on show at The Photographers’ Gallery in London - you can find out more here

]]>
Tabita Rezaire, AFRO CYBER RESISTANCE, 2014 https://unthinking.photography/imgexhaust/tabita-rezaire-afro-cyber-resistance-2014 Tue, 09 May 2017 00:00:00 +0100 Manifest Destiny in the Digital Age https://unthinking.photography/articles/manifest-destiny-in-the-digital-age Fri, 05 May 2017 00:00:00 +0100 Nyktopolitics https://unthinking.photography/articles/nyktopolitics Fri, 05 May 2017 00:00:00 +0100 The politics of image search - A conversation with Sebastian Schmieg [Part I] https://unthinking.photography/articles/the-politics-of-image-search-a-conversation-with-sebastian-schmieg-part-i Fri, 05 May 2017 00:00:00 +0100 Domestic (in)security https://unthinking.photography/articles/domestic-in-security Thu, 04 May 2017 00:00:00 +0100 Realtime Neuratorial Art https://unthinking.photography/imgexhaust/realtime-neuratorial-art Sun, 30 Apr 2017 00:00:00 +0100 prostheticknowledge:

Realtime Neuratorial Art

Artist Memo Akten has been exploring methods to generate neural network images in realtime. It started with his #Learningtosee project, with a ‘blank’ neural network, not trained on anything, visualizing fresh visual input:

This is a deep neural network that has not been trained on anything. It starts off completely blank. It is literally ‘opening its eyes’ for the first time and trying to ‘understand’ what it sees. In this case ‘understanding’ means trying to find patterns, trying to find regularities in what it’s seeing now, and with respect to everything that it has seen so far; so that it can efficiently compress and organise incoming information in context of its past experience. It’s trying to deconstruct the incoming signal, and reconstruct it using features that it has learnt based on what it has already seen – which at the beginning, is nothing. When the network receives new information that is unfamiliar, or perhaps just from a new angle that it has not yet encountered, it’s unable to make sense of that new information. It’s unable to find an internal representation relating it to past experience; its compressor fails to successfully deconstruct and reconstruct.

But the network is training in realtime, it’s constantly learning, and updating its ‘filters’ and ‘weights’, to try and improve its compressor, to find more efficient internal representations, to build a more ‘universal world-view’ upon which it can hope to reconstruct future experiences.

Unfortunately though, the network also ‘forgets’. When too much new information comes in, and it doesn’t re-encounter past experiences, it slowly loses those filters and representations required to reconstruct those past experiences.

More Here

Following from this, he is using the same realtime method but with a neural net trained on art collections from 150 museums:

#DeepNeuralNet #LearningToSee (havin a hard time cos I’m playin w LR momentum gradclip etc in realtime - no post vfx, all from the DNN)

Here is a link to a video where it re-visualizes a drawing in realtime

You can follow more updates on the project at his twitter profile here

]]>
history doesn't have to be so depressing https://unthinking.photography/imgexhaust/history-doesn-t-have-to-be-so-depressing Mon, 24 Apr 2017 00:00:00 +0100 image]]> iphone-technology-authorship-China https://unthinking.photography/imgexhaust/iphone-technology-authorship-china Tue, 18 Apr 2017 00:00:00 +0100 The internet has become an ideal medium for the dissemination of unfounded conspiracies and disinformation and the reinforcement... https://unthinking.photography/imgexhaust/the-internet-has-become-an-ideal-medium-for-the-dissemination-of-unfounded-conspiracies-and-disinformation-and-the-reinforcement Thu, 13 Apr 2017 00:00:00 +0100

The internet has become an ideal medium for the dissemination of unfounded conspiracies and disinformation and the reinforcement of beliefs and biases, which has culminated in a fractured reality that threatens democracy itself. This talk draws a line from the ‘Sovereign Individual’, an idealized neoliberal knowledge worker, liberated by technology from the grip of geopolitics (as identified in the work of James Dale Davidson and William Rees-Mogg), to the ‘Targeted Individual’, an utterly unremarkable, paranoid and narcissistic victim of the present, convinced that they are being persecuted by unknown forces via futuristic technology.

Talk by Daniel Keller

]]>
Sinofuturism (1839 - 2046 AD) https://unthinking.photography/imgexhaust/sinofuturism-1839-2046-ad Wed, 12 Apr 2017 00:00:00 +0100 Matthew Plummer-Fernandez - snowden.ppt, 2017 https://unthinking.photography/imgexhaust/matthew-plummer-fernandez-snowden-ppt-2017 Thu, 06 Apr 2017 00:00:00 +0100 Matthew Plummer-Fernandez - snowden.ppt, 2017

Machine Learning style transfer used to create portraits of Snowden in the styles of the leaked NSA powerpoint slides. The leaked presentations also revealed that Machine Learning was being used by the NSA to automate mass surveillance.

See more at http://www.plummerfernandez.com/snowden-ppt

]]>
Sitting in the sun at a tech company cafeteria, this former Google worker described a year spent immersed in some of the darkest... https://unthinking.photography/imgexhaust/sitting-in-the-sun-at-a-tech-company-cafeteria-this-former-google-worker-described-a-year-spent-immersed-in-some-of-the-darkest Tue, 28 Mar 2017 00:00:00 +0100 Sitting in the sun at a tech company cafeteria, this former Google worker described a year spent immersed in some of the darkest content available on the Internet. His role at the tech company mainly consisted of reviewing things like bestiality, necrophilia, body mutilations (gore, shock, beheadings, suicides), explicit fetishes (like diaper porn) and child pornography found across all Google products — an experience that he found “scarring.” The company refused to make him a full-time worker, keeping him on contract status without much of a support system.

After college, I went to work in politics; I was a social media guy. A recruiter called me and said, “You should work for Google.” It never occurred to me to work for a tech company. They convinced me it was the right place to go.
So I went there. I was kind of repulsed at how much I had. I think anyone who said they didn’t enjoy it would be a filthy liar: I ate breakfast, lunch and dinner there every day. They give you everything you need. As a person just getting out of college, it was fantastic. My parents, being traditional, were very proud that I was working for this huge company.

Over the phone, the recruiter informed me I’d be dealing with “sensitive content.” It didn’t occur to me that I would be doing the work without technical and emotional support.

source: https://www.buzzfeed.com/reyhan/tech-confessional-the-googler-who-looks-at-the-wo?utm_term=.moEQJPK1qd#.abJdLnE0QY

]]>
Backdoored.io https://unthinking.photography/imgexhaust/backdoored-io Tue, 21 Mar 2017 00:00:00 +0000 Backdoored.io is an art piece by Nye Thompson involving the collection and exploration of images found in public search engine results from unsecured surveillance cameras, in an attempt to demonstrate our growing online vulnerability.

image

*Backdoor, noun. [Hacker slang] A feature or defect of a computer system or device that allows surreptitious unauthorised access to data.

]]>
Machine-Learning Algorithm Aims to Identify Terrorists Using the V Signs They Make https://unthinking.photography/imgexhaust/machine-learning-algorithm-aims-to-identify-terrorists-using-the-v-signs-they-make Tue, 21 Mar 2017 00:00:00 +0000 Artist Profile: Zach Blas https://unthinking.photography/imgexhaust/artist-profile-zach-blas Thu, 02 Mar 2017 00:00:00 +0000 source: http://sevenonseven.rhizome.org https://unthinking.photography/imgexhaust/source-http-sevenonseven-rhizome-org Fri, 17 Feb 2017 00:00:00 +0000 source: http://sevenonseven.rhizome.org

]]>
Where someone said they saw some smoke, there very well may be no fire. https://unthinking.photography/articles/when-someone-said-they-saw-some-smoke-there-very-well-may-be-no-fire Thu, 16 Feb 2017 00:00:00 +0000 HITO STEYERL. This reminds me of the late 19th century, where there were a lot of scientific efforts being invested into... https://unthinking.photography/imgexhaust/hito-steyerl-this-reminds-me-of-the-late-19th-century-where-there-were-a-lot-of-scientific-efforts-being-invested-into Tue, 14 Feb 2017 00:00:00 +0000

HITO STEYERL. This reminds me of the late 19th century, where there were a lot of scientific efforts being invested into deciphering hysteria, or so-called “women’s mental diseases.” And there were so many criteria identified for pinning down this mysterious disease. I feel we are kind of back in the era of crude psychologisms, trying to attribute social, mental, or social-slash-mental illnesses or deficiencies with frankly absurd and unscientific markers. That’s just a brief comment. Please continue.

KATE CRAWFORD. Actually I was about to go exactly to the history, so your comment is absolutely dead-on. I was thinking of physiognomy, too, because what we now have is a new system called Faception that has been trained on millions of images. It says it can predict somebody’s intelligence and also the likelihood that they will be a criminal based on their face shape. Similarly, a deeply suspect paper was just released that claims to do automated inferences of criminality based on photographs of people’s faces. So, to me, that is coming back full circle. Phrenology and physiognomy are being resuscitated, but encoded in facial recognition and machine learning.

There’s also that really interesting history around IBM, of course back in 1933, long before its terrorist credit score, when their German subsidiary was creating the Hollerith machine. I was going back through an extraordinary archive of advertising images that IBM used during that period, and there’s this image that makes me think of your work actually: it has this gigantic eye floating in space projecting beams of light down onto this town below; the windows of the town are like the holes in a punch card and it’s shining directly into the home, and the tagline is “See everything with Hollerith punch cards.” It’s the most literal example of “seeing like a state” that you can possibly imagine. This is IBM’s history, and it is coming full circle. I completely agree that we’re seeing these historical returns to forms of knowledge that we’ve previously thought were, at the very least, unscientific, and, at the worst, genuinely dangerous.

Data Streams, by Hito Steyerl and Kate Crawford]]>
Artifact Readers: pixelated revelations, glitch augury and low-res millenarianism in the age of conspiracy theory https://unthinking.photography/articles/artifact-reader Thu, 09 Feb 2017 00:00:00 +0000 Hidden in Plain Sight: The Steganographic Image https://unthinking.photography/articles/hidden-in-plain-sight-the-steganographic-image Thu, 09 Feb 2017 00:00:00 +0000 Soft Power / Hard Meme https://unthinking.photography/articles/soft-power-hard-meme Thu, 09 Feb 2017 00:00:00 +0000 MIT Media Lab’s Camera Culture Group focuses on making the invisible visible–inside our bodies, around us, and beyond–for... https://unthinking.photography/imgexhaust/mit-media-labs-camera-culture-group-focuses-on-making-the-invisible-visibleinside-our-bodies-around-us-and-beyondfor Mon, 06 Feb 2017 00:00:00 +0000 MIT Media Lab’s Camera Culture Group focuses on making the invisible visible–inside our bodies, around us, and beyond–for health, work, and connection. The goal is to create an entirely new class of imaging platforms that have an understanding of the world that far exceeds human ability and produce meaningful abstractions that are well within human comprehensibility.
The group conducts multi-disciplinary research in modern optics, sensors, illumination, actuators, probes and software processing. This work ranges from creating novel feature-revealing computational cameras and new lightweight medical imaging mechanisms, to facilitating positive social impact via the next billion personalized cameras.

source: http://cameraculture.media.mit.edu/#!

]]>
Ben Grosser - Textbook https://unthinking.photography/imgexhaust/ben-grosser-textbook Fri, 03 Feb 2017 00:00:00 +0000 Ben Grosser - Textbook

Textbook is a web browser extension that removes images from the Facebook interface. Whether it’s a linked article preview photo, a friend’s profile selfie, or a “love” reaction icon, every image is hidden from view. Left behind are the blank boxes and white space where they used to be. Are certain kinds of images leading us to click on content we might have otherwise scrolled past? Does the layout and/or content of images on Facebook influence the way we read the site? Finally, what role might images play in the proliferation of fake news and clickbait? Textbook enables Facebook users to test questions like these for themselves, to see the site without the images and thus experience its content in a new way.

]]>
Picture Sky https://unthinking.photography/imgexhaust/picture-sky Wed, 01 Feb 2017 00:00:00 +0000 Picture Sky

In this project we create a crowdsourced image of the sky. Multiple observers are positioned at GPS coordinates that form the points of a grid. At the moment of the satellite flyover, they take photographs looking directly up. Their images are stitched together to form a single large image, opposite to the one taken by the satellite.

A smartphone app lets the observers self-organize at the times of satellite flyovers in any geolocation, coordinate the action, take the pictures in sync, and see the resulting crowdsourced image and the satellite image.

The human array forms an optical sensor, a large eye of a new socio-technological apparatus.

Picture Sky is a project by Karolina Sobecka and Christopher Baker, with Ken Caldeira.

source: http://www.nephologies.com/PictureSky/updates/

image
]]>
Yvette Shen - Seven Days in Beijing  https://unthinking.photography/imgexhaust/yvette-shen-seven-days-in-beijing Tue, 31 Jan 2017 00:00:00 +0000 Yvette Shen - Seven Days in Beijing 

Beijing, the capital of China, is known for its rich culture and long history. In recent years, it has also been recognized by the world for its dire air pollution issues. The explosion of personal automobiles, along with heavy industries surrounding the city, has created layers of smog all year round. Indeed, smog often envelops the city and is a constant threat to the health of its citizens. Beijing’s outrageous air quality has often been reported in Western media.

This project documents a week-long trip the artist took to Beijing during the summer of 2015. Each photo shows a landmark in the city, visited by her during the day, along with the hourly changes in the air quality that day. The air quality is visualized by the hourly AQI (Air Quality Index) and PM2.5 (particulate matter, or ‘things floating in the air’) data over the twenty-four hour period. The hourly AQI and PM2.5 data were retrieved from the U.S. Embassy’s Twitter account @beijingair. It is often believed, especially by Chinese residents, that the U.S. Embassy’s data is more reliable.

source: http://photomediationsmachine.net/2015/10/20/seven-days-in-beijing/

]]>
!Mediengruppe Bitnik, Delivery for Mr. Assange, 2013 https://unthinking.photography/imgexhaust/mediengruppe-bitnik-delivery-for-mr-assange-2013 Thu, 26 Jan 2017 00:00:00 +0000 !Mediengruppe Bitnik, Delivery for Mr. Assange, 2013

«Delivery for Mr. Assange» is a 32-hour live mail art piece performed on 16 and 17 January 2013. On 16 January 2013 !Mediengruppe Bitnik posted a parcel addressed to Julian Assange at the Ecuadorian embassy in London. The parcel contained a camera which documented its journey through the Royal Mail postal system through a hole in the parcel. The images captured by the camera were transferred to the project website and the Bitnik Twitter account in realtime. So, as the parcel was slowly making its way towards the Ecuadorian embassy in London, anyone online could follow its status in realtime.

source: http://wwwwwwwwwwwwwwwwwwwwww.bitnik.org/assange/

]]>
Jeffrey Thompson, Every Possible Photograph, 2013 https://unthinking.photography/imgexhaust/jeffrey-thompson-every-possible-photograph-2013 Wed, 25 Jan 2017 00:00:00 +0000 Jeffrey Thompson, Every Possible Photograph, 2013

Though limited to eight colors at a very low resolution, the piece will take approximately 46,138,562,195,008,110,600,774,753,760,087,749,172,181,189,607,929,628,058,548,517,099,604,563,033,706,075 years to complete (assuming the computer runs flawlessly 24 hours a day). By way of comparison, the universe is about 13,770,000,000 years old. The piece offsets these combinations starting at Niépce’s famous 1826 photograph looking out of his window, the first photographic image to be permanently captured.

The idea that extremely useless labor is interesting is central to this project, as is the eschewing of the utility of data and its representation in traditional visualization work. A camera attempting to create every possible image is essentially a time machine; somewhere in the set of images, alongside billions of “meaningless” others, are a photograph of me, a photograph of me if I didn’t get a haircut last week, and a photograph of me with someone who I have never met.

Additionally, this project interrogates the meaning of the camera. If the camera didn’t ‘see’ those events, are they real? They look like real people, but aren’t. Consider images created this way that are illegal (child pornography, for example): they are not ‘real’ but depict something very real.
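The timescale quoted above is easy to sanity-check with a back-of-the-envelope calculation. The sketch below is our own illustration, not the artist's code: the grid size (15×10) and display rate (30 images per second) are arbitrary assumptions chosen only to show how quickly exhaustive enumeration becomes astronomical.

```python
# Count the distinct images a tiny 15x10, 8-colour "camera" could produce,
# and estimate how long showing them all would take.
pixels = 15 * 10
colours = 8
total_images = colours ** pixels            # 8**150: one colour choice per pixel

seconds = total_images / 30                 # assumed display rate: 30 images/sec
years = seconds / (60 * 60 * 24 * 365.25)
print(f"{float(total_images):.2e} images, roughly {years:.2e} years")
```

Even at this toy resolution the exhaustive set dwarfs the age of the universe, which is precisely the point of the piece.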

source: http://www.jeffreythompson.org/every-possible-photograph.php

]]>
Men are allowed to show their nipples, women's get banned: https://www.instagram.com/genderless_nipples/ https://unthinking.photography/imgexhaust/men-are-allowed-to-show-their-nipples-women-s-get-banned-https-www-instagram-com-genderless-nipples Fri, 20 Jan 2017 00:00:00 +0000 Men are allowed to show their nipples, women’s get banned: https://www.instagram.com/genderless_nipples/

]]>
San Andreas Deer Cam is a live video stream from a computer running a modded version of Grand Theft Auto V, hosted on... https://unthinking.photography/imgexhaust/san-andreas-deer-cam-is-a-live-video-stream-from-a-computer-running-a-hacked-modded-version-of-grand-theft-auto-v-hosted-on Tue, 17 Jan 2017 00:00:00 +0000 San Andreas Deer Cam is a live video stream from a computer running a modded version of Grand Theft Auto V, hosted on Twitch.tv. The mod creates a deer and follows it as it wanders throughout the 100 square miles of San Andreas, a fictional state in GTA V based on California. The deer has been programmed to control itself and make its own decisions, with no one actually playing the video game. The deer is ‘playing itself’, with all activity unscripted… and unexpected. In the past 48 hours, the deer has wandered along a moonlit beach, caused a traffic jam on a major freeway, been caught in a gangland gun battle, and been chased by the police.

source: http://bwatanabe.com/GTA_V_WanderingDeer.html

]]>
Hidden “Signature” in Online Photos Could Help Nab Child Abusers https://unthinking.photography/imgexhaust/hidden-signature-in-online-photos-could-help-nab-child-abusers Tue, 10 Jan 2017 00:00:00 +0000 Conspiracy theory 2.0 https://unthinking.photography/imgexhaust/conspiracy-theory-2-0 Mon, 09 Jan 2017 00:00:00 +0000 ‘Hauling/It is not the past, but the future, that determines the present’, 2011 https://unthinking.photography/imgexhaust/hauling-it-is-not-the-past-but-the-future-that-determines-the-present-2011 Fri, 06 Jan 2017 00:00:00 +0000 ‘Hauling/It is not the past, but the future, that determines the present’, 2011

]]>
Miša Skalskis is a Lithuanian artist, currently based in The Hague and Vilnius. His recent work revolves around exploration of... https://unthinking.photography/imgexhaust/misa-skalskisis-a-lithuanian-artist-currently-based-in-the-hague-and-vilnius-his-recent-work-revolves-around-exploration-of Fri, 06 Jan 2017 00:00:00 +0000

Miša Skalskis is a Lithuanian artist, currently based in The Hague and Vilnius. His recent work revolves around the exploration of vision without image and hearing without sound. Skalskis explores pattern recognition and its repercussions within the fields of reception and perception, and the deployment of such systems and their consequent resonances within wider socio-political frameworks. His project investigates various notions of infrastructure to come, from the extraneous synthesis of identity to suggestion-based economies.

AI never blinks is a video made entirely by convolutional and generative adversarial neural networks, created using more than half a million images. All of the images seen in the work are completely artificial, the results of an optimisation process performed by these networks in order to generalise or learn from a myriad of images of faces.

The video proposes itself as a test to calibrate the senses, for both human and artificial agents, like a distant and speculative relative of ‘captcha’ software – used to prevent internet ‘bots’ from accessing secure areas of a website. The work proposes an inverted Turing test (the ultimate test for a robot: can it convince you it is human?), the backward gaze of the screen, which uncovers the slowness of human knowledge production.

]]>
@lilianafarber‘s work questions the hierarchy of knowledge and the consumption of data. By exploring the complex relationships... https://unthinking.photography/imgexhaust/lilianafarbers-work-questions-the-hierarchy-of-knowledge-and-the-consumption-of-data-by-exploring-the-complex-relationships Tue, 03 Jan 2017 00:00:00 +0000 @lilianafarber‘s work questions the hierarchy of knowledge and the consumption of data. By exploring the complex relationships between pieces of information and their relation to personal and collective memory, Farber scrutinizes the ways in which visual information is stored. 

My Boys (2015) is a series of images created with custom software that extracts and combines pixels from different images of men, taken from their online dating profiles.

]]>
https://arxiv.org/abs/1612.07828 https://unthinking.photography/imgexhaust/https-arxiv-org-abs-1612-07828 Sat, 31 Dec 2016 00:00:00 +0000 https://arxiv.org/abs/1612.07828

]]>
RANDOM READINGS, through visual telecommunication systems. by Cesar Escudero Andaluz Random Readings is a series of video-images... https://unthinking.photography/imgexhaust/random-readings-through-visual-telecommunication-systems-by-cesar-escudero-andaluz-random-readings-is-a-series-of-video-images Tue, 20 Dec 2016 00:00:00 +0000 RANDOM READINGS, through visual telecommunication systems.

by Cesar Escudero Andaluz

Random Readings is a series of video-images analysed by a computer application. The application works by detecting threshold changes at five different points located on the image surface. It then algorithmically transforms the visual information into alphabetic code, according to an antiquated and obsolete visual telecommunication system invented by Joseph Chudy in 1793. The result is a flow of characters, decoded by a logical system, that yields no valid information from surveillance cameras, Internet live video captures and video games.

More here

]]>
Soldier Shooting a project by Esteban Ottaso A storify which relates the photographic shooting to the military profession, and... https://unthinking.photography/imgexhaust/soldier-shooting-a-project-by-esteban-ottaso-a-storifywhich-relates-the-photographic-shooting-to-the-military-profession-and Tue, 20 Dec 2016 00:00:00 +0000 Soldier Shooting

a project by Esteban Ottaso

A Storify which relates photographic shooting to the military profession, and considers the act of self-portraiture as a kind of ritual; necessary for the soldiers’ lives to exist outside of the field, yet inside the network.

]]>
#MannequinChallenge - Structure From Motion [UPDATE] https://unthinking.photography/imgexhaust/mannequinchallenge-structure-from-motion-update Mon, 19 Dec 2016 00:00:00 +0000 prostheticknowledge:

#MannequinChallenge - Structure From Motion [UPDATE]

Continuing from my previous post, I have uploaded the pointclouds to @sketchfab for anyone interested to play about and interact. Brief summary of the project: using software, I converted videos of people standing still into 3D data:

Embedded below are a couple of examples, the pointcloud and the video which was used to create them. In the interactive Sketchfab embeds, it might be more interesting to explore them in first person mode.

More Here

]]>
CRT SEDIJLA https://unthinking.photography/imgexhaust/crt-sedijla Fri, 16 Dec 2016 00:00:00 +0000 smiling-face-withface:

CRT SEDIJLA

]]>
we can ask a second neural net to determine whether the output of a first looks real or fake. This technique is called... https://unthinking.photography/imgexhaust/we-can-ask-a-second-neural-net-to-determine-whether-the-output-of-a-first-looks-real-or-fake-this-technique-is-called Fri, 16 Dec 2016 00:00:00 +0000

we can ask a second neural net to determine whether the output of a first looks real or fake. This technique is called adversarial learning. It’s often compared to the relationship between someone producing counterfeit money and the customers trying to determine whether the money is real or not. When the counterfeit money is rejected, the person making the money improves their technique until it’s indistinguishable from the real thing.

Two different generative adversarial networks producing faces, by Radford et al (2015) and Zhao et al (2016).
You might say that these are “face-like”, or that adversarially generated photos of “objects” are “object-like”, the suggestion being that the network has learned to discern the various qualities of a class in some way similar to how humans divide up the world. But from the output alone it’s very difficult to separate what the net has “really learned” from what it has “appeared to have learned”. It turns out that when you look into the internal representations, they do exist in arithmetic spaces just like word2vec, where questions about analogy and similarity can be easily answered.

A Return to Machine Learning, Kyle McDonald

image
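The counterfeiter analogy above can be made concrete at toy scale. The following is a minimal, illustrative sketch of adversarial learning in one dimension (our own construction, not code from McDonald's article; every parameter, distribution and learning rate is an arbitrary assumption): a two-parameter generator drifts toward the "real" distribution because a logistic-regression discriminator keeps rejecting its output.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    # Numerically safe logistic function.
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    ex = math.exp(x)
    return ex / (1.0 + ex)

# "Real" data: samples around 4.0. Generator G(z) = a*z + b starts near 0,
# so its fakes are easy to reject at first.
w, c = 0.1, 0.0   # discriminator D(x) = sigmoid(w*x + c)
a, b = 1.0, 0.0   # generator parameters
lr = 0.01

for _ in range(20000):
    real = random.gauss(4.0, 0.5)
    z = random.gauss(0.0, 1.0)
    fake = a * z + b

    # Discriminator ascent: push D(real) toward 1 and D(fake) toward 0.
    dr = sigmoid(w * real + c)
    df = sigmoid(w * fake + c)
    w += lr * ((1.0 - dr) * real - df * fake)
    c += lr * ((1.0 - dr) - df)

    # Generator ascent: push D(fake) toward 1, i.e. fool the discriminator.
    df = sigmoid(w * fake + c)
    a += lr * (1.0 - df) * w * z
    b += lr * (1.0 - df) * w

# E[G(z)] = b (since E[z] = 0); after training it should have drifted
# toward the real mean of 4.0 as the two models chased each other.
print(round(b, 2))
```

This is the "counterfeit money" loop in miniature: each rejection by the discriminator tells the generator which direction improves its forgeries.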
]]>
Sound & Vision Performance feat. Yuki Takada https://unthinking.photography/imgexhaust/sound-visionperformance-feat-yuki-takada Thu, 15 Dec 2016 00:00:00 +0000 kentacobayashi:

Keep reading

]]>
Invisible Images (Your Pictures Are Looking at You) https://unthinking.photography/imgexhaust/invisible-images-your-pictures-are-looking-at-you Fri, 09 Dec 2016 00:00:00 +0000 by Trevor Paglen

“The more images Facebook and Google’s AI systems ingest, the more accurate they become, and the more influence they have on everyday life. The trillions of images we’ve been trained to treat as human-to-human culture are the foundation for increasingly autonomous ways of seeing that bear little resemblance to the visual culture of the past.”

image

source: The New Inquiry 

]]>
How have these places managed to transform from monuments to atrocity and resistance into concrete clickbait? https://unthinking.photography/imgexhaust/how-have-these-places-managed-to-transform-from-monuments-to-atrocity-and-resistance-into-concrete-clickbait Sat, 03 Dec 2016 00:00:00 +0000 How have these places managed to transform from monuments to atrocity and resistance into concrete clickbait?
Concrete clickbait: next time you share a spomenik photo, think about what it means, Owen Hatherley, Calvert Journal]]>
Lytro is building the world’s most powerful Light Field imaging platform enabling artists, scientists and innovators to pursue... https://unthinking.photography/imgexhaust/lytro-is-building-the-worlds-most-powerful-light-field-imaging-platform-enabling-artists-scientists-and-innovators-to-pursue Sat, 03 Dec 2016 00:00:00 +0000

Lytro is building the world’s most powerful Light Field imaging platform enabling artists, scientists and innovators to pursue their goals with an unprecedented level of freedom and control. This revolutionary technology will unlock new opportunities for photography, cinematography, mixed reality, scientific and industrial applications.

With Lytro Cinema, every frame in a live action shot becomes a 3D environment. This greatly simplifies the integration of computer graphics with real world footage and offers the same level of control and creative flexibility in the VFX workflow.

The Lytro Cinema camera gathers a truly staggering amount of information on the world around it. The 755 RAW megapixel 40K resolution, 300 FPS camera takes in as much as 400 gigabytes per second of data. [src]

source: https://lytro.com

]]>
We consider the problem of face swapping in images, where an input identity is transformed into a target identity while... https://unthinking.photography/imgexhaust/we-consider-the-problem-of-face-swapping-in-images-where-an-input-identity-is-transformed-into-a-target-identity-while Sat, 03 Dec 2016 00:00:00 +0000

We consider the problem of face swapping in images, where an input identity is transformed into a target identity while preserving pose, facial expression, and lighting. To perform this mapping, we use convolutional neural networks trained to capture the appearance of the target identity from an unstructured collection of his/her photographs. This approach is enabled by framing the face swapping problem in terms of style transfer, where the goal is to render an image in the style of another one. Building on recent advances in this area, we devise a new loss function that enables the network to produce highly photorealistic results. By combining neural networks with simple pre- and post-processing steps, we aim at making face swap work in real-time with no input from the user.


source: https://arxiv.org/abs/1611.09577
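Framing face swapping as style transfer means optimising a combined objective: one term keeps the input's pose and expression, another pulls the appearance toward the target identity. A toy numerical sketch of that framing (invented helper names and flat-list "feature maps"; far simpler than the paper's actual loss):

```python
# Toy sketch of a style-transfer-style objective for face swapping.
# Features are flat lists standing in for CNN activations.

def mse(a, b):
    """Mean squared error between two equal-length feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def gram(feats, rows, cols):
    # Gram matrix of a rows x cols feature map, the classic style statistic.
    m = [feats[r * cols:(r + 1) * cols] for r in range(rows)]
    return [[sum(m[i][k] * m[j][k] for k in range(cols)) for j in range(rows)]
            for i in range(rows)]

def swap_loss(output, content_ref, style_ref, rows, cols, w_style=0.5):
    # Content term keeps pose/expression from the input identity;
    # style term pulls appearance toward the target identity.
    content = mse(output, content_ref)
    g_out = gram(output, rows, cols)
    g_sty = gram(style_ref, rows, cols)
    style = sum(mse(g_out[i], g_sty[i]) for i in range(rows)) / rows
    return content + w_style * style

out = [0.1, 0.2, 0.3, 0.4]
print(swap_loss(out, out, out, rows=2, cols=2))  # identical features -> 0.0
```

In the real system the output minimising this kind of loss is produced by a feed-forward network, which is what makes the real-time claim plausible.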

]]>
Tele-immersion: The Death of Distance - Electronics For You https://unthinking.photography/imgexhaust/tele-immersion-the-death-of-distance-electronics-for-you Fri, 02 Dec 2016 00:00:00 +0000 Moth Generator by Everest Pipkin and Loren Schmidt is a Twitter bot that invents new species of moths every four hours. At the... https://unthinking.photography/imgexhaust/moth-generator-by-everest-pipkin-and-loren-schmidt-is-a-twitter-bot-that-invent-new-species-of-moth-every-four-hours-at-the Mon, 28 Nov 2016 00:00:00 +0000 Moth Generator by Everest Pipkin and Loren Schmidt is a Twitter bot that invents new species of moths every four hours. At the time of posting there are 7999 generated slides of moths, all presented with their scientific name, as if they’re pinned to a board.

“Each of these component parts follows a series of different rulesets that allows variation of process inside of their structures. Although the individual pieces do inform each other’s generation, much of the individuality of each moth rises from the incredible amount of possibility contained in each anatomical part, and in their combination,” Pipkin explained.

source: moth generator

]]>
Instagram launches disappearing Live video and messages https://unthinking.photography/imgexhaust/instagram-launches-disappearing-live-video-andmessages Sun, 27 Nov 2016 00:00:00 +0000 ‘Topological Visualisation of a Convolutional Neural Network’  by Terence Broad The aim of this project was to apply some of the... https://unthinking.photography/imgexhaust/topological-visualisation-of-a-convolutional-neural-network-by-terence-broad-the-aim-of-this-project-was-to-apply-some-of-the Sun, 27 Nov 2016 00:00:00 +0000 ‘Topological Visualisation of a Convolutional Neural Network’ 
by Terence Broad

The aim of this project was to take data visualisation techniques used to visualise large networks and apply them to visualising the inner workings of artificial neural networks.

Find out more here

]]>
Uncharted https://unthinking.photography/imgexhaust/uncharted Sun, 27 Nov 2016 00:00:00 +0000 Listening to a Face https://unthinking.photography/articles/listening-to-a-face Thu, 24 Nov 2016 00:00:00 +0000 Ways of Machine Seeing https://unthinking.photography/articles/ways-of-machine-seeing Thu, 24 Nov 2016 00:00:00 +0000 AZ: MOVE AND GET SHOT https://unthinking.photography/articles/az-move-and-get-shot-2011-2014 Mon, 21 Nov 2016 00:00:00 +0000 Through Non-Human Eyes https://unthinking.photography/articles/through-non-human-eyes Mon, 21 Nov 2016 00:00:00 +0000 “The idea is about making faces communicate with each other,” he says. He designed the software to recognize facial expressions... https://unthinking.photography/imgexhaust/the-idea-is-about-making-faces-communicate-with-each-other-he-says-he-designed-the-software-to-recognize-facial-expressions Mon, 21 Nov 2016 00:00:00 +0000

“The idea is about making faces communicate with each other,” he says. He designed the software to recognize facial expressions and their related emotions. While you watch, an algorithm uses the characteristics of your face—the distance between your eyes, the shape of your jaw, the size of your nose—to create a ratio between the values. It does the same for the person on screen. Daniele wrote the software so the ratio would correlate to one of six basic facial expressions (anger, fear, sadness, joy, disgust, and surprise.) When you exhibit empathy—which in this case is determined by how closely your expression mirrors that of the person on screen—the image takes on elements of your face. The more empathy you show, the more the two of you become one.

source: http://doc.gold.ac.uk/compartsblog/index.php/work/ma-graduates-face-melding-installation-teaches-you-empathy/
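The arithmetic described above can be sketched in a few lines (a toy illustration with invented measurements and function names, not Daniele's software): reduce each face to ratios between landmark distances, then score "empathy" as how closely the two ratio vectors match.

```python
import math

# Toy empathy score: a face is summarised by ratios of its measurements,
# and empathy is the closeness of two faces' ratio vectors.

def face_ratios(eye_dist, jaw_width, nose_size):
    # Ratios between characteristics, as the text describes.
    return (eye_dist / jaw_width, nose_size / jaw_width)

def empathy(viewer, on_screen):
    # 1.0 for a perfect mirror, falling toward 0 as expressions diverge.
    d = math.dist(face_ratios(*viewer), face_ratios(*on_screen))
    return 1.0 / (1.0 + d)

print(empathy((6.0, 12.0, 4.0), (6.0, 12.0, 4.0)))  # 1.0 for identical faces
print(empathy((6.0, 12.0, 4.0), (7.0, 12.0, 5.0)) < 1.0)  # True
```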

]]>
Internet Photography https://unthinking.photography/articles/internet-photography Sun, 20 Nov 2016 00:00:00 +0000 The new system, called Dreambit, analyzes the input photo and searches for a subset of photographs available online that match... https://unthinking.photography/imgexhaust/the-new-system-called-dreambit-analyzes-the-input-photo-and-searches-for-a-subset-of-photographs-available-online-that-match Sun, 20 Nov 2016 00:00:00 +0000

The new system, called Dreambit, analyzes the input photo and searches for a subset of photographs available online that match it for shape, pose, and expression, automatically synthesizing them based on their team’s previous work on facial processing and three-dimensional reconstruction, modeling people from massive unconstrained photo collections - Melissa Terras, From airbrush to filters to AI…The Robots Enter the Photographic Archive

Source: Dreambit

]]>
Additivism https://unthinking.photography/imgexhaust/additivism Thu, 17 Nov 2016 00:00:00 +0000 Additivism

Morehshin Allahyari and Daniel Rourke

The 3D Additivist Manifesto + forthcoming Cookbook call for you to accelerate the 3D printer and other technologies to their absolute limits and beyond into the realm of the speculative, the provocative and the weird.

The 3D Additivist Cookbook will be published online and for free in December

]]>
AI Experiments https://unthinking.photography/imgexhaust/ai-experiments Thu, 17 Nov 2016 00:00:00 +0000 prostheticknowledge:

AI Experiments

Yesterday, Google released a load of creative coding experiments using artificial intelligence and neural networks to demonstrate how the technology can be applied:

With all of the exciting A.I. stuff happening, there are lots of people eager to start tinkering with machine learning technology. That’s why we’ve created A.I. Experiments, a site that showcases simple experiments that let anyone play with this technology hands-on, and resources for creating your own experiments. 

You can explore more here

Not only that, Google also released Arts & Culture Experiments, which lets you interact with art data in various ways:

With Google Arts & Culture experiments, try out new ways to explore art. Get inspired with machine learning experiments developed in collaboration with resident artists and creative coders at the Lab 

More Here

]]>
1.  [ TEXT TO SPEECH ] 2.  [  SPEECH TO LASER ] 3.  [ GESTURE TO STROBE ] source: http://marijabozinovskajones.com/techsent/ https://unthinking.photography/imgexhaust/1-text-to-speech-2-speech-to-laser-3-gesture-to-strobe-source-http-marijabozinovskajones-com-techsent Mon, 14 Nov 2016 00:00:00 +0000 1.  [ TEXT TO SPEECH ]

2.  [  SPEECH TO LASER ]

3.  [ GESTURE TO STROBE ]

source: http://marijabozinovskajones.com/techsent/

]]>
SherlockNet https://unthinking.photography/imgexhaust/sherlocknet Tue, 08 Nov 2016 00:00:00 +0000 Robotic challenges to push forward innovation in Middle East... https://unthinking.photography/imgexhaust/robotic-challenges-to-push-forward-innovation-in-middle-east Fri, 04 Nov 2016 00:00:00 +0000 Robotic challenges to push forward innovation in Middle East

source: http://www.mbzirc.com/news/mbzirc-boasts-teams-from-top-international-universities-and-organizations-15

]]>
#deepdream is appealing because it gives us access to machine pareidolia, an area of great artistic interest before Google got... https://unthinking.photography/imgexhaust/deepdream-is-appealing-because-it-gives-us-access-to-machine-pareidolia-an-area-of-great-artistic-interest-before-google-got Thu, 03 Nov 2016 00:00:00 +0000

#deepdream is appealing because it gives us access to machine pareidolia, an area of great artistic interest before Google got involved – see Henry Cooke‘s experiments with faces-in-the-cloud and Matthew Plummer-Fernandez‘s Novice Art Blogger.

source: shardcore

]]>
POOR IMAGES https://unthinking.photography/imgexhaust/poor-images Thu, 03 Nov 2016 00:00:00 +0000 glebovnaproductions:


— Hito Steyerl

]]>
The Eyescream Project https://unthinking.photography/imgexhaust/the-eyescream-project Thu, 03 Nov 2016 00:00:00 +0000 Centuries-old Dutch renaissance faces make hilarious new iPhone emoji https://unthinking.photography/imgexhaust/centuries-old-dutch-renaissance-faces-make-hilarious-new-iphone-emoji Wed, 02 Nov 2016 00:00:00 +0000 This video demonstrates both the impressive capabilities of neural captioning systems, as well as the humorous (and maybe... https://unthinking.photography/imgexhaust/this-video-demonstrates-both-the-impressive-capabilities-of-neural-captioning-systems-as-well-as-the-humorous-and-maybe Wed, 02 Nov 2016 00:00:00 +0000

This video demonstrates both the impressive capabilities of neural captioning systems, as well as the humorous (and maybe unsettling) limitations of such systems when their training data lack the vocabulary to fully describe the scene. Notably, no “robots” or “machines” appear in this video according to densecap, and the robot is variously labeled as a person, man, motorcycle, and fire hydrant.

source: http://www.genekogan.com

]]>
Checking to make sure the camera and bundled-in SD card work is integral to the process, but sometimes factory workers forget to... https://unthinking.photography/imgexhaust/checking-to-make-sure-the-camera-and-bundled-in-sd-card-work-is-integral-to-the-process-but-sometimes-factory-workers-forget-to Tue, 01 Nov 2016 00:00:00 +0000

Checking to make sure the camera and bundled-in SD card work is integral to the process, but sometimes factory workers forget to erase the test footage. So: Drone arrives. Customer loads memory card. Footage of incredibly bored guy paints a strange portrait of said drone’s tedious birth.

source: http://gizmodo.com/dji-accidentally-gives-customer-a-tour-of-its-drone-fac-1788437604

]]>
teletext-artists projects-computational culture-compression-unthinking photography https://unthinking.photography/imgexhaust/teletext-artists-projects-computational-culture-compression-unthinking-photography Thu, 27 Oct 2016 00:00:00 +0100 The web project ‘Photo of the Day (PhD)’ is a series of works paraphrasing a photo report series of the same title published on... https://unthinking.photography/imgexhaust/the-web-project-photo-of-the-day-phd-is-a-series-of-works-paraphrasing-a-photo-report-series-of-the-same-title-published-on Thu, 27 Oct 2016 00:00:00 +0100

The web project ‘Photo of the Day (PhD)’ is a series of works paraphrasing a photo report series of the same title published on Hungary Matters, an online media channel of the Hungarian Public Service Media, written in English. Szacsva y selects works from the photos published daily on Hungary Matters and transforms them with a method specifically designed for this purpose, called ‘perinarrative retouch’. Every single detail of the manipulative process is recorded, building a narrative of moving images, enabling the artist to create a special type of technical image.

Szacsva y asked other artists to create sound tracks for the moving graphic images he created during the PhD project; the press photos visually manipulated and transformed by Szacsva y into moving images are therefore complemented with audio commentaries.

Collaborating artists: Balázs Beöthy, Lőrinc Borsos, Roland Farkas, Judit Fischer, János Fodor, Ferenc Gróf, Nándor Hevesi, Zsolt Keserue, Tamás Komoróczky, Stranger Foreigner, Mike Nylons, György Orbán, András Ravasz, Kornél Szilágyi

PhD is part of “When Art(ist) Speaks” project, curated by Eszter Lázár and Edina Nagy

source: http://offbiennale.hu/photo-of-the-day-2/

]]>
A Facial Recognition Project Report : Woodrow Wilson Bledsoe : Free Download & Streaming : Internet Archive https://unthinking.photography/imgexhaust/a-facial-recognition-project-report-woodrow-wilson-bledsoe-free-download-streaming-internet-archive Fri, 21 Oct 2016 00:00:00 +0100 Harriet Salem on Twitter https://unthinking.photography/imgexhaust/harriet-salem-on-twitter Thu, 20 Oct 2016 00:00:00 +0100 Moral Machine https://unthinking.photography/imgexhaust/moral-machine Thu, 20 Oct 2016 00:00:00 +0100 Perpetual Line Up https://unthinking.photography/imgexhaust/perpetual-line-up Thu, 20 Oct 2016 00:00:00 +0100 Decision Space https://unthinking.photography/imgexhaust/decision-space Wed, 19 Oct 2016 00:00:00 +0100 Decision Space

by

Sebastian Schmieg

Decision Space by Berlin-based artist @sebastianschmieg takes a closer look at how machine vision datasets are created: developed on the website of The Photographers’ Gallery, the new commission invites visitors to assign all the images available on the website to one of four categories: Future, Past, Problem and Solution. 

Through the assignment process visitors are teaching the system how to read and understand images within this existing set of parameters. In a second step, after feeding all images and classifications into a machine learning system, Schmieg will synthesise a series of new images based on the accumulated labour of all visitors and photographers involved. Each new image will represent abstract visions of Future, Past, Problem and Solution to varying degrees, drawing on the colours, shapes, and concepts of the original visuals found on the website. 

Through this work Schmieg raises questions around current discussions concerning photography, big-data, tracking, and the hidden manual labour behind algorithmic systems.

Decision Space will be active on the TPG website until 15 Jan 2017. Afterwards, the project will continue on decision-space.com, as a public dataset, and through the production of prints.

]]>
Patient Zero of the selfie age: Why JenniCam abandoned her digital life In 1996, the 19-year-old bought a webcam and set it up... https://unthinking.photography/imgexhaust/patient-zero-of-the-selfie-age-why-jennicam-abandoned-her-digital-life-in-1996-the-19-year-old-bought-a-webcam-and-set-it-up Wed, 19 Oct 2016 00:00:00 +0100 obcomar:

Patient Zero of the selfie age: Why JenniCam abandoned her digital life

In 1996, the 19-year-old bought a webcam and set it up in her room to take a photo every 15 minutes and post it to her website: JenniCam.

Her experiment offered the world a glimpse into our digital future long before Facebook, Instagram, Twitter and the Kardashians.

]]>
#3D Additivism “We want to encourage, interfere, and reverse-engineer the possibilities encoded into the censored, the... https://unthinking.photography/imgexhaust/3d-additivism-we-want-to-encourage-interfere-and-reverse-engineer-the-possibilities-encoded-into-the-censored-the Tue, 18 Oct 2016 00:00:00 +0100 #3D Additivism

“We want to encourage, interfere, and reverse-engineer the possibilities encoded into the censored, the invisible, and the radical notion of the 3D printer itself. To endow the printer with the faculties of plastic: condensing imagination within material reality. The 3D print then becomes a symptom of a systemic malady. An aesthetics of exaptation, with the peculiar beauty to be found in reiteration; in making a mesh. This is where cruelty and creativity are reconciled: in the appropriation of all planetary matter to innovate on biological prototypes. From the purest thermoplastic, from the cleanest photopolymer, and shiniest sintered metals we propose to forge anarchy, revolt and distemper. Let us birth disarray from its digital chamber.”

from the 3D Additivist Manifesto, created by Morehshin Allahyari and Daniel Rourke

]]>
The mirror avoids faces. One can look at his/her face in the mirror only with a nonface.  A work by the Shinseungback... https://unthinking.photography/imgexhaust/the-mirror-avoids-faces-one-can-look-at-his-her-face-in-the-mirror-only-with-a-nonface-a-work-by-theshinseungback Tue, 18 Oct 2016 00:00:00 +0100 The mirror avoids faces.
One can look at his/her face in the mirror only with a nonface. 

A work by the Shinseungback Kimyonghun artist group

]]>
Computers Are Learning To Write Songs By Listening To All Of Them https://unthinking.photography/imgexhaust/computers-are-learning-to-write-songs-by-listening-to-all-of-them Tue, 11 Oct 2016 00:00:00 +0100 Invisible is a protection against new forms of biological surveillance. A 2014 project where the bioartist, Heather... https://unthinking.photography/imgexhaust/invisibleis-a-protection-againstnew-forms-of-biological-surveillance-a-2014-project-where-the-bioartist-heather Tue, 11 Oct 2016 00:00:00 +0100 Invisible is a protection against new forms of biological surveillance.

A 2014 project in which bioartist Heather Dewey-Hagborg created a kit to protect us against threats to privacy. 

]]>
Maria Callas - Style Transfer by Lulu xXX Paris-based CGI artist Lulu xXX has been experimenting with style transfer, and has... https://unthinking.photography/imgexhaust/maria-callas-style-transfer-by-lulu-xxx-paris-based-cgi-artist-lulu-xxx-has-been-experimenting-with-style-transfer-and-has Tue, 11 Oct 2016 00:00:00 +0100 algopop:

Maria Callas - Style Transfer by Lulu xXX

Paris-based CGI artist Lulu xXX has been experimenting with style transfer, and has made one of the more beautiful videos I’ve seen using this technique. The outcome is less a tech demo (i.e. displaying thumbnail of source material), and more of an interesting animation in its own right.

]]>
Cécile B. Evans: Sprung a Leak https://unthinking.photography/imgexhaust/cecile-b-evans-sprung-a-leak Fri, 07 Oct 2016 00:00:00 +0100 from Katrina Sluis, Future Semantic Web in How To Run Faster 1, published by @arcadiamissa, 2011 https://unthinking.photography/imgexhaust/from-katrina-sluis-future-semantic-web-in-how-to-run-faster-1-published-by-arcadiamissa-2011 Tue, 04 Oct 2016 00:00:00 +0100 from Katrina Sluis, Future Semantic Web in How To Run Faster 1, published by @arcadiamissa, 2011

]]>
“The cat sits on the bed”, Pedagogies of vision in human and machine learning. https://unthinking.photography/articles/the-cat-sits-on-the-bed-pedagogies-of-vision-in-human-and-machine-learning Fri, 30 Sep 2016 00:00:00 +0100 Deep neural networks are easily fooled: High confidence predictions for unrecognizable images https://unthinking.photography/imgexhaust/deep-neural-networks-are-easily-fooled-high-confidence-predictions-for-unrecognizable-images Fri, 30 Sep 2016 00:00:00 +0100 Deep neural networks are easily fooled: High confidence predictions for unrecognizable images


PDF / Source

]]>
Netflix’s spooky 12-minute film noir only makes sense to engineers and developers https://unthinking.photography/imgexhaust/netflixs-spooky-12-minute-film-noir-only-makes-sense-to-engineers-and-developers Fri, 30 Sep 2016 00:00:00 +0100 Using deep learning to generate faces - https://github.com/zo7/facegen We can specify random "illegal" parameters to generate... https://unthinking.photography/imgexhaust/using-deep-learning-to-generate-faces-https-github-com-zo7-facegen-we-can-specify-random-illegal-parameters-to-generate Fri, 30 Sep 2016 00:00:00 +0100

Using deep learning to generate faces - https://github.com/zo7/facegen

We can specify random “illegal” parameters to generate interesting images.
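The trick relies on feeding a trained generator inputs outside its training range; a toy sketch of the idea (a hypothetical `toy_decoder`, not the facegen network):

```python
import random

# A generator expects parameters in [0, 1], but nothing stops us from
# sampling outside that range to get stranger, "illegal" outputs.

def toy_decoder(params):
    # Stand-in for the trained face generator: maps parameters to a "pixel".
    return sum(params) / len(params)

legal = [random.uniform(0.0, 1.0) for _ in range(4)]
illegal = [random.uniform(-3.0, 3.0) for _ in range(4)]  # out of training range

print(0.0 <= toy_decoder(legal) <= 1.0)  # True: output stays in expected range
print(toy_decoder(illegal))              # may fall far outside [0, 1]
```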

]]>
Computer Vision: On the Way to Seeing More  source: New York Times / imSitu https://unthinking.photography/imgexhaust/computer-vision-onthe-way-to-seeing-more-source-new-york-times-imsitu Thu, 29 Sep 2016 00:00:00 +0100 Computer Vision: On the Way to Seeing More 

source: New York Times / imSitu

]]>
A new product from Snapchat, sunglasses that record 10 second bursts of circular, wide angle video from the perspective of the... https://unthinking.photography/imgexhaust/a-new-product-from-snapchat-sunglasses-that-record-10-second-bursts-of-circular-wide-angle-video-from-the-perspective-of-the Tue, 27 Sep 2016 00:00:00 +0100 A new product from Snapchat, sunglasses that record 10 second bursts of circular, wide angle video from the perspective of the person wearing them.

In doing so, the company rebranded itself as Snap Inc.

Snap Inc. is a camera company.

We believe that reinventing the camera represents our greatest opportunity to improve the way people live and communicate.
Our products empower people to express themselves, live in the moment, learn about the world, and have fun together.


https://snap.com

]]>
CAMERAROLL  A five-page digital installation built on the NewHive platform by vyle + Devin Kenny exploring surveillance and... https://unthinking.photography/imgexhaust/cameraroll-a-five-page-digital-installation-built-on-the-newhive-platform-by-vyle-devin-kenny-exploring-surveillance-and Tue, 27 Sep 2016 00:00:00 +0100 CAMERAROLL 

A five-page digital installation built on the NewHive platform by vyle + Devin Kenny exploring surveillance and privacy 

]]>
Ephemeral conversations conducted through images: is the prime mover driving Snapchat’s popularity. “People wonder why their... https://unthinking.photography/imgexhaust/ephemeral-conversations-conducted-through-images-is-the-prime-mover-driving-snapchats-popularity-people-wonder-why-their Tue, 27 Sep 2016 00:00:00 +0100

Ephemeral conversations conducted through images are the prime mover driving Snapchat’s popularity. “People wonder why their daughter is taking 10,000 photos a day,” says Spiegel. “What they don’t realize is that she isn’t preserving images. She’s talking.”

Wall Street Journal

]]>
UberNet https://unthinking.photography/imgexhaust/ubernet Fri, 23 Sep 2016 00:00:00 +0100 prostheticknowledge:

UberNet

Computer vision research from Iasonas Kokkinos demonstrates how a single neural network can serve the field, unifying various methods into one system:

In this work we introduce a convolutional neural network (CNN) that jointly handles low-, mid-, and high-level vision tasks in a unified architecture that is trained end-to-end. Such a universal network can act like a ‘swiss knife’ for vision tasks; we call this architecture an UberNet to indicate its overarching nature.

We address two main technical challenges that emerge when broadening up the range of tasks handled by a single CNN: (i) training a deep architecture while relying on diverse training sets and (ii) training many (potentially unlimited) tasks with a limited memory budget. Properly addressing these two problems allows us to train accurate predictors for a host of tasks, without compromising accuracy. Through these advances we train in an end-to-end manner a CNN that simultaneously addresses (a) boundary detection (b) normal estimation (c) saliency estimation (d) semantic segmentation (e) human part segmentation (f) semantic boundary detection, (g) region proposal generation and object detection. We obtain competitive performance while jointly addressing all of these tasks in 0.7 seconds per frame on a single GPU.
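The shared-trunk idea in the quote can be caricatured in a few lines (a schematic illustration, not the UberNet architecture): the backbone runs once per image and every task head reuses its features, which is what makes many tasks affordable in one pass.

```python
# One shared feature extractor feeds several per-task heads.

def backbone(pixels):
    # Stand-in for the shared CNN trunk: one pooled feature per image.
    return sum(pixels) / len(pixels)

HEADS = {
    "boundary": lambda f: f > 0.5,         # toy boundary-present flag
    "saliency": lambda f: round(f, 2),     # toy saliency score
    "segmentation": lambda f: "fg" if f > 0.5 else "bg",
}

def ubernet_like(pixels):
    f = backbone(pixels)  # computed once...
    return {task: head(f) for task, head in HEADS.items()}  # ...reused by all heads

print(ubernet_like([0.9, 0.8, 0.7]))
```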

You can view the academic paper here

Iasonas has also put together an online demo where you can upload an image to be processed and analyzed, which you can try out here

]]>
As our cities become smarter, the data of daily life is becoming increasingly granular. Sensors and cameras can tell us things... https://unthinking.photography/imgexhaust/as-our-cities-become-smarter-the-data-of-daily-life-is-becoming-increasingly-granular-sensors-and-cameras-can-tell-us-things Tue, 20 Sep 2016 00:00:00 +0100 As our cities become smarter, the data of daily life is becoming increasingly granular. Sensors and cameras can tell us things like how many people cross a particular street during the morning commute, whether air quality improved over the past year, and whether buses are running on time. And the rise of the smart city has promised to solve fundamental urban problems and make our cities more efficient. But often lost amongst the numbers and hard data is an equally important fact: Cities are populated by humans.

Now a team of researchers is looking to quantify the more slippery metric of how people feel about their cities, through a series of alternative cartographies.

Read more

Navigate yourself in the smells of the streets of London, here

]]>
corruption-image culture-unthinking photography-Pokemon Go-invisibility-ways of seeing https://unthinking.photography/imgexhaust/corruption-image-culture-unthinking-photography-pokemon-go-invisibility-ways-of-seeing Tue, 20 Sep 2016 00:00:00 +0100 So…Google doesn’t care about the privacy of the cow on the left? Is the cow on the right in the witness protection program? https://unthinking.photography/imgexhaust/sogoogle-doesnt-care-about-the-privacy-of-the-cow-on-the-left-is-the-cow-on-the-right-in-the-witness-protection-program Tue, 20 Sep 2016 00:00:00 +0100 odinsblog:

So…Google doesn’t care about the privacy of the cow on the left? Is the cow on the right in the witness protection program?

]]>
Street views https://unthinking.photography/imgexhaust/street-views Mon, 19 Sep 2016 00:00:00 +0100

An inventory of physical locations whose legal, political or social statuses are invisible to a casual observer. A work by Sam Lavigne

http://antiboredom.github.io/streetviews/

]]>
How to photograph onion skins. https://unthinking.photography/articles/how-to-photograph-onion-skins Thu, 15 Sep 2016 00:00:00 +0100 Bursting The Filter Bubble of Photoshop Tutorials https://unthinking.photography/articles/bursting-the-filter-bubble-of-photoshop-tutorials Wed, 14 Sep 2016 00:00:00 +0100 The University of YouTube: the medium, the user, photography and the search for really useful knowledge. https://unthinking.photography/articles/the-university-of-youtube-the-medium-the-user-photography-and-the-search-for-really-useful-knowledge Tue, 13 Sep 2016 00:00:00 +0100 Lucy Suchman's Robot Futures https://unthinking.photography/imgexhaust/lucy-suchman-s-robot-futures Tue, 13 Sep 2016 00:00:00 +0100

“Having completed my domestic labors for the day (including a turn around the house with my lovely red Miele), I take a moment to consider the most recent development from Boston Dynamics (now part of the robotics initiative at sea in Google’s Alphabet soup).”

These are the first lines of the last post on Lucy Suchman’s blog Robot Futures. Suchman is a much-celebrated anthropologist who studies the development of technologies, in particular artificial intelligence and robotics, from a feminist perspective. The posts on the blog were written between 2012 and 2014 and cover the relationships of robotics with the military, daily life, labor and automation, armed robots and autonomous technologies.

“Resistance, it seems, is not entirely futile.”

Read on (and archive!)

]]>
This. Is. Amazing.  (via MIT uses radiation to read closed books) https://unthinking.photography/imgexhaust/this-is-amazing-via-mit-uses-radiation-to-read-closed-books Tue, 13 Sep 2016 00:00:00 +0100 vmarinelli:

This. Is. Amazing. 

(via MIT uses radiation to read closed books)

The system uses terahertz radiation, the band of electromagnetic radiation between microwaves and infrared light, which has several advantages over other types of waves that can penetrate surfaces, such as X-rays or sound waves. Terahertz radiation has been widely researched for use in security screening, because different chemicals absorb different frequencies of terahertz radiation to different degrees, yielding a distinctive frequency signature for each. By the same token, terahertz frequency profiles can distinguish between ink and blank paper, in a way that X-rays can’t.


Terahertz radiation can also be emitted in such short bursts that the distance it has traveled can be gauged from the difference between its emission time and the time at which reflected radiation returns to a sensor. That gives it much better depth resolution than ultrasound.


The system exploits the fact that trapped between the pages of a book are tiny air pockets only about 20 micrometers deep. The difference in refractive index — the degree to which they bend light — between the air and the paper means that the boundary between the two will reflect terahertz radiation back to a detector.


In the researchers’ setup, a standard terahertz camera emits ultrashort bursts of radiation, and the camera’s built-in sensor detects their reflections. From the reflections’ time of arrival, the MIT researchers’ algorithm can gauge the distance to the individual pages of the book.

https://news.mit.edu/2016/computational-imaging-method-reads-closed-books-0909
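The depth-resolution claim rests on simple time-of-flight arithmetic: a reflection travels to the page and back, so depth = c × delay / 2. A back-of-envelope sketch of the numbers (illustrative only):

```python
# Round trip covers twice the depth, so depth = c * delay / 2.

C = 299_792_458.0  # speed of light in m/s (in air, approximately)

def depth_from_delay(delay_s):
    return C * delay_s / 2

def delay_from_depth(depth_m):
    return 2 * depth_m / C

# A ~20 micrometer air pocket between pages corresponds to a round-trip
# delay on the order of 0.13 picoseconds per extra page, which is why the
# radiation must be emitted in ultrashort bursts:
extra = delay_from_depth(20e-6)
print(f"{extra:.2e} s")
```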

]]>
To ensure people’s rights and liberties are upheld, we will need validation, auditing, and assessment of these systems to ensure basic fairness. Without it, we risk incorrect classifications, biased data, and faulty models amplifying injustice rather than redressing it. From “Artificial intelligence is hard to see” by Kate Crawford and Meredith Whittaker (via nathanjurgenson)]]> Generating Videos with Scene Dynamics https://unthinking.photography/imgexhaust/generating-videos-with-scene-dynamics Mon, 12 Sep 2016 00:00:00 +0100 prostheticknowledge:

Generating Videos with Scene Dynamics

Proof-of-concept computer science research from Carl Vondrick, Hamed Pirsiavash and Antonio Torralba can generate video content from a single input image, based on a neural network trained on large amounts of unlabeled video:

We capitalize on large amounts of unlabeled video in order to learn a model of scene dynamics for both video recognition tasks (e.g. action classification) and video generation tasks (e.g. future prediction). We propose a generative adversarial network for video with a spatio-temporal convolutional architecture that untangles the scene’s foreground from the background. Experiments suggest this model can generate tiny videos up to a second at full frame rate better than simple baselines, and we show its utility at predicting plausible futures of static images. Moreover, experiments and visualizations show the model internally learns useful features for recognizing actions with minimal supervision, suggesting scene dynamics are a promising signal for representation learning. We believe generative video models can impact many applications in video understanding and simulation.

More Here
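The foreground/background untangling the abstract describes can be pictured as a masked blend per frame; a deliberately tiny sketch (toy scalar "frames", not the actual spatio-temporal network):

```python
# A generated video blends a moving foreground over a static background
# via a per-frame mask: mask * foreground + (1 - mask) * background.

def compose(fg_frames, bg_pixel, masks):
    return [m * f + (1 - m) * bg_pixel
            for f, m in zip(fg_frames, masks)]

frames = compose(fg_frames=[1.0, 1.0, 1.0], bg_pixel=0.0,
                 masks=[0.0, 0.5, 1.0])
print(frames)  # [0.0, 0.5, 1.0]: the background gives way to the foreground
```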

]]>
I’m Google is an ongoing Tumblr blog in which the artist Dina Kelberman, collects images and videos from the internet and... https://unthinking.photography/imgexhaust/im-google-is-an-ongoing-tumblr-blog-in-which-the-artist-dina-kelberman-collects-images-and-videos-from-the-internet-and Mon, 12 Sep 2016 00:00:00 +0100 I’m Google is an ongoing Tumblr blog in which the artist Dina Kelberman, collects images and videos from the internet and assembles them into what she calls a ‘long stream-of-consciousness’. 

Both the searching and arranging processes are done manually. The images move seamlessly from one subject to the next based on similarities in form, composition, colour, and theme. 

See more

]]>
Recognition https://unthinking.photography/imgexhaust/recognition Mon, 12 Sep 2016 00:00:00 +0100 Dear Mark. I am writing this to inform you that I shall not comply with your requirement to remove this picture. https://unthinking.photography/imgexhaust/dear-mark-i-am-writing-this-to-inform-you-that-i-shall-not-comply-with-your-requirement-to-remove-this-picture Fri, 09 Sep 2016 00:00:00 +0100 ]]> A robot solving the Instant Insanity puzzle. The film was shot in 1971 in the Stanford AI Lab. https://unthinking.photography/imgexhaust/a-robot-solving-the-instant-insanity-puzzle-the-film-was-shot-in-1971-in-the-stanford-ai-lab Thu, 08 Sep 2016 00:00:00 +0100 A robot solving the Instant Insanity puzzle. The film was shot in 1971 in the Stanford AI Lab.

]]>
An eye with a wi-fi gland https://unthinking.photography/imgexhaust/an-eye-with-a-wi-fi-gland Thu, 08 Sep 2016 00:00:00 +0100

If you could swap the eyes you were born with for 3-D printed upgrades, would you? What if they had 15/10 vision, photo filters and the ability to record what you see?

Source mic.com

]]>
In our new contest, the Cybathlon, people with physical disabilities will compete against each other at tasks of daily life,... https://unthinking.photography/imgexhaust/in-our-new-contest-the-cybathlon-people-with-physical-disabilities-will-compete-against-each-other-at-tasks-of-daily-life Thu, 08 Sep 2016 00:00:00 +0100 In our new contest, the Cybathlon, people with physical disabilities will compete against each other at tasks of daily life, with the aid of advanced assistive devices – including robotic ones.

Read on Robohub

]]>
Why An AI-Judged Beauty Contest Picked Nearly All White Winners https://unthinking.photography/imgexhaust/why-an-ai-judged-beauty-contest-picked-nearly-all-white-winners Wed, 07 Sep 2016 00:00:00 +0100 The Panopticons are coming! And they'll know when we think the grass is greener https://unthinking.photography/imgexhaust/the-panopticons-are-coming-and-they-ll-know-when-we-think-the-grass-is-greener Fri, 02 Sep 2016 00:00:00 +0100 HRI-Google-computer vision-detection-unthinking photography-recognition-image culture https://unthinking.photography/imgexhaust/hri-google-computer-vision-detection-unthinking-photography-recognition-image-culture Wed, 31 Aug 2016 00:00:00 +0100 vision-future-HRI-augmentation-robotics-Japan https://unthinking.photography/imgexhaust/vision-future-hri-augmentation-robotics-japan Wed, 31 Aug 2016 00:00:00 +0100 Example of an eyetracking "heatmap" that shows how much users are likely to look at different parts of a video. On the left the... https://unthinking.photography/imgexhaust/example-of-an-eyetracking-heatmap-that-shows-how-much-users-are-likely-to-look-at-different-parts-of-a-video-on-the-left-the Tue, 30 Aug 2016 00:00:00 +0100 Example of an eyetracking “heatmap” that shows how much users are likely to look at different parts of a video. On the left the real eyetracking, on the right a computer model of human attention.
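A computer model of attention like the one in the heatmap comparison above can be approximated very simply: accumulate predicted (or recorded) fixation points into a grid and blur them with a Gaussian kernel. This is a generic sketch of that heatmap construction, not the particular model pictured; all names, sizes, and the random fixation data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative fixation data: 200 (x, y) gaze points on a 64x64 frame.
H, W = 64, 64
fixations = rng.integers(0, 64, size=(200, 2))

# Accumulate fixation counts into a grid.
grid = np.zeros((H, W))
for x, y in fixations:
    grid[y, x] += 1

# Blur with a separable Gaussian kernel to get a smooth heatmap.
sigma = 3.0
radius = int(3 * sigma)
t = np.arange(-radius, radius + 1)
kernel = np.exp(-t**2 / (2 * sigma**2))
kernel /= kernel.sum()

blurred = np.apply_along_axis(
    lambda row: np.convolve(row, kernel, mode="same"), 1, grid)
heatmap = np.apply_along_axis(
    lambda col: np.convolve(col, kernel, mode="same"), 0, blurred)

# Normalise to [0, 1] so the map can be rendered as a colour overlay.
heatmap /= heatmap.max()
```

Real eyetracking heatmaps are built this way from measured gaze samples; a predictive model replaces the recorded fixations with estimated ones and is evaluated by how closely the two maps agree.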

]]>
Path 101, Row 67 2016-08-30 at 00:51:24 GMT http://ift.tt/2c1oLZh https://unthinking.photography/imgexhaust/path-101-row-67-2016-08-30-at-00-51-24-gmt-http-ift-tt-2c1olzh Tue, 30 Aug 2016 00:00:00 +0100 laaaaaaandsat:

Path 101, Row 67 2016-08-30 at 00:51:24 GMT http://ift.tt/2c1oLZh

A feed of the Landsat programme: the world’s longest continuously acquired collection of satellite images
By James Bridle

]]>
‘Where land meets sea’ https://unthinking.photography/imgexhaust/where-land-meets-sea Tue, 30 Aug 2016 00:00:00 +0100 ‘Where land meets sea’

The shoreline of Lesvos island, 3D Scanned with ScanLab @ UCL 

Lesvos is perhaps the largest ‘island’ on the archipelago of the ‘migrant corridor’, along which physical and social space are under constant negotiation. The rugged, mountainous landscape of the north is perforated by synthetic piles of survival that challenge its legitimacy over the land. The social terrain is curved too, as displaced people, locals and volunteers renegotiate social space on the island.

Embassy for the displaced

]]>
Learning to Segment https://unthinking.photography/imgexhaust/learning-to-segment Thu, 25 Aug 2016 00:00:00 +0100 robotics-robot vision-anthropomorphic https://unthinking.photography/imgexhaust/robotics-robot-vision-anthropomorphic Thu, 25 Aug 2016 00:00:00 +0100 Forensically https://unthinking.photography/imgexhaust/forensically Tue, 23 Aug 2016 00:00:00 +0100 Forensically

Forensically is a set of free tools for digital image forensics. It includes clone detection, error level analysis, metadata extraction and more. It is made by Jonas Wagner. It helps you to see details that would otherwise be hidden.
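The clone-detection idea behind tools like Forensically can be sketched naively: split the image into blocks, hash each block's contents, and flag blocks that appear more than once. Forensically's actual detector is far more robust (it tolerates noise, scaling, and recompression); this is only a toy illustration with made-up data, where an 8x8 patch is deliberately copy-pasted so the duplicate pair is found.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative 64x64 grayscale "image" with a cloned region:
# copy the 8x8 patch at (8, 8) onto (40, 40).
img = rng.integers(0, 256, size=(64, 64))
img[40:48, 40:48] = img[8:16, 8:16]

# Naive clone detection: hash every non-overlapping 8x8 block and
# report pairs of blocks with identical contents.
B = 8
seen = {}
clones = []
for i in range(0, 64 - B + 1, B):
    for j in range(0, 64 - B + 1, B):
        key = img[i:i + B, j:j + B].tobytes()
        if key in seen:
            clones.append((seen[key], (i, j)))
        else:
            seen[key] = (i, j)

print(clones)
```

Here the deliberately cloned pair ((8, 8), (40, 40)) is reported; a real detector would use overlapping blocks and perceptual rather than exact hashes so that lossy re-saving does not hide the clone.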

]]>
mapping-everyday life-unthinking photography-3D scanning-camera https://unthinking.photography/imgexhaust/mapping-everyday-life-unthinking-photography-3d-scanning-camera Tue, 23 Aug 2016 00:00:00 +0100 Obscurity https://unthinking.photography/imgexhaust/obscurity Tue, 23 Aug 2016 00:00:00 +0100 The first imaging system to provide full-body skin mapping and ongoing monitoring. Lightweight, affordable, and convenient. https://unthinking.photography/imgexhaust/the-first-imaging-system-to-provide-full-body-skin-mapping-and-ongoing-monitoring-lightweight-affordable-and-convenient Tue, 23 Aug 2016 00:00:00 +0100 The first imaging system to provide full-body skin mapping and ongoing monitoring. Lightweight, affordable, and convenient. constellation.io/
]]>
Infrastructural Violence: The Smooth Spaces of Terror https://unthinking.photography/articles/infrastructural-violence Thu, 09 Apr 2015 00:00:00 +0100 So Like You https://unthinking.photography/articles/so-like-you Tue, 17 Mar 2015 00:00:00 +0000