Big Picture Science – Forget to Remember

by Gary Niederhoff on July 13, 2015



You must not remember this. Indeed, it may be key to having a healthy brain. Our gray matter evolved to forget things; otherwise we’d have the images of every face we saw on the subway rattling around our head all day long. Yet we’re building computers with the capacity to remember everything. Everything! And we might one day hook these devices to our brains.

Find out what it’s like – and whether it’s desirable – to live in a world of total recall. Plus, the quest for cognitive computers, and how to shake that catchy – but annoying – jingle that plays in your head over and over and over and …

Listen to individual segments here:
Part 1: Ramamoorthy Ramesh / Universal memory
Part 2: Michael Anderson / Forgetting
Part 3: James McGaugh / Total recall
Part 4: Ira Hyman / Earworms
Part 5: Larry Smarr / Cognitive computing

1 comment

Joel S July 13, 2015 at 12:56 pm

I take some issue with the idea that there’s an upper limit to how much storage a regular home computer user would ever need. As storage capacity increases, so does the fidelity of the data we store on it. Fifteen or twenty years ago, with those early expensive digital cameras, you could store about ten photographs on a single 3.5″ diskette. A single photo taken with a cheap cell phone camera today wouldn’t fit on that diskette, since image resolution has increased enormously since then.

The same is true for books. When Project Gutenberg started in the 1970s, it digitized books as plain ASCII text, which doesn’t allow for any formatting. By the 1980s, home computers became capable of simple text formatting, and by the 1990s it became possible to digitize a book fairly faithfully, including its page layout, kerning, and typeface. Those early ASCII books fit within a couple of hundred kilobytes, but the more detail we want to keep in the digitization process, the bigger the files get. Add high-resolution color scans of the covers and the size grows to tens of megabytes. And what if we someday want to digitize the texture of the pages as well? A book from the 1600s surely has a different look and feel than a book from the 2000s.

In the episode it was also said that one terabyte would equal days of raw video data, but that too depends on the video resolution and frame rate. It’s too early to tell whether 60 FPS movies like The Hobbit will ever take off, but the raw video of one is more than twice the size of a regular 24 FPS movie. And again, what if we someday want to store more data about a movie than just the frames and audio? With the advent of virtual reality, maybe we will want to emulate a specific historical cinema, complete with the behavior of that cinema’s particular model of projector, the texture of the screen, the sound system, and the exact length of the movie reel. I don’t think we can predict exactly what kind of additional detail people will want in the future, but I’m certain that it will require more storage.
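To make the resolution and frame-rate point concrete, here is a back-of-envelope sketch in Python. The resolutions, the 3-bytes-per-pixel figure for uncompressed RGB, and the decimal 1 TB are illustrative assumptions, not numbers from the episode:

```python
# Back-of-envelope arithmetic for uncompressed video storage.
# Assumptions (not from the episode): 24-bit RGB (3 bytes per pixel),
# no compression, and a decimal terabyte (10**12 bytes).

BYTES_PER_PIXEL = 3        # 24-bit RGB, uncompressed
DRIVE_BYTES = 10**12       # 1 TB, decimal convention

def hours_per_terabyte(width, height, fps):
    """How many hours of raw video fit on a 1 TB drive."""
    bytes_per_second = width * height * BYTES_PER_PIXEL * fps
    return DRIVE_BYTES / bytes_per_second / 3600

for label, (w, h, fps) in {
    "480p  @ 24 fps": (640, 480, 24),
    "1080p @ 24 fps": (1920, 1080, 24),
    "1080p @ 60 fps": (1920, 1080, 60),
}.items():
    print(f"{label}: {hours_per_terabyte(w, h, fps):.1f} hours per TB")
```

Under these assumptions, a terabyte holds roughly 12.6 hours of raw 480p at 24 FPS but only about 1.9 hours of 1080p at 24 FPS, and going to 60 FPS cuts that to about 0.7 hours, which is the commenter’s point: the same drive holds days or mere minutes of video depending entirely on the fidelity you choose to keep.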


