Fri Jun 3 08:00:39 EDT 2016

Read a couple more chapters of Around the World last night before bed. Slept from around ten to seven. Woke several times in the night.

Partly sunny. High of eighty. It's supposed to cool off a little by Sunday.

Goals:

Work:

- Think about encryption for VoIP. Not much.
- Make notes on pjsip.
- Figure out what's wrong with apt-cacher-ng. Done.
- Order six-pack of property phones, and main office switchboard phone. Done.
- Serve Asterisk XML phone book from firefly. Done.

Ten-minute walk at lunch, after I went to Burger King. Saw another large yellow and black butterfly; a swallowtail?

Home:

- Take out trash. Done.
- Draw or work on D&D zine. Nope.

http://www.vox.com/2016/6/1/11787262/blade-runner-neural-network-encoding

"Broad decided to use a type of neural network called a convolutional autoencoder. First, he set up what's called a "learned similarity metric" to help the encoder identify Blade Runner data. The metric had the encoder read data from selected frames of the film, as well as "false" data, or data that's not part of the film. By comparing the data from the film to the "outside" data, the encoder "learned" to recognize the similarities among the pieces of data that were actually from Blade Runner. In other words, it now knew what the film "looked" like.

Once it had taught itself to recognize the Blade Runner data, the encoder reduced each frame of the film to a 200-digit representation of itself and reconstructed those 200 digits into a new frame intended to match the original. (Broad chose a small file size, which contributes to the blurriness of the reconstruction in the images and videos I've included in this story.)

Finally, Broad had the encoder resequence the reconstructed frames to match the order of the original film."

(Rough sketch of the idea at the end of this entry.)

Breakfast: carrots, an egg, coffee with half-and-half
Lunch: Whopper, onion rings, milk shake. Not the healthiest.
Dinner: pita chips and roasted red pepper hummus
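
Out of curiosity, here's my rough sketch of what the article describes: a convolutional autoencoder that squeezes each frame down to a small code (200 numbers, matching the "200-digit representation") and reconstructs a blurry frame from it. This is only my guess at the shape of the thing in PyTorch; the 64x64 frame size, the layer sizes, and the plain pixel loss are my assumptions, not Broad's actual model, and his "learned similarity metric" is a loss learned from film-versus-non-film data rather than the MSE used here.

	# Sketch only (not Broad's code): tiny convolutional autoencoder that maps a
	# 3x64x64 frame to a 200-number code and back. Frame size and layer widths
	# are assumptions for illustration.
	import torch
	import torch.nn as nn
	import torch.nn.functional as F

	class FrameAutoencoder(nn.Module):
	    def __init__(self, latent_dim=200):
	        super().__init__()
	        # Encoder: 3x64x64 frame -> latent vector of 200 numbers.
	        self.encoder = nn.Sequential(
	            nn.Conv2d(3, 32, 4, stride=2, padding=1),    # -> 32x32x32
	            nn.ReLU(),
	            nn.Conv2d(32, 64, 4, stride=2, padding=1),   # -> 64x16x16
	            nn.ReLU(),
	            nn.Conv2d(64, 128, 4, stride=2, padding=1),  # -> 128x8x8
	            nn.ReLU(),
	            nn.Flatten(),
	            nn.Linear(128 * 8 * 8, latent_dim),
	        )
	        # Decoder: latent vector -> reconstructed 3x64x64 frame.
	        self.decoder = nn.Sequential(
	            nn.Linear(latent_dim, 128 * 8 * 8),
	            nn.ReLU(),
	            nn.Unflatten(1, (128, 8, 8)),
	            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # -> 64x16x16
	            nn.ReLU(),
	            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),   # -> 32x32x32
	            nn.ReLU(),
	            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),    # -> 3x64x64
	            nn.Sigmoid(),
	        )

	    def forward(self, frames):
	        codes = self.encoder(frames)
	        return self.decoder(codes), codes

	# Toy training step. A plain pixel loss stands in for the article's
	# "learned similarity metric"; real frames would replace the random batch.
	model = FrameAutoencoder()
	frames = torch.rand(8, 3, 64, 64)
	recon, codes = model(frames)
	loss = F.mse_loss(recon, frames)
	loss.backward()

To get the reconstructed film, you'd run every frame through the trained encoder and decoder in the original order, which is the resequencing step the article mentions.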
