Fun with Genetic Algorithms

Natural selection is nature optimizing for survival on Earth. Every life form on Earth is a solution generated by evolution's algorithm, which evolves a population of individuals over generations. One way to describe this algorithm:

  1. The algorithm begins by creating a random initial ...
more ...
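
To make the excerpt's loop concrete, here is a minimal genetic-algorithm sketch in Python, assuming the usual steps the post begins to list (random initial population, fitness-based selection, crossover, mutation); the string-matching fitness function and every parameter here are my own illustration, not the post's code:

    import random

    # Toy goal: evolve a population of random strings toward TARGET.
    # (Hypothetical example; population size, mutation rate, and the
    # fitness function are illustrative.)
    TARGET = "survival"
    GENES = "abcdefghijklmnopqrstuvwxyz"

    def fitness(ind):
        # Fitness = number of characters matching the target.
        return sum(a == b for a, b in zip(ind, TARGET))

    def crossover(p1, p2):
        # Single-point crossover of two parent strings.
        cut = random.randrange(1, len(TARGET))
        return p1[:cut] + p2[cut:]

    def mutate(ind, rate=0.05):
        # Each gene is replaced by a random one with probability `rate`.
        return "".join(random.choice(GENES) if random.random() < rate else g
                       for g in ind)

    # 1. Begin with a random initial population.
    pop = ["".join(random.choices(GENES, k=len(TARGET))) for _ in range(200)]

    for gen in range(1000):
        pop.sort(key=fitness, reverse=True)
        if pop[0] == TARGET:
            print(f"solved at generation {gen}: {pop[0]}")
            break
        # 2. Select the fittest individuals as parents.
        parents = pop[:50]
        # 3. Breed the next generation via crossover + mutation.
        pop = [mutate(crossover(*random.sample(parents, 2)))
               for _ in range(200)]

Keeping the top quarter of the population is the simplest selection scheme; roulette-wheel or tournament selection are common drop-in replacements.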

Getting Started with React Development in 5 Minutes

Thu 01 June 2017 Category web, js

Hey Everyone!

There is a lot of React in the VR world, and there are so many things I have been learning that I want to share with you! I am starting a series of posts diving into React, and I am collecting everything in this GitHub repository ...

more ...

Lots of Machine Learning and DNN

At Apple I had the chance to learn a lot about deep learning while working with the team that develops its DNN toolkit. Today I have open-sourced my personal studies:

Additionally, when I was in my Ph ...

more ...

Introducing my Grey Hacker Bag of Fun

Throughout my many years of hacking, I have accumulated so many resources: books, scripts, tools, how-tos, etc. I realized it's time to share them!

Please check out my gray hacker GitHub repo, where I have indexed everything by subject:

  • CTFs and WARGAMES
  • CRYPTOGRAPHY
  • FORENSICS
  • LINUX HACKING
  • MEMORY EXPLOITS ...
more ...

Studies in AI & Pixels & Waves - #12

Sat 08 October 2016 Category AI & ML

Studies in Machine Learning & AI - #11

Sun 25 September 2016 Category AI & ML

Studies in Machine Learning & AI - #10

Sat 17 September 2016 Category AI & ML

Studies in Machine Learning & AI - #9

Sat 10 September 2016 Category AI & ML

Studies in Machine Learning & AI - #8

Sat 03 September 2016 Category AI & ML

Studies in Machine Learning & AI - #7

Sat 27 August 2016 Category AI & ML

Papers

  • Decoupled Neural Interfaces using Synthetic Gradients. A new DeepMind paper revisiting backpropagation, which uses a modeled synthetic gradient in place of the true backpropagated error gradients. Backpropagation is a bottleneck in DNNs, so the idea is to use an asynchronous estimator instead (sketched below), obtained by supervised training of another ...
more ...
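
To give a rough sense of that mechanism, here is a minimal NumPy sketch of the synthetic-gradient idea on a toy two-layer regression network: a small linear model predicts the hidden layer's error gradient from its activations, the layer updates immediately from that prediction, and the predictor itself is regressed against the true gradient once it arrives. All names, sizes, and learning rates are my own illustration, not the paper's code:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy regression: fit y = sin(x) with a 1 -> H -> 1 network.
    H = 32
    W1 = rng.normal(0, 0.5, (1, H)); b1 = np.zeros(H)
    W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)

    # Synthetic-gradient model: a linear map predicting dL/dh1 from h1.
    M = np.zeros((H, H)); c = np.zeros(H)

    lr, lr_sg = 0.05, 0.01
    for step in range(2001):
        x = rng.uniform(-np.pi, np.pi, (16, 1))
        y = np.sin(x)

        # Forward pass.
        h1 = np.tanh(x @ W1 + b1)
        pred = h1 @ W2 + b2

        # Layer 1 updates *immediately* from the predicted (synthetic)
        # gradient, without waiting for the loss to backpropagate.
        g_syn = h1 @ M + c                  # predicted dL/dh1
        dpre1 = g_syn * (1 - h1 ** 2)       # through the tanh
        W1 -= lr * x.T @ dpre1 / len(x)
        b1 -= lr * dpre1.mean(0)

        # Later, the true gradient arrives from the rest of the network.
        dpred = 2 * (pred - y) / len(x)     # dL/dpred for MSE
        g_true = dpred @ W2.T               # true dL/dh1
        W2 -= lr * h1.T @ dpred
        b2 -= lr * dpred.sum(0)

        # The estimator is trained (supervised) toward the true gradient.
        err = g_syn - g_true
        M -= lr_sg * h1.T @ err / len(x)
        c -= lr_sg * err.mean(0)

        if step % 500 == 0:
            print(step, float(((pred - y) ** 2).mean()))

In the paper the decoupling lets each layer train asynchronously; this sketch keeps everything in one loop only to show the two gradient paths side by side.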