I've seen a lot of explanations of modern NN use that are all somewhat abstract. Tutorials usually just do one thing.
I don't have a grasp of the exact mechanics of various uses. For example, an image search pipeline. You can take a million images, have the network generate a set of numbers (an embedding vector) for each one, and index those vectors. Now take an arbitrary image, generate its set of numbers the same way, and do a nearest-neighbor search. The magic of neural networks is that you will match existing images in ways that make human perception say, 'hey, that really is a similar picture.' (If you get the right neural network structure.)
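The search half of that round-trip is straightforward once you have the vectors. Here's a minimal sketch of the mechanics, with one big assumption flagged: in a real pipeline the vectors would come from a pretrained network (say, a CNN with its classifier head removed), but here they're random placeholders so the example runs standalone. The point is just the index-then-query step:

```python
import numpy as np

# Assumption: in a real pipeline, each row of `index` would be the
# embedding a pretrained network produced for one image. Random
# vectors stand in here purely to demonstrate the search mechanics.
rng = np.random.default_rng(0)

num_images, dim = 1000, 512  # scaled way down from a million for the demo
index = rng.normal(size=(num_images, dim))
index /= np.linalg.norm(index, axis=1, keepdims=True)  # L2-normalize rows

def nearest(query_vec, k=5):
    """Cosine-similarity nearest neighbors via one matrix multiply."""
    q = query_vec / np.linalg.norm(query_vec)
    sims = index @ q                # cosine similarity against every row
    return np.argsort(-sims)[:k]   # indices of the k closest images

# Query with a slightly perturbed copy of image 42's vector; it should
# come back as the top match.
query = index[42] + 0.01 * rng.normal(size=dim)
print(nearest(query))
```

At a million images, a brute-force matrix multiply gets slow and people reach for approximate nearest-neighbor libraries, but the contract is the same: vector in, closest vectors out.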
For example, when you train a letter-by-letter NN on Shakespeare's tragedies, and then have it generate text that really does look like Shakespeare, what is the exact structure of the round-trip? How is each letter generated?
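Since the question is about the exact shape of the generation loop: the round-trip is predict, sample, feed the sample back in, one character at a time. The sketch below fakes the "trained network" with bigram counts from a tiny corpus (a real char-level network would also carry a hidden state between steps), but the loop that produces each letter is the same:

```python
import random
from collections import Counter, defaultdict

# Toy stand-in for a trained network: bigram counts over a tiny corpus.
# A real char-RNN outputs a probability distribution over the alphabet
# at every step, conditioned on its hidden state; generation still works
# exactly like this loop: predict a distribution, sample one character,
# append it, and feed it back in as the next input.
corpus = "to be or not to be that is the question "

counts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1  # "after character a, how often does b follow?"

def generate(seed, length=40):
    random.seed(0)  # fixed seed so the demo is reproducible
    out = seed
    for _ in range(length):
        dist = counts[out[-1]]               # distribution over next char
        chars, weights = zip(*dist.items())
        out += random.choices(chars, weights=weights)[0]  # sample one
    return out

print(generate("t"))
```

Swap the bigram table for an RNN's softmax output and you have the Shakespeare generator: each letter is literally one sample from the network's predicted distribution, appended and fed back in.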
I would really like to see a few concrete round-trip walkthroughs like this. If you're looking to write a book, "Machine Learning Recipes" would be a good topic.