All convolutions in a dense block are ReLU-activated and use batch normalization. Channel-wise concatenation is only possible if the height and width of the feature maps stay unchanged, so every convolution inside a dense block uses stride 1. Pooling layers are inserted between dense blocks for further dimensionality reduction.
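To illustrate why stride-1 ("same"-padded) convolutions are required, here is a minimal NumPy sketch of a dense block: each layer's ReLU-activated output is concatenated channel-wise with its input, which only works because the spatial dimensions never change. Batch normalization is omitted for brevity, and the growth rate of 4 channels per layer is an arbitrary choice for the example, not a value from the text.

```python
import numpy as np

def conv2d_same(x, w):
    """Stride-1 3x3 convolution with 'same' padding followed by ReLU.
    x: (C_in, H, W), w: (C_out, C_in, 3, 3). H and W are preserved."""
    c_out = w.shape[0]
    _, h, wd = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))  # pad H and W by 1 on each side
    out = np.zeros((c_out, h, wd))
    for o in range(c_out):
        for i in range(h):
            for j in range(wd):
                out[o, i, j] = np.sum(xp[:, i:i + 3, j:j + 3] * w[o])
    return np.maximum(out, 0.0)  # ReLU activation

def dense_block(x, weights):
    """Concatenate each layer's output with its input along the channel axis.
    This is only valid because conv2d_same leaves H and W unchanged."""
    for w in weights:
        y = conv2d_same(x, w)
        x = np.concatenate([x, y], axis=0)
    return x

rng = np.random.default_rng(0)
x = rng.standard_normal((3, 8, 8))  # 3 input channels, 8x8 feature maps
# Two layers, each adding 4 channels; input channels grow as 3, then 3 + 4.
weights = [rng.standard_normal((4, 3 + 4 * k, 3, 3)) * 0.1 for k in range(2)]
out = dense_block(x, weights)
print(out.shape)  # channels accumulate to 3 + 4 + 4 = 11; H, W stay 8x8
```

A stride-2 convolution or a pooling step inside the block would halve H and W, making the concatenation with earlier feature maps impossible; that is why downsampling happens only in the transition layers between blocks.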