Hi Mr. Rezaei. Multiplications are rarely used inside neural networks, for the reasons briefly mentioned toward the beginning of the post. In most cases, applying log() to the appropriate input features is the best way to set up the network, since it turns a multiplicative relationship into an additive one that standard layers can learn. As you noted, there are many ways to introduce multiplications into your network, but most of them probably just add complexity without any benefit. Scaling up the data and adding more ReLUs or layers is a better way to explore alternatives than adding multiplier units.
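As a minimal sketch of the log() idea (hypothetical data, and a plain linear fit standing in for the network), a purely multiplicative target becomes exactly linear in log space:

```python
import numpy as np

# Hypothetical multiplicative target: y = x1 * x2.
rng = np.random.default_rng(0)
x = rng.uniform(1.0, 10.0, size=(1000, 2))
y = x[:, 0] * x[:, 1]

# In log space the relationship is additive: log y = log x1 + log x2,
# so even a single linear layer can represent it exactly.
X_log = np.log(x)
y_log = np.log(y)

# Least-squares fit with a bias column; a network would do the same job.
A = np.column_stack([X_log, np.ones(len(y_log))])
coef, *_ = np.linalg.lstsq(A, y_log, rcond=None)
print(coef)  # weights near [1, 1] with bias near 0
```

With the raw (un-logged) features, a ReLU network can only approximate the product piecewise over the training range, which is part of why extrapolation suffers.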
However, there is the special case mentioned in the post, where the model's performance on feature ranges outside those present in the training data shows large errors. In that case, it may be worth considering whether multiplication naturally explains part of the phenomenon you are trying to model.