I’m using Hydra to define configurations for the layers of my neural network; these layer configs are then used in other configs to build network architectures. However, I find myself writing many nearly identical layer configs whose only difference is, for example, the width.
An example of a layer config is the following:
diffusion128|gcn128|concurent:
  name: ProteinEncoder
  instanciate:
    _target_: atomsurf.networks.ProteinEncoderBlock
  kwargs:
    surface_encoder:
      name: DiffusionNetBlockBatch
      instanciate:
        _target_: atomsurf.network_utils.DiffusionNetBlockBatch  # diffusion_net.DiffusionNet
      kwargs:
        C_width: 128
        mlp_hidden_dims: [128, 128]
        dropout: 0.0
        use_bn: true
        init_time: 2.0  # either null (for constant init) or a float
        init_std: 2.0
    graph_encoder:
      name: GCNx2Block
      instanciate:
        _target_: atomsurf.network_utils.GCNx2Block
      kwargs:
        dim_in: 128
        hidden_dims: 128
        dim_out: 128
        dropout: 0.0
        use_bn: true
        use_weighted_edge_distance: false
    communication_block:
      name: ConcurrentCommunication
      [....]
Is there a way to make the layer names customizable in the configuration? For example, I would like to change diffusion128|gcn128|concurrent into something like diffusion${width1}|gcn${width2}|concurrent, with width1 and width2 as parameters that can be set when creating an architecture.
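To make the intent concrete, here is a rough sketch of what I would like to be able to write. The file names, the layers config group, and the top-level width1/width2 keys are purely illustrative (they don't exist in my project yet); the idea is one parameterized layer config whose widths get filled in by whichever architecture config selects it, e.g. via OmegaConf-style ${...} interpolation:

# layers/diffusion_gcn_concurrent.yaml  (illustrative file name)
# A single layer config where the widths are left as interpolations,
# to be resolved against the final merged config.
name: ProteinEncoder
instanciate:
  _target_: atomsurf.networks.ProteinEncoderBlock
kwargs:
  surface_encoder:
    name: DiffusionNetBlockBatch
    instanciate:
      _target_: atomsurf.network_utils.DiffusionNetBlockBatch
    kwargs:
      C_width: ${width1}                     # width supplied by the architecture config
      mlp_hidden_dims: [${width1}, ${width1}]
  graph_encoder:
    name: GCNx2Block
    instanciate:
      _target_: atomsurf.network_utils.GCNx2Block
    kwargs:
      dim_in: ${width2}
      hidden_dims: ${width2}
      dim_out: ${width2}

# architecture/my_arch.yaml  (illustrative)
# The architecture config picks the layer config and defines the widths once.
defaults:
  - /layers: diffusion_gcn_concurrent
  - _self_

width1: 128
width2: 64

That way I could reuse the same layer definition with different widths instead of copy-pasting a config per width. What I'm unsure about is whether Hydra supports something like this for the config (group option) names themselves, or only for values inside a config.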
Thank you for your help!